The "G" in "GUI" stands for "graphical". Some people use "TUI" to mean "Terminal User Interface". It therefore follows that someone using merely "UI" has not specified the type of interface.
Probably only if this implements a roaming concept similar to mosh's, using a nonce or something along those lines, rather than ssh session keys that are tied to an IP address. Mosh was designed by MIT folks not just for high-latency links but also for mobile networks, where the client can change IP and lose connectivity frequently. Perhaps the author is here on HN?
Mosh was designed for a great number of things, including what you mentioned.
To me, mosh is "careful and impressively pedantically correct UTF-8 terminal emulator" + "careful and impressively pedantically correct state synchronization protocol". :)
Any similarity with tmux is purely because it's connectionless. It's most certainly more like ssh than like tmux: both are remote shell protocols, while tmux is simply a terminal multiplexer.
Mosh is not at all like ssh, tries hard to avoid duplicating ssh functionality, and this is by design.
If it were like ssh the authors would have had to then handle security/authentication and all that jazz perfectly. It was written to "stand on the shoulders" of ssh, except handle terminal emulation and UTF-8 correctly. By using ssh, you don't have to trust "some new thing" as long as you trust ssh.
It's not even a shell protocol if you think about it!
Mosh bidirectionally synchronizes state between the terminal window on the client and the virtual terminal on the server. It runs at a frame rate, so it avoids filling intermediate network queues, which is where the low latency comes from. :)
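A toy illustration of the state-sync idea (this is not mosh's actual wire format; the screen-as-dict representation is made up for the sketch): each frame, only the cells that changed since the last state the client acknowledged are sent, never the whole screen.

```python
def diff(old, new):
    """Cells that changed between two screen states (pos -> char)."""
    patch = {pos: ch for pos, ch in new.items() if old.get(pos) != ch}
    patch.update({pos: None for pos in old if pos not in new})  # None = cleared
    return patch

def apply_diff(state, patch):
    """Apply a patch produced by diff() to the client's copy of the state."""
    out = dict(state)
    for pos, ch in patch.items():
        if ch is None:
            out.pop(pos, None)
        else:
            out[pos] = ch
    return out

# Server renders two successive frames of its virtual terminal.
frame1 = {(0, 0): "h", (0, 1): "i"}
frame2 = {(0, 0): "h", (0, 1): "o", (0, 2): "!"}

client = dict(frame1)               # client has acked frame1
patch = diff(frame1, frame2)        # only the 2 changed cells travel
client = apply_diff(client, patch)
assert client == frame2
```

Because each frame's diff is computed against the last *acknowledged* state, a lost packet just means the next diff is a bit bigger, rather than the stream stalling the way a TCP byte stream would.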
> It's not even a shell protocol if you think about it!
Nor is SSH. Feel free to read over the RFCs; there's next to no 'sh' in SSH as a protocol. What mosh adds to SSH is the ability to securely resume a session. SSH is fundamentally a protocol for setting up a secure connection and then ping-ponging messages back and forth.
tmux runs locally. It's not a network daemon. The character of a daemon and protocol meant to operate over a network is very, very different from one intended to run over Unix domain sockets or on the loopback interface.
Mosh is a terminal emulator itself and sends a diff of the output from the server to the client. SSH just passes the terminal data through. I think that's why mosh is like tmux: tmux is also a terminal emulator.
When people talk about containers they almost always mean Linux containers, i.e. a collection of features in the Linux kernel that allow creating the illusion of containers.
Needless to say, this doesn't make any sense on macOS because there's no Linux kernel. Therefore you need a Linux VM to run Linux containers on macOS.
Generally this is not what Docker containers are 'for', right? Docker is intended as a system for 'application containers', which yes, may spawn child processes, but are generally not expected to include a process supervisor or service management layer internally.
Contrast this to 'operating system containers' which are designed to run an init system, so individual containers are intentionally multiprocess. Many virtual private servers run in containers of this kind, e.g., by OpenVZ.
On that note, how come you've gone with Docker for this rather than something like LXC or OpenVZ, which are ostensibly designed for the large, multiprocess container use case?
Yes I run an init system that starts multiple processes and services. All in a Docker container.
I would be very keen to understand why LXC is better than Docker for this. I've also read this kind of "marketing" around so called "operating system containers" but I am still clueless why they are better. Concretely, what are the benefits?
Well, they could also just mean their service is a multi-process program (as opposed to threads, which...are also processes), but it's not super clear.
I would pay attention to ChatGPT et al. Maybe that's killing SEO a little bit, no? You can't really optimize for queries to a static model that has already been trained. Perhaps long term you can influence the next model release by carefully injecting content onto the internet for GPT models to crawl, but there's nothing you can do short term.
CoW is a strategy where you don't actually copy memory until you write to it. So, when the 10GB process spawns a child process, that child process also has 10GB of virtual memory, but both processes are backed by the same pages. It's only when one of them writes to a page that a copy happens. When you fork+exec you never actually touch most of those pages, so you never actually pay for them.
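A quick sketch of why fork+exec stays cheap (assumes a Unix system; the buffer size and the use of `true` as the exec target are just illustrative):

```python
import os
import time

# Allocate a large buffer in the parent, standing in for a big process image.
big = bytearray(256 * 1024 * 1024)  # 256 MB of touched pages

t0 = time.perf_counter()
pid = os.fork()  # child gets the same 256 MB mapped copy-on-write
if pid == 0:
    # Child: exec a tiny program without ever touching `big`,
    # so those shared pages are never actually copied.
    os.execvp("true", ["true"])
os.waitpid(pid, 0)
ms = (time.perf_counter() - t0) * 1000
print(f"fork+exec with 256 MB mapped: {ms:.1f} ms")  # far faster than copying 256 MB
```

The fork itself still has to duplicate the page-table metadata (see the sibling comment below), which is why it isn't literally free, just much cheaper than a real copy.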
(Obviously, that's the super-simplified version, and I don't fully understand the subtleties involved, but that's exactly what GP means: it's harder to analyse)
To make it slightly more complicated: you don't pay for the 10 GB directly, but you still pay for setting up the metadata, and that scales with the amount of virtual memory used.
> How would you make a distinction between my Python code making a request from your API endpoint, and a GPT-controlled Python program making the same request?
Your API endpoint would see a bimodal distribution of latencies. The higher latency group are the GPT-controlled programs.
I think what OP needs is a way to know the repository size BEFORE cloning it. For that you need to query the git host's API; hopefully it's something like GitHub or GitLab, with a nice API that exposes that information.
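For GitHub specifically, the REST API's repository endpoint reports a `size` field (in kilobytes) you can check before cloning. A minimal sketch, assuming network access and the unauthenticated rate limit:

```python
import json
import urllib.request

def repo_api_url(owner: str, repo: str) -> str:
    """URL of GitHub's 'get a repository' endpoint for owner/repo."""
    return f"https://api.github.com/repos/{owner}/{repo}"

def repo_size_kb(owner: str, repo: str) -> int:
    """Approximate repository size in KB, from GitHub's `size` field."""
    req = urllib.request.Request(
        repo_api_url(owner, repo),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["size"]

# Example (hits the network):
# print(repo_size_kb("git", "git"))
```

GitLab exposes similar statistics on its projects API. Either way you're asking the hosting service; as far as I know the plain git protocol itself has no "size before clone" query.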
without needing to run mosh on the server