Some years ago I picked up an old HP thin client that was essentially one of these mini boxes, popped in a mini-IDE SSD and maxed out the processor and memory. Set it up to try out CentOS and ended up using it as a small home server for a few months. I love this form factor, and the next time I'm in the market for a home server it's going to be my first choice.
STH has a bunch of videos on YouTube reviewing different machines of this sort. There seems to be a lot to recommend these as homelab systems, or just for adding a little extra horsepower for builds and testing.
The only annoying part is that so many variations exist and listings on eBay aren't always well marked.
ServeTheHome has a long history of spilling the beans on the best-value secrets on the planet, of knowing where to look for the best second-hand gear. A couple of years ago the Dell & Supermicro FatTwin2's were all the rage... I may have been one of the folks snapping those up too. There was no place else like it, reviewing & looking not only at new products, but at the most sensible purchases.
It's a treasure of a site, such a rare, realistic, home-operator, wallet-conscious view of things. Although a lot of the coverage is more "regular" these days, there are reviews of all sorts of low & mid-range products, & things like add-on drive bays that the at-home systems cobbler would be interested in. Sometimes a thing like this Project TinyMiniMicro 1L mini-PC series comes along & writes up all the really smart, really sharp things folks have been doing for a while, really shows/reveals the secret undercurrent that pulls so many in.
I sometimes worry that Patrick is spilling the beans, that he'll turn too many people on to the great deals & they'll stop being so great. The good thing about these business mini-PCs is that they are used in countless businesses, which is why they are so radically better value than the imo way, way overpriced barebones mini-PCs that I used to ogle. It's also nice to see the range expand: the new Ryzen offerings are exciting (if expensive), as are the higher-wattage (more oomph) options.
I use an Intel NUC as my main home computer. I bought a fanless case [1] and VESA-mounted it to the back of a 4K screen.
It has a good SSD, 64GiB RAM and Kubuntu.
I had to buy a heatsink for the SSD to avoid it overheating without any fans around.
> every time I try to use [Linux] on desktop … - I get a frustration since it has all the similar issues I was facing many years ago, which require some time to fix and I am way too lazy for that already
I don't understand this. He fixed the CPU scaling issue, and there don't seem to be any others.
Instead of the technically rather outdated NUC models, I would suggest anyone interested look into Ryzen 4xxx/5xxx-based systems (laptop chipsets). The price/performance is way better, and the embedded Vega 7/8 blows any Intel out of the water - they can power 4x4K@60Hz or 1x8K@60Hz. Some example models are the Asus PN50 and the Gigabyte Brix.
I've used 2 different generations of Intel NUCs as home servers for running VMs and a media server. This time around I got an Asus PN50 and I'm quite happy with it.
I’d love to get away from Intel. However, their proprietary Quick Sync allows very efficient video transcoding and is basically the only feature keeping me with Intel.
I've been interested in doing something like this ever since VSCode remote work has become stable. My main side project right now is a Kotlin backend, so I'm waiting for IntelliJ's very-very-new remote features to get a bit more robust (they just added some abilities for WSL and run targets on remote platforms in early access, but I think you still can't e.g. run your IDE's analysis engine on a remote box and avoid local builds entirely yet). That said, if you're able to live entirely in VSCode & command line, you'd be all set here.
It really is wild how the VSCode language server architecture enables all this, btw. I'm not sure whether this was an intentional goal of the LSP when they started working on it, or if it was originally just to keep the language analysis in a separate process and this wound up being a nice benefit, but being able to run all of your editor's language features on another box and just use the editor as a dumb client for them is brilliant.
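That decoupling is visible right in the LSP wire format: messages are just Content-Length-framed JSON-RPC, so the transport underneath can be stdio to a local process or a socket to a remote box - the protocol doesn't care. A minimal sketch in Python (the framing follows the LSP base protocol; the payload is a stripped-down `initialize` request):

```python
import json

def frame_lsp_message(payload: dict) -> bytes:
    """Frame a JSON-RPC message per the LSP base protocol:
    a Content-Length header, a blank line, then the JSON body."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

# A minimal 'initialize' request; whether these bytes go to a child
# process's stdin or over SSH to another machine is irrelevant to
# the editor sitting on top.
request = frame_lsp_message({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": None, "capabilities": {}},
})
print(request.decode("utf-8").split("\r\n\r\n")[0])
```

Since the editor only ever sees framed JSON, "remote" is mostly a question of where the pipe ends up.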
I do this with a Linode machine. My development is done in vim through mosh and the latency surprisingly isn’t an issue.
It’s amazing how many problems just vanish with this setup. Backups are totally automated and a restore is a single click. No fussing with network tunnels for webhooks from Stripe. Downloading dependencies or uploading artefacts is super fast because I’m on a data center connection to the internet. When I’m accessing my dev environment it’s from a browser on my thin client so I never hit those weird CORS differences between localhost and a real domain.
Network latency would be a dealbreaker for me here. I’ve tried this style of development but it’s tough to keep a local and remote directory sync’d at a high frequency with low latency.
I do think this should be the goal of all of us though. I’m slowly shifting more workloads off my Mac and to local lab boxes.
I’d actually like to build a product in this space but I’m still trying to crack the latency nut.
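For what it's worth, the sync loop really is the hard part. A toy illustration of the one-way mirror most of these setups boil down to (a pure-Python stand-in; a real setup would hand this job to rsync, mutagen, or unison and trigger it from a file watcher instead of rescanning):

```python
import filecmp
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> list:
    """Naive one-way sync: copy files that are new or changed in src
    into dst, returning the relative paths that were copied. No
    deletion handling, no conflict detection - exactly the corners
    that make the real problem hard."""
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        rel = f.relative_to(src)
        target = dst / rel
        if not target.exists() or not filecmp.cmp(f, target, shallow=False):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied.append(str(rel))
    return copied
```

Even this toy has to rescan everything per pass; doing it at "every keystroke" frequency over a WAN link is where the latency nut gets tough to crack.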
In my case my NUC and my “terminal” are connected via 1Gbps Ethernet. Vim in tmux was terrible; line-by-line scrolling was very jittery. Switching off font ligatures meant the GPU was able to handle rendering, and now everything feels really smooth with multiple vim panes open.
Prior to moving back to terminal based editing I was very much in the same spot as the OP, using VSCode - I didn’t notice any form of latency with that setup.
I was really impressed with mosh when I used it a few years ago. In this case SSH is going 20ft or so over Ethernet, so the features mosh offers haven't been necessary.
If your laptop and your NUC are on a local area network, the latency doesn't have to be so bad.
I have an Intel NUC and I bought a Thunderbolt 3<->10gbps Ethernet adapter. Then I bought another adapter to plug into my MacBook Pro and a 10gbps desktop switch to connect them.
This is the way - however, the network side of it could easily end up costing more than the NUC, as 10GbE equipment (the switch in particular) is expensive.
Works out to $571.13 total. It might not be worth it to just connect two machines, but in my case it was worth it because I was building a 10-machine cluster.
Have you got a link? I can't find anything for less than about US$1000. It doesn't help that I'm full Unifi, but a small 10gb switch that didn't break the bank would be so excellent.
MikroTik has several small switches with 2, 4, or 8 SFP+ 10Gbps ports for slightly less than the Ubiquiti gear, but I’ve also been happy with Ubiquiti in general so I’d go that route.
You can use copper DAC cables directly for short runs (in a rack or on a desk), which are cheap on eBay, or for a little more money get SFP+ twisted-pair or fiber adapters (and fiber is a lot cheaper than you probably fear, as well).
That unit is a beast. I have one that is similarly specced.
The NUC8 i7 is probably a better machine: fewer cores, but considerably more capable with its iGPU. As a headless server, though, the NUC10 has faster RAM and 2 more cores (which likely throttle, making the performance gap smaller).
Well, not replacement fans, but a replacement case: I bought the Akasa Turing fanless case [0] for my NUC and couldn't be happier. I like the new looks, too.
You want to set the CPU total power draw a bit lower, the fan onset temperature lower (I know, sounds paradoxical) and the ramp rate in %fan/degree lower.
This means:
- CPU runs less hot, at some performance hit. You'd have to be able to live with that. Maybe this works without reducing power draw (I'm neurotic about components getting too hot, so I happily accept it)
- fan starts spinning sooner, but slowly, before things get too hot
- fan doesn't abruptly start making noise upon heat increase
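The effect of those three knobs is easier to see as a curve. A toy sketch (the default numbers here are made up; real BIOS fan-control pages expose roughly equivalent settings):

```python
def fan_duty(temp_c: float, onset_c: float = 45.0,
             ramp_pct_per_deg: float = 2.0,
             base_pct: float = 20.0) -> float:
    """Map CPU temperature to fan duty cycle (%). Below the onset
    temperature the fan idles at base speed; above it, speed climbs
    linearly at ramp_pct_per_deg and saturates at 100%."""
    if temp_c <= onset_c:
        return base_pct
    return min(100.0, base_pct + ramp_pct_per_deg * (temp_c - onset_c))
```

With a lower onset and a gentle ramp, the fan is already spinning at a moderate speed by the time the CPU gets warm, instead of jumping straight from silent to loud.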
If you're in the mood, setting up a custom-built PC and throwing it in the closet labeled 'server' is a far better option, as that gives you access to total customization and max performance. SSH into it via VSCode and you're flying into the future.
I love the Intel NUCs - I have a couple. But the graphics driver + Windows 10 combination has been horrible with high-res monitors. On sleep, all the windows get resized and moved over to the corner.
Win10 "feature". I have two DP monitors, if I turn one off the desktop reorganises itself as though it's a completely dynamic configuration.
Why turn off? Because sleep is unstable and blue screens about 1 time in 10 with some power watchdog timeout nonsense.
In the background I run Slackware on a dual-HDMI NUC7, same screens (VESA mounted on the back of one) - no problems there.
Intel NUCs are quite overpriced, and many have the loud fan issue other users have mentioned (and the OP alluded to).
That post says they went with a NUC over other options - integrated SFF PCs from other vendors or build-your-own - because it "looked nice" and was available.
It cost him 1,016 USD to get an i7 gen-10 NUC, memory, and an SSD. Quite pricey IMHO.
So I'd say this doesn't make much of a case for NUCs (pun not intended) in particular; just for using SFF PCs.
I have a NUC that I run Proxmox on, with several machines spun up inside of it, connected to a Synology NAS.
I think I spent 1500 bucks on the system total, and for an equivalently priced AWS system would have been hundreds of dollars a month, so I feel like it's paying for itself.
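The back-of-the-envelope math is simple enough to sketch (the $300/month figure below is purely illustrative, standing in for "hundreds of dollars a month", and it ignores power, depreciation, and your time):

```python
def breakeven_months(hardware_cost: float, cloud_monthly: float) -> float:
    """Months until owned hardware is cheaper than renting
    comparable cloud capacity."""
    return hardware_cost / cloud_monthly

# A ~$1500 box vs a hypothetical $300/month cloud bill.
print(breakeven_months(1500, 300))  # → 5.0
```

Even with generous assumptions for the cloud side, a box like this pays for itself well within a year.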
The only downside is that the local internet provider only offers 5 Mbps upload, so I'm not going to be serving any high-traffic websites, but for everything else it's great.
I'm a fan of these NUCs too, and I use one with Kodi for my media setup. The only thing I'd mention that I didn't see in the article is that not all NUCs support Linux out of the box. In fact, I had to update the BIOS due to a bug where Linux would periodically freeze. Afterwards, it works great though.
I use this setup for work but "use laptop as a thin client" is where it all falls apart. I have never got remote code editing to work reliably. VSCode's SSH plugin works great until it doesn't, and you lose work.
I just remote into it and edit all files locally, and deal with the latency and screen painting.
As another commenter said, you have to build this - but there is a huge number of options in terms of cases and available chipsets. But what do you need the GPU for? If it's not mid-to-high-end gaming (and with a 1050 I guess not), then you're better off with an embedded GPU anyway.
Look into the latest APUs' built-in graphics. They are actually very good - totally different from the old Intel integrated graphics, which were garbage. On AMD, integrated graphics is actually useful these days.
Why stop there? I run a Hades Canyon NUC that costs about as much as a laptop and is strapped to the back of one of my monitors. It's by far the most stable development machine I've ever used and comes with more resources than average machines of the same cost.
Thanks to the author who took the time to write up his/her experience with this way of working. My MacBook Pro 2015 struggles to keep up with Docker, PyCharm, Zoom, Outlook, and Slack during the workday.
Maybe it is time to burn a weekend trying out a setup like this.
This is exactly how I found myself building a homelab! I first upgraded the SSD in my MBP-2015 and that gave me a bit of relief, but I soon started running into RAM limits and swapping.
So I bought a beefier NUC (Ghost Canyon) and run Proxmox on it. I set up an NFS server in an LXC container, and then a typical VM for development[1]. In all it took me a couple of weekends to really get things dialed in nicely.
Proxmox is great; I really love that I can provision and throw away instances quickly! It makes for less variance when trying to establish what suddenly made thing X work.
[1]: I need kernel modules and the guest to have access to USB for my dev
The Compute Element is basically the main reason the KFC 'console' is even a thing, and it keeps flying under the radar; even this blog post is about old-ish NUCs, as I understand it.
I've got a bundle of NUCs. I love and hate them. I have mostly replaced them with Raspberry Pis, which are cheaper and have a number of other advantages.
The problems I've had are: over time, the fan decouples from the processor and the machine runs super hot; Ubuntu always enables wifi power saving, which causes huge SSH latency problems; the SD card won't boot; and the cheaper ones are slow - to get good performance you have to spend at least $500.
One interesting note: although they come with 19V power supply, they work fine on 12V.
I don't have any SD card problems with Pis any more. That is, I haven't seen any SD corruption in over a year across ~5 Pis, and other things about the Pis have failed more frequently.