> The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data
How does this affect luminance perception for deuteranopes? (Since their color blindness is caused by a deficiency of the cones that detect green wavelengths)
Protanopia and protanomaly shift luminance perception away from the longest wavelengths of visible light, which causes highly-saturated red colours to appear dark or black. Deuteranopia and deuteranomaly don't have this effect. [1]
Blue cones make little or no contribution to luminance. Red cones are sensitive across the full spectrum of visible light, but green cones have no sensitivity to the longest wavelengths [2]. Since protans don't have the "hardware" to sense long wavelengths, it's inevitable that they'd have unusual luminance perception.
I'm not sure why deutans have such a normal luminous efficiency curve (and I can't find anything in a quick literature search), but it must involve the blue cones, because there's no way to produce that curve from the red-cone response alone.
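As a toy numeric sketch of that last point (Gaussian cone sensitivities with made-up peaks near 560/530 nm and an arbitrary 0.7/0.3 L/M weighting; illustrative only, not real cone fundamentals):

```python
import math

def cone(wavelength_nm, peak_nm, width_nm=60.0):
    """Toy Gaussian cone sensitivity; real cone fundamentals are asymmetric."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def luminance(wavelength_nm, l_w=0.7, m_w=0.3):
    # Trichromat luminous efficiency is roughly a weighted L+M sum;
    # S (blue) cones contribute approximately zero, per the comment above.
    return l_w * cone(wavelength_nm, 560) + m_w * cone(wavelength_nm, 530)

# A protan (no L cones) loses almost all response at the long-wavelength end:
normal_700 = luminance(700)                      # weighted L+M sum
protan_700 = luminance(700, l_w=0.0, m_w=1.0)    # M cones only
print(normal_700, protan_700)
```

Even in this crude model, the M-only response at 700 nm comes out roughly an order of magnitude below the normal L+M sum, which is the "saturated reds look dark" effect described above.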
The cones are the colour-sensitive portion of the retina, but make up only a small percentage of all the light-detecting cells. The rods (more or less the brightness-detecting cells) would still function in a deuteranopic person, so their luminance perception would basically be unaffected.
Also, there's something to be said for the fact that the eye is a squishy analog device: even if the medium-wavelength cones are deficient, the long-wavelength (red-ish) cones overlap in spectral sensitivity with the medium cones, so…
The rods are only active in low-light conditions; they're fully active under the moon and stars, or partially active under a dim street light. Under normal lighting conditions, every rod is fully saturated, so they make no contribution to vision. (Some recent papers have pushed back against this orthodox model of rods and cones, but it's good enough for practical use.)
This assumption that rods are "the luminance cells" is an easy mistake to make. It's particularly annoying that the rods have a sensitivity peak between the blue and green cones [1], so it feels like they should contribute to colour perception, but they just don't.
It's not that their M-cones (middle, i.e. green) don't work at all; their M-cone responsivity curve is just shifted to be less distinguishable from their L-cone curve, so they effectively have double (or more) the "red sensors".
Does this mean that we'll start to see SATA replaced with faster interfaces in the future? Something like U.2/U.3 that's currently available to the enterprise?
The first NVMe over PCIe consumer drive was launched a decade ago.
It's hard to even find new PC builds using SATA drives.
SATA was phased out many years ago. The primary market for SATA SSDs is upgrading old systems or maybe the absolute lowest cost system integrators at this point, but it's a dwindling market.
We used to have motherboards with six or twelve SATA ports. And SATA HDDs have way more capacity than the paltry (yet insanely expensive) options available with NVMe.
We used to want to connect SSDs, hard drives and optical drives, all to SATA ports. Now, mainstream PCs only need one type of internal drive. Hard drives and optical drives are solidly out of the mainstream and have been for quite a while, so it's natural that motherboards don't need as many ports.
It's admittedly been harder than it used to be... I've been less inclined to buy CDs over just using streaming audio; since I pay for YouTube to go ad-free, I use the music streaming as kind of a bonus.
On the Blu-ray front, I've tended to buy Blu-ray where available, but have bought DVD sets as well. There's also the high seas, so to speak, for content that is not available for purchase/rent. I'd actually pay for good AI upscaler software for DVD content if it worked under Linux (natively or via WINE). I left Windows outside of work a few years ago and I'm not going back... I'm perfectly happy to pay for good, useful software, even if I'm more inclined to look for open-source solutions first.
This article is talking about SATA SSDs, not HDDs. While the NVMe spec does allow for NVMe HDDs, it seems silly to waste even one PCIe lane on a HDD. SATA HDDs continue to make sense.
And I'm saying that assuming M.2 slots are sufficient to replace SATA is folly, because that only covers SSDs.
And SATA SSDs do make sense: they are significantly more cost effective than NVMe and trivial to expand. Compare the simplicity, ease, and cost of building an array/pool of many disks comprised of either 2.5" SATA SSDs or M.2 NVMe, and get back to me when you have a solution that can scale to 8, 14, or 60 disks as easily and cheaply as the SATA option can. There are many cases where the performance of SSDs going over AHCI (or SAS) is plenty and you don't need to pay the cost of going to full-on PCIe lanes per disk.
> And SATA SSDs do make sense, they are significantly more cost effective than NVMe
That doesn't seem to be what the vendors think, and they're probably in a better position to know what's selling well and how much it costs to build.
We're probably reaching the point where the up-front costs of qualifying new NAND with old SATA SSD controllers and updating the firmware to properly manage the new NAND is a cost that cannot be recouped by a year or two of sales of an updated SATA SSD.
SATA SSDs are a technological dead end that's no longer economically important for consumer storage or large scale datacenter deployments. The one remaining niche you've pointed to (low-performance storage servers) is not a large enough market to sustain anything like the product ecosystem that existed a decade ago for SATA SSDs.
Is it not fair to say 4x 4 TB SSDs is an example of at least a prosumer use case (the barrier there is more like ~10 drives before needing workstation/server gear)? Joe Schmoe is in the better half of Steam gamers if he's rocking a single 2 TB SSD as his primary drive.
On top of what the others have said, any faster interface you replace SATA with will have the same problem set because it's rooted in the total bandwidth to the CPU, not the form factor of the slot.
E.g. going to the suggested U.2 still leaves you looking for PCIe lanes to be available for it.
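Back-of-envelope bandwidth numbers make the lane-budget point concrete (approximate usable GB/s per direction after link encoding overhead; round figures, not spec-exact):

```python
# Approximate usable bandwidth after encoding overhead (8b/10b for SATA,
# 128b/130b for PCIe 3.0+), in GB/s per direction.
SATA3_GBPS = 0.6                                  # 6 Gb/s * 8/10
PCIE_PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

# One Gen4 x4 NVMe drive consumes as much CPU-side bandwidth
# as roughly a dozen SATA-III SSDs running flat out.
nvme_gen4_x4 = 4 * PCIE_PER_LANE_GBPS[4]
print(nvme_gen4_x4 / SATA3_GBPS)
```

Which is why swapping the connector (U.2, M.2, whatever) changes nothing: the scarce resource is upstream lanes, not slots.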
My desktop motherboard has 4... not sure how many you need, even if 8 TB drives are pretty pricey. Though actual PCIe lanes in consumer CPUs are limited. If you bump up to Threadripper, you can use PCIe-to-M.2 adapters to add lots of drives.
The MSI motherboard I use has 3, and with the PCIe expansion card installed, I have 7 M.2s. There are some expansion cards with 8 M.2 slots.
You can also get SATA-to-M.2 devices, or my fav is USB-C enclosures that hold 2 M.2s.
Getting great speeds from that little device.
It's more likely that third party integrators will look after the demand for SSD SAS/SATA devices, and the demand won't go away because SAS multiplexers are cheap and NVMe/PCIe is point to point and expensive to make switching hardware for.
Likely we'd need a different protocol to make scaling up the number of high-speed SSDs in a single box work well.
SATA just needs to be retired. It's already been replaced; we don't need Yet Another Storage Interface. Consumer I/O chipsets are already implemented such that they take 4 (or generally, a few) upstream lanes of $CurrentGenPCIe to the CPU and bifurcate/multiplex them out (providing USB, SATA, NVMe, etc.). So we should just remove the SATA cost/manufacturing overhead entirely and focus on keeping the cost of that PCIe switching/chipset down for consumers (and stop double-stacking chipsets, AMD; motherboards are pricey enough). Or even just integrate better bifurcation support on the CPUs themselves, as some already support it (typically by converting x16 on the "top"/"first" PCIe slot to x4/x4/x4/x4).
Going forward, SAS should just replace SATA where NVMe PCIe is for some reason a problem (eg price), even on the consumer side, as it would still support existing legacy SATA devices.
Storage-related interfaces (I'm aware there's some overlap here, but the point is, there are already plenty of options and lots of nuances to deal with; let's not add to them without good reason):
- NVMe PCIe
- M.2 and all of its keys/lengths/clearances
- U.2 (SFF-8639) and U.3 (SFF-TA-1001)
- EDSFF (which is a very large family of things)
- FibreChannel
- SAS and all of its permutations
- Oculink
- MCIO
- Let's not forget USB4/Thunderbolt supporting Tunnelling of PCIe
I think it's becoming reasonable to think consumer storage could be a limited number of soldered NVMe and NVMe-over-M.2 slots, complemented by contemporary USB for more expansion. That USB expansion might be some kind of JBOD chassis, whether that is a pile of SATA or additional M.2 drives.
The main problem is having proper translation of device management features, e.g. SMART diagnostics or similar getting back to the host. But from a performance perspective, it seems reasonable to switch to USB once you are multiplexing drives over the same, limited IO channels from the CPU to expand capacity rather than bandwidth.
Once you get out of this smallest consumer expansion scenario, I think NAS takes over as the most sensible architecture for small office/home office settings.
Other SAN variants really only make sense in datacenter architectures where you are trying to optimize for very well-defined server/storage traffic patterns.
Is there any drawback to going towards USB for multiplexed storage inside a desktop PC or NAS chassis too? It feels like the days of RAID cards are over, given the desire for host-managed, software-defined storage abstractions.
I wouldn't trust any USB-attached storage to be reliable enough for anything more than periodic incremental backups and verification scrubs. USB devices disappear from the bus too often for me to want to rely on them for online storage.
OK, I see that is a potential downside. I can actually remember way back when we used to see sporadic disconnects and bus resets for IDE drives in Linux and it would recover and keep going.
I wonder what it would take to get the same behavior out of USB as for other "internal" interconnects, i.e. say this is attached storage and do retry/reconnect instead of deciding any ephemeral disconnect is a "removal event"...?
FWIW, I've actually got a 1 TB Samsung "pro" NVMe/M.2 drive in an external case, currently attached to a spare Ryzen-based Thinkpad via USB-C. I'm using it as an alternate boot drive to store and play Linux Steam games. It performs quite well. I'd say it's qualitatively like the OEM internal NVMe drive when doing disk-intensive things, but maybe that is bottlenecked by the Linux LUKS full-disk encryption?
Also, this is essentially a docked desktop setup. There's nothing jostling the USB cable to the SSD.
USB, even 3.2, doesn't support DMA bus mastering, and thus is bad for anything requiring performance.
USB4 is just passing PCIe traffic and should be fine, but at that point you are paying >$150 per USB4 hub (because mobos have two ports at most) and >$50 per M.2 converter.
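Using the comment's own price assumptions, the per-drive cost of that route works out roughly like this (ports per hub is my guess, not a quoted figure):

```python
hub_cost = 150        # per USB4 hub, assumption from the comment above
adapter_cost = 50     # per M.2-to-USB4 enclosure, assumption from the comment above
drives_per_hub = 3    # downstream ports per hub; a guess for illustration

# Amortize the hub over its drives, then add an enclosure per drive.
per_drive = adapter_cost + hub_cost / drives_per_hub
print(per_drive)
```

So on these numbers you're paying on the order of $100 of plumbing per drive before buying any actual flash, which is the economic argument against USB4 as a SATA replacement.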
As @wtallis already said, a lot of external USB stuff is just unreliable.
Right now I am overlooking my display and seeing 4 different USB-A hubs and 3 different enclosures that I am not sure what to do with (likely can't even sell them, they'd go for like 10-20 EUR and deliveries go for 5 EUR so why bother; I'll likely just dump them at one point). _All_ of them were marketed as 24/7, not needing cooling etc. _All_ of them could not last two hours of constant hammering and it was not even a load at 100% of the bus; more like 60-70%. All began disappearing and reappearing every few minutes (I am presuming after overheating subsided).
Additionally, for my future workstation at least I want everything inside. If I get an [e]ATX motherboard and the PC case for it then it would feel like a half-solution if I then have to stack a few drives or NAS-like enclosures at the side. And yeah I don't have a huge villa. Desk space can become a problem and I don't have cabinets or closets / storerooms either.
SATA SSDs fill a very valid niche to this day: quieter and less power-hungry and smaller NAS-like machines. Sure, not mainstream, I get how giants like Samsung think, but to claim they are no longer desirable tech like many in this thread do is a bit misinformed.
I recognize the value in some kind of internal expansion once you are talking about an ATX or even uATX board and a desktop chassis. I just wonder if the USB protocol can be hardened for this using some appropriate internal cabling. Is it an intrinsic problem with the controllers and protocol, or more related to the cheap external parts aimed at consumers?
Once you get to uATX and larger, this could potentially be via a PCIe adapter card too, right? For an SSD scenario, I think some multiplexer card full of NVMe M.2 slots makes more sense than trying to stick to an HDD array physical form factor. I think this would effectively be a PCIe switch?
I've used LSI MegaRAID cards in the past to add a bunch of ports to a PC. I combined this with a 5-in-3 disk subsystem in a desktop PC. This is where the old 3x 5.25" drive bay space could be occupied by one subsystem with 5x 3.5" HDD hot-swap trays. I even found out how to re-flash such a card to convert it from RAID to a basic SATA/SAS expander for JBOD service, since I wanted to use OS-based software RAID concepts instead.
> I just wonder if the USB protocol can be hardened for this using some appropriate internal cabling
Honestly no idea. Should be doable but with personal computing being attacked every year, I would not hold my breath.
> Once you get to uATX and larger, this could potentially be via a PCIe adapter card too, right?
Sure, but then you have to budget your PCIe lanes. And once you get to a certain scale (a very small one in fact) then you have to consider getting a Threadripper board + CPU, and that increases the expense anywhere from 3x to 8x.
I thought about it lately and honestly it's either a Threadripper workstation with all the huge expenses that entails, or I'd probably just settle for an ITX form factor, cram it with 2-3 huge NVMe SSDs (8TB each), have a really good GPU and quiet cooling... and just expand horizontally if I ever need anything else (and make VERY sure it has at least two USB 4 / Thunderbolt ports that don't gimp the bandwidth to your SSDs or GPU so the expansion would be at 100% capacity).
Meaning that going for a classic PC does not make sense if you want an internally expandable workstation. What's the point of a consumer board + a Ryzen 9950X and a big normal PC case if I can't put more than two old-school HDDs in there? Just to have better airflow? Meh. I can put 2-3 Noctua coolers in an ITX case and it might even be quieter.
In the report they tested samples of the part and found that they actually had glass transition temperatures of 52.8 °C and 54.0 °C... so it sounds like the owner fell victim to false advertising.
ddg (python lib) is free and I'd say good enough for most tasks. (I think the endpoint is unofficial, but from what I've heard it's fine for typical usage.)
There's also google, which gives you 100 requests a day or something.
Here's the search.py I use
    import os
    import requests  # the original imported a local "req" helper; requests assumed here

    # https://programmablesearchengine.google.com/controlpanel/create
    GOOGLE_SEARCH_API_KEY = os.getenv('GOOGLE_SEARCH_API_KEY')
    GOOGLE_SEARCH_API_ID = os.getenv('GOOGLE_SEARCH_API_ID')

    url = "https://customsearch.googleapis.com/customsearch/v1"

    def search(query):
        params = {
            "q": query,
            "cx": GOOGLE_SEARCH_API_ID,
            "key": GOOGLE_SEARCH_API_KEY,
        }
        results = requests.get(url, params=params).json()
        return results["items"]

    if __name__ == "__main__":
        while True:
            query = input('query: ')
            print(search(query))
Just set up searxng yesterday, and an MCP server for it in LM Studio, to be able to search the net for answers to simple queries. A small IBM Granite model worked surprisingly well, while oss20b seemed to be looping searches.
It's only criminal if they aren't provided with the education/information they need to live healthy lives (which is possible with the right diet/supplements).
Dark skinned people do not produce enough vitamin D in northern latitudes because of melanin. If you’re black and in Minnesota you probably need supplementation.
Minnesota? Minnesota isn't particularly dark. Minneapolis is apparently on the same latitude as Venice, Italy, and I don't think of Venice as particularly dark or gloomy (to be fair, they probably have better weather).
But yeah. Low vitamin D levels are common even in lily-white people in Northern Europe, and at least here in Norway everyone with dark skin knows that they need vitamin D supplements. Traditionally, the public health recommendation (for everyone) was to take cod liver oil regularly in every month with an R in it.
I’m painfully aware of that being dark skinned myself. That doesn’t mean that Minnesota is inhospitable though (or that it would be criminal to send me there). It just means that they’d need to know that they need vitamin D supplements and perhaps regular blood screens. Idk if that happens though.
There are towns in Canada that have heated hallways that go between buildings so you can’t get completely snowed in during the winter. Maybe they should build those. Or the underground walkways they have in a couple of the cities.
How did you arrive at the decision of not putting the GPU machines in the colo? Were the power costs going to be too high? Or do you just expect to need more physical access to the GPU machines vs the storage ones?
When I was working at sfcompute prior to this, we saw multiple datacenters literally catch on fire because the industry was not experienced with the power density of H100s. Our training chips just aren't a standard package in the way JBODs are.
My info may be dated, but power density has gone up a ton over time. I'd expect a lot of datacenters to have plenty of space, but not much power. You can only retrofit so much additional power distribution and cooling into a building designed for much less power density.
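Rough numbers to illustrate the gap (assumptions: ~700 W per H100 SXM module, a few kW of per-server overhead for CPUs/NICs/fans, and older facilities provisioned around ~10 kW per rack):

```python
gpu_tdp_kw = 0.7          # ~700 W per H100 SXM, approximate
server_overhead_kw = 4.4  # CPUs, NICs, fans, PSU losses; rough assumption

# An 8-GPU training server lands around 10 kW all-in.
server_kw = 8 * gpu_tdp_kw + server_overhead_kw

# Just four such servers per rack is already ~40 kW...
rack_kw = 4 * server_kw

# ...versus a legacy facility designed for ~10 kW racks.
legacy_rack_kw = 10
print(rack_kw / legacy_rack_kw)
```

So a GPU rack can want several times the power (and matching cooling) that the building was designed to deliver, which is exactly the "plenty of space, not much power" problem.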