I've found that what is more important than making a good/bad decision, is sticking with the decision I make, with the possibility of mitigation measures, if it turns out to have been a bad decision. Sometimes, I can pre-plan mitigation measures, or I research them quickly, when it becomes clear that I made a mistake.
Jamming on the parking brake, when going 90, down the highway, is a bad idea.
I sometimes miss a turn, or don't plan well enough to be in the correct lane, when I arrive at the intersection.
What I do, is go "D'oh!", continue to the next intersection, then either make a U-turn (if legal), or turn onto a side street, with the intention of recovering my intended direction.
What I often see people in the same situation do, is jam on the accelerator, swerve across six lanes of traffic, and screech into their turn.
That may get them where they are going, but it also has a very real chance of earning them a ticket or a stay in hospital.
My way takes a bit longer, but no ticket, no accident.
> I believe that ineffectual as it was, the reputational attack on me would be effective today against the right person. Another generation or two down the line, it will be a serious threat against our social order.
Damn straight.
Remember that every time we query an LLM, we're giving it ammo.
It won't take long for LLMs to have very intimate dossiers on every user, and I'm wondering what kinds of firewalls will be in place to keep one agent from accessing dossiers held by other agents.
Kompromat people must be having wet dreams over this.
Someone would have noticed if all the phones on their network started streaming audio whenever a conversation happened.
It would be really expensive to send, transcribe, and then analyze the audio of every single human on earth. Even if you were able to do it insanely cheap ($0.02/hr), every device is gonna be sending hours of talking per day. Then you have to somehow identify who is talking, because TV and strangers and everything else is getting sent, so you would need specific transcribers trained for each human that can identify not just that the word "coca-cola" was said, but that it was said by a specific person.
So yeah, if you managed to train specific transcribers that can identify their unique user's output, and then you were willing to spend the ~$0.10 per person to transcribe all the audio they produce for the day, you could potentially listen to and then run some kind of processing over what they say. I suppose it is possible but I don't think it would be worth it.
> Google agreed to pay $68m to settle a lawsuit claiming that its voice-activated assistant spied inappropriately on smartphone users, violating their privacy.
No corporate body ever admits wrongdoing, and that's part of the problem. Even when a company loses its appeals, it's virtually unheard of for them to apologize; usually you just get a mealy-mouthed 'we respect the court's decision although it did not go the way we hoped.' Accordingly, I don't give denials of wrongdoing any weight at all. I don't assume random accusations are true, but even when they are, corporations and their officers/spokespersons are incentivized to lie.
>I keep seeing folks float this as some admission of wrongdoing but it is not.
It absolutely is.
If they knew without a doubt their equipment (that they produce) doesn't eavesdrop, then why would they be concerned about "risk [...] and uncertainty of litigation"?
It is not. The belief that it does is just a comforting delusion people believe to avoid reality. Large companies often forgo fighting cases that will result in a Pyrrhic victory.
Also, people already believe Google (and every other company) eavesdrops on them; going to trial and winning the case would not change that.
The next sentence under the headline is "Tech company denied illegally recording and circulating private conversations to send phone users targeted ads".
> settling a lawsuit in this way is also a worthless indicator of wrongdoing
Only if you use the very narrow criterion that a verdict was reached. However, that's impractical, as 95% of civil cases resolve without a trial verdict.
Compare this to someone who got the case dismissed 6 years ago and didn't pay out tens of millions of real dollars to settle. It's not a verdict, but it's dishonest to say the plaintiff's claims of wrongdoing had zero merit, given the settlement and the case's survival.
> Someone would have noticed if all the phones on their network started streaming audio whenever a conversation happened.
You don't have to stream the audio. You can transcribe it locally. And it doesn't have to be 100% accurate. As for user identity, people have mentioned it on their phones, which almost always have a one-to-one relationship between user and phone, and their smart devices, which are designed to do this sort of distinguishing.
Transcribing locally isn't free though; it would result in a noticeable increase in battery usage. Inspecting the processes running on the phone would show something using considerable CPU. After transcribing, the data would still need to be sent somewhere, which could be seen by inspecting network traffic.
If this really is something that is happening, I am just very surprised that there is no hard evidence of it.
With their assumptions, you can log the entire globe for $1.6 billion/day (= $0.02/hr * 16 awake hours * 5 billion unique smartphone users). This is the upper end.
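Spelling out that arithmetic in a quick sketch (the per-hour rate, awake hours, and user count are the thread's assumptions, not real pricing):

```python
# Back-of-envelope cost of transcribing everyone's audio, using the
# figures assumed upthread (hypothetical rates, not real pricing).
COST_PER_HOUR = 0.02         # assumed $/hr of transcription
AWAKE_HOURS = 16             # hours of audio per person per day
USERS = 5_000_000_000        # rough smartphone user count

daily_cost = COST_PER_HOUR * AWAKE_HOURS * USERS
yearly_cost = daily_cost * 365
print(f"${daily_cost / 1e9:.1f}B per day")     # $1.6B per day
print(f"${yearly_cost / 1e12:.2f}T per year")  # $0.58T per year
```

Even before adding storage, speaker identification, or analysis, the raw transcription bill alone would run to hundreds of billions per year.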
I have a weird and unscientific test, and at the very least it is a great potential prank.
At one point I had the misfortune to be the target audience for a particular stomach-churning ear wax removal ad.
I felt that suffering shared is suffering halved, so I decided to test this in a park with 2 friends. They pulled out their phones (an Android and an iPhone) and I proceeded to talk about ear wax removal loudly over them.
Sure enough, a day later one of them calls me up, aghast, annoyed and repelled by the ad which came up.
This was years ago, and in the UK, so the ad may no longer play.
However, more recently I saw an ad for a reusable ear cleaner. (I have no idea why I am plagued by these ads. My ears are fortunately fine. That said, if life gives you lemons)
> At one point I had the misfortune to be the target audience for a particular stomach-churning ear wax removal ad.
So isn’t it possible that your friend had the same misfortune? I assume you were similar ages, same gender, same rough geolocation, likely similar interests. It wouldn’t be surprising that you’d both see the same targeted ad campaign.
Who says you need to transcribe everything you hear? You just need to monitor for certain high-value keywords. 'OK, Google' isn't the only thing a phone is capable of listening for.
You can always tell the facts because they come in the glossiest packaging. That more or less works today, and the packaging is only going to get glossier.
Which makes the odd HN AI booster excitement about LLMs as therapists simultaneously hilarious and disturbing. There are no controls for AI companies using divulged information. There's also no regulation around the custodial control of that information, either.
The big AI companies have not really demonstrated any interest in ethics or morality. Which means anything they can use against someone will eventually be used against that person.
> HN AI booster excitement about LLMs as therapists simultaneously hilarious and disturbing
> The big AI companies have not really demonstrated any interest in ethics or morality.
You're right, but it tracks that the boosters are on board. The previous generation of golden child tech giants weren't interested in ethics or morality either.
One might be misled by the fact that people at those companies did engage with questions of morality, but it was ragebait wedge issues, largely orthogonal to their employers' business. The executive suite couldn't have designed a better distraction to make them overlook the unscrupulous work they were getting paid to do.
> The previous generation of golden child tech giants weren't interested in ethics or morality either.
The CEOs of pets.com or Beanz weren't creating dystopian panopticons. So they may or may not have had moral or ethical failings, but they also weren't gleefully building a torment nexus. The blast radius of their failures was much more limited, and much less damaging to civilized society, than the eventual implosion of the AI bubble will be.
Interesting that when Grok was targeting and denuding women, engineers here said nothing, or were just chuckling about "how people don't understand the true purpose of AI"
And now that they themselves are targeted, suddenly they understand why it's a bad thing "to give LLMs ammo"...
Perhaps there is a lesson in empathy to learn? And to start to realize the real impact all this "tech" has on society?
People like Simon Willison, who seem to have a hard time realizing why most people despise AI, will perhaps start to understand that too with such scenarios, who knows.
It's the same as how HN mostly reacts with "don't censor AI!" when chatbots dare to add parental controls after they talk teenagers into suicide.
The community is often very selfish and opportunist. I learned that the role of engineers in society is to build tools for others to live their lives better; we provide the substrate on which culture and civilization take place. We should take more responsibility for it, take care of it better, and do far more soul-searching.
Talking to a chatbot yourself is much different from another person spinning up a (potentially malicious) AI agent and giving it permissions to make PRs and publish blogs. This tracks with the general ethos of self-responsibility that is semi-common on HN.
If the author had configured and launched the AI agent himself we would think it was a funny story of someone misusing a tool.
The author notes in the article that he wants to see the `soul.md` file, probably because if the agent was configured to publish malicious blog posts then he wouldn't really have an issue with the agent, but with the person who created it.
> suddenly they understand why it's a bad thing "to give LLMs ammo"
Be careful what you imply.
It's all bad, to me. I tend to hang with a lot of folks that have suffered quite a bit of harm, from many places. I'm keenly aware of the downsides, and it has been the case for far longer than AI was a broken rubber on the drug store shelf.
Software engineers (US based particularly) were more than happy about software eating the economy when it meant they'd make 10x the yearly salary of someone doing almost any other job; now that AI is eating software it's the end of the world.
Just saying, what you're describing is entirely unsurprising.
Oh yeah. I once bought a $10ish one on Amazon out of curiosity.
There's the yellow composite plug, a 12V input, and a small bit of wire to be cut to rotate the image 180 degrees, at the other end of a 30ft cable from the camera. The composite goes into the existing infotainment. There would be a wire from shifter to infotainment that switches the display to external composite video when the gear lever is in reverse. I think it even came with a miniature hole saw in the size of the camera module.
$10 and one afternoon later, I could have upgraded a dumb car to have one, complete with auto switch to backup on reverse. No software hacking needed. It's fundamentally an extremely simple thing.
Much as I despise them, I'm not so sure that would be the case. I seem to remember folks saying the same about the Taliban, and the cartels have a lot more money and high-tech kit, than the Taliban.
I don’t think the technology matters nearly as much as the asymmetry. Iraq had better technology than the Taliban and their military didn’t last a week.
True enough, but the cartels are also experts at running what is basically guerrilla warfare, against each other. Not sure if the Mexican Army has ever tried to take them on. A lot of cartel soldiers come from the army.
* A conventional military war, on a battlefield: Neither Saddam Hussein's military nor the cartels nor the Taliban would last long against the US.
* An unconventional insurgency: The Iraqis quickly turned to this approach and it worked very well for them, as it did for the Taliban. The Taliban won, and the Iraqi insurgency almost drove the US out of Iraq and was eventually co-opted.
The cartels of course would choose the latter. They, the Taliban, etc. are not suicidal.
The US decided to leave because staying was not politically popular, and left. They were not beaten by the Taliban, they were beaten by the political climate at home.
If someone is actively kicking your ass, then they decide that you aren't worth the effort to keep hurting and decide to walk away, that doesn't mean you "won" the fight even if you get what you want afterwards.
The Taliban control what they and the US and allies fought for. That's winning. Your personal requirement of how it must be won is not important - nobody cares how it was done and it doesn't change the outcome. The Taliban don't care and the US and its allies don't care.
It's also a perfectly common, expected way to win a war: First, wars always end with political solutions. The most well known principle of warfare is that it is 'politics conducted by other means' (i.e., by violence rather than by law or diplomacy). If there is no political solution, the war never ends. That's why the US didn't win the war in Afghanistan after decades - they couldn't create a stable political solution because they were unable to impose one on the Taliban, who in the end imposed one on the US and its allies.
Victory by outlasting enemy resources, including political will, is fundamental to warfare; wars end when resources to fight (for the political outcome) run out, but few end in total kinetic destruction of those resources - someone runs out of money or political will. It's also the explicit strategy of insurgencies. Enemies of the US know it very well and have used it for generations - that is how North Vietnam won, for example. When the Soviets invaded Afghanistan, the Afghans famously told them, 'you have the clocks (the technology), we have the time'.
Annoying your parents until they give you a cookie is still getting a cookie. Just because you didn't leverage overwhelming military firepower to get the cookie does not mean you aren't holding a cookie
I think the key difference between the Taliban and the cartels is that the Taliban were a bunch of ideologues who actually enjoyed being an insurgency and living under siege in caves, with making money from the drugs trade being a mere means to their real purpose of fighting infidels, whereas the cartel leadership sees wealth and power from controlling the drugs trade as an end, crushing local rivals as a means, and would really rather avoid the sort of conflict that's bad for their medium term business prospects.
I mean, some sort of cartel would bounce back after any "war on drugs" because supply and demand, but the people running them aren't hankering for martyrdom or glory over consolidating their territory and accumulating wealth.
The Taliban was repeatedly crushed. All of the leadership was killed many times over. The problem is the Taliban is an idea that transcends individual human members and it can always be reconstituted. It also benefited from being able to harbor supporters in Pakistan, which is a nuclear power the US was not willing to also invade.
There isn't a real analogy there because cartel leaders have no official state support anywhere, let alone in a bordering nuclear power, but even if they did, it hardly seems reassuring from their perspective to know the drug trade will outlive them after they all get killed. It's different when you're deeply religious and believe what you're doing is worth dying for and the larger arc of history is more important than your own life and wellbeing. I don't think drug lords think that way.
All this is true. Yet the cartels operate like militarized insurgents, adopting tactics similar to those seen in the fighting in Ukraine, so it's interesting, to say the least, that they might be utilizing drone technology for their purposes.
I didn’t mean to start this giant thread about Mexican Cartels but here we are. Most think it’s just an isolated problem. Others know it’s more widespread. I simply stated that these murderous thugs are out there in full force with technology and armored vehicles. If provoked, they would lash out. It’s ridiculous because of course going up against the US is a losing proposition but each “generation” of cartel leader thinks they can somehow manage it.
I'm sure that image nerds would poke holes in it, but it seems to work pretty much exactly the way it does IRL.
The noise at high ISO is where it can get specific. Some manufacturers make cameras that actually do really well, at high ISO, and high shutter speed. This seems to reproduce a consumer DSLR.
With the disclaimer that I am comparing to the memory of some entry-level cameras, I would still say that it's way too noisy.
Even on old, entry-level APS-C cameras, ISO1600 is normally very usable. What is rendered here at ISO1600 feels more like the "get the picture at any cost" levels of ISO, which on those limited cameras would be something like ISO6400+.
Heck, the original pictures (there is one for each aperture setting) are taken at ISO640 (Canon EOS 5D Mark II at 67mm)!
(Granted, many are too allergic to noise and end up missing a picture instead of just taking the noisy one which is a shame, but that's another story entirely.)
Noise depends a lot on the actual amount of light hitting the sensor per unit of time, which is not really a part of the simulation here. ISO 1600 has been quite usable in daylight for a very long time; at night it's a somewhat different story.
The amount and appearance of noise also heavily depends on whether you're looking at a RAW image before noise processing or a cooked JPEG. Noise reduction is really good these days but you might be surprised by what files from even a modern camera look like before any processing.
That said, I do think the simulation here exaggerates the effect of noise for clarity. (It also appears to be about six years old.)
The kind of noise also makes a huge difference. Chroma noise looks like ugly splotches of colour, whereas luma noise can add positively to the character of the image. Fortunately humans are less sensitive to chroma resolution so denoising can be done more aggressively in the ab channels of Lab space.
Yes, this simulation exaggerates a lot. Either that, or contains a tiny crop of a larger image.
Yeah, I don't think that it's easy to reproduce noise (if it was, noise reduction would be even better). Also, bokeh/depth of field. That's not so easy to reproduce (although AI may change that).
I think it is excellent as well—that it also demonstrates aperture and shutter priority is a bonus.
I do feel (image nerding now) that its shutter/ISO visual for showing the image over/under-exposed is not quite correct. It appears they show incorrect exposure by taking the "correct" image and blending (multiply) it with white, or with black on the other end of the exposure spectrum, to produce the resulting image.
I suppose I am expecting something more like "levels" that pushes all the pixels to white (or black) until they are forced to clip. (But maybe I am too trained in photo-editing tools and expect the film to behave in the same way.)
No, you're correct. I would have expected the highlights to blow out much sooner (for digital) and the shadows to block up much sooner (for analogue). The simulation doesn't portray this accurately, but it gives the general idea!
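A toy model of the difference on a 0-255 pixel scale (function names are made up for illustration, not the simulator's actual code): blending toward white never clips until the blend amount hits 1, whereas a levels-style multiply pushes highlights into clipping first.

```python
def blend_toward_white(pixel, amount):
    # What the simulator appears to do: linearly mix toward white,
    # so no pixel clips until amount reaches 1.0.
    return pixel + (255 - pixel) * amount

def push_exposure(pixel, stops):
    # Levels-style push: double the linear value per stop, then clip,
    # so bright pixels blow out well before amount-style washing occurs.
    return min(255, pixel * 2 ** stops)

# A midtone and a bright highlight, overexposed by two stops:
for p in (128, 200):
    print(p, blend_toward_white(p, 0.5), push_exposure(p, 2))
```

With the push, both 128 and 200 hit 255 at +2 stops (blown highlights); with the blend, they just drift toward white, which matches the washed-out look the simulation produces.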
Excellent presentation and explanation. I agree with ~90% of it except the small part at 4m54s where he tries to give an answer about the existence of noise. Yes, sensor readout noise and A/D quantization noise exist, but he forgot the big elephant in the room: photon shot noise ( https://en.wikipedia.org/wiki/Shot_noise ). Light is inherently quantum mechanical, and the lower the brightness of a scene, the more that the discrete nature of light shows up in captured images.
Lately I've been researching cameras for astronomy, especially for deep-sky objects (DSOs) like nebulae that require hours of exposure time. The marketing material for these cameras goes into a lot of detail: quantum efficiency (the percent chance that a photon converts into an electron), dark noise at different temperatures (fractions of electrons per second), readout noise (usually around 1 electron), and well depth (usually around 10k electrons). Compared to general photography, the astro community is much more motivated to explain and keep track of all the sources of noise. Random product example: https://www.zwoastro.com/product/asi585mc-mm-pro/
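Shot noise is Poisson-distributed, so its signal-to-noise ratio grows only with the square root of the photon count. A small stdlib-only simulation of that (illustrative numbers, not any particular sensor):

```python
import math
import random
import statistics

def poisson(lam):
    # Knuth's multiplication method; fine for the moderate means used here.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def shot_noise_snr(mean_photons, n=2000):
    # Simulate n pixels, each catching Poisson(mean_photons) photons.
    samples = [poisson(mean_photons) for _ in range(n)]
    return statistics.mean(samples) / statistics.stdev(samples)

random.seed(0)
for lam in (10, 100, 500):
    # Measured SNR should track sqrt(lambda): dim scenes are inherently noisy.
    print(lam, round(shot_noise_snr(lam), 1), round(math.sqrt(lam), 1))
```

This is why even a perfect sensor gets noisy in low light: with only tens of photons per pixel, the sqrt(N) fluctuation is a sizeable fraction of the signal.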
DPReview is good for that. They shoot a test image with every camera on the market, and you can compare specific ISO values on the same subject side by side.
Note that both very high or very low aperture settings also bring their own optical issues. At very low values (big hole) you’re getting hurt by different aberrations (essentially too many paths the same rays can take to the sensor) and at very high values you’re getting hurt by diffraction. At the low end, it’s good to go a little higher than the lens advertises, and at the high end anything over F13-F18 (depending on the gear) is usually quite bad.
To be a little more precise, f is not a camera-specific constant; it's the focal length of the lens. "f/2" is literally a formula that tells you the diameter of the entrance pupil: at a focal length of 50mm, an aperture value of f/2 means an entrance pupil diameter of 25mm.
But photographers generally just say "f2", meaning an aperture value of two set on the dial of the camera/lens. It's one stop faster (twice as much light) as f/2.8. It'll give you a relatively shallow depth of field, but not as shallow as e.g. f/1.4.
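A quick sketch of that arithmetic (function names are mine, just for illustration):

```python
import math

def entrance_pupil_mm(focal_length_mm, f_number):
    # "f/N" literally means focal length divided by N.
    return focal_length_mm / f_number

def stops_between(n_slow, n_fast):
    # Light gathered scales with pupil area, i.e. with 1/N^2;
    # one stop is a factor of two in light.
    return math.log2((n_slow / n_fast) ** 2)

print(entrance_pupil_mm(50, 2))         # 25.0 -- the 50mm-at-f/2 example above
print(round(stops_between(2.8, 2), 2))  # 0.97 -- roughly one stop faster
```

The standard full-stop sequence (1.4, 2, 2.8, 4, 5.6, ...) is just successive multiples of sqrt(2), so each step halves the pupil area.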
Camera ISO and noise can be a really complicated and even contentious topic. One complication is that some cameras are "ISO invariant", and on those cameras, afaik, it is beneficial to stick to the one or two native ISO values. There is also the whole discussion around ETTR, etc.
It needs to be updated to do its calculations in linear light, but it's probably useful for getting an intuitive sense of what the different levers of photography do to an image.
I suspect most places that experience regular heavy snow, deal well with it.
I have a friend that went to school in Buffalo, NY. That’s a city that experiences “lake effect” snow, during the winter.
He says all the sidewalks are basically “snow gorges,” but the roads clear quickly, and everyone knows how to dress for the cold.
He tells me a story about visiting northern Quebec, one summer, and seeing houses with a second front door, set on the second floor, and was told they were “snow doors,” for deep winter, so folks can get out, when the snow gets deep.
Oh, it is good. It has its drawbacks (like everything else) but it's quite the de facto standard now, and UPI Lite (a wallet not on individual apps but on the UPI/NPCI framework itself) has made it even better.
There's so much trust / dependency on NPCI at this point but I recently learned that it's not a public entity and thus excluded from the transparency acts such as RTI. I hope the EU does better!
I am not sure which country you are from, but is the term "accountability" even relevant in India anymore? It's been nonexistent for more than a decade.
I was just commenting on how good and widespread it is, and no, it is not the doing of the current Govt. It just gained traction during a massive f-up of the current Govt.