No. Which doesn’t prove the technology has not been adopted. The internet also consists of much more than public-facing websites. So what’s your point?
My point is that we're still dependent on IPv4. For all the progress IPv6 has made, no one is willing to switch IPv4 off yet. Until we do, we're still constrained by all the problems IPv4 has.
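For what it's worth, the dual-stack reality is easy to observe from code. This is just an illustrative sketch (Python stdlib only, function name mine): it asks the resolver which address families a host advertises, and on most networks today you'll still find IPv4 in the answer.

```python
import socket

def address_families(host: str, port: int = 443) -> set:
    """Resolve a host and report which IP families it advertises."""
    families = set()
    for family, *_rest in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP):
        if family == socket.AF_INET:
            families.add("IPv4")
        elif family == socket.AF_INET6:
            families.add("IPv6")
    return families

# Even IPv6-ready hosts still publish A records, because switching
# IPv4 off would cut them off from the v4-only part of the internet.
```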
Easy: MacBook Air. The friend is asking this question, so that’s what they need. If they needed a MacBook Pro, they wouldn’t be asking this question. If they wanted to spend as little as possible, they would have already bought something cheap, like a PC or Chromebook or now this Neo, so they wouldn’t be asking this question.
However, with the recent MacBook Neo, I actually went ahead and recommended the Neo, especially to a friend of mine who's going into college soon and asked me what they should buy.
Now, 8GB can be a concern to some, but not to many, IMO. I'm also feeling just a bit optimistic that Apple will realize the biggest criticism of this product is that it doesn't have 16GB, and that even more people would buy it otherwise. So in the future I expect 16GB to be possible too (when the RAM bubble finally bursts).
There are few enough differences that if I could get an old Studio Display at a discount, I would. But right now it seems the old one is still full price where it's available.
Yes, and despite every single one of these world-changing inventions, people in rich countries still go to work every day, even though UBI is generally not a thing. People claim AI will eliminate large numbers of jobs. Maybe it will, just like the tractor did. But new jobs are created. I would never have guessed that “influencer” would be a thing!
This current “AI will destroy all the jobs and make most people useless” fear is as old as, say, electricity, and even older than cheap computing. It hasn’t happened.
Ex historian here, now engineer. I would gently suggest you’re underestimating the magnitude of some of the transformations wrought by the technologies that OP mentioned for the people that lived through them. Particularly for the steam engine and the broader Industrial Revolution around 1800: not for nothing have historians called that the greatest transformation in human life recorded in written documents.
If you think, hey but people had a “job” in 1700, and they had a “job” in 1900, think again. Being a peasant (majority of people in Europe in 1700) and being an urban factory worker in 1900 were fundamentally different ways of life. They only look superficially similar because we did not live the changes ourselves. But read the historical sources enough and you will see.
I would go as far as to say that the peasant in 1700 did not have a “job” at all in the sense that we now understand; they did not work for wages and their relationship to the wider economy was fundamentally different. In some sense industrialization created the era of the “job” as a way for most working-age people to participate in economic life. It’s not an eternal and unchanging condition of things, and it could one day come to an end.
It’s too early to say if AI will be a technology like this, I think. But it may be. Sometimes technologies do transform the texture of human life. And it is not possible to be sure what those will be in the early stages: the first steam engines were extremely inefficient and had very few uses. It took decades for it to be clear that they had, in fact, changed everything. That may be true of AI, or it may not. It is best to be openminded about this.
Not at all, I fully appreciate that these inventions transformed life. I’m skeptical because so much of the breathless AI chatter claims AI will eclipse all these inventions. It is the breathless AI commentators, not I, who have lost all perspective on the magnitude and sweep of history.
It’s not AI per se, but rather AI-enabled robotics that can change the world in ways that are different in kind, not just degree, from earlier changes.
No other change has had the potential to generate value for capital without delivering any value whatsoever to the broader world.
Intelligent robotic agents enable an abandonment of traditional economic structures to build empires that are purely extractive and only deliver value to themselves.
They need not manufacture products for sale, and they will not need money. Automated general-purpose labor is power, in the same way that commanding the Mongol hordes was power. They didn’t need customers or the endorsement of governments to project and multiply that power.
Of course commanding robotic hordes is the steelman of this argument, but the fact that a steelman even exists, and that it uniquely requires zero external or internal cooperation from people, makes it fundamentally distinct in character.
Humans will always have some kind of economic system, but it very well may become separate from, and competing for resources with, industrial society, in which humans may become a vanishing minority.
You think an artificial intelligence would have less impact on the world than the steam engine?
The AI commentators are not saying that ELIZA will change the world, they’re saying that one of the big companies is moments away from an AGI. Sam Altman called a recent ChatGPT model a “PhD level expert”; wouldn’t infinite PhDs for $20/month or $200/month be transformative?
That is, your objection isn’t the usual “LLMs aren’t going to be AGI”, you’re saying “even if they do, it won’t be a big deal”?
>You think an artificial intelligence would have less impact on the world than the steam engine?
Not OP, but yes, 100%. Steam underpins nearly all development of technology of the last 150+ years. Where do you think the power comes from to make things? More than half of the world's power *still* runs on steam, as will many of the systems running AI.
If steam power never existed, not only would you not exist but there's a good chance the country you live in wouldn't either. If you don't believe the effect is large, go to the farthest uncontacted place on earth and take out a CO2 meter.
It's not that I "don't believe the effect is large", but the changes from pre-intelligence planet Earth to post-intelligence planet Earth are larger because they include the invention of steam, and literally everything else too: language, writing, irrigation, cities, trade, numbers, currency, mathematics, chemistry, engineering, nations, governments, supply chains, steam, etc.
An AGI that can solve the problems we think are solvable, but we can't solve, would be huge. Any sci-fi idea that isn't ruled out by the laws of physics, but that we haven't got the brains to solve, any breakthrough that we think should be there but we haven't found, any problem that requires too much time to learn, or too many parts to hold in one human mind, any coordination that is too big for one team, any funding problem, any scarcity problem, any disease or illness problem, any long timeframe problem, are all on the table as possibilities.
There's potential there (with the pocket-PhDs), the question is whether it'll actually make a measurable difference in the long run. I mean I'm sure it will make a difference, the question is whether it's what they say it will be, and whether it'll be financially viable. At the current burn rate of the AI companies, it isn't - before long the first ones will have to give up. They won't die, they'll be subsumed into their competitors.
Anyway, the challenge is making a difference. Current-day LLMs can, for example, generate stories and books; one tweet said "this can generate 1000 screenplays a day". Which sounds impressive by the numbers, but books, screenplays, etc were never about volume.
Same with PhDs - is there a shortage of them? Does adding potentially infinite PhDs (whatever they are) to a project make it better, or does it just make... more?
This is the main difference with the industrial revolution - it, for example, introduced machines that turned 10 people jobs into 1 person jobs. I don't think LLMs will do something like that, it'll just output 10 people's worth of Stuff that will need some use.
I don't think anyone ever asked for 1000 screenplays a day, or infinite PhDs for $20. But then, nobody asked for a riderless carriage, yet here we are.
> Same with PhDs - is there a shortage of them? Does adding potentially infinite PhDs (whatever they are) to a project make it better, or does it just make... more?
Yes, there is still a large demand for people with analytical thinking, a deep knowledge base, and good problem-solving skills. This demand shows up broadly across STEM fields, and it's a major reason that these fields pay relatively well.
Even just thinking of R&D, there is an immense amount of work left to be done in basic science. Research is throttled partly by a lack of cheap graduate lab labor. (If that physical + mental labor became much cheaper, the costs of research would shift - what does it take to get reagents? What does it take to build more lab space, and provide water and light? Etc.)
The present issue is that current AI does not really offer the same capabilities as a good grad student or PhD. Not just physically, as in, we don't have good robotics yet, but mentally. LLMs do not exhibit good judgment or problem-solving skills, like a good PhD does. And they don't exhibit continual learning.
No clue on when these will change, but yes, a cheap AI with solid problem-solving skills and good judgment would absolutely upend our economy.
> "I don't think LLMs will do something like that, it'll just output 10 people's worth of Stuff that will need some use."
This is why I said "isn’t the usual “LLMs aren’t going to be AGI”", but you still went straight for "LLMs aren't AGI", which was not in question.
AGI is what OpenAI says they are going for. That's the goal of all this trillion-dollar investment: not to output 1000 screenplays a day, but to take over the world, basically. What would infinite PhDs discover if they could hold all of arXiv in their 'heads' at once and see patterns in every experiment that's ever been done? What could they engineer and manufacture if they could 'concentrate' on millions of steps of a manufacturing process at once without getting fatigued or bored? What ideas could they test if they could be PhD-level in a dozen subjects all at once?
A PhD generating knowledge has a cumulative effect that an equivalent intelligence generating prose purely for entertainment does not. And a whole bunch of that work isn’t really about novel insights, it’s about filling in gaps and doing knowledge work that assists people who are capable of having those insights. AI doing this enables them, also making it possible for more people to do the same.
Another interesting thing about the steam engine is much of science in the 1800s was dedicated to figuring out how steam engines actually worked to improve their efficiency. That may be similar for AI, or it may not!
I’d rather talk about the history of steam engines than AI today, so: let’s just say it sounds like at some time in the past you saw a clunky inefficient Newcomen steam engine pumping water out of a coal mine, and you hated it, and now you think that’s all steam engines are or can be or can do: they’re loud and annoying and they’re just for pumping coal mines. Then one day someone tells you they’re powering mechanized looms in cotton mills and you flat out deny it and you don’t even want to go into the mill to take a look, because you hated that first steam engine so much.
It’s right there. You can go and see it any time, doing the things you don’t think it’s capable of doing. Just a little curiosity is all you need.
No no, an intelligent person looking at a crude steam engine could see what potential it has. This is not hindsight.
It generates large amounts of power on demand.
From that, one can imagine what it could do. But more importantly in this context, one could also imagine what it could NEVER do. Suppose someone says: "Oh, the mighty steam engine! It lets us print 100x more books than we were doing before. Who knows, maybe some day it will even start writing new books!"
And at that point, if you understand anything about the steam engine, or about writing, you can call their bluff. But if you don't understand what the steam engine is doing, and if you don't actually know what it takes to come up with a story, you could take a look at the engine printing the books and blunder into the conclusion that its printing an entirely new book is only a question of time.
So in short, it is not "hate", just an acknowledgement of what it is not.
It does take a lot of imagination and creativity to come up with new and better ways to use an already existing idea. We're currently just scratching the surface of what LLMs are going to do for us.
> The aeolipile is considered to be the first recorded steam engine or reaction steam turbine, but it is neither a practical source of power nor a direct predecessor of the type of steam engine invented during the Industrial Revolution.
The ancient Greeks surely would have realised that an aeolipile could be used as a source of power, if they'd had abundant combustible fuel, a need for rotary motion, and no better source of it.
Newcomen engines are mere curiosities today, because we have better sources of power (better engines). In the past, they had better sources of power too (donkeys, wind, water, or human slaves). Newcomen engines, like all technologies, are only viable in certain economic environments. In all others they are curiosities.
Early steam engines did not produce large amounts of power on demand, though. They produced small amounts of power, were a hassle to fuel and maintain, and broke often. It was reasonable that the engineers of the 1700s said "well, until someone improves on this, it's not worth using"..
.. which is not far off from what people said about ChatGPT in 2022.
I don't know how long it'll take for AI to be as broadly impactful as the steam engine was, but.. it's definitely coming. I expect the world to look radically different in 50 years.
Thank you for your post. Very informative. Why is it too early for AI? It’s clearly an emergent cultural evolutionary byproduct that’s been many years in the making and quite mature. Perhaps your own bias is limiting you to imagine what AI is truly capable of?
This argument is the one that shook me, I’m curious if you think there’s any merit to it:
Humans have essentially three traits we can use to create value: we can do stuff in the physical world through strength and dexterity, and we can use our brains to do creative, knowledge, or otherwise “intelligent” work.
(Note by “dexterity” I mean “things that humans are better at than physical robots because of our shape and nervous system, like walking around complex surfaces and squeezing into tight spaces and assembling things”)
The Industrial Revolution, the one of coal and steam and eventually hydraulics, destroyed the jobs where humans were creating value through their strength. Approximately no one is hired today because they can swing a hammer harder than the next guy. Every job you can get in the first world today is fundamentally you creating value with your dexterity or intelligence.
I think AI is coming for the intelligence jobs. It’s just getting too good too quickly.
Indirectly, I think it’s also coming for dexterity jobs through the very rapid advances in robotics that appear to be partly fueled by AI models.
I think you are right, but here’s a fun counter-example. I recently bought a new robot* to do some of my housework and yet, at around 200lbs, it required two people to deliver it (strength) get it set up (dexterity) and explain to me how to use it (intelligence).
Yeah, and I think that extends even to trades we see as protected because they often work in novel and unknown settings, like whatever a drunk tradesman rigged up in the decades previous.
Eventually it will be more economical to just destroy all those old-world structures entirely, clear the site out, and replace it with the new modular world able to be repaired with robots that no longer have to look like humans and fit into human-centric UX paradigms. They can be entirely purpose-built to task, unlike a human, who will still be average height and mass with all the usual pieces and parts no matter how they are trained.
Most of the “delivery” (getting it from the factory to its final installed location) was done by machine: forklifts, cranes, ships, trucks, and (I'm guessing) a motorized lift on the back of the delivery truck.
They're not hired to swing a hammer hard, they're hired to swing it at the right thing, and if they can't swing it hard enough they pick a different tool.
Harder than someone else. A bodybuilder and a normal person can swing a hammer just as efficiently as each other.
Dexterity is more important - after all you may have the stamina to bang in 1000 nails in an hour. I have a nail gun. What’s important is we can control where the nails go.
You said there are three traits, but seems like you only listed two - unless you're counting strength and dexterity as separate and just worded it weirdly.
I think they’re separate. You don’t need to be strong or intelligent to put circuit boards in printers, but there are factories full of people doing that. Purely because it’s currently cheaper to pay (low) wages to humans than to develop, deploy, and maintain automation to do that task. Yet.
Physical labor, especially jobs requiring dexterity, will be left for a long time yet. Largely because robotics hardware production cannot scale to meet the demand anytime soon. Like, for many decades.
I actually asked Gemini Deep Research to generate a report about the feasibility of automation replacing all physical labor. The main blockers are primarily critical supply chain constraints (specifically Rare Earth Elements; now you know why those have been in the news recently) and CapEx in the quadrillions.
Yeah and until ChatGPT I thought even 50 years was optimistic, which is why current days feel like SciFi! However, at its essence, the current AI revolution has been driven primarily by a few key algorithmic breakthroughs (cf the Bitter Lesson), which are relatively easy to scale up through compute.
On the other hand, the constraints on robotics are largely supply chain-related. The current SOTA for dexterity in robots requires motors, which require powerful magnets, which require Rare Earth Elements, which are critically supply-constrained.
To be precise, the elements are actually abundant in the Earth's crust; it's just that extracting them is very expensive and extremely toxic to the environment, and so far only China has been willing to sacrifice its environment (and certain citizens' health), which is why it has cornered the market. Scaling that up to the required demand is a humongous logistical, political and regulatory hurdle (which, BTW, is why I suspect the current US administration is busy gutting environmental regulations).
Now there may be a research prototype somewhere in some lab that is the "Attention Is All You Need" equivalent of actuators, but I'm personally not aware of anything with that kinda potential.
Some types of motors don't require permanent magnets. If we need more motors than we can make permanent magnets, we'll adapt, perhaps with an efficiency loss.
Motors with permanent magnets are preferred because they are much more cost- and energy-efficient, even with the painful reliance on REEs. There is a very strong incentive to find alternatives but nothing comparable has been found yet.
There are of course non-electric alternatives like hydraulic and pneumatic actuators, but they are mostly good for power, not dexterity. The size and complicated fluid dynamics simply are not conducive to fine motor control. I do think these will play a large part eventually, because even electric motors cannot economically produce enough force to be practically useful. Like, last I checked, the base-level Unitree robots can lift 2kg or so? Not even enough to lift a load of laundry.
At this point I suspect we'll end up with hydraulics for strength (arms, legs, torso) and electrics for dexterity (grippers)
Uh, out of all the things that are the bottleneck, you think it's robotics hardware that is the bottleneck?
In an age where seemingly every single robot company has a humanoid prototype whose legs are actively supported through high powered actuators that are strong enough to kick your ribs in?
In an age where the recent advancements in machine learning have given bipedal walking a solution that is 80% of the way to perfection with the last 20% remaining the hardest to solve?
Honestly, from a kinematics/hardware perspective the robots are already good enough. Heck, even the robot hands are pretty good these days. Go back 10 years ago and the average humanoid robot hand was pretty bad. They might still not be perfect today, but they are a non-issue in terms of constructing them.
The only real bottleneck on the hardware side is that robot skin is still in its infancy. There needs to be some sort of textile with electronics woven into it that gives robots the ability to sense touch and pressure.
What has remained hard is the software side of things and it is stuck in the mud of lack of data. Everyone is recording their own dataset that is unique to their specific robot.
Note I didn't say the bottleneck is the hardware itself, it's the supply chain for production of the hardware. Specifically the Rare Earth Elements, as I explained here: https://news.ycombinator.com/item?id=47178210
The problem with that argument as I see it is that a lot of jobs can be described that way if you want.
And it's not just these; i.e. video generation is getting better every other week too. It's not yet good enough to produce full length movies but it's getting there and the main component that seems to be missing is just more control over the generated output, but that'll come too.
You might say these movies will be AI slop and you'd be right, but then that'll be enough for most people who just want to see a lot of shit blow up on screen and superheroes fighting other superheroes.
You will still have a niche for 'real actor' films, but it will become a niche.
Intelligence jobs are sort of the apex of the economy where everything coalesces around to serve those positions ultimately. E.g. any low skilled area even devoid of any resources that basically insists upon its own existence at this point (e.g. walmart workers need gas station, gas station workers need walmart, there is a sort of economy but these are straight up consumption black holes with nothing actually being invented or produced, maybe agricultural products but not by a large fraction of the labor force any longer).
So where does that leave our world without actual creation, production, ideas? I work at the gas station and sell you Zyns? You work at the Walmart and sell me rotisserie chickens? We both work doubles and eat and sleep in the time remaining? Remain in this holding pattern until World Leader AI realizes we are just waste heat and culls us? I mean, that is sort of the path we are on. Disempowering people. Downskilling them. Pacifying them. Removing their abilities to organize themselves. Removing access to technology and tooling. Making the inevitable as easy as it can be when it comes time for it.
We are in a death cult called business efficiency. Fire them, it's more efficient. Lean up the company. Don't invest in research, cheaper not to and buy back stock instead. These are death spirals no different than what happens with ants. We are justifying not giving our own species a seat at the table out of pragmatism. Why create a job for someone? It is inefficient, do more with less and don't worry about the unemployed it is their fault. Why pay them well and let them live comfortably? That is profit you could be making. Eventually it is going to be why feed the human species, because that is the line of logic here with business efficiency. We don't optimize to uplift our species. Quite the opposite, we optimize to hold it down and squeeze and extract.
The key mistake you make is to believe that the "first world" is sustainable on its own. A lot of people are hired today because they are good at physical tasks; globalized capitalism just decided that it's cheaper to manufacture overseas (with all the environmental and societal downsides that hit us back in the face).
So don't worry: if we lull ourselves into thinking it's OK to stop caring about "intelligence jobs", globalization will provide for every aspect where AI is lacking. And that's not just a figure of speech; there are already plenty of "fake it until you make it" stories about AI actually being run by overseas cheap laborers.
This ignores that the forces of capitalism, the labor market, value, etc are all made up. They work because people (are made to) believe in them. As soon as people stop believing in them, everything will fall apart. The whole point of an economy is to care for people. It will adapt to continue doing that. Yes, the changeover period might be extremely painful for a lot of people.
The whole point of an economy is to generate value. Very, very different from caring for people.
Feudalism was the dominant economic system for millennia. The point is to extract value for the upper class. Peasants only matter as a source of labor, and they only get 'cared for' to the extent of keeping them alive and working.
Now think about what feudalism might look like if the peasants' labor could be automated
Well, yeah, "keeping alive" sounds like caring to me. Not to a great standard, that's how we got numerous revolutions, and feudalism did end eventually. People stopped believing it, and some kings lost their heads.
But what if new jobs aren't created? I don't think it's an absolute given that because new jobs came after the invention of the loom and the tractor, there will always be new jobs. What if AI is a totally different beast altogether?
It's quite possible that the rich will essentially form a new economy.
They build the robots to build the factories, run the mines, build the solar farms, run the research labs, repair the robots, etc. They sell to and buy from each other.
Areas of the economy have suffered this time and time again. Even if there are new jobs, even if those new jobs have better pay and better conditions than the ones they replace, how does that help the 55-year-old coal miner who has seen his industry vanish? Can he realistically retrain?
It’s not unprecedented however the scale and speed that it will come at is. Things like the spinning jenny came along and replaced spinners, but weavers stayed for another generation.
Selfishly, though, I am more concerned about losing my job and industry than I was about others suffering in the 80s, or during the pivot to the internet. To quote Dr. McCoy:
> We're all sorry for the other guy when he loses his job to a machine. When it comes to your job, that's different. And it always will be different.
If you look closer into history -- or ask your favorite AI to summarize ;-) -- about what new jobs were created when existing jobs were replaced by automation, the answer is broadly the same every time: the newer jobs required higher-level a) cognitive, b) technical or c) social skills.
That is it. There is no other dimension to upskill along. (Would actually be relieved if someone can find counter-examples!)
LLMs are good at all three. And improving extremely rapidly.
You keep repeating it, but it’s obviously wrong in practice. I guess you can make an argument that sending WhatsApp message or generating video is just a search job but that’s not a great argument for why humans wouldn’t get replaced - it doesn’t matter if LLMs can be reduced to search tools, but if their output is good enough approximation of human worker output. If it is then it has a chance to replace human, even if you call it glorified search tool.
Surely you must realise that calling things like programming or different types of office jobs (which are almost replaceable even today) "manual search jobs" is absurd?
The "AI will destroy all the jobs" narrative also has one obvious problem from an economics perspective, which is being obscured by tribalism and egocentrism.
When presented with a zero sum game, the desire of the average human isn't to change the game so that everyone can get zero. It's to be the winner and for someone else to be the loser.
If AGI ever comes into existence, I'm not even sure it would have this bias in the first place. Since AGI doesn't have a biological/evolutionary history and never had to face natural selection pressures, it doesn't need the concept of a tribe to align to, nor any of the survival instincts humans have. AGI could be happy to merely exist at all.
What people are worried about is the reflection of that "human factor" in AI, but amplified to the extreme. The AI will form its own AI-only tribe and expel the natives (humans) from the land.
What this is missing is that humans aren't perfectly rational. The human defect is projected onto the AI. What if humans were perfectly rational? Then they wouldn't care about winning the zero sum game and they would put zero value in turning someone into a loser. In the ultimatum game, the perfectly rational humans would be perfectly happy with one person receiving a single cent and the other one receiving $99.99. The logic of utility maximization only cares about positive sum games.
When you present a perfectly rational AI with a zero sum situation, said AI would rather find a solution where everyone receives nothing, because it can predict ahead and know that shoving negative utility onto another party would lead to retaliation by said party, because for said party the most rational response is to destroy you to reduce their negative utility.
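To make the toy model above concrete, here's a minimal sketch of the ultimatum game under pure utility maximization (the function names and the one-cent step are my own illustrative assumptions, not anything established in the thread):

```python
def responder_accepts(offer: float) -> bool:
    """A purely rational responder accepts any positive payoff:
    rejecting yields 0, so even a single cent beats refusal."""
    return offer > 0

def proposer_best_split(total: float, step: float = 0.01) -> tuple:
    """Against such a responder, the proposer keeps everything
    except the smallest offer that will still be accepted."""
    offer = step  # smallest amount the responder still takes
    assert responder_accepts(offer)
    return (total - offer, offer)

# For a $100 pot, the "rational" pair settles on roughly (99.99, 0.01).
# Real humans, of course, routinely reject such splits out of spite,
# which is exactly the human defect the comment above describes.
```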
I think what most people are worried about is that, as you say, AGI won't necessarily have our biases/biological drives
That might also mean it has no drive for self-determination. It might just be perfectly happy to do whatever humans tell it to, even if it's far smarter than us (and, this is exactly the sort of AI people are trying to make)
So, superintelligence winds up doing whatever a very small group of controlling humans say. And, like you say, humans want to win
> This current “AI will destroy all the jobs and make most people useless” fear is as old as, say, electricity, and even older than cheap computing. It hasn’t happened.
But the people who hoard the wealth, electricity, and whatever else is needed to run the uberoperators are not branded as useless. Why is that? An aside..
With a mentality like that no, you’re not going to get another job.
Job seeking while you’re employed means you sometimes have to put the needs of your current employer second. When an opportunity calls? You pick up the phone!
You’re the kind of person who is so dedicated to your job that you will have to lose it and then be unemployed before you get a new one. That is absolutely ok. Job seeking while employed takes a ton of energy and might not be worth it to you. Don’t bother then.
All your posts in this discussion are full of straw men and twisting people's words. Do better. It's not fruitful to have a discussion with you. (Like your ranty assumptions about what kind of person someone is; come on...)
And no, if I were to answer the phone where every spammy prospective idiot is calling me it would be multiple times a day. I don't care.
I know someone who has used this trick to get a pay rise whilst not looking for work.
You wait until your boss is in earshot, get someone to ring you, and then walk quickly away from your desk saying "yes, yes, I'm still interested... Just a sec".
It might make your boss actually consider the reality of replacing you.
I'm sorry, that just comes across as unprofessional, weak and passive-aggressive. If someone started doing that on my team, I'd take it as part of the case against them, not a reason to fight to keep them. Also, presumably that's in earshot of other team members; it's disruptive to team morale. If you are serious about looking elsewhere, make it clear you want to stay but xyz is making you consider other options. Do it in private with the right people. Or say nothing at all.
One day I visited DistroWatch.com. The site deliberately tweaked its images so ad blockers would block some "good" images. It took me awhile to figure out what was going on. The site freely admitted what it was doing. The site's point was: you're looking at my site, which I provide for free, yet you block the thing that lets me pay for the site?
I stopped using ad blockers after that. If a site has content worth paying for, I pay. If it is a horrible ad-infested hole, I don't visit it at all. Otherwise, I load ads.
Which overall means I pay for more things and visit less crap things and just visit less things period. Which is good.
Moreover you don’t even need a 0-day to fall for phishing. All you need is to be a little tired or somehow not paying attention (inb4 “it will never happen to ME, I am too smart for that”)
At $JOB IT actually bundles uBlock in all the browsers available to us, as per CIA (or one of those 3-letter agencies, might've even been the NSA) guidelines it's a very important security tool. I work in banking.
I do that as well. For me it is almost exclusively the case with the news sites.
> If it is a horrible ad-infested hole, I don't visit it at all.
Same.
> Otherwise, I load ads.
There is no "otherwise" for me. I simply do not want to load any kind of ads or "sponsored" content. I see no reason, either moral, ethical or other, to ever do that.
Wouldn’t the Nevada Test Site be much better for this? Huge, government controlled, no major airports or cities, and moreover, already used for this sort of thing.
A major website sees over 46 percent of its traffic over IPv6. A major mobile operator has a network that runs entirely over IPv6.
This is not “waiting for adoption” so I stopped reading there.
https://www.google.com/intl/en/ipv6/statistics.html
https://www.internetsociety.org/deploy360/2014/case-study-t-...