
My solution was an old laptop running Apollo, with Moonlight on my Linux PC - I use Office that way. It's not ideal, but it works fine for me.


There are tonnes of companies out there with smart remote hands in the major cities who can respond in under an hour to an outage at your chosen DC.

Refurb servers will still blast AWS on price/performance, and spares are easy to source.

I know HE.net does a rack for like $500/mo intro price and that comes with a 1G internet feed as well.
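
To put rough numbers on "blasting AWS": here's a napkin sketch of what a saturated 1G feed would cost as AWS internet egress. The ~$0.09/GB rate is my assumption of the typical published tier, so check current pricing before leaning on the exact figure:

    # Napkin maths: flat-rate 1G colo feed vs metered cloud egress.
    # The $0.09/GB egress rate is an assumption (typical published tier).
    seconds_per_month = 30 * 24 * 3600           # 2,592,000
    gb_per_month = 1 / 8 * seconds_per_month     # 1 Gbps ~= 0.125 GB/s
    aws_egress_usd = gb_per_month * 0.09
    print(f"~{gb_per_month / 1000:,.0f} TB/month; "
          f"egress ~${aws_egress_usd:,.0f}/month vs ~$500 for the whole rack")

Saturating the feed 24/7 is the extreme case, but even at 10% average utilisation the egress bill is still several times the rack price.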


New MikroTik gear is also a great option.


Have you tried the Reticulum network? https://reticulum.network

I have been meaning to try it out

https://unsigned.io/rnode/
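
If you want a feel for the stack before buying radios, the reference implementation is a Python library (RNS). A minimal sketch from memory of the docs - it brings up the stack and announces a destination; the app/aspect names are just placeholders:

    # Minimal Reticulum node: start the stack, create an identity,
    # and announce an inbound destination.
    import RNS

    reticulum = RNS.Reticulum()      # reads local config, starts interfaces
    identity = RNS.Identity()        # fresh cryptographic identity
    destination = RNS.Destination(
        identity,
        RNS.Destination.IN,          # accept inbound traffic
        RNS.Destination.SINGLE,      # encrypted single-identity destination
        "example_utilities",         # app name (placeholder)
        "minimalsample",             # aspect (placeholder)
    )
    destination.announce()           # make ourselves reachable on the network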


I can't say I have. Let me look into it. Thanks for the share.


Modern cable layers can carry thousands of kilometres of cable; they have massive tanks.


Current implementations break from simple vibrations, such as a bus driving down the road and shaking the ducts the fibre is in. Lots of work still required. Crazy expensive and crazy fragile.


Jumping on this bandwagon - these days I'm working in the submarine telco cable industry.

Considering a direct Singapore <> LA cable can run up to $1.4bn USD, I think the author needs a lot more research.

1. Route planning takes a long time - the ocean floor moves (see: fault lines, underwater volcanoes, pesky fishermen).

2. The ships do move, _a lot_, even with fancy station keeping and stabilisation.

3. Cables get broken - a lot. Even right now there are 10-15 faults globally on submarine cables. There are companies (see: Optic Marine) who operate fleets of vessels to lay and maintain cables. I'm sure the HVDC industry has the same.
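
For scale, a quick cost-per-km figure from that $1.4bn number. The ~14,000 km route length is my assumption (real routes run longer than the great-circle distance):

    # Back-of-the-envelope $/km for a trans-Pacific cable.
    # 14,000 km is an assumed route length, not a surveyed figure.
    total_cost_usd = 1.4e9
    route_km = 14_000
    print(f"~${total_cost_usd / route_km:,.0f} per km")   # ~$100,000/km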

Cool idea - I've been pondering it a lot myself. I figured a ground-return HVDC cable might be better for inter-country power grid links.

I know Sun Cable out of Australia want to build a subsea power cable to sell energy into ASEAN.
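
The appeal of ground return, roughly: the earth return path has very low effective resistance, so the loop only sees one long conductor instead of two, halving the resistive loss (and the cable cost) versus a metallic return. A toy calculation with entirely made-up but plausible parameters:

    # Toy resistive-loss comparison: ground return vs metallic return.
    # Every parameter is an illustrative assumption, not Sun Cable's design.
    power_w   = 1e9       # 1 GW transfer (assumed)
    voltage_v = 500e3     # 500 kV monopole (assumed)
    r_per_km  = 0.01      # ohm/km conductor resistance (assumed)
    length_km = 4_000     # ballpark Australia -> SE Asia run (assumed)

    current_a = power_w / voltage_v
    loss_ground   = current_a**2 * r_per_km * length_km       # one conductor
    loss_metallic = current_a**2 * 2 * r_per_km * length_km   # out and back
    print(f"ground return: {loss_ground/1e6:.0f} MW "
          f"({100*loss_ground/power_w:.0f}%), "
          f"metallic return: {loss_metallic/1e6:.0f} MW")

Even the ground-return case loses a big chunk at these numbers, which is why real links push the voltage up and the current down.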


The top-spec Ryzen G chip is no slouch.


Backbone operator who was affected by this. We had a large number of routers in production with this bug; we were aware and were upgrading as fast as we could, but with 99.999% uptime SLAs we only have so many minutes per router we can afford for downtime/outages. We had schedules in place (approx. 3 months of out-of-hours upgrades). One week's warning was a bullshit move. Dropping the BGP sessions on thousands of routers globally was stupidity.
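
For anyone who hasn't run the numbers on five nines, the annual downtime budget is tiny - this is plain arithmetic, nothing vendor-specific:

    # Downtime budget implied by an uptime SLA.
    minutes_per_year = 365 * 24 * 60            # 525,600
    for sla in (0.999, 0.9999, 0.99999):
        budget = minutes_per_year * (1 - sla)
        print(f"{sla * 100:.3f}% uptime -> {budget:,.1f} min/year")
    # 99.900% -> 525.6, 99.990% -> 52.6, 99.999% -> 5.3

Roughly five minutes per router per year; one slow reboot can eat the entire budget.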

Bear in mind we also couldn't just "apt-get upgrade" in place; most boxes required hard reboots to apply the patches.

The answer to your question is no. As a few others have said, don't violate Rule #1. We see bad actors very often; our job is to keep the bits flowing and the internet online.

Keeping the internet online is painful enough as it is without "researchers" dropping thousands of routers to "prove a point."


Verbatim, this is the same thing people said in the early '00s about people testing XSS et al. against poorly coded PHP/Perl sites.


Pretty sure all Xbox services use v6 where possible as well.


I am dual-stack at home. While my Xbox will happily display its IPv6 address, my traffic logs still show it heavily preferring IPv4 when actually doing anything.
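
If you want to poke at the resolver side, here's a quick sketch of what getaddrinfo hands back; the ordering follows RFC 6724 preference rules. The hostname is just an example, and none of this tells you what the console's own stack actually picks:

    # Dump the address families getaddrinfo returns for a host.
    # Hostname is an arbitrary example, not a known service endpoint.
    import socket

    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "www.xbox.com", 443, proto=socket.IPPROTO_TCP):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        print(label, sockaddr[0])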

