Hacker News | CountVonGuetzli's comments

I didn't know OnShape had such a feature. Will check it out!

What you describe is one of the main reasons why I use Rhino3D. It can be scripted via the Grasshopper plugin, which integrates really nicely with Rhino and its primitives. Sadly, Rhino isn't open source and is quite pricey.

- https://www.rhino3d.com/
- https://www.grasshopper3d.com/


The fun thing is that Onshape itself has a very thin kernel. Most of what you see as built-in features are actually FeatureScript based. Onshape provides the source code for their built-in feature set as a reference: https://cad.onshape.com/documents/12312312345abcabcabcdeff/w... You do need an account login (free) to view it.

"Kernel" here is ambiguous. I get what you mean, but Parasolid is usually the thing described as the CAD kernel.

You are right, but I also kind of did mean it that way. I believe that Parasolid is at the heart of Onshape, the true kernel. Then on top of that is a compatibility layer describing the set of low-level operations available to FeatureScript. I'm sure that not everything in Parasolid is available to FeatureScript, and perhaps there are some things added that are not in Parasolid. FeatureScript also contains the selector/query logic for programmatically picking geometry. Whether that comes from Parasolid I am not sure. I haven't worked with FeatureScript for a number of years now, but when I did I was amazed. I managed to make an operation that takes any solid from the UI and generates customized interlocking ribbing. The idea was hollow surfboard design. It worked and I left it at that. Never built the surfboard!

However, the downside with FeatureScript, and I think a big mistake on their part, was using a custom language rather than Python or JavaScript. FeatureScript is almost JavaScript, but with some syntax changes and magic DSLs. You are also forced to use the built-in editor, which is awful, and if you have burned Vim keybindings into your nerve endings, going back to non-modal editing is painful.

Also, discovery of FeatureScript modules in the community has terrible UX. It's super weird that they have such a great system, yet finding useful extensions is a chore.


Wat, how have I never heard of this! Very cool. Do you have any insights you could share on your own setup, what worked well and what didn't? Are you just storing information in plaintext, or do you use some visualization libraries to make consuming the information a bit easier as well? Very curious about your setup.


I recently used the AI feature in n8n to write a code node to parse some data, which worked really well. Feels more like LLMs are enhancing low-code solutions.

Also, I see great value in not having to take care of the runtime itself. Sure, with Claude Code I can write a Python script that does what I want much more quickly and effectively, but there is also a bunch of work to get it to run, restart, log, alert, auth…


Arial is a licensed font, distributed by Monotype.



... is an Apache-licensed, metrically compatible alternative, for everyone else who doesn't already know what Arimo is.


You mean rack mounts for humans?


For us, introducing a simple device and location validation system (track which users log in with which devices and from where), combined with breached-password detection from HIBP, both of which can trigger an email validation code flow, practically solved our credential stuffing issues immediately.

For the user it's kind of a soft MFA via email, where they don't have to enable it but also don't always get the challenge.

Astonishingly, we had barely any complaints about the system via customer care and also didn't notice a drop in (valid) logins or conversion rates.
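To make the flow above concrete, here is a minimal sketch of the decision logic, assuming a device fingerprint and geolocated country per login; all names and structure are illustrative, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    user_id: str
    device_fingerprint: str
    country: str
    password_breached: bool  # e.g. flagged via an HIBP lookup at login time

# Known (device, country) pairs seen on past successful logins per user.
known_devices: dict[str, set[tuple[str, str]]] = {
    "alice": {("device-abc", "DE")},
}

def needs_email_challenge(ctx: LoginContext) -> bool:
    """Trigger the email validation code flow only on anomalies."""
    if ctx.password_breached:
        return True
    seen = known_devices.get(ctx.user_id, set())
    return (ctx.device_fingerprint, ctx.country) not in seen

# A returning device/location passes silently; a new one gets challenged.
print(needs_email_challenge(LoginContext("alice", "device-abc", "DE", False)))  # False
print(needs_email_challenge(LoginContext("alice", "device-xyz", "RU", False)))  # True
```

The point of the design is the last line: most legitimate logins never see the challenge, which is presumably why complaints stayed low.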


To me, that seems like a pretty reasonable approach... adding a forced password change at the end would probably be a good final addition.

I tend to generate passphrases for sites now; my only complaint is that a password field should accept at least 100 characters. Assuming it's salted and hashed anyway, it's almost irresponsible to limit it to under 20 characters. I'd rather see a minimum of 15 characters and a suggestion to use a "phrase or short sentence" in the hint/tip.

I wrote an auth system and integrated the zxcvbn strength check and HIBP as default-enabled options. The password entry allowed up to 1 KB of input, mostly as a practical limit. I also tend to prefer having auth separated from the apps, so that if auth fails (via DDoS, etc.), already authenticated users aren't interrupted.
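For anyone unfamiliar with how the HIBP check works without shipping the password anywhere: the Pwned Passwords range API takes only the first 5 hex characters of the password's SHA-1 and returns candidate suffixes to match locally. A sketch of the client-side pieces (the endpoint format is real; the surrounding structure is illustrative):

```python
import hashlib

def hibp_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the SHA-1 of a password as the HIBP range API expects:
    send only the first 5 hex chars, match the remainder locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """Parse the 'SUFFIX:COUNT' lines returned for a given prefix."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

prefix, suffix = hibp_prefix_suffix("password")
# Only `prefix` is sent, e.g. GET https://api.pwnedpasswords.com/range/5BAA6
print(prefix)  # 5BAA6
```

This k-anonymity scheme is what makes enabling the check by default defensible: the server never learns enough to recover the password.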


> a password field should accept at least 100 characters. Assuming it's salted+hashed anyway

There was recently a bug in a bcrypt implementation where characters after the first 64 were silently ignored.
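A quick illustration of why silent truncation is dangerous, and the common pre-hashing mitigation (the 64-character cutoff is taken from the comment above; bcrypt itself truncates at 72 bytes):

```python
import hashlib

TRUNCATION_POINT = 64  # the cutoff described above

# Two passwords that agree in their first 64 characters are
# indistinguishable to a hasher that silently truncates there.
a = "x" * TRUNCATION_POINT + "correct-horse"
b = "x" * TRUNCATION_POINT + "battery-staple"

same_truncated = a.encode()[:TRUNCATION_POINT] == b.encode()[:TRUNCATION_POINT]

# Pre-hashing to a fixed-length digest before the slow hash preserves
# every input character, no matter how long the password is.
same_prehashed = hashlib.sha256(a.encode()).digest() == hashlib.sha256(b.encode()).digest()

print(same_truncated)  # True: the truncating hasher can't tell them apart
print(same_prehashed)  # False: the pre-hashed inputs differ
```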

Anyway, while it is easy to require a long password, it is almost impossible to detect password reuse. The only way to solve that issue is to not let users choose passwords: if they want to change the password, generate a new one for them. And that isn't happening unless sites are forced to do it by government.


As long as I can use a password manager for passwords... unfortunately, I have to login to the OS to get to the password manager itself.

I think there are plenty of other solutions, including 2FA and push notifications, and likely more valuable than any of the previously mentioned bits would be ensuring that SSO works across an organization.

In general, simply requiring a minimum length of, say, 15 characters and suggesting a phrase or sentence is enough. I've switched Bitwarden to the word-generation option with capitals and numbers, which usually works, except when there's an arbitrarily small maximum length on the input field.

I switched because trying to type 20 random characters, including special characters, in under 20 seconds (a remote terminal limit on a VM I'd misconfigured and had no other way into) was pretty much impossible, and I had to run the reimage from scratch.


It would be really cool if it didn't just show the ping, but also how much worse it is compared to the theoretical optimum (the speed of light in a fiber optic medium, which I believe is about 30% slower than c).

I raise this because I've been in multiple system architecture meetings where people were complaining about latency between data centers, only to later realize that it was pretty close to what is theoretically possible in the first place.


I'm under the impression that within the hyperscalers (and probably the big colo/hosting firms, too), this is known. It's important to them and to customers, especially when a customer is trying to architect an HA or DR system and needs to ensure they don't inadvertently choose a region (or even a zone that isn't physically in the same place as other zones in the same region) that has "artificially" high latency from the primary zone (which can exist for all kinds of legitimate reasons).

This is not an uncommon scenario. My current employer specializes in SAP migrations to cloud and this is now a conversation we have with both AWS & GCP networking specialists when pricing & scoping projects... after having made incorrect assumptions and being bitten by unacceptable latency in the past.


Doesn't look like this is a ping[0]! Which is good. Rather it is a socket stream connecting over tcp/443. Ping (ICMP) would be a poor metric.

[0] https://github.com/mda590/cloudping.co/blob/8918ee8d7e632765...


ping is synonymous with echo-request, which is largely transport agnostic.

but you're right


Why 443? Are you assuming SSL here? Serious question, I'm not sure. But if so, wouldn't it be hard to disregard the weight of SSL in the metric?


The code closes the connection immediately after opening a plain TCP socket, so no SSL work is done. Presumably 443 is just a convenient port to use.
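That measurement style can be sketched in a few lines: time the bare TCP handshake and close right away, so no TLS cost is included (the function name and AWS hostname below are illustrative, not from the linked code).

```python
import socket
import time

def tcp_connect_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Measure only the TCP handshake time to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # close immediately; no application data, no TLS
    return (time.perf_counter() - start) * 1000.0

# e.g. tcp_connect_ms("dynamodb.us-east-1.amazonaws.com")
```

Since the socket is closed before any bytes are exchanged, it doesn't matter that the far end expects HTTPS; 443 is just a port that is reliably open.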


tcp/443 is likely an open port on the target service (DynamoDB, based on the domain name). TLS is not involved.

ICMP ECHO would be a bad choice as it is deprioritized by routers[0].

[0] https://archive.nanog.org/sites/default/files/traceroute-201...


The script connects to the well-known 'dynamodb.' + region_name + '.amazonaws.com' server, which expects HTTPS.


You would have to map out the cables to do that.

Light in fiber optic cable travels at roughly 70% of the speed of light, ~210,000 km/s. Earth's circumference is ~40,000 km, so a direct route from one side of the Earth to the other (~20,000 km) would take roughly 100 milliseconds one way, 200 ms round trip.
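The arithmetic, using the same rounded numbers:

```python
C_FIBER_KM_PER_S = 210_000       # ~70% of c in glass
ANTIPODAL_DISTANCE_KM = 20_000   # half of Earth's ~40,000 km circumference

one_way_ms = ANTIPODAL_DISTANCE_KM / C_FIBER_KM_PER_S * 1000
print(round(one_way_ms))  # ~95 ms one way, i.e. roughly 100 ms / 200 ms RTT
```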


It’s pretty trivial to do this; any big fiber company will provide you with Google Earth KMZ files (protected by NDA) when you're considering a purchase. This is absolutely necessary when designing a redundant network or if you want lower latency.


Since light travels at 100% the speed of light in a vacuum (by definition), I have wondered if latency over far distances could be improved by sending the data through a constellation of satellites in low earth orbit instead. Though I suspect the set of tradeoffs here (much lower throughput, much higher cost, more jitter in the latency due to satellites constantly moving around relative to the terrestrial surface) probably wouldn't make this worth it for a slight decrease in latency for any use case.


Hollow core fiber (HCF) is designed to substantially reduce the latency of normal fiber while maintaining equivalent bandwidth. It's been deployed quite a bit for low latency trading applications within a metro area, but might find more uses in reducing long-haul interconnect latency.


Absolutely! The distance to LEO satellites (like SpaceX's or Kuiper's) is low enough that you would beat the latency of fiber paths once the destination is far enough away.


In the past we just had line of sight microwave links all over the US instead.

I think it's just too damn expensive for your average webapp to cut out ten milliseconds from backend latency.


Yes. There are companies that sell microwave links over radio relay towers to various high frequency traders.


I am pretty sure this was one of the advertised strengths of Starlink. Technically the journey is a bit longer, but because you can rely on the full speed of light, you still come out ahead.


Cable mapping would be nice, but 100 ms is a long enough time that even a straight-line comparison is worthwhile.


Clicking around that map, I don't see any examples where the latency is a long way out of line with the distance.

Obviously it's theoretically possible to do ~40% better by using hollow fibers and as-the-crow-flies fiber routing, but few are willing to pay for that.


The 'practical' way to beat fiber optics is to use either

(i) a series of overground direct microwave connections (often used by trading firms)

(ii) a series of laser links between low altitude satellites. This would be faster in principle for long distances, and presumably Starlink will eventually offer this service to people that are very latency sensitive


Low-bandwidth/low-latency people tend to also demand high reliability and consistency. A low-orbit satellite network might be fast but, because sats move too quickly, cannot be consistent in that speed. Sats also won't ever connect data centers, other than perhaps for administrative stuff. The bandwidth/reliability/growth potential just isn't there compared to bundles of traditional fiber.


> Low-bandwidth/low-latency people tend to also demand high reliability and consistency.

For trading applications, people will absolutely pay for a service that is hard down 75% of the time and has 50% packet loss the rest, but saves a millisecond over the fastest reliable line. Because otherwise someone else will be faster than you when the service is working.

They can get reliability and consistency with a redundant slower line.


Can you provide a source for this statement? The redundancy needed to transmit at desirable reliability with 50% packet loss would, I imagine, very quickly eat into any millisecond gains, even with theoretically optimal coding.

Someone more familiar with Shannon than I could probably quickly back-of-the-napkin this.


Financial companies have taken over and upgraded/invested in microwave links because they can be comparatively economical for getting "as the crow flies" distances between sites:

https://www.latimes.com/business/la-fi-high-speed-trading-20...

https://arstechnica.com/information-technology/2016/11/priva...

https://en.wikipedia.org/wiki/TD-2#Reemergence

I'm not sure about the high packet loss statement, but it wouldn't surprise me if it's true, provided the latency is low enough to take advantage of arbitrage opportunities often enough to justify the cost.


Traders wouldn't use redundancy etc. Whenever a packet with info arrives, they trade on that info (e.g. "$MSFT stock is about to go down, so buy before it drops!"). If there is packet loss, then some info is lost, and therefore some profitable trading opportunities are missed. But that's okay.

There are thousands of such opportunities each second. They can come from consumer 'order flow', i.e. the information that someone would like to buy a stock tells you the price will slightly rise, so you buy ahead of them and sell after them in some remote location.


There is also a market for stocks that trade on different exchanges, resulting in fleeting differences in price between exchanges. Those who learn of price moves first can take advantage of such differences. In such cases, all you need to transmit is the current stock price. The local machine can then decide to buy or sell.


There's definitely a few billion a year in revenue for Starlink if they sell very low latency, medium bandwidth connections between Asia, the US, Europe and Australia to trading firms. Even if the reliability is much worse than fiber.


Starlink latencies sadly aren't competitive due to the routing paths it uses. And there are currently no competitors to Starlink.


The routing paths traveling via ground stations, you mean? My understanding is that they were experimenting with improvements to this, they just haven't deployed anything yet.


A radio will beat Starlink on ping times. Even a simple ham setup bouncing a signal off the ionosphere can win out over an orbiting satellite, at least for the very small amounts of data needed for a trade order. The difficulty in such schemes is reliability, which can be hit-or-miss depending on a hundred factors.


No, even with proposed inter-satellite routing paths, they are too slow. The trading industry has very much done the math on this.

The comparison is against radio and hollow-core fiber, not conventional fiber.


Laser links between satellites have been active since late 2022, or was there some additional improvement you're referring to?


I haven't kept track of that, but there is no other improvement. Even with the straightest possible laser links in space, they are too slow.


> sats move to quickly, cannot be consistent

Satellites in geostationary orbit are a (very common) thing.


Geostationary is so much farther out than LEO, though, so latency is worse.


AU <-> South Africa & South America is way less than distance.


Author here - Interesting. Someone on X also gave this idea to me. Any good resources for how to accurately compute this?


The theoretical best latency would be something like great_circle_distance_between_regions / speed_of_light_in_fiber, both of which are pretty easy to find. The first you can compute from the coordinates of each region pair, and the second is a constant you can look up.
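A minimal sketch of that computation, using the haversine formula for the great-circle distance; the speed constant and the example coordinates (roughly us-east-1 vs eu-west-1) are assumed round numbers, not authoritative values.

```python
from math import radians, sin, cos, asin, sqrt

C_FIBER_KM_PER_S = 200_000  # ~2/3 of c, a common rule of thumb for glass fiber
EARTH_RADIUS_KM = 6371

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on Earth's surface."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

def best_case_rtt_ms(lat1, lon1, lat2, lon2):
    """Theoretical round-trip floor over a straight fiber path."""
    return 2 * great_circle_km(lat1, lon1, lat2, lon2) / C_FIBER_KM_PER_S * 1000

# Approximate coordinates: N. Virginia vs Ireland.
print(round(best_case_rtt_ms(38.9, -77.0, 53.3, -6.3)))
```

Dividing the measured RTT by this floor would give the "how much worse than physics" ratio the parent comment asked for.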


That's what we did as well, via Wolfram Alpha. I.e., we were too lazy to look everything up ourselves and just asked it straight up how long a roundtrip would be between two destinations via fiber. We checked one result and it was spot on. This was six years ago, though.


IIRC about 125 miles per ms


I recommend this book as well (absolute beginner here). Learned to see the world a bit differently because of it.


Also, if for example the SaaS you’re running sends a lot of system emails that really shouldn’t end up in spam filters, you can’t afford to let things like marketing campaigns negatively influence your domain’s spam score.

Easier and safer to have separate domains.


After doing the first-time CTO thing three years ago in an established company with over 100 engineers, I think these two are the minimum required reading:

An Elegant Puzzle: Systems of Engineering Management (https://lethain.com/elegant-puzzle)

and

The Art of Leadership: Small Things, Done Well (https://www.amazon.com/Art-Leadership-Small-Things-Done/dp/1...)

There are a lot more that were helpful to me, but those two encompass most of the important concepts and skills already in a usefully synthesized way, at least for me.

