Hacker News | tkwa's comments

It seems fine to me. When there is evidence for a certain type of current or future harm they present it, and when there is not they express uncertainty.

Can AI enable phishing? "Research has found that between January to February 2023, there was a 135% increase in ‘novel social engineering attacks’ in a sample of email accounts (343*), which is thought to correspond to the widespread adoption of ChatGPT."

Can AIs make bioweapons? "General-purpose AI systems for biological uses do not present a clear current threat, and future threats are hard to assess and rule out."


This is like someone saying "I am much more worried about the implications of dumb humans using flintlock muskets in the near term, than I am about the theoretical threat of machine guns and nuclear weapons." Surely the potential for both misuse and mistakes goes up the more powerful the technology gets.


Rather loaded analogy. We're well aware of the practical threat nuclear weapons pose, you're assuming a lot to compare them with AGI. It's as valid to say it's like someone in the 1980s talking about how they're much more worried about the dangers of poorly operated and designed Soviet fission reactors than they are about the theoretical threat of fusion (sure to become economical in the next twenty years!)


That's fair, but to keep going with the analogy: we are currently the Native Americans in the 1500's, and the Conquistadors are coming ashore with their flintlocks (ML). Should we be more worried about them, or the future B-2 bombers, each armed with sixteen B83 nukes (AGI)?

I understand that the timeline may be exponentially more compressed in our modern case, but should we ignore the immediate problem?

In this analogy, the flintlocks could be actual ML-powered murder bots, or just ML-powered economic kill bots, both fully controlled by humans.

The flintlocks enable the already powerful to further consolidate their power, to the great detriment of the less powerful. No super AGI is necessary, it just takes a large handful of human Conquistador sociopaths with >1,000x "productivity" gains, to erase our culture.

I don't understand how we could ever get to the point of handling the future B-2 nuke problem, as a civilization, without first figuring out how to properly share the benefits of the flintlock.


> Vyxal aims to bridge the gap between simplicity and "golfability".

With code golfing languages, there are inherent tradeoffs between code size and usability/fun. IMO the most important features for minimizing length (assuming the only rule is the interpreter must be published before the challenge) are:

* (in Vyxal) Efficient syntax. Not sure what state of the art is anymore but stack-based seems reasonable.

* (in Vyxal) String compression

* (not in Vyxal) Efficient encoding; Huffman coding at a minimum but ideally arithmetic coding using sophisticated machine learning to predict the next command. It's super inefficient to have each command be 1 or 2 bytes regardless of frequency.

* (not in Vyxal) Huge numbers of builtins; Vyxal has "only" ~560. Ideally every past code golf question and every OEIS sequence would be its own builtin.
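To illustrate the encoding point above: a minimal Huffman-coding sketch, assigning shorter bitstrings to more frequent commands instead of a fixed byte each. The command names and frequencies here are made up for illustration, not taken from Vyxal.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a Huffman code (command -> bitstring) from a frequency map."""
    # Heap entries are (weight, tiebreaker, tree); a tree is either a
    # command string (leaf) or a (left, right) pair of subtrees.
    tie = count()
    heap = [(w, next(tie), cmd) for cmd, w in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tie), (t1, t2)))
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"  # single-symbol edge case
    walk(heap[0][2], "")
    return codes

# Hypothetical command frequencies from a corpus of golfed programs.
freqs = {"dup": 40, "add": 30, "map": 15, "rev": 10, "zip": 5}
codes = huffman_code(freqs)

program = ["dup", "add", "dup", "map", "add"]
bits = sum(len(codes[c]) for c in program)   # variable-length encoding
fixed = 8 * len(program)                     # one byte per command
```

With these frequencies, common commands like `dup` get a 1-bit code while rare ones get 4 bits, so the 5-command program costs 9 bits instead of 40. Arithmetic coding with a learned next-command predictor would squeeze this further by not rounding code lengths to whole bits.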

Vyxal might hit a sweet spot, but I'm skeptical that it actually scores as well as other languages with more of these features.

