Hacker News

Can you elaborate on the "mode of peril"? Is it:

(a) Top labs quietly signing deals for military deployment of frontier models in unmanned strike weapons?

(b) Top labs agreeing to license LLMs for social engineering/propaganda ops?

(c) Models that vastly exceed human intelligence and have the capacity to pursue their own agenda (i.e. runaway intelligence)?

(d) Something else?

It looks like the dangers of AGI are overblown (perhaps partly due to grant funding and the ability to gain political traction, investment, and competitive advantage), while (a) and (b) are severely underdiscussed. I'd love to hear other perspectives.



All of the above (without even believing in AGI).

See the CNN article linked here, and follow the links to the articles it mentions, for more details - https://news.ycombinator.com/item?id=46997198



