What you are looking for (as an employer) is people who are in love with AI.
I'd guess a lot of participants have a slight AI-skeptic bias, if anything (while still being knowledgeable about the weaknesses of current AI models).
Additionally, such a list only has value if
a) the list members are located in the USA
b) the list members are willing to switch jobs
I'd guess those who live in the USA and are deeply in love with AI already have a decent job and are thus not very willing to switch.
On the other hand, if you are willing to hire outside the USA, it is rather easy to find people who want to switch to an insanely well-paid job (so there's no need to set up a list for finding people) - just don't reject people for not being a culture fit.
But isn't part of the point of this that you want people who are eager to learn about AI and how to use it responsibly? You probably shouldn't want employees who, in their rush to automate tasks or ship AI-powered features, will expose secrets, credentials, PII, etc. You want people who can use AI to be highly productive without being a liability risk.
And even if you're not in a position to hire all of those people, perhaps you can sell to some of them.
Honestly, it seems worse than web3. Yes, companies throw up their hands and say "well, yeah, the original inventors are probably right, our safety teams quit en masse or we fired them, the world's probably going to go to shit, but hey, there's nothing we can do about it, and maybe it'll all turn out ok!" And then they hire the guy who vibecoded the clawdbot so people can download whatever trojan malware they like onto their computers.
I've seen Twitter threads where people literally celebrate that they can remove RLHF from models and then download arbitrary code and run it on their computers. I am not kidding when I say this is going to end up far worse than web3 rugpulls. At least there, you could only lose the magic crypto money you put in. Here, you can be pwned by a swarm of bots without even participating. For example, it's trivially easy to do reputational destruction at scale, as an advanced persistent threat. Just pick your favorite politician and see how quickly they start trying to ban it. This is just one bot: https://www.reddit.com/r/technology/comments/1r39upr/an_ai_a...
> I'd guess a lot of participants have a slight AI-skeptic bias, if anything (while still being knowledgeable about the weaknesses of current AI models)
I don't think these people are good sales targets. My feeling is that if you want to sell AI stuff to people, a better sales target is "eager but somewhat clueless managers who (want to) believe in AI magic".
Also, how is it more data than when you buy a coffee? Unless you're cash-only.
I know everyone has their own unique risk profile (e.g. the PIN to open the door to the hangar where Elon Musk keeps his private jet is worth a lot more 'in the wrong hands' than the PIN to my front door is), but I think for most people the value of a single unit of "their data" is near $0.00.
How do you know? They can tell everyone they've won and still snatch their data. It's not a verifiable public contest.
> Also, how is it more data than when you buy a coffee?
A coffee shop has no other personal data about you and usually takes other payment methods. But still, there have been cases of misuse.
> but I think for most people the value of a single unit of "their data" is near $0.00.
This is a classic scenario for social engineering, and we are in a high-profile social group here. There is a good chance that someone from a big company is participating. This is not about stealing peanuts or selling a handful of data on the darknet. It's about collecting personal data and scouting potential victims for future attacks.
And I'm not saying this is actually happening here, but not even seeing the problem is... interesting.
The various "dark mode" browser extensions work well here. Brave on Android also has a dark mode toggle built in, which I assume repackages one of the extensions.
I would assume he has to take into account that it took him about two years (based on a post in his Reddit profile), plus Steam's 30% cut and publisher costs.
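A quick back-of-the-envelope on the thread's own numbers (the 50/50 publisher split below is purely my assumption; everything else comes from the posts above):

    fn main() {
        let gross: f64 = 100_000.0;               // ~$100k in reported sales
        let after_steam = gross * 0.70;           // Steam keeps 30% -> ~$70k
        let after_publisher = after_steam * 0.5;  // hypothetical 50/50 split -> ~$35k
        let per_year = after_publisher / 2.0;     // ~2 years of work -> ~$17.5k/year
        println!("${after_steam} -> ${after_publisher} -> ${per_year}/year, pre-tax");
    }

So even with generous assumptions, it lands well below a salary for two years of work.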
I'm indifferent to whether he's feeling wealthy after this. I'm surprised at the sales volume for what looks like a pretty generic game. Does this really look like a game that 50k people would buy to you? In early access? Are my priors so wrong about what it takes to sell $100k of games?
To be honest, no. It does not look like a game 50k people would buy in early access to me. I would assume it has to do with some streamer playing his game [0].
Regarding your other question: I think that yes, you might be setting the bar too high for what it takes to actually sell games. Not to disparage this other dev, for example [1], because his games sure look well polished, but they look very unappealing to me, and I'm pretty sure to a lot of other people as well. IIRC he talks about how people really have a fondness for specific topics in games and how one can leverage that.
In case you are wondering, this is using https://tauri.app/, which lets a web-based UI run inside a native app. I was not familiar with that project, but taking a peek at Activity Monitor, it seems to spin up two processes: one for Tauri itself, which I presume is a small static file server (around 60MB of memory), and one for the main UI window (around 30MB).
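For reference, the Rust side of a Tauri app is typically just stock boilerplate like the sketch below (this is the generic Tauri 1.x entry point from their template, not this particular app's code). The window process is the platform's built-in WebView (WKWebView on macOS) rather than a bundled browser, which is why the footprint stays small:

    // Stock Tauri 1.x entry point (template boilerplate, not this app's code).
    // The core process runs this Rust binary; each window is a separate
    // platform-WebView process, matching the two processes in Activity Monitor.
    fn main() {
        tauri::Builder::default()
            .run(tauri::generate_context!())
            .expect("error while running tauri application");
    }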