Having mixed feelings about the word "actually" as it is/was one of my favorites. Other stuff like "for instance" and "interestingly" seems to be getting there too...
Honestly, the first paragraph sounds more human and sincere for sure.
It also adds better "context" to the discussion than the usual claims/punchlines of marketing-speak.
Maybe it's not exactly the grammar itself but the overall structuring of the idea/thought. The regular output sounds much more like a marketing piece or news coverage than an individual anyway. I think, people wanna discuss things with people, not with a news-editor.
> I think, people wanna discuss things with people, not with a news-editor.
If I understand you correctly, then yes, I completely agree, but my worry is that this can also be "emulated", as my comment shows, by models already available to us. Technically there's nothing to stop new accounts from using, say, Kimi with a system prompt meant to not sound like AI, and I feel it can be effective.
If that's the case, doesn't that raise the question of what we can detect as AI or not (which was my point)? The grandparent comment suggests they sometimes use intentionally bad human writing to avoid being detected as AI, but what I am saying is that AI can do that too, so is intentionally bad writing itself a good indicator of being human?
And a bigger question: if bad writing isn't an indicator, then what is?
Or can there even be a good indicator (if, say, the bot is cautious)? If there isn't, can we be sure whether the comments we read are AI or not?
Essentially the dead-internet theory. I feel like most websites have bots, but we know they are bots and we still don't care; yet we're also in this misguided trust that if we see comments which don't feel like obvious bots, then they must be humans.
My question is: what if that's wrong? It feels definitely possible to me with current tech/models, like Kimi for example. Doesn't this lead to some big trust issues within the fabric of the internet itself?
Personally, I don't feel like the whole website is AI, but there are chances of some sneaky action-at-a-distance-type new accounts for sure, which could be LLMs, and we'd be none the wiser.
All while real accounts are gonna get questioned about whether they are LLMs if they are new (my account is almost 2 years old fwiw, and I got questioned by people essentially asking if this account is AI or not).
But what this does do, however, is make people lose a bit of trust in each other and get a little cautious towards each message they read.
(This comment's a little too conspiratorial for my liking, but I can't shake this feeling sometimes)
It's all just so weird for me sometimes. Idk, but I guess there's still an intuition about who's human and who's not. Actually, the HN link/article itself shows that most people who deploy AI on HN in newer accounts use standard models without much care, which is why em-dashes get detected and are maybe a good detector for some time/some people. This also makes the original OP's idea of intentionally having bad grammar to sound more human make sense, because em-dashes do have a higher probability of sounding AI than not :/
It's just this very weird situation, and I am not sure how to explain it, where depending on which angle you look at it from, you can be right.
You can try to hurt your grammar to sound more human, and that would still be right.
And you can keep writing the way you are because you think models can already have intentionally bad grammar too (or are capable of it), so bad grammar isn't a benchmark for AI-or-not, and you keep using good grammar, and you're right too.
It's sort of like a paradox and I don't have any answers :/ Perhaps my suggestion right now is just not to overthink it.
Because if both situations are right, then do whatever, imo. Just be human yourself, and then you can back up that stance with the plain truth that you are human, even if you get called AI.
So I guess, TLDR: use good grammar or not, intentionally; just write human and that's enough, or that should be enough, I guess.
I started making deliberate grammar and spelling mistakes in professional context. Not like I have a perfect writing anyway, but at least I could prove that it was self-written, not an auto-generated slop. (Could be self-written slop though :)
This applies not only work-stuff itself also to the job-applications/cv/resume and cover-letters.
unrelated, but I've never understood how to put a smiley at the end of parenthetical sentences (which comes up surprisingly often for me since I use smileys a lot and also like using parentheses). Just the smiley as the end parenthesis (like this :) feels off, but adding another parenthesis (like this :) ) makes it look like it should be nested, which causes problems since I also tend to nest parenthetical sentences (like (this)).
I like this simply for the absurdity of it, but will only use it when the entire parenthetical is modified by the smiley instead of a single word or phrase (:since I really like it:) but (it looks ugly, no hard feelings :) )
"Вот его, нет, не допустили (сама знаешь, почему)))"
My translation:
"But him - no, they didn't let him in (of course you know why :)"
When I went from texting friends in Russian or Ukrainian back to English, I missed the right parenthesis as a smiley; one or two - hi), hello)) - read to me like a smile, while by ))) and )))) there's some laughing or some other joke going on. Native speakers could weigh in; my native tongue is English.
allow me to introduce my friend – turned smiley
here he is: ´◡`
(quite useful for brackets ´◡`)
you can find him on windows by pressing Win + ;
not as fast as typing, but quite a bit faster than typing and then wondering if that's too many brackets or too few
tbh u can basically do this now lol... no flag needed.
if u want it to sound more real u just gotta tell the bot to write that way. like literally just ask it to throw in some typos or forget to capitalize stuff. or use slang and kinda ramble instead of being all robotic and organized.
I'm trademarking the improper use of its/it's, there/their/they're, were/we're, etc. as a sign of my humanity. Apple's typocorrect is doing it for me anyways.
> I started making deliberate grammar and spelling mistakes in professional context.
I've also noticed an increase of this in myself and others. I used to edit a lot more before sending anything, but now it seems more authentic if you just hit send, so it's more off the cuff, typos, broken sentences and all.
I'm sure an LLM could easily mimic this but it's not their default.
I appreciate you including a few minor mistakes in this very post:
> I started making deliberate grammar and spelling mistakes in professional context[s]. Not like I have ~a~ perfect writing anyway, but at least I could prove that it was self-written, not an auto-generated slop. (Could be self-written slop though :)
> This applies not only [to] work-stuff itself also to the job-applications/cv/resume and cover-letters.
Imagine the delays are so prominent that someone decides to make a website for CTA (call-to-action) and semi-regularly shares updates on it...
I've been to Seattle once (ex-Amazon here), when DevCon was held in town while my team was located in Bellevue. I took the initiative to rent a bike for a day ($60 for a drop-bar gravel bike). I must say, although I did not beat the time between Day-1 (the office across from the Spheres) and Bingo (the Bellevue office), it was not far off. Even compared to the "shuttles" Amazon operated: the shuttle took about 1h, while the ride takes around 1h15m. (Plus sweat)
> P.S.: I would say I am in "fair" shape, as I ride quite a lot throughout the year.
Apache doesn't have it on by default, but it's easy to turn on. It's called mod_userdir. By default it serves the ~/public_html directory (configurable, e.g. to ~/www), so anyone with /home/<name>/public_html ends up at site.url/~<name>/
It is also possible to add .htaccess and other things there, like a username/password challenge (WWW-Authenticate), on a per-user basis.
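A minimal sketch of what enabling it looks like (the directives are from mod_userdir; the module path and the choice of `public_html` are assumptions for a typical Apache 2.4 layout):

```apache
# Load the module (on Debian/Ubuntu: a2enmod userdir)
LoadModule userdir_module modules/mod_userdir.so

<IfModule mod_userdir.c>
    # Serve /home/<name>/public_html at http://site.url/~<name>/
    UserDir public_html

    <Directory "/home/*/public_html">
        # Let users set auth (WWW-Authenticate) and indexes via .htaccess
        AllowOverride AuthConfig Indexes
        Require all granted
    </Directory>
</IfModule>
```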
Mostly universities had hosting set up this way. ISPs would also offer something similar for an additional fee on top of your internet subscription; they mostly provided FTP to upload files. Nowadays, anyone who tries this would use SFTP rather than FTP.
Unrelated to the topic described in the blog itself, I like the overall theme of `susam.net`. The name reminded me of the Turkish word for sesame seed. (I think the author recently mentioned in one of their posts that they wanted susam.com, but it was already taken by a Turkish company selling spices...)
The content (that shows up on HN) is also good. Since I am on a mobile device, I cannot tell the exact font used, but it seems like Georgia to me. https://github.com/susam/susam.net hosts the actual source code of the website.
Another remark: it would be really nice to have the same theme adapted for BearBlog and similar places.
From the post itself, I am not sure whether the author sent a patch or pull request to the affected projects, namely pyaes and aes-js.
The response might've been different if the author had already offered a patch in a somewhat backward-compatible way. It wouldn't even have to be a functional patch; it could be as simple as a `@warning: usage of the default IV will cause insecure storage` annotation on the affected functions.
Another remark (and one which might've been off-putting for the authors of these libraries) is that the author used the term "mistakes" in various places. Of course, in an ideal world ego should not or would not matter, but both libraries seem quite stale, and the authors probably have other $DAYJOB responsibilities, making it difficult to fix things they only receive complaints about. (I am also guessing there are quite a few of those...)
In relation to the points above, it might've been better to say: cryptography evolves over time; last year's best practices get outdated, vulnerabilities are found, and they are replaced with this year's best practices. The same will happen next year. It's not a deliberate mistake or any kind of incompetence; it's a matter of an ever-evolving field that we keep knowing and understanding better...
This is not about best practices, or something that changed; a unique IV/counter has always been required to make CTR mode secure. Hard-coding the IV was an actual mistake.
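To illustrate why a fixed IV/counter is fatal in CTR mode: the keystream depends only on the key and the IV, so two messages encrypted under the same (key, IV) pair share a keystream, and XORing the ciphertexts cancels it out. A toy sketch (the block cipher is stood in for by SHA-256 as a keystream generator; this is not pyaes's API, just the structure of the attack):

```python
import hashlib


def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # CTR-style keystream: hash(key || iv || counter) per block.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(16, "big")).digest()
        counter += 1
    return out[:length]


def encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, iv, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))


key = b"k" * 16
fixed_iv = b"\x00" * 16  # the "hard-coded IV" mistake

p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"
c1 = encrypt(key, fixed_iv, p1)
c2 = encrypt(key, fixed_iv, p2)

# With a reused IV, c1 XOR c2 == p1 XOR p2: the keystream cancels,
# leaking the XOR of the plaintexts without knowing the key.
xor_c = bytes(a ^ b for a, b in zip(c1, c2))
xor_p = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_c == xor_p
```

With a fresh random IV per message the two keystreams differ and the cancellation disappears, which is exactly why the default has to be unique, not hard-coded.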
My overall take (the elephant in the room): blue light filters alone don't work; it depends on what you do and how you do it.
For example, most people keep watching/scrolling Instagram Reels and TikTok videos, which stimulate the brain constantly, not just at an electrical level but at an emotional/chemical level too.
I have seen people who are addicted and cannot get rid of the addiction. It's not only the dopamine boost; it has deeper neurochemical roots. Just observe around you: people pick up their phone to directly open Insta/TikTok and start scrolling right away, every 5-10 seconds (watching stories included).
It's gotten to the point that when you even mention the possibility of such addiction and abnormal behavior, you get outright resistance and denial of the addiction itself. Much like substance abuse...
My point is, the majority of the population watches/scrolls these and then needs 10 mg of melatonin to fall asleep.
Obviously, if I'm continuously engaged in something interesting, the existence of blue light doesn't matter that much. It matters if/when I am reading a novel in a mediocre chapter where nothing that interesting is going on; the existence of blue light, or lack thereof, may tip the scale at that point.
An interesting point for me is that melatonin (over-)use can be severely harmful for individuals.
> ... over-the-counter melatonin supplements can contain anywhere between 10 to 30 times as much melatonin as is optimal to maintain circadian hygiene. If you have ever taken melatonin and got immediately knocked out cold, had weird dreams and woke up in the middle of the night sweaty or shivering, you likely took too much—which, to be clear, is not your fault, it’s the default in the US and Canada. The mega-doses in stores serve as hypnotics (punches you to sleep), but wreck sleep architecture. The right dose is ~0.3 mg, which is hard to find in pharmacies but can be found online.
I've heard this before, but the only metastudy I could find strongly supports a dose of 3 mg taken 3 hours before bedtime. Effectiveness roughly halves when taking less than ~2 mg or more than ~10 mg.