This only works for one SSID. Even then, one thing that can mitigate this is using Private-PSK/Dynamic-PSK on WPA2, or using EAP/RADIUS VLAN attributes.
On WPA3/SAE this is more complicated: the standard supports password identifiers, but no device I know of supports selecting an alternate password aside from wpa_supplicant on Linux.
Hostapd now supports multiple SAE/WPA3 passwords as well. We have an implementation of dynamic VLANs + per-device PSKs with WPA3 (https://github.com/spr-networks/super) that we've been using for a few years now.
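For reference, hostapd's per-device SAE passwords look roughly like this. A minimal sketch of a hostapd.conf fragment; the SSID, MAC, passwords, and VLAN IDs are illustrative, and a real setup also needs the bridge/VLAN interface plumbing:

```
interface=wlan0
ssid=home
wpa=2
wpa_key_mgmt=SAE
ieee80211w=2
dynamic_vlan=1
# One sae_password line per device or group:
#   |mac= restricts the entry to one station,
#   |vlanid= assigns a dynamic VLAN,
#   |id= sets an SAE password identifier.
sae_password=alice-secret|mac=aa:bb:cc:dd:ee:ff|vlanid=10
sae_password=iot-secret|vlanid=20|id=iot
```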
Ironically, one of the main pain points is Apple: Keychain sync means all Apple devices on the same sync account are expected to share a password for a wireless network. Secondly, the MAC randomization timeouts require reassignment.
The trouble with SAE per-device passwords is that the commit makes it difficult to evaluate more than one password per pairing without knowing the identity of a device (the MAC) a priori, which is why it's harder to find this deployed in production. It's possible for an AP to cycle through a few attempts but not many, whereas in WPA2 an AP could rotate through all the passwords without a commit. The standard needs to adapt.
I was leaning towards using this configuration for splitting devices into VLANs while using one SSID. Yeah, dynamic VLAN+per device PSK would be best, but I'm probably happy enough with a shared PSK per VLAN to isolate a guest or IoT network. Would this VLAN isolation have prevented this attack? At least to prevent an attacker from jumping between VLANs? (I assume shared PSK per VLAN might be vulnerable to attacking client isolation within the VLAN?)
You can't refine your way to reliable LLM output, because LLMs are probabilistic. Re-prompting is just retrying until you hit the jackpot; refining only increases the chance of getting what you want.
When you make them deterministic (temperature set to 0), LLMs (even new ones) get stuck in loops on longer streams of output tokens. The only way to guarantee the same output twice is to use the same temperature and the same seed for the sampler's RNG, and most frontier models don't give you a way to set that seed.
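The temperature/seed distinction can be sketched in a few lines of plain Python; `sample_token` and the logits here are toy stand-ins, not any real model's API:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits.

    temperature == 0 means greedy decoding (argmax, deterministic);
    otherwise sample from the temperature-scaled softmax using rng.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                     # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
# Greedy decoding: identical every call, no seed needed.
assert sample_token(logits, 0, random.Random()) == 0
# Stochastic decoding: reproducible only when the RNG seed is fixed.
a = sample_token(logits, 1.0, random.Random(42))
b = sample_token(logits, 1.0, random.Random(42))
assert a == b
```

With an unseeded `random.Random()` at temperature 1.0, repeated calls can and do differ, which is the "retry until jackpot" behavior above.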
Randomness is not a problem by itself. Algorithms in BQP are probabilistic too. Different prompts might have different probabilities of successful generation, so refinement could be possible even for stochastic generation.
And provably correct one-shot program synthesis based on an unrestricted natural language prompt is obviously an oxymoron. So, it's not like we are clearly missing the target here.
>Different prompts might have different probabilities of successful generation, so refinement could be possible even for stochastic generation.
Yes, but that requires a formal specification of what counts as "success".
In my view, LLM based programming has to become more structured. There has to be a clear distinction between the human written specification and the LLM generated code.
If LLMs are a high level programming language, it has to be clear what the source code is and what the object code is.
Programs written in traditional PLs are also often probabilistic. It seems that the same mechanisms could be used to address this in both types (formal methods).
End-to-end usually means only the data's owner (aka the customer) holds the keys needed. The term most used across password managers and similar tools is "zero knowledge encryption", where only you know the password to a vault, needed to decrypt it.
There's a "data encryption key", encrypted with a hash derived from your username + master password, and that data encryption key is used locally to decrypt the items of your vault. Even if everything is stored remotely, unless the provider got your raw master password (usually, a hash of that is used as the "password" for authentication), your information stays safe.
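A minimal sketch of that split, using stdlib PBKDF2 as a stand-in for whatever KDF a real vault uses; the function name, salt scheme, and iteration counts are all illustrative, not any particular product's design:

```python
import hashlib

def derive_keys(username: str, master_password: str, iterations: int = 600_000):
    """Derive two independent secrets from one master password:
    - enc_key: used locally to unwrap the vault's data encryption key;
      it never leaves the client.
    - auth_hash: the only value sent to the server as the login
      "password"; a further one-way step, so it can't decrypt anything.
    """
    salt = hashlib.sha256(username.encode()).digest()
    enc_key = hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, iterations)
    auth_hash = hashlib.pbkdf2_hmac(
        "sha256", enc_key, master_password.encode(), 1)
    return enc_key, auth_hash

enc_key, auth_hash = derive_keys("alice@example.com", "correct horse")
# The server only ever stores auth_hash; compromising it reveals
# neither the master password nor enc_key.
assert enc_key != auth_hash
```

In practice the vault's data encryption key is a random key generated once, stored encrypted under `enc_key`, which is what lets you change the master password without re-encrypting every item.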
A whole other topic is communications, but we're talking decryption keys here.
Sure, but there's a clear practical difference. Most professionals don't have the agency or company backing to use LGPL code with their company's source code. Most personal users do.
GrapheneOS wants to make a FOSS Android with the security model that makes it hard for any bad party to break into the phone.
LineageOS wants to make a FOSS Android that respects users' privacy first and foremost - it implements security as best as it can, but the level of security protection differs across supported devices.
Good news is that if you have a boot passphrase, its security is somewhat close to GrapheneOS's - the difference being that third parties with local access to the device can still brute-force their way in, whereas with GrapheneOS they can't, unless they have access to hardware-level attacks.
This is the correct response. I use both GrapheneOS and LineageOS. But LineageOS focus is on delivering newer versions of Android to many phones abandoned by their OEM. GOS exclusively focuses on security and privacy. If you want a reasonably secure phone but don't want Google or Apple inside your device, your best bet is GOS.
How can LOS's security be somewhat close to GOS if it's worse than OEM? LOS lacks verified boot and hardware security features, and it's often behind on security patches. With "advanced protection" enabled, stock OEMs are even more secure, and GOS is more secure still. When it comes to EOL devices, LOS may be more secure than OEM depending on your threat model.
It very much depends on your personal threat model. If you expect targeted attacks, LOS doesn't hold a candle to GOS, but at least for my threat model, verified boot and hardware security features outside of my control don't have a substantial security benefit.
Obviously it would be preferable to have up-to-date security patches, but as long as there are plenty of even more easily exploitable devices, and there is no WannaCry-level attack ongoing, it's a risk I'm willing to accept for more user freedom.
Interesting... I rarely form words in my inner thinking, instead I make a plan with abstract concepts (some of them have words associated, some don't). Maybe because I am multilingual?
English is not my native language, so I'm bilingual, but I don't see how this relates to that at all. I have a monologue sometimes in English, sometimes in my native language. But yeah, I don't understand any other form of thinking. It's all just my inner monologue...
I used it. It's an (ugly) functional programming language that can transform one XML into another - think of it as Lisp for XML processing but even less readable.
It can work great when you have XML you want to present nicely in a browser by transforming it into XHTML while still serving the browser the original XML. One use I had was to show the contents of RSS/Atom feeds as a nice page in a browser.
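A minimal sketch of what such a stylesheet looks like; the template below turns a bare-bones Atom feed into an XHTML list of links, and everything in it (file name, markup choices) is illustrative:

```xml
<!-- feed.xsl: render an Atom feed as an XHTML page of entry links -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:atom="http://www.w3.org/2005/Atom">
  <xsl:template match="/atom:feed">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title><xsl:value-of select="atom:title"/></title></head>
      <body>
        <h1><xsl:value-of select="atom:title"/></h1>
        <ul>
          <!-- map each entry's link/@href onto an HTML a/@href -->
          <xsl:for-each select="atom:entry">
            <li>
              <a href="{atom:link/@href}">
                <xsl:value-of select="atom:title"/>
              </a>
            </li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```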
I would just do this on the server side. You can even do it statically when generating the XML. In fact until all the stuff about XSLT in browsers appeared recently, I didn't even know that browsers could do it.
Converting the contents of an Atom feed into (X)HTML means it's no longer a valid Atom feed. The same is true for many other document formats, such as flattened ODF.
Is an XSLT page a valid Atom feed? Is it really so terrible to have two different pages -- one for the human-readable version, and one for the XML version?
Yes, an <?xml-stylesheet href="..."?> directive is valid in every XML document. You can use CSS to get many of the benefits of XSLT here, but it doesn't let you map RSS @link attributes to HTML a/@href attributes, and CSS isn't designed for interactivity. That's a rather significant gap in functionality.
It is rather terrible to have two different pages, because that requires either server or toolchain support, and complicates testing. The XSLT approach was tried, tested, and KISS – provided you didn't have any insecure/secure context mismatches, or CORS issues, which would stop the XSL stylesheet from loading. (But that's less likely to spontaneously go wrong than an update to a PHP extension breaking your script.)
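Concretely, the directive is a processing instruction that sits in the prolog, before the root element, so the document remains a valid Atom feed for feed readers (which ignore it) while browsers apply the stylesheet. A sketch with illustrative URLs and content:

```xml
<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example feed</title>
  <link href="https://example.com/"/>
  <updated>2024-01-01T00:00:00Z</updated>
  <id>https://example.com/feed</id>
  <entry>
    <title>First post</title>
    <link href="https://example.com/first-post"/>
    <id>https://example.com/first-post</id>
    <updated>2024-01-01T00:00:00Z</updated>
  </entry>
</feed>
```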
But you can find that information without an LLM? Also, why do you trust an LLM to give it to you, versus all of the other, higher-trust ways to get the same information that let you communicate the desired outcome, like screenshots?
Why are we assuming, just because the prompt gets a response, that it is providing proper outputs? That level of trust is an attack surface in and of itself.
> But you can find that information regardless of an LLM?
Do you have the same opinion if Google chooses to delist any website describing how to run apps as root on Android from their search results? If not, how is that different from lobotomizing their LLMs in this way? Many people use LLMs as a search engine these days.
> Why are we assuming just because the prompt responds that it is providing proper outputs?
"Trust but verify." It’s often easier to verify that something the LLM spit out makes sense (and iteratively improve it when not), than to do the same things in traditional ways. Not always mind you, but often. That’s the whole selling point of LLMs.
I tried to get VLC to open up a PDF and it didn't do as I asked. Should I cry censorship at the VLC devs, or should I accept that all software only does as a user asks insofar as the developers allow it?