> How do they audit that Anthropic can't alter model outputs for contexts they (the ethics board or whatever it's called, can't remember) don't like?
I was thinking that Anthropic would just be providing the models and setup support to run them in AWS GovCloud. They do not have any real insight into what is being asked. Maybe a few engineers have the specific clearances to access and debug the running systems, but that would be one or two people embedded to debug inference issues - not something that would be analyzed by others in the company.
The whole 'do not use our models for mass surveillance' is at the end of the day an honor system. Companies have no real way of enforcing that clause, or of determining that it has been violated. That being said, at least historically, one has been able to trust the government to abide by commercial agreements. The people who work in cleared positions are generally selected for honesty, ability, and willingness to follow rules.
I think what you are describing is technically possible (not my immediate domain, however). They don't have real-time insight into what the model is being used for, you are correct about this afaik. But the incident that kicked off this paranoia was Anthropic calling around after the fact to try to find out how JSOC was using the model during the Maduro raid. None of the context of those questions is public, and I doubt it will become public, but it stands to reason that the nature of the questions was concerning enough for the War Department to insist on the "any lawful use" language being inserted into the contract.
>The whole 'do not use our models for mass surveillance' is at the end of the day an honor system. Companies have no real way of enforcing that clause, or determining that it has been violated.
You are also correct here imo, with one important caveat. Even if private companies have the means for enforcing that clause, it is not their business to do so. Maybe that's the crux of the problem, one of perspective. The for-profit entity in these arrangements is not and can never be trusted as the mechanism of enforcement for whatever we, as a republic, decide are the rules. That is the realm of elected government. Anthropic employees are certainly making their voice heard on how they believe these tools should be used, but, again, this is an is versus ought problem for them.
In the field of Geophysical Fluid Dynamics this is an important distinction, as there are other very important waves. Rossby waves are not gravity waves and are extremely important to the global climate (see their role in ENSO dynamics). Compressive waves (acoustic waves) are everywhere of course. There are also topographic Rossby waves, internal waves, and Kelvin waves (note: Kelvin waves and internal waves are gravity waves as well). Oh, and inertial waves!
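The distinction shows up directly in the dispersion relations. A rough sketch, using the textbook barotropic Rossby wave relation omega = -beta*k / (k^2 + l^2) versus the non-rotating shallow-water gravity wave omega = sqrt(g*H)*|k|; the beta, g, and H values below are illustrative mid-latitude/ocean numbers I've picked, not anything from the thread:

```python
import math

beta = 2e-11         # df/dy on a mid-latitude beta-plane, 1/(m*s) (illustrative)
g, H = 9.81, 4000.0  # gravity (m/s^2) and ocean depth (m) (illustrative)

def rossby_omega(k, l):
    """Barotropic Rossby wave frequency for zonal/meridional wavenumbers k, l."""
    return -beta * k / (k**2 + l**2)

def gravity_omega(k):
    """Non-rotating shallow-water gravity wave frequency."""
    return math.sqrt(g * H) * abs(k)

k = 2 * math.pi / 1e6    # ~1000 km zonal wavelength
print(rossby_omega(k, 0))  # small and negative: slow, strictly westward phase propagation
print(gravity_omega(k))    # orders of magnitude larger: fast gravity wave
```

The sign matters: Rossby wave phase speed is always westward, which is part of why they play the role they do in ENSO, while gravity waves of the same wavelength are far faster and propagate in any direction.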
Hubble just spotted a "bullseye" galaxy, where a smaller galaxy passed through the center and sent ripples through the gas as it oscillates under gravity, like dropping a stone in a pond:
In addition to what others have said, often from a network perspective you want a smaller range.
At the end of the day, there is a hard speed limit per unit of spectrum: only so many bits per second can be pushed through each Hz of bandwidth.
For example, in cities, with a high population density, you could theoretically have a single cell tower providing data for everyone.
However, the speed would be slow, as for a given slice of bandwidth the capacity is shared between everyone in the city.
Alternatively, one could have 100 towers, and then the capacity would only have to be shared by those within range of each tower. For this to work, a smaller range is actually one of the design goals, so that neighboring towers do not interfere with each other.
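The arithmetic above can be sketched in a few lines. The bandwidth and spectral-efficiency numbers are purely illustrative (not a real deployment), and the model assumes each tower's capacity is split evenly among its users with perfect spectrum reuse between towers:

```python
def per_user_mbps(bandwidth_hz, bits_per_s_per_hz, users_per_tower):
    """Per-user throughput when one tower's capacity is shared evenly."""
    capacity = bandwidth_hz * bits_per_s_per_hz  # total bit/s for the tower
    return capacity / users_per_tower / 1e6      # Mb/s per user

city_users = 100_000

# One giant cell covering the whole city:
one_tower = per_user_mbps(20e6, 5, city_users)

# 100 small cells, each reusing the same 20 MHz, users split evenly:
hundred_towers = per_user_mbps(20e6, 5, city_users // 100)

print(f"1 tower:    {one_tower:.3f} Mb/s per user")
print(f"100 towers: {hundred_towers:.3f} Mb/s per user")
```

With these made-up numbers the single tower gives each user 1/100th of what the small cells do, which is the whole argument for short range plus spectrum reuse.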
I used to pick a new language whenever I started a project with AI, for learning, but lately I've been using Ruby for everything possible. I generally prefer its output, as it writes more idiomatic code than I do (out of laziness).
That's an issue with any plugin system, right? AFAIK no IDE has a plugin system with capabilities or a sandboxed interpreter.
VSCode does have a thing where it's like do you trust the authors of this project. Not sure what it does because I've never had to use it. From StackOverflow[1]:
>If you select No, I don't trust the authors, Visual Studio Code will open the workspace in 'restricted mode'. This is the default for all new workspaces. It lets you safely browse through code but disables some editor features, including debugging, tasks, and many extensions. However, keep in mind that 'restricted mode' is all you need for many use cases.
Actually if restricted mode[2] is any good, vscode might be better at security than most other editors/IDEs.
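For reference, the workspace-trust behavior is configurable in settings.json. These are real VSCode setting keys as I understand them, but the values shown are just one possible configuration (VSCode's settings file is JSONC, so comments are allowed):

```json
{
  // Turn the workspace-trust feature off entirely (restores pre-trust behavior):
  "security.workspace.trust.enabled": false,

  // Or keep it on and tune how untrusted content is handled:
  "security.workspace.trust.untrustedFiles": "prompt",
  "security.workspace.trust.startupPrompt": "once"
}
```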
> Actually if restricted mode[2] is any good, vscode might be better at security than most other editors/IDEs.
Unfortunately, it’s not. Restricted mode is VSCode without any plugins. That means that unless you’re doing very basic TS development (I think that’s the only language VSCode supports out of the box), then you’re kinda hosed.
Yeah, I'm all in for a more secure option as long as it allows me to do everything that VSCode's SSH agent does. But if the devex goes down the drain because of "security" then I'm good for now.