Hacker News | thayne's comments

It seems logical to me that a term limit could increase vulnerability to corruption in your last term. If you can't be re-elected, there is less incentive to be loyal to the people you represent.

The potential for corruption exists independent of term limits. "the studies" are readily available for investigation.

It's more likely to be an issue for distributions like Debian, Ubuntu, Red Hat, etc.

Although, if I'm understanding this correctly, I think all they would have to do to comply is ask for the age category during installation, then write a file that is world readable but writable only by root, containing that category for applications to read.
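A minimal sketch of that idea in Python (the path, file name, and category format are all made up; nothing like this is standardized):

```python
import os
import stat

# Hypothetical location and format for the age category file.
AGE_FILE = "/etc/age-category"

def write_age_category(category: str, path: str = AGE_FILE) -> None:
    # Run by the installer as root: the resulting file is world
    # readable (0644) but writable only by its owner, root.
    with open(path, "w") as f:
        f.write(category + "\n")
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

def read_age_category(path: str = AGE_FILE) -> str:
    # Any unprivileged application can read the category back.
    with open(path) as f:
        return f.read().strip()
```

Applications would then do a strictly local read, with no network call and no personal data leaving the device.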


That is already way too much as far as I'm concerned. It's not that it's difficult, it's that it's arbitrary and a form of commanded speech or action. Smallness and easiness isn't an excuse.

If you write a story, there must be a character in it somewhere that reminds kids not to smoke. That's all. It's very easy.


I actually don't mind mandating the market take reasonable actions. The EU mandating USB C was an excellent move that materially improved things.

However I think mandated actions should to the greatest extent possible be minimal, privacy preserving, and have an unambiguous goal that is clearly accomplished. This legislation fails in that regard because it mandates sharing personal information with third parties where it could have instead mandated queries that are strictly local to the device.


Under no circumstances should we be “mandating” how hobbyists write their software. If you want to scope this to commercial OSes, be my guest. That’s not what was done here.

I'm not sure where the line between "hobby" and "professional" lies when it comes to linux distributions. Many of them are nonprofit but not really hobbyist at this point. Debian sure feels like a professional product to me (I daily drive it).

We regulate how a hobbyist constructs and uses a radio. We regulate how a hobbyist constructs a shed in his yard or makes modifications to the electrical wiring in his house.

I think mandating the implementation of strictly device local filtering based on a standardized HTTP header (or in the case of apps an attached metadata field) would be reasonably non-invasive and of benefit to society (similar to mandating USB C).


> I'm not sure where the line between "hobby" and "professional" lies when it comes to linux distributions. Many of them are nonprofit but not really hobbyist at this point. Debian sure feels like a professional product to me (I daily drive it).

"Professional" means you're being paid for the work. Debian is free (gratis), contributors are volunteers, and that makes it not professional.


What about Ubuntu? It's a combination of work by volunteers and paid employees, it is distributed by a commercial company, and said company sells support contracts, but the OS itself is free.

And there are developers who are paid to work on various components of Linux, from the kernel to GNOME; does that make them professional?

Is Android not professional, because you don't pay for the OS itself, and it is primarily supported by ad revenue?


I would argue they're not, because, being open source, they're not fully under the responsibility of a commercial entity. Companies can volunteer employees to a project, even one they started themselves, but companies and employees can come and go. Open source projects exist independently as public goods. Ultimately, anyone in the world can fork a project and carry on its development without anybody else's involvement.

Mint started off as Ubuntu. Same project, with none of the support contracts, no involvement from Canonical needed at the end of the day, etc.

On a practical level, it doesn't make sense to place thousands of dollars per user of liability on uncompensated volunteers, whatever the case may be regarding the employment of other contributors.


You've confused and confabulated like 11 different things there. None of what you said has anything to do with either what I said or what the law says.

The way this currently exists is basically unenforceable because the critical terms are not even defined. It's not even intelligible, which is a prerequisite to enforcing it, or even to telling where it does and does not apply and whether a covered entity is or is not in compliance.


> You've confused and confabulated like 11 different things there.

Feel free to elaborate. As it stands that's nothing more than name calling.

I wasn't speaking to the current CA or CO proposed implementations (which I don't support as it happens). I responded specifically to your statement:

> It's not that it's difficult, it's that it's arbitrary and a form of commanded speech or action.

My response being that I think it's acceptable for the regulator to require action under certain limited circumstances.


And then another state will pass a law mandating scanning of all local images, and another state will want automated scanning of text, and a different country will want a backdoor for law enforcement. We have to stop this here and now.

That sounds like some excellent fodder for an anti-trust suit if you ask me.

It does. Reddit has defined what truth is. Banning r/nonewnormal is merely one part of that.

I'm not actively looking, but what I've noticed from recruiters contacting me (I'm not in SFBA) is:

- I'm contacted less often

- Offered salaries are lower, despite inflation

- They basically all require being in office, and relocating at my expense

- Most of them are from AI startups that have a decent chance of not lasting more than a few years.


Blaze and Bazel may have been intentionally designed, but they were designed for Google's needs, and it shows (at least from my observations of Bazel; I don't have any experience with Blaze). It is better now than it was, but it was obviously designed for a system where most dependencies are vendored, and it worked better for the languages Google used, like C++, Java, and Python.

Blaze instead of make, ant, maven. But now there's CMake and Ninja. GN generates Ninja build files (as CMake can too) these days, FWIU.

Blaze is/was integrated with Omega scheduler, which is not open.

Bazel is open source.

By the time Bazel was open sourced, Twitter had pantsbuild and Facebook had buck.

OpenWRT's Makefiles are sufficient to build OpenWRT and the kernel for it. (GNU Make is still sufficient to build the Linux kernel today, in 2026.)

Make compares file modification times (mtime) to decide whether to rebuild targets that already exist, unless the target name is listed as .PHONY, in which case it always runs. Target names may be file paths, but they may not contain spaces.
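That default rule can be sketched in a few lines of Python (a simplification of Make's actual algorithm; it ignores implicit rules, order-only prerequisites, and so on):

```python
import os

def needs_rebuild(target: str, prereqs: list[str], phony: set[str]) -> bool:
    # .PHONY targets always run.
    if target in phony:
        return True
    # A missing target must be built.
    if not os.path.exists(target):
        return True
    # Otherwise rebuild only if some prerequisite is newer (by mtime).
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(p) > target_mtime for p in prereqs)
```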

`docker build`, and so also BuildKit, archives the build chroot after each build step that modifies the filesystem (RUN, ADD, COPY) as a cacheable layer identified by a hash of its content.

Other Dockerfile instructions add metadata: CMD, ENTRYPOINT, LABEL, ENV, ARG, WORKDIR, USER, EXPOSE <port/tcp>, VOLUME <path>.

The FROM instruction creates a build stage, starting either from `scratch` or from a base image.

Dockerfile added support for Multi-stage builds with multiple `FROM` instructions in 2017 (versions 17.05, 17.06CE).

`docker build` is now moby and there is also buildkit? `podman buildx` seems to work.

nerdctl supports a number of features that have not been merged back to docker or to podman.

> it obviously was designed for a system where most dependencies are vendored, and worked better for languages that google used like c++, java, and python.

Those were the primary languages at Google at the time. And what else was there to build software with? Make, shell scripts, Python; a Makefile that calls git, which calls Perl, so Perl has to be installed, etc.

Also gtests and gflags.

"Compiler Options Hardening Guide for C and C++" https://news.ycombinator.com/item?id=43551959 :

>> There are default gcc and/or clang compiler flags in distros' default build tools; e.g. `make` specifies additional default compiler flags (that e.g. cmake, ninja, gn, or bazel/buck/pants may not also specify for you).

Which CPU microarchitectures and flags are supported?

  ld.so --help | grep "supported"
  grep -E '^(flags|bugs)' /proc/cpuinfo
AVX-512 is in x86-64-v4 (x86-64-v3 tops out at AVX2). By utilizing such features, we would save money relative to targeting the baseline x86-64-v1, which even the earliest x86-64 processors (late Pentium 4 era) support.

How to add an `-march=x86-64-v3` argument to every build?

How to add build flags to everything for something like x86-64-v4?

Which distros support consistent build parametrization, so that global compiler flags can be added across multiple compilers?

- Gentoo USE flags

- rebuild a distro and commit to building the core and updates and testing and rawhide with your own compiler flags and package signatures and host mirrored package repos

- Intel Clear Linux was cancelled.

- CachyOS (x86-64-v3, x86-64-v4, Zen4)

- conda-forge?

Gentoo:

- ChromiumOS was built on Gentoo and ebuilds, IIRC

- emerge app-portage/cpuid2cpuflags, CPU_FLAGS_X86=, specify -march=native for C/[C++] and also target-cpu=native for Rust in /etc/portage/make.conf

- "Gentoo x86-64-v3 binary packages available" (2024) https://news.ycombinator.com/item?id=39250609

Google, Facebook, and Twitter have a monorepo to build packages from.

Google had a monorepo at the time that blaze was written.

Twitter ("X") is moving from pantsbuild to Bazel BUILD files.

TIL there is a buck2. How does facebook/buck2 compare to google/bazel (compare to what is known about blaze)?

Should I build containers (chroot fs archives) with ansible? Then there is no buildkit.

FWIW `podman-kube-play` can run some kubernetes yaml.


The ansible-in-containers thing is very much an unsolved problem. Basically right now you have three choices:

- install ansible in-band and run it against localhost (sucks because your playbook is in a final image layer; you might not want Python at all in the container)

- use packer with ansible as your provisioner and a docker container export, see: https://alex.dzyoba.com/blog/packer-for-docker/

- copy a previous stage's root into a subdirectory and then run ansible on that as a chroot, afterward copy the result back to a scratch container's root.

All of these options fall down when you're doing anything long-running though, because they can't work incrementally. As soon as you call ansible (or any other tool), then from Docker's point of view it's now a single step. This is really unfortunate because a Dockerfile is basically just shell invocations, and ansible gives a more structured and declarative-ish way to do shell type things.

I have wondered if a system like Dagger might be able to do a better job with this, basically break up the playbook programmatically into single task sub-playbooks and call each one in its own Dagger task/layer. This would allow ansible to retain most of its benefits while not being as hamstrung by the semantics of the caller. And it would be particularly nice for the case where the container is ultimately being exported to a machine image because then if you've defined everything in ansible you have a built-in story for freshening that deployed system later as the playbook evolves.
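A rough sketch of that splitting step, operating on an already-parsed playbook (task names and modules below are illustrative; real code would round-trip YAML, e.g. with PyYAML, and would also have to handle handlers, includes, and roles):

```python
def split_playbook(play: dict) -> list[dict]:
    # One sub-playbook per task, each inheriting the play's other
    # settings (hosts, become, vars, ...), so a layer-caching driver
    # such as Dagger could run and cache each task as its own step.
    base = {k: v for k, v in play.items() if k != "tasks"}
    return [{**base, "tasks": [task]} for task in play.get("tasks", [])]

# Example play, as if loaded from YAML.
play = {
    "hosts": "localhost",
    "become": True,
    "tasks": [
        {"name": "install nginx", "package": {"name": "nginx", "state": "present"}},
        {"name": "enable nginx", "service": {"name": "nginx", "enabled": True}},
    ],
}

subplays = split_playbook(play)
```

Each sub-playbook would then be serialized back out and executed in its own task/layer, so unchanged leading tasks stay cached.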


It isn't just Chrome. Firefox, Safari, and Edge also use that list.

Google should really be seeing some anti-trust action for requiring you to create an account with them on their search console in order to contest being added to a blacklist used by all the major browsers.

> no matter how small the edit was, the entire file gets rewritten

SQLite doesn't fix this, because you would still need to encrypt the whole file (at least with standard sqlite). If you just encrypted the data in the cells of the table, then you would expose metadata in plaintext.
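The metadata leak is easy to demonstrate with stock sqlite3 (the XOR transform below is just a stand-in for real cell-level encryption; the point is only what stays in plaintext around the cells):

```python
import os
import sqlite3
import tempfile

def encrypt_value(plaintext: bytes) -> bytes:
    # Stand-in for cell-level encryption (a real client would use AES
    # or similar); enough to keep the plaintext out of the file.
    return bytes(b ^ 0x5A for b in plaintext)

path = os.path.join(tempfile.mkdtemp(), "vault.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE passwords (site TEXT, secret BLOB)")
con.execute("INSERT INTO passwords VALUES (?, ?)",
            ("example.com", encrypt_value(b"hunter2")))
con.commit()
con.close()

raw = open(path, "rb").read()
secret_hidden = b"hunter2" not in raw            # the cell value is protected...
schema_leaks = b"CREATE TABLE passwords" in raw  # ...but the schema is not
```

Any unencrypted columns (here, `site`) and the full schema text sit in the file in the clear, which is exactly why whole-file or page-level encryption is used instead.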

SQLCipher does provide that, but as mentioned by others, it isn't quite the same thing as sqlite, and is maintained by a different entity.

> The primary issue is that new features cannot be added natively to the XML tree without causing breaking changes for older clients or third-party clients which have not adopted the change yet.

That isn't a limitation of XML, and could also be an issue with SQLite. The real problem here is if clients fail when they encounter tags or attributes they don't recognize. The fix is for clients to simply ignore data they don't know about, whether the format is XML or SQLite.
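For illustration, here is what "ignore what you don't recognize" looks like on a hypothetical entry format (not KeePass's actual schema):

```python
import xml.etree.ElementTree as ET

# An entry written by a newer client, containing a tag and an attribute
# this (older) client has never seen.
doc = """
<Entry id="1" FutureAttr="written-by-a-newer-client">
  <Title>example.com</Title>
  <FutureField>safely ignored by older clients</FutureField>
</Entry>
"""

entry = ET.fromstring(doc)
KNOWN_TAGS = {"Title", "UserName", "Password"}
# Read only the tags we understand; unknown data is left alone.
fields = {child.tag: child.text for child in entry if child.tag in KNOWN_TAGS}
```

A well-behaved client would also preserve the unknown tags when writing the file back, so round-tripping doesn't destroy a newer client's data.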

The complaints about compatibility between different implementations would be just as bad with sqlite. You would still have some implementations storing data in custom attributes, and others using builtin fields. In fact it could be even worse if separate implementations have diverging schemas for the tables.

> Governance Issues

None of this has anything to do with sqlite vs xml. It is a social issue that could be solved without switching the underlying format, or remain an issue even if it was changed.


SQLite has its own closed-source page-level cipher format, so I don't think this argument makes sense.

https://www.sqlite.org/see/doc/trunk/www/readme.wiki

A weakness though, again, is that this is closed source...


The biggest weakness is the cost: each client would have to purchase an expensive license. The source code is provided upon purchase, but since only compiled binaries can be redistributed, it essentially destroys the ability to build the client from source.

Yeah, that was the point I was making. Although I do wonder if encrypting the whole file is necessary.

I really doubt it. I have not seen any evidence to suggest that there are irreconcilable issues with SQLCipher's page level encryption over a flat file. Codebook, Enpass, Signal, and a ton of other important clients use it just fine.

That isn't really an option for an open source project like keepass(xc)

I'm not sure why it's a concern that the whole file needs a rewrite.

Naively perhaps, I thought that was helpful with solid state storage because it means that old data is trimmed faster?

It mentions it near the entire file being in memory, but that seems a dubious concern. If the key is in memory, the entire file can be compromised either way; nothing can really stop that if you have access to the program's memory.


It's an issue if you have a really big password database, for example because you are storing large attachments in it, especially if you are also syncing the file over the network. And if your file is multiple GB, keeping it all in memory is itself a problem.

That isn't really how keepass is meant to be used, but apparently people do use it this way.


I would if there was a viable mobile phone OS I could switch to. iOS isn't any better. Linux phones, sadly, aren't very practical for daily use. AOSP based projects also have many limitations, and are still dependent on Google.

> then give parents strong monitoring and restriction tools and empower them to protect their children

I think this is the right way to solve the problem.

For example, I think websites should have a header or something that indicates a recommended age level, and what kinds of more mature content and interactions it has, so that filters can use that without having to use heuristics.
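A sketch of how a purely local filter could consume such a header (the header name and semantics are invented; no such standard exists today):

```python
# Hypothetical header carrying a site's self-declared minimum age.
AGE_HEADER = "X-Recommended-Min-Age"

def allowed(headers: dict[str, str], user_age: int,
            unrated_min_age: int = 18) -> bool:
    # The filter runs on the device, comparing the header against a
    # locally configured user age; unrated sites fall back to a local
    # policy (here: adults only). No data leaves the device.
    rating = headers.get(AGE_HEADER)
    if rating is None:
        return user_age >= unrated_min_age
    return user_age >= int(rating)
```

The heavy lifting (declaring the rating honestly) would still need legal backing, but enforcement stays local instead of requiring identity checks with third parties.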

