First of all, the whole point of the "-f" option is to disable confirmation, which really means "I know exactly what I'm doing". The easiest fix is to stop using that option all the damned time.
When you have strong permissions (e.g. running as "root"), you should never use patterns in destructive commands, period.
At best, you should perform a nondestructive pattern command such as a "find" and generate a precise list of target files that can be audited. For example, here is one way to produce a script of commands that deletes an exact list of matching files:
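A minimal sketch of that approach; the temp directory and the '*.log' pattern here are placeholders for whatever you actually target:

```shell
# Sketch: generate an auditable deletion script instead of deleting by glob.
d=$(mktemp -d)
touch "$d/a.log" "$d/b.log" "$d/keep.txt"

# Emit one rm command per match. The quoting is naive (it breaks on names
# containing '"' or newlines) -- good enough for an audit sketch.
find "$d" -maxdepth 1 -type f -name '*.log' |
  sed 's/.*/rm -- "&"/' > "$d/delete.sh"

cat "$d/delete.sh"   # audit the exact list first...
sh "$d/delete.sh"    # ...then execute it
```

The point is the pause in the middle: the list of victims exists as a file you can read (or diff, or mail to a colleague) before anything is destroyed.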
It's really more effective to have a very regular backup (e.g. ".snapshot" directories are really nice), because you can't control all the ways a file may be deleted.
Just because you protect one "rm" command doesn't mean there isn't another. Someone might have used unlink() in a Perl script or a C program. Maybe "mv" was used to write one file over another, or "cat >! filename", or a dozen other things.
In the end, if a file needs to be safe then it needs a backup (and the sooner it can be restored, the better). And then given a good backup the file still needs an appropriate Unix group, owner, file access control list, etc. to minimize the chance that you'll ever need the backup.
The problem with using /tmp is that you may not realize something critical has been deleted until you reboot. Using an explicit trash or backups folder is safer.
An amusing and helpful trick that I learned was to keep a file named "-i" in the directories that you want to protect. Glob-style pattern matching picks it up, and rm interprets it as the "-i" flag. It is of course not quite foolproof, since it can be subverted, but it has saved the day on occasion, particularly for a friend of mine who, for totally incomprehensible reasons, would name his files using only "*"s and "."s and then try to delete one of them, with predictable and undesired results.
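The trick in miniature (all paths here are illustrative):

```shell
# Plant the '-i' tripwire and watch it catch a bare glob.
d=$(mktemp -d)
cd "$d"
touch ./-i important.txt

# The glob expands '-i' first, so rm sees its own -i flag and prompts;
# answering 'n' leaves important.txt alone.
echo n | rm * || true
```

As noted above it is not foolproof: -f overrides -i, and a prefixed glob like ./* sidesteps it entirely.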
Well, I have done stupid things myself, for example typing "rm -rf / some_dir" instead of "rm -rf /some_dir". I only noticed because it was taking a wee bit too long. It is always good to do an ls with the intended pattern first, to check which files and directories actually match before invoking rm with the same pattern.
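The habit, sketched with made-up directory names; ls -d is the useful variant here, since it lists matched directories themselves rather than their contents:

```shell
# Preview what a glob matches before pointing rm at it.
d=$(mktemp -d)
mkdir "$d/some_dir" "$d/some_dir_old"
touch "$d/some_dir/f"

ls -d "$d"/some_dir*   # shows BOTH some_dir and some_dir_old -- too broad!
rm -r "$d"/some_dir    # so narrow the command to the exact target
```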
In the other thread someone mentions using a file named -i. A better approach is to use a file named -X, which is an invalid flag for virtually every file-oriented command. They'll bail out complaining an invalid option has been supplied.
One company I contracted for had something clever going on. Not only did they litter -X files everywhere, but attempting to remove one (rm -- -X) would trigger an access violation of some kind and kill your session, preventing a recursive rm from continuing.
People alias -i and then forever supply -f. That doesn't do any good at all. The real answer is to be more careful; it eventually becomes habitual. In about 15 years I have lost data to rm twice: once when I mistakenly removed the wrong folder, and once when I thought I had a copy of the data.
Because of the inherent dangers in -f, I rarely use it...with one major exception. Whenever I am trying to delete a directory with a git repository in it, the fact that a lot of the things in the .git directory are write-protected means that I have to either punch Y for what is likely dozens of files, or use -f (or some other incredibly ridiculous and equally dangerous thing like "yes | rm").
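One -f-free alternative for write-protected trees like .git is to lift the write-protection first and then use a plain recursive rm. A sketch (the repo layout here is simulated, not a real git checkout):

```shell
# Make the tree writable in one pass, then rm -r has nothing to prompt about.
d=$(mktemp -d)
mkdir -p "$d/repo/.git/objects"
touch "$d/repo/.git/objects/pack1"
chmod a-w "$d/repo/.git/objects/pack1"   # simulate git's read-only objects

chmod -R u+w "$d/repo"
rm -r "$d/repo"
```

Two commands instead of one, but the destructive step stays in its cautious default mode.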
One: don't do anything as root. Root is the system's account, not your user account. If you need to run a service or application, make a new user for that! I've never needed root for anything other than system administration tasks, like apt-get or adding a user. Also, don't run multiple applications as the same user. If you have a web server, a blog, and a forum, you need three users. The web servers can talk to the backend servers via UNIX sockets or TCP.
Two: don't pass -f. Do you even know what -f does, or are you just cargo-culting it? If you need -f, rm will tell you. Don't use it until then.
Technically this doesn't prevent rm -rf /* itself, but it still goes a long way toward preventing a disaster: use a snapshotting filesystem, like NILFS2 http://en.wikipedia.org/wiki/NILFS
Some solutions here center on avoiding issuing rm -rf /* interactively... that's not enough! A broken script or unexpected variable expansion can wreak just as much havoc.
For example rm -rf $SOMEDIR/* :
- if $SOMEDIR is empty, or
- (if you suffer from bash) if $SOMEDIR contains a trailing space, it will be expanded into separate words: SOMEDIR='foo '; rm -rf $SOMEDIR/* => rm -rf foo /* (which means: remove ./foo, then remove /*)
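A defensive pattern that covers both failure modes above, sketched with SOMEDIR as a stand-in for whatever variable your script uses:

```shell
# Quote the variable and use ${VAR:?} so an empty/unset variable aborts
# the command instead of silently expanding to nothing (or to /).
d=$(mktemp -d)
SOMEDIR="$d/sub"
mkdir -p "$SOMEDIR"; touch "$SOMEDIR/f"

# Quoting prevents 'foo ' from splitting into two words, and ${SOMEDIR:?}
# makes the shell refuse to run the command if the variable is empty/unset.
rm -rf "${SOMEDIR:?}"/*

unset SOMEDIR
( rm -rf "${SOMEDIR:?}"/* ) 2>/dev/null || true   # expansion fails; rm never runs
```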
An alias won't help if the full path to the command is specified; that is quite common in start-up scripts.
I have experienced consequences of rm -rf /* once or twice. Now I pause for a moment every time I am about to remove something and double-check the command. Sometimes even prepend `echo' for a dry run ;-)
Edit:
another nasty case of unintended deletion I had was due to a dumb Makefile rule:
$(CC) -o $(OUTFILE) $(INFILE)
for some reason $(OUTFILE) ended up empty, so the output went to $(INFILE), a C source file, effectively destroying its contents. How would I guard against that kind of data loss? A snapshotting filesystem...
How about replacing rm with something like https://github.com/andreafrancia/trash-cli ? If you only purge the trash when necessary, rather than automatically after every rm, you'd give yourself a chance to recover.
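trash-cli gives you trash-put, trash-list, trash-restore, and trash-empty in place of rm. The core idea fits in a few lines of shell; here is a dependency-free miniature of it (TRASH_DIR and the trash function are invented for the demo, not part of trash-cli):

```shell
# The move-to-trash idea in miniature: deletion becomes a reversible mv.
TRASH_DIR=$(mktemp -d)
trash() {
  mv -- "$@" "$TRASH_DIR"/   # data stays recoverable until an explicit purge
}

f=$(mktemp)
echo "precious" > "$f"
trash "$f"
ls "$TRASH_DIR"   # still here; purge with rm later, only when you're sure
```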
Type rm -rf /* in your terminal emulator, place your finger over the Enter key and feel the temptation:
"We stand upon the brink of a precipice. We peer into the abyss—we grow sick and dizzy. Our first impulse is to shrink away from the danger. Unaccountably we remain... it is but a thought, although a fearful one, and one which chills the very marrow of our bones with the fierceness of the delight of its horror. It is merely the idea of what would be our sensations during the sweeping precipitancy of a fall from such a height... for this very cause do we now the most vividly desire it."
If I'm going to be doing something major to a lot of files, I often write a script that outputs the commands to execute, so I can verify what's going to be done. Then I reexecute and pipe to bash.
It's not quite applicable to something used as off-handedly as rm, though it could be done. Something like:
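Perhaps along these lines; the gen function, paths, and '*.tmp' pattern are invented for the sketch:

```shell
# The generate/review/pipe pattern: same generator runs twice,
# first for human review, then piped into a shell for real.
d=$(mktemp -d)
touch "$d/one.tmp" "$d/two.tmp" "$d/keep.me"

gen() {
  for f in "$d"/*.tmp; do
    # naive quoting: fine for a sketch, breaks on names containing '"'
    printf 'rm -- "%s"\n' "$f"
  done
}

gen          # first pass: just print the commands and eyeball them
gen | sh     # second pass: actually run them
```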
If any files begin with '-', a bare '*' will just expand to '-filename', which rm will try to process as an option (possibly failing or producing undesired results). Using
./*
expands to
./-filename
which won't be picked up by option processing. Note: You can also do
rm -rf -- *
to prevent option processing after the '--'.
"rm -rf ./*" will gleefully delete all files and directories, without asking, even if a directory entry named "-i" exists.
The difference is in glob expansion: ./* keeps the prefix on every expanded item. As mentioned above, using any sort of path (relative or absolute) prefix when globbing will circumvent all the careful "-i" wards a superstitious sysadmin may have put in place.
Create a version of rm that detects when you try to delete the root filesystem, denies it, and makes you pass a --really-delete-filesystem-root flag to do so.
It does seem like this would be appropriate. All *nix systems share the desire not to accidentally rm -rf /, and it should be easy to check for inside rm. (GNU rm already does: --preserve-root has been the default for years, and deleting / requires an explicit --no-preserve-root.)
It might be the default in some distros, but there is no reason not to put it in an alias. The point is that it's already there and you don't have to modify your rm binary.
And how can you detect /* if the shell expands it?
Needing the effects of "rm -r /<something>/* " is rare. Just cd first.
I think I rarely use rm -r with an absolute path. And tab completion does something similar (list your targets) if you don't jump the gun with <enter>.
PS I'm entirely comfortable with my Alt-B as "rxvt -e 'sudo zsh'".
Always type the full path; there are various key combinations to pull it into your command line. Also, you should be using the 'find' command to list the files (which you then check) before deleting them. In short, take your time.
I'm sure many of us know the feeling of dread that creeps over you when you suddenly realize an rm command you've dispatched is taking longer to complete than one would expect based on the contents of the directory you think you're deleting...
This is a dangerous but common crutch. The reason it's dangerous is people get used to it, and then when they go to a system where it's not there, pain and anguish (or hilarity, depending on your point of view) ensue.
I'm more of a fan of the other comment, which was basically "don't use -f then". When you're running that command as root, you should be pretty aware of what you're doing.
I don't remember if it's something I had to explicitly turn on, but zsh gives me a "sure you want to delete all files in ... [yn]?" prompt when I do any form of "rm *", even if I include -f. (In zsh that check is on by default; the RM_STAR_SILENT option turns it off.)