good article
DarrenR114 Feb 13, 2007 6:45 AM EDT

This is a good read on hardening a Linux box against educated users. I haven't found setting up self-contained environments for a chroot jail to be that much of a pain, however. I simply followed this tutorial on setting up a chroot jail for CVS: http://www.ffnn.nl/pages/articles/linux/setting-up-a-chroot-...

On using symlinks for the restricted shell (third-to-last paragraph): the humble "find" command will do this easily. From within the desired target "bin" directory, simply execute this on the command line:

$ find /bin -name '*' -exec ln -sf "{}" . \;

And like magic you'll have created symbolic links to all the files in the /bin directory. You could also use 'ls' with 'xargs' if you want to sharpen your CLI skills:

$ ls /bin/* | xargs -t -I{} ln -sf {} .

(Using xargs with ls comes in handy when you want to delete an arbitrary number of files from a directory with more than 64K entries.)

One thing not mentioned: don't bother creating symlinks for your chroot jail. A symlink that points outside the jail won't actually resolve once you've executed the chroot.

Also, since you're creating a restricted environment, you'll probably want to remove links to the commands you don't want the user to access:

$ rm -f bash

At the least, remove the links to bash, ksh, csh, sh, and any other shells you may have loaded, because those would give an escape from the restricted shell.
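Since symlinks are dead ends inside a chroot, the jail's bin has to hold real copies of the binaries plus the shared libraries they load. Here is a minimal sketch of that step; the ./jail path and the command list are my own assumptions for illustration, not from the tutorial:

```shell
#!/bin/sh
# Populate a chroot jail with real copies of a few binaries and the
# shared libraries they load, since a symlink pointing outside the
# jail is a dead end after chroot.
JAIL=./jail
mkdir -p "$JAIL/bin"

for cmd in ls cat cp; do
    src=$(command -v "$cmd") || continue
    cp "$src" "$JAIL/bin/"
    # ldd prints the path of each shared object the binary needs;
    # copy each one to the same path under the jail.
    for lib in $(ldd "$src" 2>/dev/null | grep -o '/[^ ]*'); do
        mkdir -p "$JAIL$(dirname "$lib")"
        cp "$lib" "$JAIL$lib"
    done
done

# Actually entering the jail requires root:
#   sudo chroot "$JAIL" /bin/ls /
```

This only sketches the dependency-copying idea; a real jail also needs /dev nodes, /etc fragments, and so on, as the linked tutorials cover.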
jdixon Feb 13, 2007 6:58 AM EDT

> And like magic you'll have created symbolic links to all the files in the /bin directory.

Wouldn't it be simpler just to create a symbolic link to the bin directory itself? Or am I overlooking something?
DarrenR114 Feb 13, 2007 7:00 AM EDT

> Wouldn't it be simpler just to create a symbolic link to the bin directory itself? Or am I overlooking something?

Yes, you could do that. But then you don't have the option of removing any commands you don't want the restricted user to have access to.

I also found what might be a better tutorial on setting up a chroot jail. Why wasn't this around when I had to do it the first time?? http://www.technicalarticles.org/index.php/How_to_Setup_a_Ch...
jdixon Feb 13, 2007 7:17 AM EDT

> But then you don't have the option of removing any commands you don't want the restricted user to have access to.

OK, that makes sense. If you want to be able to limit the number of commands available, the individual symbolic links are necessary, since linking to the directory gives access to all of the commands. I should have realized that, but your example fooled me into thinking you wanted to give full access. :(
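The conclusion the thread reaches, link commands individually so unwanted ones can be left out, can be taken one step further by whitelisting up front instead of linking everything and pruning shells afterward. A small sketch, where the directory name and command list are invented for illustration:

```shell
#!/bin/sh
# Build the restricted user's bin from an explicit whitelist; any
# command not listed here simply never appears in their PATH.
RBIN=./restricted-bin
mkdir -p "$RBIN"

for cmd in ls cat less grep; do
    # Skip silently if a command isn't installed on this system.
    src=$(command -v "$cmd") && ln -sf "$src" "$RBIN/$cmd"
done
```

With a restricted shell such as rbash, setting the user's PATH to this single directory (and nothing else) then bounds what they can run, and there are no shell links to remember to delete.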
DarrenR114 Feb 13, 2007 7:21 AM EDT

I wasn't clear on that step ... I'll correct it now.
DarrenR114 Feb 13, 2007 8:46 AM EDT

Notice: this post is WAY off topic to the article, but it pertains to my previous post about the use of ls and xargs.

I noted that using xargs with ls comes in handy when trying to remove more than 64K files. This is something I ran into back in 1998: a system operator at the large corporation I worked for back then was trying to clean up some old log files from a directory. These log files went back months, and there were about 75,000 files he wanted to delete. Because it was impossible to simply do 'rm *', or use a '*' in any way to build a single rm command, and because there were so many files to be deleted, I suggested using 'ls' with 'xargs'.

Now, it would have been possible to run 'rm' multiple times against shorter lists, but that would not have been very productive: it would have worked out to running 'rm' more than twenty times with different file-list arguments. It was easier to use 'xargs', a command that takes list input and executes the same command over and over for each item on the list.

To demonstrate a bit of the problem, you can easily create 90,000 empty files with this script:

#!/bin/bash
for i in `seq 1 30000`; do
    touch $i.tmp
    touch $i.txt
    touch $i.lst
done

Save this as 'tmp.sh' in an empty directory, set the script executable with "chmod +x tmp.sh", and run it:

$ ./tmp.sh

It'll take maybe five minutes to finish creating the empty files. Now, if you attempt to delete all 90,000 files at once with "rm *", you'll get an error similar to:

bash: /bin/rm: Argument list too long

At this point you could try "rm *.tmp; rm *.txt; rm *.lst" and be done with it, but you'll get the same error. You could try "rm 26*" and that would work, but how many times do you want to execute rm? So what do you do? Now that you've executed that script I gave you above, you've got 90,000 empty files to get rid of. Silly reader, why did you listen to me? 'xargs' to the rescue!

Feed the list of all those empty files to the rm command one at a time, and you're done:

$ ls -tr | xargs -t -I{} rm -f {}

BTW, the command as constructed above will also delete the shell script used to create the files in the first place. Also, I am not responsible for any mess anyone makes of their directories by following my instructions above. I've tested the above steps on my own machines, but I'm not prescient enough to anticipate all possible misreadings, so caveat emptor. And remember, Trix are for kids.
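One caveat worth noting on the ls-to-xargs pipeline: it breaks on filenames containing spaces or newlines. A hedged sketch of two alternatives that avoid both the argument-length error and the filename pitfalls, assuming GNU find; the ./scratch directory and the file count here are made up for the demo:

```shell
#!/bin/sh
# Demo: delete a large file set without ever building an over-long
# argument list. Directory name and count are invented for the demo.
DIR=./scratch
mkdir -p "$DIR"
for i in $(seq 1 500); do : > "$DIR/$i.tmp"; done

# GNU find's -delete removes each match as it is found, so no command
# line is built at all, and odd filenames can't break the pipeline:
find "$DIR" -maxdepth 1 -name '*.tmp' -delete

# Alternative: NUL-terminate the names so xargs can batch them safely:
#   find "$DIR" -maxdepth 1 -name '*.tmp' -print0 | xargs -0 rm -f
```

The -print0 / -0 pair is the usual fix when the list has to pass through xargs anyway, since NUL is the one byte that can't appear in a filename.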
dcparris Feb 13, 2007 7:30 PM EDT

Gosh, put it up as a brief tutorial! Don't hide nifty things like this in the forums! ;-)