How to set up Linux permissions for the WWW folder?

Solution 1:

After more research it seems like another (possibly better) way to answer this would be to set up the www folder like so.

  1. sudo usermod -a -G developer user1 (add each user to the developer group)
  2. sudo chgrp -R developer /var/www/site.com/ (so that developers can work in there)
  3. sudo chmod -R 2774 /var/www/site.com/ (so that only developers can create/edit files; other/world can read)
  4. sudo chgrp -R www-data /var/www/site.com/uploads (so that www-data, i.e. Apache/nginx, can create uploads)

Since git runs as whatever user calls it, as long as the user is in the "developer" group they should be able to create folders, edit PHP files, and manage the git repository.

Note: In step (3), the '2' in 2774 sets the setgid bit on the directory. This causes new files and subdirectories created within it to inherit the group of the parent directory (instead of the primary group of the creating user). Reference: http://en.wikipedia.org/wiki/Setuid#setuid_and_setgid_on_directories
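
A minimal end-to-end sketch of the above, assuming a developer named user1 and a site at /var/www/site.com (the names are placeholders; adjust to your setup):

  # Create the group if it does not already exist, and add each developer to it
  sudo groupadd developer
  sudo usermod -a -G developer user1

  # Hand the tree to the developer group; the leading 2 (setgid) makes new
  # files and directories inherit that group
  sudo chgrp -R developer /var/www/site.com/
  sudo chmod -R 2774 /var/www/site.com/

  # Let the webserver's group own the uploads directory so it can write there
  sudo chgrp -R www-data /var/www/site.com/uploads

  # Verify: the 's' in the group column shows setgid is set
  ls -ld /var/www/site.com
  # drwxrwsr-- 5 root developer 4096 ... /var/www/site.com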

Solution 2:

I'm not sure whether it's "right", but here's what I do on my server:

  • /var/www contains a folder for each website.
  • Each website has a designated owner, which is set as the owner of all files and folders in the website's directory.
  • All of the users that maintain the website are put into a group for the website.
  • This group is set as the group owner of all files and folders in the directory.
  • Any files or folders that need to be written by the webserver (i.e. PHP) have their owner changed to www-data, the user that Apache runs under.

Keep in mind that directories need the execute bit set so that they can be entered, and the read bit so that their contents can be listed.
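
A sketch of how that layout might be applied to a single site, using hypothetical names (siteowner for the designated owner, sitedevs for the maintainers' group, /var/www/example.com for the site, cache/ as a webserver-writable directory) and example modes of 775/664:

  # Designated owner and per-site group own everything
  sudo chown -R siteowner:sitedevs /var/www/example.com

  # Directories get the execute bit so they can be entered; plain files do not need it
  sudo find /var/www/example.com -type d -exec chmod 775 {} \;
  sudo find /var/www/example.com -type f -exec chmod 664 {} \;

  # Anything the webserver (PHP) must write to is handed over to www-data
  sudo chown www-data /var/www/example.com/cache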


Solution 3:

After doing more research it seems that the git/svn TOOLS are NOT a problem, since they run as whatever user is using them. (However, the git/svn daemons are a different matter!) Everything I created/cloned with git had my permissions, and the git tool lives in /usr/bin, which fits this thesis.

Git permissions solved.

User permissions seem to be solvable by adding all users that need access to the www directory to the www-data group that Apache (and nginx) runs as.

So it seems that one answer to this question goes like this:

By default /var/www is owned by root:root and no one else can add or change files there.

1) Change group owner

First we need to change the group owner of the www directory from "root" to "www-data":

sudo chgrp -R www-data /var/www
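
To confirm the change took effect, something like the following (exact output will vary):

  ls -ld /var/www
  # drwxr-xr-x 3 root www-data 4096 ... /var/www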

2) Add users to www-data

Then we need to add the current user (and anyone else who needs access) to the www-data group:

sudo usermod -a -G www-data demousername
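
Group membership is only picked up on a new login session, so after logging out and back in you can verify it with:

  groups demousername
  # demousername : demousername www-data   (other groups may also be listed)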

3) CHMOD www directory

Change the permissions so that ONLY the owner (root) and the users in the "www-data" group can rwx (read/write/execute) files and directories; no one else should even be able to access it:

sudo chmod -R 2770 /var/www
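
Afterwards the directory should look roughly like this (the 's' in the group position is the setgid bit from the leading 2 in 2770):

  ls -ld /var/www
  # drwxrws--- 3 root www-data 4096 ... /var/www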

Now all files and directories created by any user that has access (i.e. anyone in the "www-data" group) will be readable/writable by Apache, and hence by PHP, because the setgid bit makes them inherit the www-data group.

Is this correct? What about files that PHP/Ruby create at run time: can the users in the www-data group access them?
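
One way to check that last point is to create a file as the www-data user and inspect it; with the setgid bit set it should inherit the www-data group, while group write access depends on the webserver's umask (a quick test, assuming sudo is available):

  # Simulate PHP/Ruby creating a file under the webserver account
  sudo -u www-data touch /var/www/test-upload.txt

  # The group should be www-data (via setgid); with a typical 022 umask the
  # mode will be rw-r--r--, i.e. group members can read but not write it
  ls -l /var/www/test-upload.txt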


Solution 4:

Stickiness is not permissions inheritance. Stickiness on a directory means that only the owner of a file, or the directory owner, can rename or delete that file in the directory, despite the permissions saying otherwise. Thus 1777 on /tmp/.

In classical Unix, there is no permissions inheritance based on the file-system, only on the current process' umask. On *BSD, or Linux with setgid on the directory, the group field of newly created files will be set to the same as that of the parent directory. For anything more, you need to look into ACLs, with the 'default' ACL on directories, which do let you have inherited permissions.
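
For illustration, a 'default' ACL on a directory looks something like this (the developer group name is just an example):

  # Grant the developer group rwx on the directory itself...
  setfacl -m g:developer:rwx /var/www/site.com
  # ...and as a default entry, so newly created files and subdirectories inherit it
  setfacl -d -m g:developer:rwx /var/www/site.com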

You should start by defining:

  • what users have access to the system
  • what your threat model is

For instance, if you're doing web hosting with multiple customers and you don't want them seeing each other's files, then you might use a common group "webcusts" for all those users and a directory mode of 0705. Then the webserver process (which is not in "webcusts") falls under the Other permissions and is allowed in; customers can't see each other's files, and each user can still manage their own. However, this does mean that the moment you allow CGI or PHP you have to make sure those processes run as the specific user (good practice anyway, for multiple-users-on-one-host, for accountability). Otherwise, customers could mess with each other's files by having a CGI do so.
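
In commands, that scheme would look roughly like this for one customer (names are illustrative):

  # Every hosting customer goes into the common group
  sudo usermod -a -G webcusts customer1

  # The customer owns their directory; group webcusts gets nothing (the middle 0),
  # while the webserver, matched by Other, gets read/execute (the trailing 5)
  sudo chown customer1:webcusts /var/www/customer1
  sudo chmod 0705 /var/www/customer1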

However, if the run-time user for a website is the same as the owner of the website, then you do have issues with not being able to protect content from abusers in the case of a security hole in a script. This is where dedicated hosts win: you can have a run-time user distinct from the static-content owner and not have to worry so much about interaction with other users.


Solution 5:

I do believe that the best way to do this is to use POSIX ACLs. They are comfortable to work with and offer all the functionality you need.

http://en.wikipedia.org/wiki/Access_control_list#Filesystem_ACLs
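
As a sketch of what that could look like for the setup discussed above (the group, user, and path names are assumptions):

  # Developers get full access, the webserver user gets read access,
  # both on existing content and, via -d (default), on anything created later
  sudo setfacl -R -m g:developer:rwx,u:www-data:rx /var/www/site.com
  sudo setfacl -R -d -m g:developer:rwx,u:www-data:rx /var/www/site.com

  # Inspect the result
  getfacl /var/www/site.com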