You need to protect the server from the prying eyes of both local and remote users. The simplest strategy is to create a "www" user for the Web administrator/webmaster and a "www" group for all the users on your system who need to author HTML documents. On Unix systems, edit the /etc/passwd file to make the server root the home directory for the www user, and edit /etc/group to add all authors to the www group.
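On systems that provide them, the standard account-management commands accomplish the same thing as editing the files by hand. A minimal sketch (the server-root path and the author account "fred" are assumptions):

groupadd www
useradd -d /usr/local/etc/httpd -g www www
usermod -G www fred   # add each HTML author; on some systems -G replaces the existing group list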
The server root should be set up so that only the www user can write to the configuration and log directories and to their contents. It's up to you whether you want these directories to also be readable by the www group. They should _not_ be world readable. The cgi-bin directory and its contents should be world executable and readable, but not writable (if you trust them, you could give local web authors write permission for this directory). Following are the permissions for a sample server root:
drwxr-xr-x   5 www     www       1024 Aug  8 00:01 cgi-bin/
drwxr-x---   2 www     www       1024 Jun 11 17:21 conf/
-rwx------   1 www     www     109674 May  8 23:58 httpd
drwxrwxr-x   2 www     www       1024 Aug  8 00:01 htdocs/
drwxrwxr-x   2 www     www       1024 Jun  3 21:15 icons/
drwxr-x---   2 www     www       1024 May  4 22:23 logs/
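A sketch of the commands that would produce permissions like these (the server-root path is an assumption; adjust to your installation):

cd /usr/local/etc/httpd   # assumed server root
chown -R www:www .
chmod 700 httpd           # only the www user may run or replace the binary
chmod 750 conf logs       # writable by www only, readable by the www group
chmod 755 cgi-bin
chmod 775 htdocs icons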
The document root has different requirements. All files that you want to serve on the Internet must be readable by the server while it is running under the permissions of user "nobody". You'll also usually want local Web authors to be able to add files to the document root freely. Therefore you should make the document root directory and its subdirectories owned by user and group "www", world readable, and group writable:
drwxrwxr-x   3 www     www       1024 Jul  1 03:54 contents
drwxrwxr-x  10 www     www       1024 Aug 23 19:32 examples
-rw-rw-r--   1 www     www       1488 Jun 13 23:30 index.html
-rw-rw-r--   1 lstein  www      39294 Jun 11 23:00 resource_guide.html
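To apply these permissions across an existing tree, something like the following would do (a sketch; the document-root path is an assumption):

cd /usr/local/etc/httpd/htdocs   # assumed document root
chown -R www:www .
find . -type d -exec chmod 775 {} \;   # directories: group-writable, world-readable
find . -type f -exec chmod 664 {} \;   # files: group-writable, world-readable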
Many servers allow you to restrict access to parts of the document tree to Internet browsers with certain IP addresses or to remote users who can provide a correct password (see below). However, some Web administrators may be worried about unauthorized _local_ users gaining access to restricted documents present in the document root. This is a problem when the document root is world readable.
One solution to this problem is to run the server as something other than "nobody", for example as another unprivileged user ID that belongs to the "www" group. You can now make the restricted documents group- but not world-readable (don't make them group-writable unless you want the server to be able to overwrite its documents!). The documents are now protected from prying eyes both locally and globally. Remember to set the read and execute permissions for any restricted server scripts as well.
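For example (a sketch; the file names are illustrative, and the server is assumed to now run as an unprivileged account belonging to group "www"):

chgrp www restricted.html
chmod 640 restricted.html         # group-readable, not world-readable
chgrp www restricted-script.cgi
chmod 750 restricted-script.cgi   # the group may read and execute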
The CERN server generalizes this solution by allowing the server to execute under different user and group privileges for each part of a restricted document tree. See the CERN documentation for details on how to set this up.
If your server starts up as root but runs as another user, then it's especially important that the logs directory not be writable by the user the server runs as. For example, both the Netscape FastTrack and SuiteSpot servers come with the logs directory writable by the user the server runs as (i.e. by "nobody" if you choose the default configuration values). This can make the effect of some CGI bugs much worse than they would normally be. For example, if a CGI bug enables a remote user to run arbitrary commands on the server, the remote user can also gain root access by exploiting the bug to replace a log file with a symbolic link to /etc/passwd. When the server restarts, the symlink will cause /etc/passwd to be chown'd to the server user. The remote user can then exploit the CGI bug again to add an entry to /etc/passwd. The suggested workaround is to change the ownership of the logs directory so that it's not writable by the server user, and then create empty log and pid files that are owned by the server user (the server won't start up if it can't open these files). Although this solution is less than optimal, because it allows crackers to tamper with the log files, it is much better than the default configuration. This bug may also be present in other commercial servers. (Thanks to Laura Pearlman for this information.)
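The workaround might look like this (a sketch; the logs path and file names are assumptions, and "nobody" stands for whatever user the server runs as):

chown root /usr/local/etc/httpd/logs        # directory no longer writable by the server user
chmod 755 /usr/local/etc/httpd/logs
cd /usr/local/etc/httpd/logs
touch access_log error_log httpd.pid       # empty files the server can open at startup
chown nobody access_log error_log httpd.pid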
Of course, turning off automatic directory listings doesn't prevent people from fetching files whose names they guess at. It also doesn't avoid the pitfall of an automatic text keyword search program that inadvertently adds the "hidden" file to its index. To be safe, you should remove unwanted files from your document root entirely.
The NCSA and Apache servers allow you to turn symbolic link following off completely. Another option allows symbolic links to be followed only if the owner of the link matches the owner of the link's target (i.e. you can compromise the security of a part of the document tree that you own, but not someone else's part).
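In Apache, for example, this is controlled by the Options directive (a sketch for access.conf or .htaccess):

# Leaving FollowSymLinks out of the Options line disables link following
# entirely; this variant follows a link only when the link and its target
# have the same owner:
Options SymLinksIfOwnerMatch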
Similarly, you can disable the risky "exec" form of server-side includes, while still permitting ordinary includes, with:

Options IncludesNoExec
This is not the scenario that people warn about when they talk about "running the server as root". That warning is about servers that have been configured to run their _child processes_ as root (e.g. by specifying "User root" in the server configuration file). This is a whopping security hole, because every CGI script then launches with root permissions and has access to every nook and cranny in your system.
Some people will say that it's better not to start the server as root at all, warning that we don't know what bugs may lurk in the portion of the server code that controls its behavior between the time it starts up and the time it forks a child. This is quite true, although the source code to all the public domain servers is freely available and there don't _seem_ to be any bugs in these portions of the code. Running the server as an ordinary unprivileged user may be safer. Many sites launch the server as user "nobody", "daemon" or "www". However, you should be aware of two potential problems with this approach:
Consider this scenario: the WWW server has been configured to execute any file ending with the extension ".cgi". Using your ftp daemon, a remote hacker uploads a Perl script to your ftp site and gives it the .cgi extension. He then uses his browser to request the newly-uploaded file from your Web server. Bingo! He's fooled your system into executing the commands of his choice.
You can overlap the ftp and Web server hierarchies, but be sure to limit ftp uploads to an "incoming" directory that can't be read by the "nobody" user.
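One way to set up such a directory (a sketch; it assumes the ftp daemon writes uploads as user "ftp" in group "ftp", and that the "nobody" user is not a member of that group):

mkdir /home/ftp/incoming
chown root:ftp /home/ftp/incoming
chmod 1770 /home/ftp/incoming   # group "ftp" may write; "nobody" gets no access at all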
In order to run a server in a chroot environment, you have to create a whole miniature root file system that contains everything the server needs access to. This includes special device files and shared libraries. You also need to adjust all the path names in the server's configuration files so that they are relative to the new root directory. To start the server in this environment, place a shell script around it that invokes the chroot command in this way:
chroot /path/to/new/root /server_root/httpd

Setting up the new root directory can be tricky and is beyond the scope of this document. See the author's book (above) for details. You should be aware that a chroot environment is most effective when the new root directory is as barren as possible. There shouldn't be any interpreters, shells, or configuration files (including /etc/passwd!) in the new root directory. Unfortunately this means that CGI scripts that rely on Perl or shells won't run in the chroot environment. You can add these interpreters back in, but you lose some of the benefits of chroot.
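Building the miniature file system might begin something like this (a sketch only; the paths, device numbers and library names are assumptions that vary from system to system):

mkdir -p /www/chroot/dev /www/chroot/lib /www/chroot/server_root
mknod /www/chroot/dev/null c 1 3       # recreate the device files the server needs
cp /server_root/httpd /www/chroot/server_root/
ldd /server_root/httpd                 # list the shared libraries the binary uses
cp /lib/libc.so.1 /www/chroot/lib/     # copy each one into the jail
chroot /www/chroot /server_root/httpd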
Also be aware that chroot only protects files; it's not a panacea. It doesn't prevent hackers from breaking into your system in other ways, such as grabbing system maps from the NIS network information service, or playing games with NFS.
   other hosts
              \
       server <-----> FIREWALL <------> OUTSIDE
              /
   other hosts
However, if you want to make the server available to the rest of the world, you'll need to place it somewhere outside the firewall. From the standpoint of security of your organization as a whole, the safest place to put it is completely outside the local area network:
   other hosts
              \
   other hosts <----> FIREWALL <---> server <----> OUTSIDE
              /
   other hosts
This is called a "sacrificial lamb" configuration. The server is at risk of being broken into, but at least a break-in there doesn't breach the security of the inner network.
It's _not_ a good idea to run the WWW server on the firewall machine: any bug in the server would then compromise the security of the entire organization.
There are a number of variations on this basic setup, including architectures that use paired "inner" and "outer" servers to give the world access to public information while giving the internal network access to private documents. See the author's book for the gory details.
The TIS Firewall Toolkit, which includes an HTTP proxy (http-gw), can be found at:

ftp://ftp.tis.com/pub/firewalls/toolkit/
The CERN server can also be configured to act as a proxy. I feel much less comfortable recommending it, however, because it is a large and complex piece of software that may contain unknown security holes.
More information about firewalls is available in the books Firewalls and Internet Security by William Cheswick and Steven Bellovin, and Building Internet Firewalls by D. Brent Chapman and Elizabeth D. Zwicky.
Tripwire, a tool for detecting unauthorized modifications to system files, is available at:

ftp://ftp.cerias.purdue.edu/pub/tools/unix/ids/tripwire/
You should also check your access and error log files periodically for suspicious activity. Look for accesses involving system commands such as "rm", "login", "/bin/sh" and "perl", or for extremely long lines in URL requests (the former indicate an attempt to trick a CGI script into invoking a system command; the latter, an attempt to overrun a program's input buffer). Also look for repeated unsuccessful attempts to access a password-protected document; these could be symptomatic of someone trying to guess a password.
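A few commands of this type can serve as a starting point (a sketch; the log path and patterns are assumptions, and the log is assumed to be in the common log format):

grep -iE '/bin/sh|perl|rm |login' /usr/local/etc/httpd/logs/access_log
awk 'length($0) > 500' /usr/local/etc/httpd/logs/access_log   # suspiciously long request lines
grep ' 401 ' /usr/local/etc/httpd/logs/access_log             # failed authorization attempts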
Details of the exploit have not been published, but you can find a longer description in the original article at http://www.sddt.com/files/library/98/06/25/tbc.html.
Netscape is reportedly working on a fix. Please visit the Netscape site for possible patches. If you use server-side includes, you are urged to upgrade as soon as a patch becomes available.
O'Reilly's WebSite and WebSite Professional servers are also vulnerable to this bug. Microsoft IIS servers do not appear to be.
This bug is fixed in Enterprise Server 3.5.1 and higher (see this technical note). However, it is unclear whether a patch is available for the FastTrack server, which was still at version 3.01 as of June 30, 1998.
The same bug is present in the Microsoft IIS server. O'Reilly's WebSite Professional is reportedly free of the problem.
Unfortunately this technique allows anyone on the Internet to execute an arbitrary set of Perl commands on your server by invoking such scripts as /cgi-bin/perl.exe?&-e+unlink+%3C*%3E (when run, this URL removes every file in the server's current directory), so it is not a good idea. A current Netscape technical note suggests encapsulating your Perl scripts in a .bat file. However, because of a related problem with batch scripts, this is no safer.
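Decoded (%3C and %3E are the URL escapes for "<" and ">", and "+" stands for a space), the query string above amounts to running:

perl -e "unlink <*>"

which deletes every file in the server's current directory.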
Because the EMWACS, Purveyor and WebSite NT servers all use the File Manager extension associations, you can execute perl scripts on these servers without placing perl.exe into cgi-bin. They are safe from this bug.
Ian Redfern (redferni@logica.com) has discovered that a similar hole exists in the processing of CGI scripts implemented as .bat files. The following is excerpted from his e-mail describing the problem:
Consider test.bat:

@echo off
echo Content-type: text/plain
echo
echo Hello World!

If this is called as "/cgi-bin/test.bat?&dir" you get the output of the CGI program, followed by a directory listing. It appears that the server is doing

system("test.bat &dir")

which the command interpreter is handling (not unreasonably) in the same way /bin/sh would - execute it, and if things go OK, execute the dir command.
Details of the exploit have not been published, but you can find a longer description in the original article at http://www.sddt.com/files/library/98/06/25/tbc.html.
O'Reilly has announced that a fix will be available in WebSite and WebSite Professional version 2.3. If you use server-side includes, you should strongly consider upgrading.
Windows-based Netscape servers are also vulnerable to this bug. Microsoft IIS servers do not appear to be.
This hole has been fixed in version 1.1c. You should upgrade to this version using the patch provided at the WebSite home page.
Detailed information on the actions necessary to close the WebSite .bat file security hole can be found at this page provided by WebSite's developer.
A patch is available on Microsoft's security pages. Newer versions of IIS are free of the problem.
The same bug is present in the Netscape Enterprise and Commerce servers. Recent versions of WebSite Professional are reportedly free of the problem.
Microsoft has released a patch for this bug, available at http://www.microsoft.com/infoserv/. In addition, all copies of the IIS server downloaded after 3/5/96 should be free of this bug. If you use this server, you should check the creation date of your server binary and upgrade it if necessary.
Versions of Microsoft IIS through 3.0 are vulnerable to a bug that allows remote users to download and read the contents of executable scripts, potentially learning sensitive information about the local network configuration, the names of databases, or the algorithm used to calculate vendor discounts. This bug appears whenever a script-mapped file is placed in a directory that has both execute and read permissions. Remote users can download the script itself simply by appending additional periods to the end of its URL. To avoid this bug, turn off read permissions in any directory that contains scripts. Alternatively, download the patch provided by Microsoft at:
ftp://ftp.microsoft.com/bussys/winnt/winnt-public/fixes/usa/nt40/hotfixes-postsp2/iis-fix
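To illustrate the attack (the host and script name here are hypothetical): if a script-mapped file lives at

http://www.example.com/scripts/order.asp

then a request for

http://www.example.com/scripts/order.asp..

with periods appended may return the source text of order.asp rather than its output.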
The exact length of the URL required to cause the crash varies from server to server and depends on such issues as memory usage. The magic length is generally around 8192 characters, suggesting that the problem is a memory buffer overflow. In the past such problems have often been exploited by knowledgeable hackers to execute remote commands on the server, so this bug is potentially more than an annoyance.
A patch is available from Microsoft at ftp://ftp.microsoft.com/bussys/winnt/winnt-public/fixes/usa/nt40/hotfixes-postSP3/iis-fix
The Windows NT version of JavaWebServer is vulnerable to a bug that allows the source code for Java servlets to be downloaded by remote users. This bug is similar to ones identified for Windows NT versions of O'Reilly WebSite Professional and Netscape Enterprise Server. By appending certain characters to the end of a servlet's URL, a remote user can fool the server into sending him the compiled servlet, which can then be decompiled by a product such as Mocha. Since servlets may contain proprietary code, trade secrets or even database access passwords, this is a significant problem.
Sun has not yet announced a fix for this problem. Check their Web site for details. More information can be found at http://www.sddt.com/files/library/98/06/29/tbd.html
According to Jeff Forristal, who discovered the bug, MetaWeb is vulnerable to the "double-dot" problem that plagued early versions of the Microsoft IIS server. By including ".." pairs in the URL path, the server can be tricked into giving access to directories outside the Web document root, including documents in the Windows system directory. This allows password files and other confidential information to be retrieved. Worse, a variant of this attack also gives remote users the ability to run any executable binary that happens to be installed on the server machine.
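For illustration only (the host and target file are hypothetical), a request of this form walks out of the document root and into the Windows system directory:

http://www.example.com/../../winnt/win.ini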
MetaInfo has not yet made an upgrade or patch available. You are urged to upgrade when a fix does become available. A good short-term solution is to disable remote administration via the Web interface.
More information about the MetaInfo bug may be posted at Jeff Forristal's site.
Recently it has come to light that example C code (cgi_src/util.c), long distributed with the NCSA httpd as an example of how to write safe CGI scripts, omitted the newline character from the list of characters that shouldn't be passed to shells. This omission introduces a serious bug into any CGI scripts built on top of this example code: a remote user can exploit the bug to force the CGI script to execute an arbitrary Unix command. This is another example of the dangers of executing shell commands from CGI scripts.
In addition, the NCSA server source code tree itself contains the same bug (versions 1.5a and earlier). The faulty subroutine is identical, but in this case is found in the file src/util.c as opposed to cgi_src/util.c. After looking through the server source code, I haven't found a place where a user-provided string is passed to a shell after being processed by this subroutine, so I don't think this represents an actual security hole. However, it's best to apply the patch shown below to be safe.
The Apache server, versions 1.02 and earlier, also contains this hole in both its cgi_src/ and src/ subdirectories. It is quite possible that the same problem is present in other derivatives of the NCSA source code.
The patch to fix the holes in the two util.c files is simple. "phf" and any CGI scripts that use this library should be recompiled after applying this patch (the GNU patch program can be found at ftp://prep.ai.mit.edu/pub/gnu/patch-2.1.tar.gz). You should apply this patch twice, once while inside the cgi_src/ subdirectory, and once within the src/ directory itself:
tulip% cd ~www/ncsa/cgi_src
tulip% patch -f < ../util.patch
tulip% cd ../src
tulip% patch -f < ../util.patch

---------------------------------- cut here ----------------------------------
*** ./util.c.old        Tue Nov 14 11:38:40 1995
--- ./util.c    Thu Feb 22 20:37:07 1996
***************
*** 139,145 ****
      l=strlen(cmd);

      for(x=0;cmd[x];x++) {
!         if(ind("&;`'\"|*?~<>^()[]{}$\\",cmd[x]) != -1){
              for(y=l+1;y>x;y--)
                  cmd[y] = cmd[y-1];
              l++; /* length has been increased */
--- 139,145 ----
      l=strlen(cmd);

      for(x=0;cmd[x];x++) {
!         if(ind("&;`'\"|*?~<>^()[]{}$\\\n",cmd[x]) != -1){
              for(y=l+1;y>x;y--)
                  cmd[y] = cmd[y-1];
              l++; /* length has been increased */
---------------------------------- cut here ----------------------------------
Versions prior to 1.3.20 contain server programming errors that present moderate to serious security risks. Under the right circumstances, authentication headers would not be provided to the client. In the default configuration, two modules would present directory listings instead of the default index.html file if the requested URL was artificially long and contained many slashes.
NetWare Paths - 31 January 2001
A bug in the NetWare-specific functions caused directives containing path settings to be interpreted incorrectly.
mod_rewrite Globbing - 14 Oct 2000
If the result of a mod_rewrite filename rewrite included references such as $0, the mod_rewrite module could cause server configuration information to be presented to a browser.
Other
Versions of Apache httpd prior to 1.2.5 contain several programming errors that present moderate security risks. Users who have local access to the server machine (e.g. Web authors) can carefully craft HTML files which, when fetched, give them the ability to execute Unix commands with the Web server user's permissions. Since local users usually already have as much, if not more, access to the system as the Web server, this does not present a major risk, but it may be of concern to ISPs who provide Web hosting services to untrusted authors. Apache version 1.2.5 is free of these bugs; upgrade at your earliest convenience. If you are using a 1.3 beta version of Apache, you may apply a patch located at the Apache site, or await the release of 1.3b4.
Apache servers prior to 1.1.3 contain two security holes that are of far more concern. The first affects servers compiled with the "mod_cookies" module. Servers compiled with this module contain a vulnerability that allows remote users to send the server extremely long cookies and overrun the program stack, potentially allowing arbitrary commands to be executed. Because this gives remote users access to the server host, it is a far greater vulnerability than the holes discussed above, which can only be exploited by local users.
The second problem with 1.1.1 affects automatic directory listings. Ordinarily, a remote user cannot obtain a directory listing if the directory contains a welcome page such as "index.html". A bug causes this check to fail under certain circumstances, allowing the remote user to see the contents of the directory even when the welcome page is present. This hole is less serious than the first one, but it is still a potential information leak.
More information and current Apache binaries can be found at:
http://www.apache.org/
There have also been two well-publicized recent episodes in which the system used by the Netscape Secure Commerce Server to encrypt sensitive communications was cracked. In the first episode, a single message encrypted with Netscape's less secure 40-bit encryption key was cracked by brute force using a network of workstations. The 128-bit key used for communications within the U.S. and Canada is considered immune from this type of attack.
In the second episode, it was found that the random number generator used within the server to generate encryption keys was relatively predictable, allowing a cracking program to quickly guess at the correct key. This hole has been closed in the recent releases of the software, and you should upgrade to the current version if you rely on encryption for secure communications. Both the server and the browser need to be upgraded in order to completely close this hole. See http://home.netscape.com/newsref/std/random_seed_security.html for details.
According to Richard L. Gray (rlgray@us.ibm.com) of IBM, all known problems have been fixed in versions 4.2.1.3 and higher. Lotus Domino Go also now runs on Windows 95, Windows NT, OS/390, HPUX and Solaris systems.
As far as the security of the WebSTAR server itself goes, there is reason to think that WebSTAR is more secure than its Unix and Windows counterparts. Because the Macintosh does not have a command shell and does not allow remote logins, it is reasonable to expect the Mac to be inherently more secure than the other platforms. This expectation has been borne out so far: no specific security problems are known in either WebSTAR or its shareware ancestor MacHTTP.
In early 1996 a consortium of Macintosh Internet software development companies, including StarNine, the developer of WebSTAR, posted a $10,000 reward for anyone who could read a password-protected Web page on a Macintosh running WebSTAR software. As described in an article about the challenge in Tidbits#317/04-Mar-96, after 45 days no one had stepped forward to claim the prize.
Although one cannot easily "break in" to a Macintosh host in the conventional way, potential security holes do exist:
(This information provided by Paul DuBois <dubois@primate.wisc.edu>).
Most servers log every access. The log usually includes the IP address and/or host name, the time of the download, the user's name (if known through user authentication or obtained via the identd protocol), the URL requested (including the values of any variables from a form submitted with the GET method), the status of the request, and the size of the data transmitted. Some browsers also transmit the name of the client software the reader is using, the URL the reader came from, and the user's e-mail address; servers can log this information as well, or make it available to CGI scripts. Since most WWW clients are run from single-user machines, a download can usually be attributed to an individual, and revealing any of these items could be potentially damaging to a reader.
For example, XYZ.com downloading financial reports on ABC.com could signal a corporate takeover. Accesses to an internal job posting reveal who might be interested in changing jobs. The time at which a cartoon was downloaded may reveal that a reader is misusing company resources. A referral log entry might contain something like:
file://prez.xyz.com/hotlists/stocks2sellshort.html -> http://www.xyz.com/
The pattern of accesses made by an individual can reveal how they intend to use the information. And the input to searches can be particularly revealing.
Another way Web usage can be revealed locally is via browser history, hotlists, and cache. If someone has access to the reader's machine, they can check the contents of those databases. An obvious example is shared machines in an open lab or public library.
Proxy servers used for access to Web services outside an organization's firewall are in a particularly sensitive position. A proxy server will log every access to the outside Web made by every member of the organization and track both the IP number of the host making the request and the requested URL. A carelessly managed proxy server can therefore represent a significant invasion of privacy.
If you are a government site, you may be required by law to protect the privacy of your readers. For example, U.S. Federal agencies are not allowed to collect or publish many types of data about their clients.
In most U.S. states, it is illegal for libraries and video stores to sell or otherwise distribute records of the materials that patrons have checked out. While the courts have yet to apply the same legal standard to electronic information services, it is not unreasonable for users to have the same expectation of privacy on the Web. In other countries, for example Germany, the law explicitly forbids the disclosure of online access lists. If your site chooses to use the Web logs to populate your mailing lists or to resell to other businesses, make sure you clearly advertise that fact.
The easiest way to avoid collecting too much information is to use a server that allows you to tailor the output logs, so that you can throw away everything but the essentials. Another way is to regularly summarize and discard the raw logs. Since the logs of popular sites tend to grow quickly, you probably will need to do that anyway.
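For example, a nightly job along these lines summarizes the day's requests by URL and then discards the raw log (a sketch; the log path and summary location are assumptions, and the log is assumed to be in the common log format):

awk '{print $7}' /usr/local/etc/httpd/logs/access_log | sort | uniq -c | sort -rn > /var/log/www-summary.`date +%Y%m%d`
cat /dev/null > /usr/local/etc/httpd/logs/access_log

Truncating the log file in place, rather than deleting it, keeps the server's open file handle valid.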
You can protect outsiders by summarizing your logs. You can help protect insiders by:
If your site does not want to reveal certain Web accesses from your site's domain, you may need to get Web client accounts from another Internet provider that can provide anonymous access.
Lincoln D. Stein (lstein@cshl.org) and John N. Stewart (jns@digitalisland.net)
$Id: wwwsf3.html,v 1.10 2001/07/28 17:54:26 lstein Exp $