Session 32: Permissions and network filesystems

Permissions (Section 5.5.2)
Network filesystems (not in text)

Network filesystems

With the rise of networks of individual computers in the 1980s, it became clear that filesystems should become more distributed, so that a single filesystem could be shared across computers. You're quite familiar with the convenience of this technology here at CSB|SJU, as you can freely move between computers and still see your own files regardless of your location. For your Unix files, that is done via NFS, a particular network filesystem designed for Unix computers.


We're going to look at two such systems for Unix computers, AFS and NFS. We'll look at AFS first, even though it's the more recent innovation, since it's quite a bit simpler than NFS in concept.

In AFS, there are file servers and client computers. Each file server runs a user process called Vice, which in many ways works like a Web server, though AFS uses its own custom file-transfer protocol, called Virtue. (Today, there's a reasonable chance the designers would choose HTTP, but AFS predates the Web.) Also, incorporated into each client computer's OS is a module called Venus.

  +-----------+      +-----------+
  |           |    --+->Vice     |
  |           |   /  |           |
  + - - - - - +  /   + - - - - - +
  |   (Venus)-+--    |           |
  | OS        |      | OS        |
  +-----------+      +-----------+
 client computer      file server
Of course, there are many clients. And in fact there are many file servers (but relatively few compared to the number of clients).

Venus intercepts two system calls sent to the OS: open() and close(). On an open() request, it investigates the filename to determine whether it lies in AFS space, easily identified since its root directory will be /afs.

If the filename does not lie in AFS space, then the file is a file on the local hard drive, and Venus simply passes the system call on to the regular open() system call handler to handle as normal. But if it lies in AFS space, then Venus has some work to do.
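This dispatch step is simple enough to sketch directly. Here is a minimal illustration (not real AFS code; the function names are made up for this example) of how Venus might decide, from the path alone, whether an open() falls in AFS space:

```python
# Hypothetical sketch of Venus's open() dispatch: a path is in AFS space
# exactly when its first component is /afs.

def is_afs_path(filename):
    """True when an absolute path lies under the AFS root directory /afs."""
    return filename == "/afs" or filename.startswith("/afs/")

def venus_open(filename):
    if is_afs_path(filename):
        return "fetch from Vice, then open the cached copy"
    return "pass through to the regular open() handler"

print(venus_open("/afs/cs.example.edu/notes.txt"))  # handled by Venus
print(venus_open("/home/user/notes.txt"))           # handled locally
```

Note that the check is purely textual: no network traffic is needed just to decide which handler gets the call.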

The directory below /afs names the domain in which the file lies. Venus will contact a file server for that domain and request the file, and Vice on that computer should respond with the file. (Vice at this point is behaving identically to a Web server, except that, in practice, there will be some authentication process, since the file will have some protection associated with it.) Venus downloads the file under some arbitrary name into the local drive's /cache directory. It then passes this cached file on to the OS's open() system call handler.

Any subsequent work with the file actually accesses the cached copy, not the master copy located at the file server. The file server doesn't get involved again until finally the user on the client computer decides to close() the file. At this time, Venus will determine whether the file has been changed at all. If not, it has no work to do. But if so, it sends the file back to Vice, for Vice to save on its local drive as the new master copy.
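Putting the two halves together, the whole-file semantics can be sketched as follows. This is a toy model, with made-up names (VenusClient, a dict standing in for Vice), using a checksum taken at open() time to decide whether a write-back is needed at close():

```python
# Toy model of AFS whole-file caching: download on open(), work locally,
# and write back on close() only if the contents changed.
import hashlib

class VenusClient:
    def __init__(self, vice):
        self.vice = vice          # stand-in for the server: filename -> contents
        self.open_files = {}      # filename -> (cached data, checksum at open)

    def open(self, name):
        data = self.vice[name]                        # whole-file download
        self.open_files[name] = (bytearray(data), hashlib.md5(data).digest())
        return self.open_files[name][0]               # all later I/O is local

    def close(self, name):
        data, old_sum = self.open_files.pop(name)
        if hashlib.md5(bytes(data)).digest() != old_sum:
            self.vice[name] = bytes(data)             # changed: new master copy

vice = {"/afs/x/report.txt": b"draft"}
client = VenusClient(vice)
buf = client.open("/afs/x/report.txt")
buf[:] = b"final"                     # edits touch only the cached copy
client.close("/afs/x/report.txt")     # changed file goes back to the server
print(vice["/afs/x/report.txt"])      # b'final'
```

Real Venus tracks modification differently, but the key point survives: the server sees exactly two events per file use, the fetch and (only if needed) the store.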

In practice, it gets more complicated. As presented above, there would be pretty heavy traffic to and from Vice: Vice has to get involved every time a computer opens a file. In fact, Venus keeps a sizable cache across open() calls. When it finds that a file in AFS space is already cached, it simply translates the open() to refer to the cached version. Vice only enters the mix when the file isn't cached, which is relatively rare.

This raises a problem if somebody opens a file on computer A, changes it on B, and then opens it on A again: the second time you open it on A, you would still get the old version. AFS solves this problem by having Vice track what's in each client's cache. Whenever B sends the file back, Vice sends a message to each client caching the file, telling it to remove that file from its cache. This way, when A opens the file the second time, Venus won't find it in the cache, and so it will download the updated master copy from Vice.
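The server-side bookkeeping can be sketched like this. Again the names (ViceServer, Client, the method names) are invented for illustration; the point is the shape of the protocol, not the real implementation:

```python
# Toy model of AFS invalidation: the server remembers which clients cache
# each file, and a write-back triggers an invalidation message to the rest.

class ViceServer:
    def __init__(self):
        self.files = {}
        self.cachers = {}   # filename -> set of clients caching it

    def fetch(self, client, name):
        self.cachers.setdefault(name, set()).add(client)
        return self.files[name]

    def store(self, writer, name, data):
        self.files[name] = data
        for client in self.cachers.get(name, set()) - {writer}:
            client.invalidate(name)           # tell others to drop their copy
        self.cachers[name] = {writer}

class Client:
    def __init__(self, server):
        self.server, self.cache = server, {}
    def open(self, name):
        if name not in self.cache:            # only a cache miss contacts Vice
            self.cache[name] = self.server.fetch(self, name)
        return self.cache[name]
    def close(self, name, data):
        self.cache[name] = data
        self.server.store(self, name, data)
    def invalidate(self, name):
        self.cache.pop(name, None)

vice = ViceServer(); vice.files["/afs/x/f"] = "v1"
a, b = Client(vice), Client(vice)
a.open("/afs/x/f")            # A caches v1
b.open("/afs/x/f")
b.close("/afs/x/f", "v2")     # B's write-back invalidates A's cached copy
print(a.open("/afs/x/f"))     # A misses, so it re-fetches: prints v2
```

In AFS terminology this promise from the server ("I'll tell you if it changes") is what keeps cached opens cheap.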

Of course, this means that Vice has the additional problem of keeping track of who might have each file in its cache. But this additional complexity and inefficiency is easily offset by the reduction in confusion.


NFS works differently in two respects. The biggest change is in how a client gets the master copy. It maintains the client/server aspect of AFS, but it does its communication on read() and write() system calls, instead of on open() and close(). Communication with the server occurs in 8KB blocks. So, if I'm reading through a 20KB file, my client would get one block from the server, then the next block, and finally the last (partial) block. (With AFS, the entire file would be copied from the server when the file is open()ed, and no further communication with the server would occur.) Writes are also buffered into 8KB blocks as they are sent to the server.
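The block arithmetic for that 20KB example is worth seeing concretely. A quick sketch (the function name is made up) of the byte ranges an NFS-style client would request:

```python
# Block-granularity reads: a file is fetched in fixed-size requests, with the
# last request possibly partial.
BLOCK = 8 * 1024   # NFS's traditional 8KB transfer size

def blocks_needed(file_size, block=BLOCK):
    """Byte ranges the client would request from the server, one per read."""
    return [(off, min(off + block, file_size))
            for off in range(0, file_size, block)]

print(blocks_needed(20 * 1024))
# [(0, 8192), (8192, 16384), (16384, 20480)]  -- three requests for 20KB
```

Contrast this with AFS, where the same 20KB file would be one transfer; the difference matters once files are large and only small pieces of them are read.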

Of course, for performance reasons, NFS caches blocks also. But the same problem arises that we saw in AFS when two computers work with the same file virtually simultaneously. The NFS designers didn't want the server to have to know anything about the clients, so the AFS solution wasn't acceptable to them. What they did instead was give each cached block a very short lifespan (3 seconds for data blocks and 30 seconds for directory blocks). Also, whenever an NFS client requests a block that is cached but expired, it tells the server the age of its cached copy, so that the server can often respond with a much shorter message saying the cached block is still correct.
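This timer-plus-revalidation scheme can be sketched as follows. The class and method names are hypothetical, and real NFS works at a lower level, but the control flow matches the description above: fresh blocks are served locally, and an expired block triggers only a cheap "is my copy still current?" exchange unless it has actually changed:

```python
# Toy model of NFS-style timed caching with revalidation of expired blocks.
import time

DATA_TTL = 3.0    # seconds; the lecture's figure for data blocks

class Server:
    def __init__(self):
        self.blocks = {}                    # key -> (data, last-modified time)
    def read_block(self, key):
        return self.blocks[key][0]          # the expensive full transfer
    def still_current(self, key, fetched_at):
        return self.blocks[key][1] <= fetched_at   # the cheap reply

class BlockCache:
    def __init__(self, server):
        self.server = server
        self.cache = {}                     # key -> (data, time fetched)

    def read(self, key):
        entry = self.cache.get(key)
        if entry and time.time() - entry[1] < DATA_TTL:
            return entry[0]                 # fresh enough: no network at all
        if entry and self.server.still_current(key, entry[1]):
            self.cache[key] = (entry[0], time.time())   # restart the timer
            return entry[0]
        data = self.server.read_block(key)  # miss, or stale and changed
        self.cache[key] = (data, time.time())
        return data

srv = Server()
srv.blocks["f:0"] = ("block data", time.time())
cache = BlockCache(srv)
print(cache.read("f:0"))   # first read: full fetch from the server
print(cache.read("f:0"))   # second read within 3s: served from the cache
```

The cost of this design is the window it leaves open: within the lifespan, a client can read a block another client has already overwritten.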

The other respect in which NFS works differently is in how it identifies whether a file lies on the server. With NFS, a directory is mounted from the server during the boot process. On our system, if I type df (which lists all the currently mounted filesystems), I see the following.

/                  (/dev/dsk/c0t0d0s0 ):16721340 blocks  1185572 files
/proc              (/proc             ):       0 blocks     3804 files
/dev/fd            (fd                ):       0 blocks        0 files
/etc/mnttab        (mnttab            ):       0 blocks        0 files
/var               (/dev/dsk/c0t0d0s5 ): 8140062 blocks   518074 files
/var/run           (swap              ): 1016736 blocks    25950 files
/tmp               (swap              ): 1016736 blocks    25950 files
/home              ( blocks 12484618 files
/usr/local         ( blocks  4004808 files
/usr/people        ( blocks  4497816 files
/var/mail          ( blocks 27030479 files
The NFS-mounted filesystems are the last four rows. For example, the /usr/local directory on the local filesystem has been aliased to the /apps directory on another computer (a.k.a. maple). If I open the file /usr/local/bin/math, the OS will detect that this lies in the /usr/local NFS-mounted directory, and so it will translate this to the /apps/bin/math file located on that NFS server.
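The path translation itself is a longest-prefix substitution, which we can sketch directly. The server name below is a placeholder (the notes don't give the real hostname), and the mount table is reduced to one entry:

```python
# Sketch of NFS mount-point translation: a local path prefix maps to
# (server host, remote path). Hostname here is a made-up placeholder.

MOUNTS = {"/usr/local": ("nfs-server", "/apps")}

def translate(path):
    for prefix, (host, remote) in MOUNTS.items():
        if path == prefix or path.startswith(prefix + "/"):
            return host, remote + path[len(prefix):]
    return None, path          # not under any NFS mount: handled locally

print(translate("/usr/local/bin/math"))   # ('nfs-server', '/apps/bin/math')
print(translate("/home/me/notes.txt"))    # (None, '/home/me/notes.txt')
```

Unlike AFS's fixed /afs root, nothing about the path /usr/local/bin/math itself reveals that it's remote; the answer depends entirely on what was mounted at boot time.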


The second difference between NFS and AFS (how files are identified) is easier to interpret. The AFS system makes it trivial to recognize an AFS file, based only on the filename. NFS is more difficult, but it allows more flexibility in how the directory hierarchy is laid out.

The first difference (caching blocks instead of entire files) is more significant. AFS requires less traffic in general, though NFS's technique wins out when the pattern of file usage tends toward accessing short portions of large files. This latter pattern is most typical of databases; in AFS, sharing a database across a network isn't really reasonable. In practice, this isn't such a problem, as commercial database servers work off a single on-disk copy and define their own protocol for communicating with clients, so you wouldn't want a database in AFS space anyway.

In general, AFS is more efficient in terms of keeping network traffic down, and its actual speed is quite good. People use NFS more often, however, largely because NFS came first (though I'm sure it didn't hurt that NFS had all the marketing power of Sun behind it, whereas AFS was initially built at a university, Carnegie Mellon).