[36] in The Cryptographic File System users list
Re: remote file system
daemon@ATHENA.MIT.EDU (Rob Stampfli)
Wed Jan 28 00:13:06 1998
From owner-cfs-users@research.att.com Wed Jan 28 00:13:04 1998
Message-Id: <m0xxPgb-0000nqC@kd8wk.cmhnet.org>
Date: Wed, 28 Jan 98 00:05 EST
From: res@kd8wk.cmhnet.org (Rob Stampfli)
To: cfs-users@research.att.com
Subject: Re: remote file system
Sender: owner-cfs-users@research.att.com
Precedence: bulk
In recent email Rachel writes:
>Does cfsd work on remote file system?
>For example the crypted data is on server:/data, I've it exported for
>localhost and machine1. How to make the cfs work remotely on machine1?
Rachel asks a good question. The answer appears to be a qualified "yes".
I successfully run cfs on one personal machine, a sun w/4.1.4, with encrypted
file systems residing on an NFS mounted remote file system from a server
machine (another sun). It is an excellent use of CFS as far as I am
concerned: The encryption is all handled locally, with all data leaving
the machine fully encrypted. On the downside, the data is "double dipped",
first thru CFS over the localhost, and then thru NFS. That makes encrypted
access somewhat slow.
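For anyone wanting to try the same arrangement, here is a rough sketch of the setup described above. Hostnames, paths, and the exports line are examples only; the loopback-mount incantation follows the standard CFS installation instructions, so check the CFS README for the exact syntax on your build.

```shell
# On the server: export the directory holding the encrypted data.
#   /etc/exports:  /data  machine1(rw)

# On machine1 (the client), mount the encrypted data over plain NFS:
mount server:/data /data

# Start cfsd locally and attach /crypt via CFS's loopback NFS mount:
cfsd &
mount -o port=3049,intr localhost:/null /crypt

# Create and attach an encrypted directory that lives on the NFS mount:
cmkdir /data/secrets         # prompts for a passphrase
cattach /data/secrets work   # cleartext now appears under /crypt/work
```

All encryption happens in cfsd on machine1, so only ciphertext crosses the wire to the server.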
I encountered a few "gotchas" when trying to accomplish this: First,
there appeared to be a problem with the cfsd daemon accessing the root
directory of the encrypted file system during a cattach. I finally figured
out that cfsd, at least initially, was accessing the directory as "root",
and on NFS "root" generally has restricted permissions on an NFS-mounted
FS. I had to give the encrypted directory at least "x" (search) permission
across the board to get it to work. This is a minor security risk in
that someone knowledgeable of CFS can access the ..? files contained
therein. Alternatively, I could export the FS from the server with the
"root=mymachinename" option specified. I must add that I have corresponded
with Matt about this recently and he believes these "root" access problems
have been fixed in later versions of cfsd. Frankly, I haven't checked.
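The two workarounds mentioned above look something like the following (directory name and SunOS-style exports syntax are examples; consult exports(5) on your server):

```shell
# (a) On the client side: give the encrypted directory search
#     permission for everyone, so cfsd can reach it even when
#     root is squashed to "nobody" by NFS:
chmod a+x /data/secrets

# (b) Or, on the server: export the FS with root mapped through
#     to the client, avoiding the permission change entirely.
#     SunOS-style /etc/exports entry:
#   /data  -root=mymachinename
```

Option (b) avoids the minor exposure of the ..? files, at the cost of trusting root on the client machine.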
The other problem is that on low end machines -- originally I was using a
Sparc 1 -- it simply took too long to do the double-dip plus the encryption.
The results were NFS timeouts, which generally made things untenable. I
could solve this problem by using the 1DES option, and I haven't observed
it at all on Sparc 2 or faster machines (with any cipher).
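As I recall, the cipher is chosen when the encrypted directory is created, not at attach time, so using single DES would look something like this (flag name is from memory; verify against cmkdir(1) on your version of CFS):

```shell
# Hypothetical sketch: create a directory using single DES, which is
# faster than the default hybrid mode on slow hardware like a Sparc 1:
cmkdir -1 /data/fastsecrets   # "-1" = single DES; check your cmkdir(1)
cattach /data/fastsecrets fastwork
```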
A third "annoyance" is that occasionally I find myself unable to remove a
subdirectory under a /crypt cattach point. When this happens, rmdir reports
"directory not empty", but it surely looks empty from a /crypt perspective.
Without providing a complete explanation for why this occurs, in a nutshell
this phenomenon was traced to the presence of ".nfs*" files in the
encrypted version of the directory being removed. After removing them
from the encrypted directory, I was able to do the rmdir under /crypt.
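The underlying mechanism is easy to demonstrate without CFS at all: any hidden file left in a directory makes rmdir fail, and NFS clients create exactly such ".nfsXXXX" placeholder files when a still-open file is unlinked. A local illustration (paths are throwaway examples, not the real CFS tree):

```shell
# A stray .nfs* file makes rmdir report "Directory not empty" even
# though a plain ls shows nothing.
dir=$(mktemp -d)/sub
mkdir -p "$dir"
touch "$dir/.nfs1234"                      # the kind of file NFS leaves behind
rmdir "$dir" || echo "rmdir failed"        # fails: directory not empty
rm "$dir"/.nfs*                            # remove the stale placeholder
rmdir "$dir"                               # now succeeds
```

In the CFS case the .nfs* files hide in the *encrypted* directory on the server, which is why the /crypt view looks empty.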
To make a long story short, I have been using CFS in this mode on my Suns
for quite a while and have been very happy with it.
That said, I recently tried setting up the same configuration on an HP
workstation at work (HPUX10.10) and its server (HPUX9.04). I never could
get it to work in this environment. The problem appeared to be with NFS
on the HP and not CFS. Small files would work fine, but attempting to
access anything larger than about 10K through CFS in the encrypted directory
resulted in NFS hanging on the local machine. I could kill (and restart)
what turned out to be a hung biod process to bring NFS back to life, but
every time I tried to access a big file in /crypt, another one of the biods
would hang. Experimenting with wsize and rsize had no effect (other than
breaking CFS completely for really small values).
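For reference, the rsize/wsize experiment amounted to remounting the server FS with smaller NFS transfer sizes, along these lines (HP-UX mount syntax may differ slightly; see your mount_nfs or mount(1M) man page):

```shell
# Example only: shrink the NFS read/write transfer sizes on the client.
# On my HPs this neither cured the biod hangs nor helped CFS, and very
# small values broke CFS outright.
umount /data
mount -o rsize=1024,wsize=1024 server:/data /data
```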
So, the answer is that CFS to an NFS mounted FS seems to work reasonably
well on some architectures, if you engineer it right, but has problems
on other platforms. YMMV.
Hope this helps.
--
Rob Stampfli rob@colnet.cmhnet.org The Bill of Rights: It was a
614-864-9377 HAM RADIO: kd8wk@w8cqk.oh good thing while it lasted...