Gentoo's Bugzilla – Attachment 42511 Details for Bug 68740: Update for OpenAFS doc
openafs.xml (text/plain), 29.83 KB, created by Steve-o on 2004-10-24 13:23:40 UTC
<?xml version='1.0' encoding="UTF-8"?>
<!-- $Header: /var/www/www.gentoo.org/raw_cvs/gentoo/xml/htdocs/doc/en/openafs.xml,v 1.17 2004/09/22 11:42:11 swift Exp $ -->

<!DOCTYPE guide SYSTEM "/dtd/guide.dtd">

<guide link="/doc/en/openafs.xml">
<title>Gentoo Linux OpenAFS Guide</title>
<author title="Editor">
  <mail link="darks@gentoo.org">Holger Brueckner</mail>
</author>
<author title="Editor">
  <mail link="bennyc@gentoo.org">Benny Chuang</mail>
</author>
<author title="Editor">
  <mail link="blubber@gentoo.org">Tiemo Kieft</mail>
</author>
<author title="Editor">
  <mail link="fnjordy@gmail.com">Steven McCoy</mail>
</author>

<abstract>
This guide shows you how to install an OpenAFS server and client on Gentoo Linux.
</abstract>

<license/>

<version>0.8</version>
<date>October 24, 2004</date>

<chapter>
<title>Overview</title>
<section>
<title>About this Document</title>
<body>

<p>
This document provides you with all the steps necessary to install an OpenAFS
server on Gentoo Linux. Parts of this document are taken from the AFS FAQ and
IBM's Quick Beginnings guide on AFS. Well, never reinvent the wheel :)
</p>

</body>
</section>
<section>
<title>What is AFS?</title>
<body>

<p>
AFS is a distributed filesystem that enables co-operating hosts (clients and
servers) to efficiently share filesystem resources across both local area and
wide area networks. Clients hold a cache of frequently used objects (files) to
get quicker access to them.
</p>

<p>
AFS is based on a distributed file system originally developed at the
Information Technology Center at Carnegie-Mellon University that was called the
"Andrew File System". "Andrew" was the name of the research project at CMU,
honouring the founders of the university. Once Transarc was formed and AFS
became a product, the "Andrew" was dropped to indicate that AFS had gone beyond
the Andrew research project and had become a supported, product-quality
filesystem. However, there were a number of existing cells that rooted their
filesystem at /afs. At the time, changing the root of the filesystem was a
non-trivial undertaking, so, to save the early AFS sites from having to rename
their filesystem, AFS remained as the name and filesystem root.
</p>

</body>
</section>
<section>
<title>What is an AFS cell?</title>
<body>

<p>
An AFS cell is a collection of servers grouped together administratively and
presenting a single, cohesive filesystem. Typically, an AFS cell is a set of
hosts that use the same Internet domain name (for example, gentoo.org). Users
log into AFS client workstations, which request information and files from the
cell's servers on behalf of the users. Users won't know on which server a file
they are accessing is located. They won't even notice if a server is relocated
to another room, since every volume can be replicated and moved to another
server without any user noticing. The files are always accessible. Well, it's
like NFS on steroids :)
</p>

</body>
</section>
<section>
<title>What are the benefits of using AFS?</title>
<body>

<p>
The main strengths of AFS are its:
</p>

<ul>
  <li>caching facility (on the client side, typically 100 MB to 1 GB)</li>
  <li>security features (Kerberos 4 based, access control lists)</li>
  <li>simplicity of addressing (you just have one filesystem)</li>
  <li>scalability (add further servers to your cell as needed)</li>
  <li>communications protocol</li>
</ul>
</body>
</section>
<section>
<title>Where can I get more information?</title>
<body>

<p>
Read the <uri link="http://www.angelfire.com/hi/plutonic/afs-faq.html">AFS FAQ</uri>.
</p>

<p>
The OpenAFS home page is at <uri link="http://www.openafs.org">www.openafs.org</uri>.
</p>

<p>
AFS was originally developed by Transarc, which is now owned by IBM. You can
find some information about AFS on
<uri link="http://www.transarc.ibm.com/Product/EFS/AFS/index.html">Transarc's web page</uri>.
</p>

</body>
</section>
</chapter>

<chapter>
<title>Documentation</title>
<section>
<title>Getting AFS Documentation</title>
<body>

<p>
You can get the original IBM AFS documentation. It is very well written and you
really want to read it if it is up to you to administer an AFS server.
</p>

<pre>
# <i>emerge app-doc/afsdoc</i>
</pre>

</body>
</section>
</chapter>

<chapter>
<title>Client Installation</title>
<section>
<title>Preliminary Work</title>
<body>

<note>
All commands should be written on one line! In this document they are sometimes
wrapped to two lines to make them easier to read.
</note>

<note>
Unfortunately the AFS client needs an ext2 partition for its cache to run
correctly, because there are some locking issues with reiserfs. You need to
create an ext2 partition of approximately 200 MB (more won't hurt) and mount it
at <path>/usr/vice/cache</path>. To use ext3, try the <path>afs-client</path>
script from <uri link="http://bugs.gentoo.org/show_bug.cgi?id=59624">Bug 59624</uri>.
</note>

<p>
You should adjust the two files CellServDB and ThisCell before you build the
AFS client. (These files are in <path>/usr/portage/net-fs/openafs/files</path>.)
</p>

<pre>
CellServDB:
>netlabs        #Cell name
10.0.0.1        #storage

ThisCell:
netlabs
</pre>

<warn>
Only use spaces inside the <path>CellServDB</path> file. The client will most
likely fail if you use TABs.
</warn>

<p>
CellServDB tells your client which server(s) it needs to contact for a specific
cell. ThisCell should be quite obvious. Normally you use a name which is unique
for your organisation. Your (official) domain might be a good choice.
</p>

</body>
</section>
<section>
<title>Building the Client</title>
<body>

<pre>
# <i>emerge net-fs/openafs</i>
</pre>

<p>
After successful compilation you're ready to go.
</p>

</body>
</section>
<section>
<title>Starting AFS at Startup</title>
<body>

<p>
The following command will create the appropriate links to start your AFS
client on system startup.
</p>

<warn>
You should always have a running AFS server in your domain when trying to start
the AFS client. If your AFS server is down, your system won't finish booting
until it hits a timeout, and that timeout is quite long.
</warn>

<pre>
# <i>rc-update add afs default</i>
</pre>
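<p>
Once your cell has a running AFS server (see the next chapter), you can also
start the client by hand instead of waiting for the next reboot and ask the
Cache Manager which cell this workstation belongs to. This is only a minimal
sanity check and assumes the init script is named <path>afs</path> as used
above and that the <c>fs</c> binary is in your PATH:
</p>

<pre>
# <i>/etc/init.d/afs start</i>
# <i>fs wscell</i>
This workstation belongs to cell 'netlabs'
</pre>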
</body>
</section>
</chapter>

<chapter>
<title>Server Installation</title>
<section>
<title>Building the Server</title>
<body>

<p>
The following command will install all necessary binaries for setting up an AFS
server <e>and</e> client.
</p>

<pre>
# <i>emerge net-fs/openafs</i>
</pre>

</body>
</section>
<section>
<title>Starting the AFS Server</title>
<body>

<p>
You need to remove the sample CellServDB and ThisCell files first.
</p>

<pre>
# <i>rm /usr/vice/etc/ThisCell</i>
# <i>rm /usr/vice/etc/CellServDB</i>
</pre>

<p>
Next you will run the <b>bosserver</b> command to initialize the Basic OverSeer
(BOS) Server, which monitors and controls other AFS server processes on its
server machine. Think of it as init for the system. Include the <b>-noauth</b>
flag to disable authorization checking, since you haven't added the admin user
yet.
</p>

<warn>
Disabling authorization checking gravely compromises cell security. You must
complete all subsequent steps in one uninterrupted pass and must not leave the
machine unattended until you restart the BOS Server with authorization checking
enabled. Well, this is what the AFS documentation says :)
</warn>

<pre>
# <i>/usr/afs/bin/bosserver -noauth &</i>
</pre>

<p>
Verify that the BOS Server created <path>/usr/vice/etc/CellServDB</path> and
<path>/usr/vice/etc/ThisCell</path>:
</p>

<pre>
# <i>ls -al /usr/vice/etc/</i>
-rw-r--r--    1 root     root           41 Jun  4 22:21 CellServDB
-rw-r--r--    1 root     root            7 Jun  4 22:21 ThisCell
</pre>

</body>
</section>
<section>
<title>Defining Cell Name and Membership for Server Processes</title>
<body>

<p>
Now assign your cell's name.
</p>

<impo>
There are some restrictions on the name format. Two of the most important
restrictions are that the name cannot include uppercase letters or more than 64
characters. Remember that your cell name will show up under <path>/afs</path>,
so you might want to choose a short one.
</impo>

<note>
In this and every following instruction in this guide, substitute the
fully-qualified hostname of the machine you are installing (such as
<b>afs.gentoo.org</b>) for the <server name> argument, and substitute your
cell's complete name (such as <b>gentoo</b>) for the <cell name> argument.
</note>

<p>
Run the <b>bos setcellname</b> command to set the cell name:
</p>

<pre>
# <i>/usr/afs/bin/bos setcellname <server name> <cell name> -noauth</i>
</pre>
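<p>
If you want to confirm that the cell name was accepted, you can list the
database server hosts the BOS Server now knows about. This is only a quick,
optional check; it should report your cell name and this machine as host 1:
</p>

<pre>
# <i>/usr/afs/bin/bos listhosts <server name> -noauth</i>
</pre>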
</body>
</section>
<section>
<title>Starting the Database Server Processes</title>
<body>

<p>
Next use the <b>bos create</b> command to create entries for the four database
server processes in the <path>/usr/afs/local/BosConfig</path> file. The four
processes run on database server machines only.
</p>

<table>
<tr>
  <ti>kaserver</ti>
  <ti>The Authentication Server maintains the Authentication Database. This can
  be replaced by a Kerberos 5 daemon. If anybody wants to try that, feel free to
  update this document :)</ti>
</tr>
<tr>
  <ti>buserver</ti>
  <ti>The Backup Server maintains the Backup Database.</ti>
</tr>
<tr>
  <ti>ptserver</ti>
  <ti>The Protection Server maintains the Protection Database.</ti>
</tr>
<tr>
  <ti>vlserver</ti>
  <ti>The Volume Location Server maintains the Volume Location Database (VLDB).
  Very important :)</ti>
</tr>
</table>

<pre>
# <i>/usr/afs/bin/bos create <server name> kaserver simple
  /usr/afs/bin/kaserver -cell <cell name> -noauth</i>
# <i>/usr/afs/bin/bos create <server name> buserver simple
  /usr/afs/bin/buserver -cell <cell name> -noauth</i>
# <i>/usr/afs/bin/bos create <server name> ptserver simple
  /usr/afs/bin/ptserver -cell <cell name> -noauth</i>
# <i>/usr/afs/bin/bos create <server name> vlserver simple
  /usr/afs/bin/vlserver -cell <cell name> -noauth</i>
</pre>

<p>
You can verify that all servers are running with the <b>bos status</b> command:
</p>

<pre>
# <i>/usr/afs/bin/bos status <server name> -noauth</i>
Instance kaserver, currently running normally.
Instance buserver, currently running normally.
Instance ptserver, currently running normally.
Instance vlserver, currently running normally.
</pre>
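<p>
Should one of the instances not come up, you can query and restart it
individually rather than restarting everything. A minimal example for the
kaserver instance (authorization checking is still disabled at this stage):
</p>

<pre>
# <i>/usr/afs/bin/bos status <server name> kaserver -long -noauth</i>
# <i>/usr/afs/bin/bos restart <server name> kaserver -noauth</i>
</pre>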
</body>
</section>
<section>
<title>Initializing Cell Security</title>
<body>

<p>
Now we'll initialize the cell's security mechanisms. We'll begin by creating
the following two initial entries in the Authentication Database: the main
administrative account, called <b>admin</b> by convention, and an entry for the
AFS server processes, called <b>afs</b>. No user logs in under the identity
<b>afs</b>, but the Authentication Server's Ticket Granting Service (TGS)
module uses the account to encrypt the server tickets that it grants to AFS
clients. This sounds pretty much like Kerberos :)
</p>

<p>
Enter <b>kas</b> interactive mode:
</p>

<pre>
# <i>/usr/afs/bin/kas -cell <cell name> -noauth</i>
ka> <i>create afs</i>
initial_password:
Verifying, please re-enter initial_password:
ka> <i>create admin</i>
initial_password:
Verifying, please re-enter initial_password:
ka> <i>examine afs</i>

User data for afs
 key (0) cksum is 2651715259, last cpw: Mon Jun  4 20:49:30 2001
 password will never expire.
 An unlimited number of unsuccessful authentications is permitted.
 entry never expires.  Max ticket lifetime 100.00 hours.
 last mod on Mon Jun  4 20:49:30 2001 by <none>
 permit password reuse
ka> <i>setfields admin -flags admin</i>
ka> <i>examine admin</i>

User data for admin (ADMIN)
 key (0) cksum is 2651715259, last cpw: Mon Jun  4 20:49:59 2001
 password will never expire.
 An unlimited number of unsuccessful authentications is permitted.
 entry never expires.  Max ticket lifetime 25.00 hours.
 last mod on Mon Jun  4 20:51:10 2001 by <none>
 permit password reuse
ka>
</pre>

<p>
Run the <b>bos adduser</b> command to add the <b>admin</b> user to
<path>/usr/afs/etc/UserList</path>:
</p>

<pre>
# <i>/usr/afs/bin/bos adduser <server name> admin -cell <cell name> -noauth</i>
</pre>

<p>
Issue the <b>bos addkey</b> command to define the AFS server encryption key in
<path>/usr/afs/etc/KeyFile</path>.
</p>

<note>
If asked for the input key, give the password you entered when creating the afs
entry with <b>kas</b>.
</note>

<pre>
# <i>/usr/afs/bin/bos addkey <server name> -kvno 0 -cell <cell name> -noauth</i>
input key:
Retype input key:
</pre>

<p>
Issue the <b>pts createuser</b> command to create a Protection Database entry
for the admin user.
</p>

<note>
By default, the Protection Server assigns AFS UID 1 to the <b>admin</b> user,
because it is the first user entry you are creating. If the local password file
(/etc/passwd or equivalent) already has an entry for <b>admin</b> that assigns
a different UID, use the <b>-id</b> argument to create matching UIDs.
</note>

<pre>
# <i>/usr/afs/bin/pts createuser -name admin -cell <cell name> [-id <AFS UID>] -noauth</i>
</pre>

<p>
Issue the <b>pts adduser</b> command to make the <b>admin</b> user a member of
the system:administrators group, and the <b>pts membership</b> command to
verify the new membership:
</p>

<pre>
# <i>/usr/afs/bin/pts adduser admin system:administrators -cell <cell name> -noauth</i>
# <i>/usr/afs/bin/pts membership admin -cell <cell name> -noauth</i>
Groups admin (id: 1) is a member of:
  system:administrators
</pre>

<p>
Restart all AFS server processes:
</p>

<pre>
# <i>/usr/afs/bin/bos restart <server name> -all -cell <cell name> -noauth</i>
</pre>
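<p>
At this point you may want to convince yourself that the <b>admin</b> entry
really works against the Authentication Server. One simple, optional check is
to run a <b>kas</b> command authenticated as admin instead of with
<b>-noauth</b>; it will prompt for the admin password you just set:
</p>

<pre>
# <i>/usr/afs/bin/kas examine admin -admin admin -cell <cell name></i>
</pre>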
</body>
</section>
<section>
<title>Starting the File Server, Volume Server and Salvager</title>
<body>

<p>
Start the <b>fs</b> process, which consists of the File Server, Volume Server
and Salvager (the fileserver, volserver and salvager processes).
</p>

<pre>
# <i>/usr/afs/bin/bos create <server name> fs fs /usr/afs/bin/fileserver
  /usr/afs/bin/volserver /usr/afs/bin/salvager
  -cell <cell name> -noauth</i>
</pre>

<p>
Verify that all processes are running:
</p>

<pre>
# <i>/usr/afs/bin/bos status <server name> -long -noauth</i>
Instance kaserver, (type is simple) currently running normally.
    Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun  4 21:07:17 2001
    Command 1 is '/usr/afs/bin/kaserver'

Instance buserver, (type is simple) currently running normally.
    Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun  4 21:07:17 2001
    Command 1 is '/usr/afs/bin/buserver'

Instance ptserver, (type is simple) currently running normally.
    Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun  4 21:07:17 2001
    Command 1 is '/usr/afs/bin/ptserver'

Instance vlserver, (type is simple) currently running normally.
    Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun  4 21:07:17 2001
    Command 1 is '/usr/afs/bin/vlserver'

Instance fs, (type is fs) currently running normally.
    Auxiliary status is: file server running.
    Process last started at Mon Jun  4 21:09:30 2001 (2 proc starts)
    Command 1 is '/usr/afs/bin/fileserver'
    Command 2 is '/usr/afs/bin/volserver'
    Command 3 is '/usr/afs/bin/salvager'
</pre>

<p>
Your next action depends on whether you have ever run AFS file server machines
in the cell before.
</p>

<p>
If you are installing the first AFS server ever in the cell, create the first
AFS volume, <b>root.afs</b>.
</p>

<note>
For the partition name argument, substitute the name of one of the machine's
AFS server partitions. By convention these partitions are named
<path>/vicepx</path>, where x is in the range a-z.
</note>

<pre>
# <i>/usr/afs/bin/vos create <server name> <partition name> root.afs
  -cell <cell name> -noauth</i>
</pre>

<p>
If there are existing AFS file server machines and volumes in the cell, issue
the <b>vos syncvldb</b> and <b>vos syncserv</b> commands to synchronize the
VLDB (Volume Location Database) with the actual state of volumes on the local
machine. This will copy all necessary data to your new server.
</p>

<p>
If the command fails with the message "partition /vicepa does not exist on the
server", ensure that the partition is mounted before running the OpenAFS
servers, or mount the directory and restart the processes using
<c>/usr/afs/bin/bos restart <server name> -all -cell <cell name> -noauth</c>.
</p>

<pre>
# <i>/usr/afs/bin/vos syncvldb <server name> -cell <cell name> -verbose -noauth</i>
# <i>/usr/afs/bin/vos syncserv <server name> -cell <cell name> -verbose -noauth</i>
</pre>

</body>
</section>
<section>
<title>Starting the Server Portion of the Update Server</title>
<body>

<pre>
# <i>/usr/afs/bin/bos create <server name> upserver simple
  "/usr/afs/bin/upserver -crypt /usr/afs/etc -clear /usr/afs/bin"
  -cell <cell name> -noauth</i>
</pre>

</body>
</section>
<section>
<title>Configuring AFS Client Options</title>
<body>

<p>
Now that the server is running, you need to configure a client to access it.
If you already have time synchronisation software (NTP) installed, you can use
the <c>-nosettime</c> option; otherwise AFS will periodically reset your clock
to the server time. If you want to use a memory cache, try the following:
</p>

<pre caption="/etc/afs/afs.conf">
CACHESIZE=64000
OPTIONS="$MEDIUM -nosettime -memcache"
</pre>

<p>
For a disk-based cache you can use similar options:
</p>

<pre caption="/etc/afs/afs.conf">
CACHESIZE=
OPTIONS="$MEDIUM -nosettime"
</pre>

</body>
</section>
<section>
<title>Configuring the Top Level of the AFS Filespace</title>
<body>

<p>
First you need to set some ACLs, so that any user can look up <path>/afs</path>.
</p>

<note>
This step is not necessary when using the afsd option <c>-dynroot</c>; with
that option it will only cause the error "fs: '/afs': Connection timed out".
</note>

<pre>
# <i>/usr/afs/bin/fs setacl /afs system:anyuser rl</i>
</pre>

<p>
Then you need to create the root volume, mount it read-only on
<path>/afs/<cell name></path> and read/write on <path>/afs/.<cell name></path>:
</p>

<pre>
# <i>/usr/afs/bin/vos create <server name> <partition name> root.cell</i>
# <i>/usr/afs/bin/fs mkmount /afs/<cell name> root.cell</i>
# <i>/usr/afs/bin/fs setacl /afs/<cell name> system:anyuser rl</i>
# <i>/usr/afs/bin/fs mkmount /afs/.<cell name> root.cell -rw</i>
</pre>
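<p>
You can verify the new mount points and ACLs from any client with the
<b>fs</b> command. This is a small, optional check; the exact output depends on
your cell name:
</p>

<pre>
# <i>/usr/afs/bin/fs lsmount /afs/<cell name></i>
# <i>/usr/afs/bin/fs listacl /afs/<cell name></i>
</pre>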
<p>
Finally you're done! You should now have a working AFS file server on your
local network. Time to get a big cup of coffee and print out the AFS
documentation!
</p>

<note>
It is very important for the AFS server to function properly that all system
clocks are synchronized. This is best accomplished by installing an NTP server
on one machine (e.g. the AFS server) and synchronizing all client clocks with
the NTP client. This can also be done by the AFS client.
</note>

</body>
</section>
</chapter>

<chapter>
<title>Basic Administration</title>
<section>
<title>Disclaimer</title>
<body>

<p>
OpenAFS is an extensive technology. Please read the AFS documentation for more
information. We only list a few administrative tasks in this chapter.
</p>

</body>
</section>
<section>
<title>Configuring PAM to Acquire an AFS Token on Login</title>
<body>

<p>
To use AFS you need to authenticate against the KA Server if using an AFS
Kerberos 4 implementation, or against a Kerberos 5 KDC if using MIT, Heimdal,
or Shishi Kerberos 5. However, in order to log in to a machine you also need a
user account; this can be local in /etc/passwd, NIS, LDAP (OpenLDAP), or a
Hesiod database. PAM allows Gentoo to tie the AFS authentication and the login
to the user account together.
</p>

<p>
You will need to update /etc/pam.d/system-auth, which is used by the other
configurations. "use_first_pass" makes pam_afs reuse the password already
entered for the preceding module instead of prompting again, and "ignore_root"
keeps the local superuser from being checked against AFS, so that root can
still log in if AFS or the network fails.
</p>

<pre caption="/etc/pam.d/system-auth">
auth       required     /lib/security/pam_env.so
auth       sufficient   /lib/security/pam_unix.so likeauth nullok
auth       sufficient   /usr/afsws/lib/pam_afs.so.1 use_first_pass ignore_root
auth       required     /lib/security/pam_deny.so

account    required     /lib/security/pam_unix.so

password   required     /lib/security/pam_cracklib.so retry=3
password   sufficient   /lib/security/pam_unix.so nullok md5 shadow use_authtok
password   required     /lib/security/pam_deny.so

session    required     /lib/security/pam_limits.so
session    required     /lib/security/pam_unix.so
</pre>

<p>
In order for sudo to keep the real user's token and to prevent local users from
gaining AFS access, change /etc/pam.d/su as follows:
</p>

<pre caption="/etc/pam.d/su">
<comment># Here, users with uid > 100 are considered to belong to AFS and users with
# uid <= 100 are ignored by pam_afs.</comment>
auth       sufficient   /usr/afsws/lib/pam_afs.so.1 ignore_uid 100

auth       sufficient   /lib/security/pam_rootok.so

<comment># If you want to restrict the users being allowed to su even more,
# create /etc/security/suauth.allow (or whatever for that matter) that is only
# writable by root, and add the users that are allowed to su to that
# file, one per line.
#auth       required     /lib/security/pam_listfile.so item=ruser \
#       sense=allow onerr=fail file=/etc/security/suauth.allow

# Uncomment this to allow users in the wheel group to su without
# entering a passwd.
#auth       sufficient   /lib/security/pam_wheel.so use_uid trust

# Alternatively to the above, you can implement a list of users that do
# not need to supply a passwd with a list.
#auth       sufficient   /lib/security/pam_listfile.so item=ruser \
#       sense=allow onerr=fail file=/etc/security/suauth.nopass

# Comment this to allow any user, even those not in the 'wheel'
# group, to su</comment>
auth       required     /lib/security/pam_wheel.so use_uid

auth       required     /lib/security/pam_stack.so service=system-auth

account    required     /lib/security/pam_stack.so service=system-auth

password   required     /lib/security/pam_stack.so service=system-auth

session    required     /lib/security/pam_stack.so service=system-auth
session    optional     /lib/security/pam_xauth.so

<comment># Here we prevent the real user's token from being dropped</comment>
session    optional     /usr/afsws/lib/pam_afs.so.1 no_unlog
</pre>
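<p>
After logging in through a PAM-enabled service you can check whether an AFS
token was actually acquired by running the <b>tokens</b> command as that user.
It lists the tokens currently held by the Cache Manager for your cell, or none
at all if authentication failed:
</p>

<pre>
$ <i>tokens</i>
</pre>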
</body>
</section>
<section>
<title>Adding an AFS User</title>
<body>

<p>
With AFS you have to separate the concepts of account and authorisation. The
account is the details about the user and can be handled by Unix, NIS, LDAP,
etc. Authorisation is provided by AFS. In order to add an AFS user we therefore
already need a Unix account. The first step is to register the user with the
Protection Database:
</p>

<pre>
# /usr/afs/bin/pts createuser <user name> <user id>
</pre>

<p>
Next, add the user to the Authentication Database:
</p>

<pre>
# /usr/afs/bin/kas create <user name> -admin admin
Administrator's (admin_user) password: admin_password
initial_password: initial_password
Verifying, please re-enter initial_password: initial_password
</pre>

<p>
Now we create a volume to store the user's files. To disable the quota, specify
zero ("0") as the quota size.
</p>

<pre>
# /usr/afs/bin/vos create <server name> <partition name> user.<user name> <quota size>
</pre>

<p>
To add it to the file system we first create a standard directory for user home
volumes and mount it in <path>/afs/.<cell name>/usr</path>:
</p>

<pre>
# /usr/afs/bin/vos create <server name> <partition name> usr
# /usr/afs/bin/fs mkmount /afs/.<cell name>/usr usr
# /usr/afs/bin/fs mkmount /afs/.<cell name>/usr/<user name> user.<user name>
</pre>

<p>
Set AFS ownership of the volume to the user:
</p>

<pre>
# /usr/afs/bin/fs setacl /afs/.<cell name>/usr/<user name> <user name> all system:administrators all
</pre>

<p>
Set Unix ownership of the volume directory to the user:
</p>

<pre>
# chown <user name>:<user group> /afs/.<cell name>/usr/<user name>
</pre>

<p>
Finally, release the volume for replication:
</p>

<pre>
# /usr/afs/bin/vos release user.<user name>
</pre>
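<p>
To check that the new user is fully set up, you can look the account up in the
Protection Database and try to obtain a token as that user. This is merely a
suggested verification, not part of the official procedure:
</p>

<pre>
# /usr/afs/bin/pts examine <user name>
$ klog <user name>
Password:
$ tokens
</pre>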
</body>
</section>
<section>
<title>AFSDB: Database Server Configuration in DNS</title>
<body>

<p>
To allow clients to discover the AFS database servers automatically, instead of
manually configuring a CellServDB on each host, you can use the AFSDB extension
to DNS. AFSDB adds extra records that specify the IP addresses of the database
servers for a particular cell. AFSDB is supported by BIND but is still a draft
and may change.
</p>

<p>
The following example shows that clients in the DNS domain gentoo.org should
use afs1.gentoo.org and afs2.gentoo.org as AFS database servers:
</p>

<pre>
gentoo.org.            IN   AFSDB   1 afs1.gentoo.org.
gentoo.org.            IN   AFSDB   1 afs2.gentoo.org.
</pre>

<p>
Note that AFSDB only provides the addresses of the database servers; AFS then
uses the database servers to find out which file servers store which volumes.
To configure the clients to use AFSDB, simply add -afsdb to the startup
options, e.g.:
</p>

<pre caption="/etc/afs/afs.conf">
CACHESIZE=
OPTIONS="$XXLARGE -nosettime -memcache -dynroot -fakestat -afsdb"
</pre>

<p>
More details at
<uri link="http://grand.central.org/twiki/bin/view/AFSLore/AFSDB">AFSLore</uri>.
</p>

</body>
</section>
<section>
<title>AFS Backup System</title>
<body>

<p>
AFS comes with an integrated backup system that can make the previous backup of
files immediately available for recovery. It does this in an intelligent
manner: the backup files are really only links to the real files until those
files are changed. When a file is changed and there is a backup, AFS creates a
separate copy of the file. Backup volumes are also needed for full backups to
tape, because during such a backup read-write volumes are taken offline and
made unavailable. It is therefore possible to create a backup volume snapshot
of a real volume and then back up that backup volume to tape.
</p>

<p>
To create backup volumes regularly we create a cron job with the BOS Server:
</p>

<pre>
# /usr/afs/bin/bos create <server name> backupusers cron -cmd "/usr/afs/bin/vos backupsys -prefix user -localauth" "1:00"
</pre>

<p>
To make a backup volume available to a user immediately, we first create the
volume:
</p>

<pre>
# /usr/afs/bin/vos backupsys -localauth
</pre>

<p>
You can confirm that the backup volume has been created with the following:
</p>

<pre>
# /usr/afs/bin/vos listvol <server name>
Total number of volumes on server <server name> partition <partition name>: 5
root.afs                          536870912 RW          2 K On-line
root.cell                         536870915 RW          3 K On-line
user.<user name>                  536870924 RW          3 K On-line
user.<user name>.backup           536870926 BK          3 K On-line
usr                               536870918 RW          4 K On-line

Total volumes onLine 5 ; Total volumes offLine 0 ; Total busy 0
</pre>

<p>
Then mount the backup volume in the user's home directory; standard names
include ".backup", ".yesterday", or "OldFiles":
</p>

<pre>
# /usr/afs/bin/fs mkmount /afs/.<cell name>/usr/<user name>/.backup user.<user name>.backup
</pre>

</body>
</section>
</chapter>
</guide>