AFS fileservers can be distributed throughout the network, so in an inhomogeneous
network environment it is possible to have one or more servers that are closer,
in a network sense, to a given set of clients. Placing the volumes a particular
client accesses most frequently on a nearby fileserver improves performance.
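A volume can be migrated to a nearer fileserver with `vos move`; the server and volume names below are hypothetical, and the commands assume administrator tokens in an OpenAFS cell:

```shell
# Find where the volume currently lives (volume name is hypothetical):
vos listvldb user.jdoe

# Move it to a fileserver closer to the client population:
vos move -id user.jdoe \
    -fromserver fs-remote.example.com -frompartition a \
    -toserver fs-nearby.example.com -topartition a
```

The move is transparent to clients, which discover the new location through the volume location database.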
Volumes which are frequently read but only occasionally modified can be replicated
on multiple servers. Clients automatically pick a read-only copy if one is available, and
fail over to a different copy if a server becomes unavailable. The top-level volumes are usually
replicated on every fileserver in a cell.
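Replication is configured per volume; a sketch with hypothetical server names, assuming OpenAFS `vos`:

```shell
# Define read-only sites for the volume on two fileservers:
vos addsite -server fs1.example.com -partition a -id root.cell
vos addsite -server fs2.example.com -partition a -id root.cell

# Push a snapshot of the read-write volume to all read-only sites:
vos release -id root.cell
```

Clients see new data only after the release, so replication suits volumes whose contents change infrequently.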
Since volumes are the unit of AFS server space for operations such as
migration between servers, replication, and backups,
many of these operations are easier if volumes are kept small. The
important parameter is the ratio of volume size to fileserver partition size.
The daily backup snapshots can cause the actual disk usage of a volume to be
double its visible size if the files are modified daily. Large individual files make this
problem worse. Backup snapshots therefore limit the fraction of a server that can safely be allowed to fill.
The tape backup system also requires staging space to hold the compressed backup snapshots, so
large, frequently updated volumes have a significant impact.
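The extra space consumed by a snapshot can be checked directly; the volume and server names here are hypothetical:

```shell
# Create or refresh the copy-on-write .backup clone of a volume:
vos backup -id user.jdoe

# Compare the volume's quota and usage figures with the free space
# reported for the fileserver partition that holds it:
vos examine -id user.jdoe
vos partinfo fs-nearby.example.com a
```

Because the backup clone is copy-on-write, its cost grows with the fraction of the volume rewritten between snapshots.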
AFS performance is very dependent on the speed and configuration of the client. Upgrading client
CPU, disk, memory, and networking can all have a large effect. The disk cache used on the client
should, if possible, be on a disk partition of its own, ideally on a striped local filesystem. The
client cache manager has various startup options which control how much memory it uses, and how much metadata
related to volumes, directory entries, and file data is kept cached locally. Increasing those values
generally improves performance, but there is a point at which performance suffers because of search times.
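These limits are set as startup options to the `afsd` cache manager; the values below are illustrative, not recommendations:

```shell
# Hypothetical tuning for a client with a dedicated cache partition;
# the cache location and size come from /usr/vice/etc/cacheinfo.
#   -stat    : entries in the stat (file metadata) cache
#   -dcache  : dcache entries tracking cached chunks
#   -volumes : volume information cache entries
#   -files   : number of files in the disk cache
afsd -stat 5000 -dcache 4000 -volumes 200 -files 100000
```

Raising `-stat` and `-dcache` helps metadata-heavy workloads, at the cost of memory and longer in-memory searches.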
The amount of file data transferred in one transaction (the chunk size) can be tuned. Well-connected, fast
networks with few lost packets benefit from larger chunk sizes than the default. Graphical file
managers benefit from larger amounts of file and directory metadata in cache.
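The chunk size is also an `afsd` option, given as a power of two; the value here is illustrative:

```shell
# -chunksize takes log2 of the chunk size in bytes;
# 18 means 2^18 = 256 KB chunks, larger than the usual default.
afsd -chunksize 18
```

On lossy or slow links a smaller chunk size limits how much data must be retransferred after a failure.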