Multi-host defragmentation channel operation

The defragment database can optionally be stored on a filesystem accessible to multiple hosts (for instance, over NFS[1]) and shared among those hosts. This can be particularly useful when multiple "front end" hosts can potentially deliver to the same "back end" message stores, particularly when the "back end" message store cannot perform message defragmentation itself (as is the case for LMTP message stores).

To set up such sharing, on each individual system make a link from the local defragment database file to whatever file you want to be the shared defragment database on the shared (NFS) disk. Note that the NFS-mounted filesystem should be set for "soft mount" with a relatively short timeout, rather than for "hard mount" (the default for NFS). The NFS timeout mount option (see your system's mount man page) will need to be set on the entry (or automount map) that causes the filesystem to be mounted.

(With a "hard mount", if NFS went down, the defragment channel would hang, waiting for access to the defragment cache to succeed. With a "soft mount", the defragment channel will instead time out its attempt to access the defragment cache, so the channel will not hang. In that case, in the unlikely event that all message fragments happen to end up on one host, that host's defragment channel should still be able to reassemble the fragments and send the message onwards, properly reassembled. More likely, though, the fragments will be spread among different hosts, none of which can reassemble the message or route fragments to another host's defragment channel in the absence of successful access to the defragment cache, so the various fragments will eventually be sent onwards still as separate fragments.)
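As a rough sketch of the linking step, the commands below stand in for the real paths: the actual local defragment database location and the NFS mount point are deployment-specific, so temporary files are used here purely for illustration.

```shell
# Hypothetical paths for illustration only; substitute the real local
# defragment database location and the shared NFS directory.
SHARED_DIR=$(mktemp -d)             # stands in for the NFS-mounted directory
SHARED_DB="$SHARED_DIR/defragment-cache"
LOCAL_DB=$(mktemp -u)               # stands in for this host's local database path

touch "$SHARED_DB"                  # the single shared database file
ln -s "$SHARED_DB" "$LOCAL_DB"      # point the local name at the shared file

# The filesystem holding the shared file should be soft-mounted with a
# short timeout, e.g. via an /etc/fstab entry or automount map such as:
#   nfsserver:/export/mta  /nfs/mta  nfs  soft,timeo=30  0 0
```

Repeat the link step on every front-end host so that each host's defragment channel operates on the same underlying file.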

When setting up a defragment database that will be shared over NFS by multiple systems, note that the MTA user (as specified by the relevant option in the MTA configuration) on each system must be defined with the same uid number on each system. If systems define the MTA user with different uid numbers, permission problems can be expected.

The defragment database entries include a field specifying the host upon which a message fragment resides. Once an initial part has been received and noted in the defragment database, any other parts of the message received on any other system using the same defragment database will get routed to the "first" host that received the "first" part. (When the defragment channel runs, it first checks whether any message fragment parts are already present, and if so, on which host; if a part or parts are already present on some other host, the defragment channel sends its just-received part onward to that host, using explicit source routing, rather than retaining the part for reassembly attempts itself. See the Multi-host defragmentation channel operation example for an example.) Thus all remaining parts of a fragmented message end up getting redirected to the host whose defragment channel happened to attempt processing the very first part of the message (first to arrive, not necessarily part=1); that host's defragment channel is then responsible for performing the message defragmentation (reassembly) once all fragments have been received. (One consequence is some load balancing of the defragmentation of messages, depending upon which host happens to receive the "first" part of each message.)
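The keep-or-forward decision described above can be sketched as follows. This is purely illustrative: the real defragment database is an on-disk database maintained by the MTA, whereas the sketch uses a flat file, and the hostnames are hypothetical.

```shell
# Illustrative stand-in for the shared defragment database: one line
# per message, "message-id owning-host".
DB=$(mktemp)
MYHOST=hostb.example.com          # hypothetical name of this host

route_fragment() {
  msgid="$1"
  owner=$(grep "^$msgid " "$DB" | awk '{print $2}')
  if [ -z "$owner" ]; then
    # First part seen anywhere: record this host as the reassembler.
    echo "$msgid $MYHOST" >> "$DB"
    echo "keep"
  elif [ "$owner" = "$MYHOST" ]; then
    echo "keep"                   # this host is already reassembling
  else
    echo "forward-to $owner"      # source-route the part to the owner
  fi
}
```

For example, if another host is already recorded as the owner of a message, `route_fragment` reports that the part should be forwarded there; for a message with no recorded owner, the local host claims it and keeps the part.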

[1] Note that sharing the defragmentation database over NFS is an exception to the general rule that the MTA does not support sharing filesystems via NFS. The MTA's use of the defragment database has been specially designed with NFS' limited locking semantics in mind.

See also:
 * Defragmentation channel
 * Multi-host defragmentation channel operation example