Commit 88c66bfe authored by Gavin Henry's avatar Gavin Henry

New TOC, new sdf files and merging/reworking of existing data. Makefile updated and tested also.

parent b3e43051
......@@ -18,16 +18,19 @@ sdf-src: \
../plain.sdf \
../preamble.sdf \
abstract.sdf \
appendix-configs.sdf \
backends.sdf \
config.sdf \
dbtools.sdf \
glossary.sdf \
guide.sdf \
install.sdf \
intro.sdf \
maintenance.sdf \
master.sdf \
monitoringslapd.sdf \
overlays.sdf \
preface.sdf \
proxycache.sdf \
quickstart.sdf \
referrals.sdf \
replication.sdf \
......@@ -36,9 +39,9 @@ sdf-src: \
schema.sdf \
security.sdf \
slapdconfig.sdf \
syncrepl.sdf \
title.sdf \
tls.sdf \
troubleshooting.sdf \
tuning.sdf
sdf-img: \
......
# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.
H1: Configuration File Examples
H2: slapd.conf
H2: ldap.conf
H2: a-n-other.conf
# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.
H1: Backends
H2: Berkeley DB Backends
H3: Overview
H3: back-bdb/back-hdb Configuration
H3: Further Information
H2: LDAP
H3: Overview
H3: back-ldap Configuration
H3: Further Information
H2: LDIF
H3: Overview
H3: back-ldif Configuration
H3: Further Information
H2: Metadirectory
H3: Overview
H3: back-meta Configuration
H3: Further Information
H2: Monitor
H3: Overview
H3: back-monitor Configuration
H3: Further Information
H2: Relay
H3: Overview
H3: back-relay Configuration
H3: Further Information
H2: Perl/Shell
H3: Overview
H3: back-perl/back-shell Configuration
H3: Further Information
H2: SQL
H3: Overview
H3: back-sql Configuration
H3: Further Information
......@@ -154,6 +154,12 @@ LDAP also supports data security (integrity and confidentiality)
services.
H2: When should I use LDAP?
H2: When should I not use LDAP?
H2: How does LDAP work?
LDAP utilizes a {{client-server model}}. One or more LDAP servers
......@@ -221,6 +227,9 @@ simultaneously is quite problematic. LDAPv2 should be avoided.
LDAPv2 is disabled by default.
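
For deployments that must nevertheless interoperate with very old
clients, slapd can be told to accept LDAPv2 bind requests again via
the {{EX:allow}} directive; a minimal sketch (doing so is discouraged):

> # slapd.conf: re-enable acceptance of LDAPv2 bind requests (discouraged)
> allow bind_v2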
H2: LDAP vs RDBMS
H2: What is slapd and what can it do?
{{slapd}}(8) is an LDAP directory server that runs on many different
......
# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.
H1: Maintenance
H2: Directory Backups
H2: Berkeley DB Logs
H2: Checkpointing
......@@ -48,6 +48,12 @@ PB:
!include "dbtools.sdf"; chapter
PB:
!include "backends.sdf"; chapter
PB:
!include "overlays.sdf"; chapter
PB:
!include "schema.sdf"; chapter
PB:
......@@ -60,25 +66,29 @@ PB:
!include "tls.sdf"; chapter
PB:
!include "monitoringslapd.sdf"; chapter
!include "referrals.sdf"; chapter
PB:
#!include "tuning.sdf"; chapter
#PB:
!include "replication.sdf"; chapter
PB:
!include "referrals.sdf"; chapter
!include "maintenance.sdf"; chapter
PB:
!include "replication.sdf"; chapter
!include "monitoringslapd.sdf"; chapter
PB:
!include "syncrepl.sdf"; chapter
!include "tuning.sdf"; chapter
PB:
!include "proxycache.sdf"; chapter
!include "troubleshooting.sdf"; chapter
PB:
# Appendices
# Config file examples
!include "appendix-configs.sdf"; appendix
PB:
# Terms
!include "glossary.sdf"; appendix
PB:
......
# $OpenLDAP$
# Copyright 2003-2007 The OpenLDAP Foundation, All Rights Reserved.
# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.
H1: The Proxy Cache Engine
H1: Overlays
H2: Access Logging
H3: Overview
H3: Access Logging Configuration
H2: Audit Logging
H3: Overview
H3: Audit Logging Configuration
H2: Constraints
H3: Overview
H3: Constraint Configuration
H2: Dynamic Directory Services
H3: Overview
H3: Dynamic Directory Service Configuration
H2: Dynamic Groups
H3: Overview
H3: Dynamic Group Configuration
H2: Dynamic Lists
H3: Overview
H3: Dynamic List Configuration
H2: The Proxy Cache Engine
{{TERM:LDAP}} servers typically hold one or more subtrees of a
{{TERM:DIT}}. Replica (or shadow) servers hold shadow copies of
......@@ -11,7 +67,7 @@ from the master server to replica (slave) servers using LDAP Sync
replication. An LDAP cache is a special type of replica which holds
entries corresponding to search filters instead of subtrees.
H2: Overview
H3: Overview
The proxy cache extension of slapd is designed to improve the
responsiveness of the ldap and meta backends. It handles a search
......@@ -52,14 +108,14 @@ The Proxy Cache paper
design and implementation details.
H2: Proxy Cache Configuration
H3: Proxy Cache Configuration
The cache-specific configuration directives described below must
appear after an {{EX:overlay proxycache}} directive within a
{{EX:database meta}} or {{EX:database ldap}} section of
the server's {{slapd.conf}}(5) file.
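
For placement, a minimal sketch (the usual back-ldap directives are
elided here) looks like:

> database ldap
> # ... usual back-ldap directives (suffix, uri, etc.) ...
> overlay proxycache
> # proxyCache, proxyAttrset and proxyTemplate directives follow here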
H3: Setting cache parameters
H4: Setting cache parameters
> proxyCache <DB> <maxentries> <nattrsets> <entrylimit> <period>
......@@ -75,7 +131,7 @@ entries in a cachable query. The <period> specifies the consistency
check period (in seconds). In each period, queries with expired
TTLs are removed.
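
For example, the following illustrative line (the values are made up
for the sketch) caches up to 10000 entries in a {{EX:bdb}} database,
defines one attribute set, allows at most 50 entries per cacheable
answer, and runs the consistency check every 100 seconds:

> proxyCache bdb 10000 1 50 100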
H3: Defining attribute sets
H4: Defining attribute sets
> proxyAttrset <index> <attrs...>
......@@ -84,7 +140,7 @@ set is associated with an index number from 0 to <numattrsets>-1.
These indices are used by the proxyTemplate directive to define
cacheable templates.
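
For instance, an attribute set with index 0 holding the attributes
used later in this chapter would be declared as:

> proxyAttrset 0 mail postaladdress telephonenumber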
H3: Specifying cacheable templates
H4: Specifying cacheable templates
> proxyTemplate <prototype_string> <attrset_index> <TTL>
......@@ -94,7 +150,7 @@ its prototype filter string and set of required attributes identified
by <attrset_index>.
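
As an illustrative sketch, the following template marks equality
searches on {{EX:mail}} as cacheable, using attribute set 0 and a
TTL of one hour:

> proxyTemplate (mail=) 0 3600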
H3: Example
H4: Example
An example {{slapd.conf}}(5) database section for a caching server
which proxies for the {{EX:"dc=example,dc=com"}} subtree held
......@@ -117,7 +173,7 @@ at server {{EX:ldap.example.com}}.
> index cn,sn,uid,mail pres,eq,sub
H4: Cacheable Queries
H5: Cacheable Queries
An LDAP search query is cacheable when its filter matches one of the
templates as defined in the "proxyTemplate" statements and when it references
......@@ -126,7 +182,7 @@ In the example above the attribute set number 0 defines that only the
attributes {{EX:mail postaladdress telephonenumber}} are cached for the following
proxyTemplates.
H4: Examples:
H5: Examples
> Filter: (&(sn=Richard*)(givenName=jack))
> Attrs: mail telephoneNumber
......@@ -145,4 +201,87 @@ H4: Examples:
is not cacheable, because the filter does not match the template (logical
OR "|" condition instead of logical AND "&").
H2: Password Policies
H3: Overview
H3: Password Policy Configuration
H2: Referential Integrity
H3: Overview
H3: Referential Integrity Configuration
H2: Return Code
H3: Overview
H3: Return Code Configuration
H2: Rewrite/Remap
H3: Overview
H3: Rewrite/Remap Configuration
H2: Sync Provider
H3: Overview
H3: Sync Provider Configuration
H2: Translucent Proxy
H3: Overview
H3: Translucent Proxy Configuration
H2: Attribute Uniqueness
H3: Overview
H3: Attribute Uniqueness Configuration
H2: Value Sorting
H3: Overview
H3: Value Sorting Configuration
H2: Overlay Stacking
H3: Overview
H3: Example Scenarios
H4: Samba
......@@ -9,7 +9,7 @@ P1: Preface
# document's copyright
P2[notoc] Copyright
Copyright 1998-2006, The {{ORG[expand]OLF}}, {{All Rights Reserved}}.
Copyright 1998-2007, The {{ORG[expand]OLF}}, {{All Rights Reserved}}.
Copyright 1992-1996, Regents of the {{ORG[expand]UM}}, {{All Rights Reserved}}.
......
# $OpenLDAP$
# Copyright 2003-2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.
H1: LDAP Sync Replication
The {{TERM:LDAP Sync}} Replication engine, {{TERM:syncrepl}} for
short, is a consumer-side replication engine that enables the
consumer {{TERM:LDAP}} server to maintain a shadow copy of a
{{TERM:DIT}} fragment. A syncrepl engine resides at the consumer-side
as one of the {{slapd}}(8) threads. It creates and maintains a
consumer replica by connecting to the replication provider to perform
the initial DIT content load followed either by periodic content
polling or by timely updates upon content changes.
Syncrepl uses the LDAP Content Synchronization (or LDAP Sync for
short) protocol as the replica synchronization protocol. It provides
a stateful replication which supports both pull-based and push-based
synchronization and does not mandate the use of a history store.
Syncrepl keeps track of the status of the replication content by
maintaining and exchanging synchronization cookies. Because the
syncrepl consumer and provider maintain their content status, the
consumer can poll the provider content to perform incremental
synchronization by asking for the entries required to make the
consumer replica up-to-date with the provider content. Syncrepl
also enables convenient management of replicas by maintaining replica
status. The consumer replica can be constructed from a consumer-side
or a provider-side backup at any synchronization status. Syncrepl
can automatically resynchronize the consumer replica
with the current provider content.
Syncrepl supports both pull-based and push-based synchronization.
In its basic refreshOnly synchronization mode, the provider uses
pull-based synchronization where the consumer servers need not be
tracked and no history information is maintained. The information
required for the provider to process periodic polling requests is
contained in the synchronization cookie of the request itself. To
optimize the pull-based synchronization, syncrepl utilizes the
present phase of the LDAP Sync protocol as well as its delete phase,
instead of falling back on frequent full reloads. To further optimize
the pull-based synchronization, the provider can maintain a per-scope
session log as a history store. In its refreshAndPersist mode of
synchronization, the provider uses push-based synchronization.
The provider keeps track of the consumer servers that have requested
a persistent search and sends them necessary updates as the provider
replication content gets modified.
With syncrepl, a consumer server can create a replica without
changing the provider's configurations and without restarting the
provider server, if the consumer server has appropriate access
privileges for the DIT fragment to be replicated. The consumer
server can also stop replication without the need for provider-side
changes or a restart.
Syncrepl supports both partial and sparse replications. The shadow
DIT fragment is defined by general search criteria consisting of
base, scope, filter, and attribute list. The replica content is
also subject to the access privileges of the bind identity of the
syncrepl replication connection.
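
For concreteness, a minimal consumer-side syncrepl specification in
{{slapd.conf}}(5) showing these search criteria might look like the
following sketch (provider host, bind identity and credentials are
hypothetical):

> syncrepl rid=123
>         provider=ldap://provider.example.com:389
>         type=refreshOnly
>         interval=01:00:00:00
>         searchbase="dc=example,dc=com"
>         scope=sub
>         filter="(objectClass=organizationalPerson)"
>         attrs="cn,sn,ou,telephoneNumber"
>         bindmethod=simple
>         binddn="cn=syncuser,dc=example,dc=com"
>         credentials=secret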
H2: The LDAP Content Synchronization Protocol
The LDAP Sync protocol allows a client to maintain a synchronized
copy of a DIT fragment. The LDAP Sync operation is defined as a set
of controls and other protocol elements which extend the LDAP search
operation. This section introduces the LDAP Content Sync protocol
only briefly. For more information, refer to {{REF:RFC4533}}.
The LDAP Sync protocol supports both polling and listening for
changes by defining two respective synchronization operations:
{{refreshOnly}} and {{refreshAndPersist}}. Polling is implemented
by the {{refreshOnly}} operation. The client copy is synchronized
to the server copy at the time of polling. The server finishes the
search operation by returning {{SearchResultDone}} at the end of
the search operation as in the normal search. Listening is
implemented by the {{refreshAndPersist}} operation. Instead of
finishing the search after returning all entries currently matching
the search criteria, the synchronization search remains persistent
in the server. Subsequent updates to the synchronization content
in the server cause additional entry updates to be sent to the
client.
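
Both operations can be exercised from the command line with
{{ldapsearch}}(1) and its LDAP Sync extension, which is useful for
observing the protocol (host and search base are hypothetical):

> # refreshOnly: poll once, then the search completes
> ldapsearch -x -H ldap://provider.example.com -b "dc=example,dc=com" -E sync=ro
> # refreshAndPersist: the search remains persistent, streaming updates
> ldapsearch -x -H ldap://provider.example.com -b "dc=example,dc=com" -E sync=rp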
The {{refreshOnly}} operation and the refresh stage of the
{{refreshAndPersist}} operation can be performed with a present
phase or a delete phase.
In the present phase, the server sends the client the entries updated
within the search scope since the last synchronization. The server
sends all requested attributes of the updated entries, whether
changed or not. For each unchanged entry which remains in the scope, the
server sends a present message consisting only of the name of the
entry and the synchronization control representing state present.
The present message does not contain any attributes of the entry.
After the client receives all update and present entries, it can
reliably determine the new client copy by adding the entries added
to the server, by replacing the entries modified at the server, and
by deleting entries in the client copy which have been neither updated
nor specified as being present at the server.
The transmission of the updated entries in the delete phase is the
same as in the present phase. The server sends all the requested
attributes of the entries updated within the search scope since the
last synchronization to the client. In the delete phase, however,
the server sends a delete message for each entry deleted from the
search scope, instead of sending present messages. The delete
message consists only of the name of the entry and the synchronization
control representing state delete. The new client copy can be
determined by adding, modifying, and removing entries according to
the synchronization control attached to the {{SearchResultEntry}}
message.
In the case that the LDAP Sync server maintains a history store and
can determine which entries are scoped out of the client copy since
the last synchronization time, the server can use the delete phase.
If the server does not maintain any history store, cannot determine
the scoped-out entries from the history store, or the history store
does not cover the outdated synchronization state of the client,
the server should use the present phase. The use of the present
phase is much more efficient than a full content reload in terms
of the synchronization traffic. To reduce the synchronization
traffic further, the LDAP Sync protocol also provides several
optimizations such as the transmission of the normalized {{EX:entryUUID}}s
and the transmission of multiple {{EX:entryUUID}}s in a single
{{syncIdSet}} message.
At the end of the {{refreshOnly}} synchronization, the server sends
a synchronization cookie to the client as a state indicator of the
client copy after the synchronization is completed. The client
will present the received cookie when it requests the next incremental
synchronization from the server.
When {{refreshAndPersist}} synchronization is used, the server sends
a synchronization cookie at the end of the refresh stage by sending
a Sync Info message with refreshDone set to TRUE. It also sends a
synchronization cookie by attaching it to {{SearchResultEntry}}
generated in the persist stage of the synchronization search. During
the persist stage, the server can also send a Sync Info message
containing the synchronization cookie at any time the server wants
to update the client-side state indicator. The server also updates
a synchronization indicator of the client at the end of the persist
stage.
In the LDAP Sync protocol, entries are uniquely identified by the
{{EX:entryUUID}} attribute value. It can function as a reliable
identifier of the entry. The DN of the entry, on the other hand,
can change over time and hence cannot be considered a reliable
identifier. The {{EX:entryUUID}} is attached to each
{{SearchResultEntry}} or {{SearchResultReference}} as a part of the
synchronization control.
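
Since {{EX:entryUUID}} is an operational attribute, it is only
returned when requested explicitly; for example (base DN hypothetical):

> ldapsearch -x -b "dc=example,dc=com" -s base entryUUID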
H2: Syncrepl Details
The syncrepl engine utilizes both the {{refreshOnly}} and the
{{refreshAndPersist}} operations of the LDAP Sync protocol. If a
syncrepl specification is included in a database definition,
{{slapd}}(8) launches a syncrepl engine as a {{slapd}}(8) thread
and schedules its execution. If the {{refreshOnly}} operation is
specified, the syncrepl engine will be rescheduled at the interval
time after a synchronization operation is completed. If the
{{refreshAndPersist}} operation is specified, the engine will remain
active and process the persistent synchronization messages from the
provider.
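
The operation is chosen with the {{EX:type}} parameter of the syncrepl
specification; an illustrative contrast (values are a sketch):

> # pull: re-poll once a day (interval is days:hours:minutes:seconds)
> type=refreshOnly interval=01:00:00:00
> # push: stay connected and receive updates as they happen
> type=refreshAndPersist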
The syncrepl engine utilizes both the present phase and the delete
phase of the refresh synchronization. It is possible to configure
a per-scope session log in the provider server which stores the
{{EX:entryUUID}}s of a finite number of entries deleted from a
replication content. Multiple replicas of single provider content
share the same per-scope session log. The syncrepl engine uses the
delete phase if the session log is present and the state of the
consumer server is recent enough that no session log entries are
truncated after the last synchronization of the client. The syncrepl
engine uses the present phase if no session log is configured for
the replication content or if the consumer replica is too outdated
to be covered by the session log. The current design of the session