Tero Saarni / OpenLDAP

Commit e4067862, authored Jun 15, 2020 by Quanah Gibson-Mount

    ITS#9275 -- Update wording to remove slave and master terms, consolidate on provider/consumer

Parent: a2c81aeb
Changes: 114 files
ANNOUNCEMENT

@@ -9,7 +9,7 @@ A N N O U N C E M E N T -- OpenLDAP 2.4
 * Slapd(8) enhancements
     - Syncrepl enhancements, including push-mode and
-      Multi-Master support
+      Multi-Provider support
     - Dynamic configuration enhancements, including
       online schema editing and full access control
     - Dynamic monitoring enhancements, including
CHANGES

@@ -134,7 +134,7 @@ OpenLDAP 2.4.47 Release (2018/12/19)
 Fixed slapd-bdb/hdb/mdb to not convert certain IDLs to ranges (ITS#8868)
 Fixed slapo-accesslog deadlock during cleanup (ITS#8752)
 Fixed slapo-memberof cn=config modifications (ITS#8663)
-Fixed slapo-ppolicy with multimaster replication (ITS#8927)
+Fixed slapo-ppolicy with multi-provider replication (ITS#8927)
 Fixed slapo-syncprov with NULL modlist (ITS#8843)
 Build Environment
 Added slapd reproducible build support (ITS#8928)
@@ -196,7 +196,7 @@ OpenLDAP 2.4.45 Release (2017/06/01)
 Fixed slapd segfault with invalid hostname (ITS#8631)
 Fixed slapd sasl SEGV rebind in same session (ITS#8568)
 Fixed slapd syncrepl filter handling (ITS#8413)
-Fixed slapd syncrepl infinite looping mods with delta-sync MMR (ITS#8432)
+Fixed slapd syncrepl infinite looping mods with delta-sync MPR (ITS#8432)
 Fixed slapd callback struct so older modules without writewait should function.
 	Custom modules may need to be updated for sc_writewait callback (ITS#8435)
 Fixed slapd-ldap/meta broken LDAP_TAILQ macro (ITS#8576)
@@ -271,7 +271,7 @@ OpenLDAP 2.4.43 Release (2015/11/30)
 Fixed slapd-ldap to skip client controls in ldap_back_entry_get (ITS#8244)
 Fixed slapd-null to have an option to return a search entry (ITS#8249)
 Fixed slapd-relay to correctly handle quoted options (ITS#8284)
-Fixed slapo-accesslog delta-sync MMR with interrupted refresh phase (ITS#8281)
+Fixed slapo-accesslog delta-sync MPR with interrupted refresh phase (ITS#8281)
 Fixed slapo-dds segfault when using slapo-memberof (ITS#8133)
 Fixed slapo-ppolicy to allow purging of stale pwdFailureTime attributes (ITS#8185)
 Fixed slapo-ppolicy to release entry on failure (ITS#7537)
@@ -315,7 +315,7 @@ OpenLDAP 2.4.41 Release (2015/06/21)
 Fixed slapd slapadd config db import of minimal frontend entry (ITS#8150)
 Fixed slapd slapadd onetime leak with -w (ITS#8014)
 Fixed slapd sasl auxprop crash with invalid config (ITS#8092)
-Fixed slapd syncrepl delta-mmr issue with overlays and slapd.conf (ITS#7976)
+Fixed slapd syncrepl delta-mpr issue with overlays and slapd.conf (ITS#7976)
 Fixed slapd syncrepl mutex for cookie state (ITS#7968)
 Fixed slapd syncrepl memory leaks (ITS#8035)
 Fixed slapd syncrepl to free presentlist at end of refresh mode (ITS#8038)
@@ -475,7 +475,7 @@ OpenLDAP 2.4.38 Release (2013/11/16)
 Fixed liblmdb wasted space on split (ITS#7589)
 Fixed slapd for certs with a NULL issuerDN (ITS#7746)
 Fixed slapd cn=config with empty nested includes (ITS#7739)
-Fixed slapd syncrepl memory leak with delta-sync MMR (ITS#7735)
+Fixed slapd syncrepl memory leak with delta-sync MPR (ITS#7735)
 Fixed slapd-bdb/hdb to stop processing on dn not found (ITS#7741)
 Fixed slapd-bdb/hdb with indexed ANDed filters (ITS#7743)
 Fixed slapd-mdb to stop processing on dn not found (ITS#7741)
@@ -581,7 +581,7 @@ OpenLDAP 2.4.34 Release (2013/03/01)
 Fixed liblmdb to validate data limits (ITS#7485)
 Fixed liblmdb mdb_update_key for large keys (ITS#7505)
 Fixed ldapmodify to not core dump with invalid LDIF (ITS#7477)
-Fixed slapd syncrepl for old entries in MMR setup (ITS#7427)
+Fixed slapd syncrepl for old entries in MPR setup (ITS#7427)
 Fixed slapd signedness for index_substr_any_* (ITS#7449)
 Fixed slapd enforce SLAPD_MAX_DAEMON_THREADS (ITS#7450)
 Fixed slapd mutex in send_ldap_ber (ITS#6164)
@@ -598,7 +598,7 @@ OpenLDAP 2.4.34 Release (2013/03/01)
 Fixed slapd-meta segfault when modifying olcDbUri (ITS#7526)
 Fixed slapd-sql back-config support (ITS#7499)
 Fixed slapo-constraint handle uri and restrict correctly (ITS#7418)
-Fixed slapo-constraint with multi-master replication (ITS#7426)
+Fixed slapo-constraint with multi-provider replication (ITS#7426)
 Fixed slapo-constraint segfault (ITS#7431)
 Fixed slapo-deref control initialization (ITS#7436)
 Fixed slapo-deref control exposure (ITS#7445)
@@ -635,7 +635,7 @@ OpenLDAP 2.4.33 Release (2012/10/10)
 Fixed slapd alock handling on Windows (ITS#7361)
 Fixed slapd acl handling with zero-length values (ITS#7350)
 Fixed slapd syncprov to not reference ops inside a lock (ITS#7172)
-Fixed slapd delta-syncrepl MMR with large attribute values (ITS#7354)
+Fixed slapd delta-syncrepl MPR with large attribute values (ITS#7354)
 Fixed slapd slapd_rw_destroy function (ITS#7390)
 Fixed slapd-ldap idassert bind handling (ITS#7403)
 Fixed slapd-mdb slapadd -q -w double free (ITS#7356)
@@ -721,7 +721,7 @@ OpenLDAP 2.4.31 Release (2012/04/21)
 Fixed slapd listener initialization (ITS#7233)
 Fixed slapd cn=config with olcTLSVerifyClient (ITS#7197)
 Fixed slapd delta-syncrepl fallback on non-leaf error (ITS#7195)
-Fixed slapd to reject MMR setups with bad serverID setting (ITS#7200)
+Fixed slapd to reject MPR setups with bad serverID setting (ITS#7200)
 Fixed slapd approxIndexer key generation (ITS#7203)
 Fixed slapd modification of olcSuffix (ITS#7205)
 Fixed slapd schema validation with missing definitions (ITS#7224)
@@ -799,7 +799,7 @@ OpenLDAP 2.4.27 Release (2011/11/24)
 Added slapd support for draft-wahl-ldap-session (ITS#6984)
 Added slapadd pipelining capability (ITS#7078)
 Added slapd Add-if-not-present (ITS#6561)
-Added slapd delta-syncrepl MMR (ITS#6734,ITS#7029,ITS#7031)
+Added slapd delta-syncrepl MPR (ITS#6734,ITS#7029,ITS#7031)
 Added slapd-mdb experimental backend (ITS#7079)
 Added slapd-passwd dynamic config support
 Added slapd-perl dynamic config support
@@ -1083,11 +1083,11 @@ OpenLDAP 2.4.24 Release (2011/02/10)
 Fixed slapo-syncprov filter race condition (ITS#6708)
 Fixed slapo-syncprov active mod race (ITS#6709)
 Fixed slapo-syncprov to refresh if context is dirty (ITS#6710)
-Fixed slapo-syncprov CSN updates to all replicas (ITS#6718)
+Fixed slapo-syncprov CSN updates to all consumers (ITS#6718)
 Fixed slapo-syncprov sessionlog ordering (ITS#6716)
 Fixed slapo-syncprov sessionlog with adds (ITS#6503)
 Fixed slapo-syncprov mutex (ITS#6438)
-Fixed slapo-syncprov mincsn check with MMR (ITS#6717)
+Fixed slapo-syncprov mincsn check with MPR (ITS#6717)
 Fixed slapo-syncprov control leak (ITS#6795)
 Fixed slapo-syncprov error codes (ITS#6812)
 Fixed slapo-translucent entry leak (ITS#6746)
@@ -1279,7 +1279,7 @@ OpenLDAP 2.4.20 Release (2009/11/27)
 OpenLDAP 2.4.19 Release (2009/10/06)
 Fixed client tools with null timeouts (ITS#6282)
-Fixed slapadd to warn about missing attrs for replicas (ITS#6281)
+Fixed slapadd to warn about missing attrs for consumers (ITS#6281)
 Fixed slapd acl cache (ITS#6287)
 Fixed slapd tools to allow -n for conversion (ITS#6258)
 Fixed slapd-ldap with null timeouts (ITS#6282)
@@ -1446,8 +1446,8 @@ OpenLDAP 2.4.16 Release (2009/04/05)
 Fixed slapd schema_init freed value (ITS#6036)
 Fixed slapd syncrepl newCookie sync messages (ITS#5972)
 Fixed slapd syncrepl hang during shutdown (ITS#6011)
-Fixed slapd syncrepl too many MMR messages (ITS#6020)
-Fixed slapd syncrepl skipped entries with MMR (ITS#5988)
+Fixed slapd syncrepl too many MPR messages (ITS#6020)
+Fixed slapd syncrepl skipped entries with MPR (ITS#5988)
 Fixed slapd-bdb/hdb cachesize handling (ITS#5860)
 Fixed slapd-bdb/hdb with slapcat with empty dn (ITS#6006)
 Fixed slapd-bdb/hdb with NULL transactions (ITS#6012)
@@ -1457,19 +1457,19 @@ OpenLDAP 2.4.16 Release (2009/04/05)
 Fixed slapo-accesslog interaction with ppolicy (ITS#5979)
 Fixed slapo-dynlist conversion to cn=config (ITS#6002)
 Fixed slapo-syncprov newCookie sync messages (ITS#5972)
-Fixed slapd-syncprov too many MMR messages (ITS#6020)
-Fixed slapo-syncprov replica lockout (ITS#5985)
+Fixed slapd-syncprov too many MPR messages (ITS#6020)
+Fixed slapo-syncprov consumer lockout (ITS#5985)
 Fixed slapo-syncprov modtarget tracking (ITS#5999)
 Fixed slapo-syncprov multiple CSN propagation (ITS#5973)
 Fixed slapo-syncprov race condition (ITS#6045)
 Fixed slapo-syncprov sending cookies without CSN (ITS#6024)
-Fixed slapo-syncprov skipped entries with MMR (ITS#5988)
+Fixed slapo-syncprov skipped entries with MPR (ITS#5988)
 Fixed tools passphrase free (ITS#6014)
 Build Environment
 Cleaned up alloc/free functions for Windows (ITS#6005)
 Fixed running of autosave files in testsuite (ITS#6026)
 Documentation
-admin24 clarified MMR URI requirements (ITS#5942,ITS#5987)
+admin24 clarified MPR URI requirements (ITS#5942,ITS#5987)
 Added ldapexop(1) manual page (ITS#5982)
 slapd-ldap/meta(5) added missing TLS options (ITS#5989)
@@ -1519,14 +1519,14 @@ OpenLDAP 2.4.14 Release (2009/02/14)
 Fixed slapd connection assert (ITS#5835)
 Fixed slapd epoll handling (ITS#5886)
 Fixed slapd frontend/backend options handling (ITS#5857)
-Fixed slapd glue with MMR (ITS#5925)
+Fixed slapd glue with MPR (ITS#5925)
 Fixed slapd logging on Windows (ITS#5392)
 Fixed slapd listener comparison (ITS#5613)
 Fixed slapd manageDSAit with glue entries (ITS#5921)
 Fixed slapd relax behavior with structuralObjectClass (ITS#5792)
 Fixed slapd syncrepl rename handling (ITS#5809)
-Fixed slapd syncrepl MMR when adding new server (ITS#5850)
-Fixed slapd syncrepl MMR with deleted entries (ITS#5843)
+Fixed slapd syncrepl MPR when adding new server (ITS#5850)
+Fixed slapd syncrepl MPR with deleted entries (ITS#5843)
 Fixed slapd syncrepl replication with glued DB (ITS#5866)
 Fixed slapd syncrepl replication with moddn (ITS#5901)
 Fixed slapd syncrepl replication with referrals (ITS#5881)
@@ -1760,7 +1760,7 @@ OpenLDAP 2.4.11 Release (2008/07/16)
 Fixed slapd equality rules for olcRootDN/olcSchemaDN (ITS#5540)
 Fixed slapd sets memory leak (ITS#5557)
 Fixed slapd sortvals binary search (ITS#5578)
-Fixed slapd syncrepl updates with multiple masters (ITS#5597)
+Fixed slapd syncrepl updates with multiple providers (ITS#5597)
 Fixed slapd syncrepl superior objectClass delete/add (ITS#5600)
 Fixed slapd syncrepl/slapo-syncprov contextCSN updates as internal ops (ITS#5596)
 Added slapd-ldap/slapd-meta option to filter out search references (ITS#5593)
@@ -1837,7 +1837,7 @@ OpenLDAP 2.4.9 Release (2008/05/07)
 Fixed slapd syncrepl crash on empty CSN (ITS#5432)
 Fixed slapd syncrepl refreshAndPersist (ITS#5454)
 Fixed slapd syncrepl modrdn processing (ITS#5397)
-Fixed slapd syncrepl MMR partial refresh (ITS#5470)
+Fixed slapd syncrepl MPR partial refresh (ITS#5470)
 Fixed slapd value list termination (ITS#5450)
 Fixed slapd/slapo-accesslog rq mutex usage (ITS#5442)
 Fixed slapd-bdb ID_NOCACHE handling (ITS#5439)
@@ -1909,7 +1909,7 @@ OpenLDAP 2.4.8 Release (2008/02/19)
 Fixed slapd-bdb crash with modrdn (ITS#5358)
 Fixed slapd-bdb SEGV with bdb4.6 (ITS#5322)
 Fixed slapd-bdb modrdn to same dn (ITS#5319)
-Fixed slapd-bdb MMR (ITS#5332)
+Fixed slapd-bdb MPR (ITS#5332)
 Added slapd-bdb/slapd-hdb DB encryption (ITS#5359)
 Fixed slapd-ldif delete (ITS#5265)
 Fixed slapd-meta link to slapd-ldap (ITS#5355)
@@ -1946,7 +1946,7 @@ OpenLDAP 2.4.7 Release (2007/12/14)
 Fixed slapd paged results handling when using rootdn (ITS#5230)
 Fixed slapd syncrepl presentlist handling (ITS#5231)
 Fixed slapd core schema 'c' definition for RFC4519 (ITS#5236)
-Fixed slapd 3-way Multi-Master Replication (ITS#5238)
+Fixed slapd 3-way multi-provider replication (ITS#5238)
 Fixed slapd hash collisions in index slots (ITS#5183)
 Fixed slapd replication of dSAOperation attributes (ITS#5268)
 Fixed slapadd contextCSN updating (ITS#5225)
contrib/ldaptcl/ldap.n

@@ -84,8 +84,7 @@ Currently simple and kerberos-based authentication, are supported.
 To use LDAP and still have reasonable security in a networked,
 Internet/Intranet environment, secure shell can be used to setup
 secure, encrypted connections between client machines and the LDAP
-server, and between the LDAP server and any replica or slave servers
-that might be used.
+server, and between all LDAP nodes that might be used.
 To perform the LDAP "bind" operation:
contrib/slapd-modules/lastbind/slapo-lastbind.5

@@ -60,7 +60,7 @@ attribute is updated on each successful bind operation.
 .B lastbind_forward_updates
 Specify that updates of the authTimestamp attribute
 on a consumer should be forwarded
-to a master instead of being written directly into the consumer's local
+to a provider instead of being written directly into the consumer's local
 database. This setting is only useful on a replication consumer, and
 also requires the
 .B updateref
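The forwarding behavior described in slapo-lastbind(5) above can be sketched as a consumer-side slapd.conf fragment. This is a minimal, illustrative sketch only: the suffix, updateref URL, and the exact placement of the directives are assumptions to verify against your deployment.

```
# Consumer-side database section (sketch; adjust backend and suffix)
database    mdb
suffix      "dc=example,dc=com"

# Refer write operations to the provider, as lastbind_forward_updates requires
updateref   "ldap://ldapprovider.example.com/"

# Record authTimestamp on successful binds; forward those updates upstream
overlay     lastbind
lastbind_forward_updates
```

Without `lastbind_forward_updates`, the authTimestamp write would land only in the consumer's local database and diverge from the provider.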
doc/guide/admin/Makefile

@@ -69,7 +69,7 @@ sdf-img: \
 	intro_tree.png \
 	ldap-sync-refreshandpersist.png \
 	ldap-sync-refreshonly.png \
-	n-way-multi-master.png \
+	n-way-multi-provider.png \
 	push-based-complete.png \
 	push-based-standalone.png \
 	refint.png \
doc/guide/admin/config.sdf

@@ -45,9 +45,9 @@ H2: Replicated Directory Service
 slapd(8) includes support for {{LDAP Sync}}-based replication, called
 {{syncrepl}}, which may be used to maintain shadow copies of directory
 information on multiple directory servers. In its most basic
-configuration, the {{master}} is a syncrepl provider and one or more
-{{slave}} (or {{shadow}}) are syncrepl consumers. An example master-slave
-configuration is shown in figure 3.3. Multi-Master
+configuration, the {{provider}} is a syncrepl provider and one or more
+{{consumer}} (or {{shadow}}) are syncrepl consumers. An example
+provider-consumer configuration is shown in figure 3.3. Multi-Provider
 configurations are also supported.
 !import "config_repl.png"; align="center"; title="Replicated Directory Services"
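The provider/consumer arrangement described above is wired up with a syncrepl directive on the consumer side. A minimal, illustrative sketch follows; the host name, DNs, and credentials are hypothetical placeholders, not values from this commit.

```
# Consumer-side slapd.conf sketch: maintain a shadow copy from the provider
database    mdb
suffix      "dc=example,dc=com"

syncrepl    rid=001
            provider=ldap://ldapprovider.example.com
            type=refreshAndPersist
            searchbase="dc=example,dc=com"
            bindmethod=simple
            binddn="cn=replicator,dc=example,dc=com"
            credentials=secret
```

The provider needs the syncprov overlay (`overlay syncprov`) on the replicated database for this to work.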
doc/guide/admin/intro.sdf

@@ -33,7 +33,7 @@ tuned to give quick response to high-volume lookup or search
 operations. They may have the ability to replicate information
 widely in order to increase availability and reliability, while
 reducing response time. When directory information is replicated,
-temporary inconsistencies between the replicas may be okay, as long
+temporary inconsistencies between the consumers may be okay, as long
 as inconsistencies are resolved in a timely manner.
 There are many different ways to provide a directory service.
@@ -436,11 +436,11 @@ a pool of threads. This reduces the amount of system overhead
 required while providing high performance.
 {{B:Replication}}: {{slapd}} can be configured to maintain shadow
-copies of directory information. This {{single-master/multiple-slave}}
+copies of directory information. This {{single-provider/multiple-consumer}}
 replication scheme is vital in high-volume environments where a
 single {{slapd}} installation just doesn't provide the necessary availability
 or reliability. For extremely demanding environments where a
-single point of failure is not acceptable, {{multi-master}} replication
+single point of failure is not acceptable, {{multi-provider}} replication
 is also available. {{slapd}} includes support for {{LDAP Sync}}-based
 replication.
doc/guide/admin/maintenance.sdf

@@ -159,7 +159,7 @@ type are:
 .{{S: }}
 +{{B: Start the server}}
-Obviously this doesn't cater for any complicated deployments like {{SECT: MirrorMode}} or {{SECT: N-Way Multi-Master}},
+Obviously this doesn't cater for any complicated deployments like {{SECT: MirrorMode}} or {{SECT: N-Way Multi-Provider}},
 but following the above sections and using either commercial support or community support should help. Also check the
 {{SECT: Troubleshooting}} section.
doc/guide/admin/n-way-multi-master.png → doc/guide/admin/n-way-multi-provider.png

File moved
doc/guide/admin/overlays.sdf

@@ -79,7 +79,7 @@ or in raw form.
 It is also used for {{SECT:delta-syncrepl replication}}
-Note: An accesslog database is unique to a given master. It should
+Note: An accesslog database is unique to a given provider. It should
 never be replicated.
 H3: Access Logging Configuration
@@ -259,13 +259,13 @@ default when {{B:--enable-ldap}}.
 H3: Chaining Configuration
 In order to demonstrate how this overlay works, we shall discuss a typical
-scenario which might be one master server and three Syncrepl slaves.
+scenario which might be one provider server and three Syncrepl replicas.
 On each replica, add this near the top of the {{slapd.conf}}(5) file
 (global), before any database definitions:
 > overlay chain
-> chain-uri "ldap://ldapmaster.example.com"
+> chain-uri "ldap://ldapprovider.example.com"
 > chain-idassert-bind bindmethod="simple"
 >        binddn="cn=Manager,dc=example,dc=com"
 >        credentials="<secret>"
@@ -275,48 +275,48 @@ On each replica, add this near the top of the {{slapd.conf}}(5) file
 Add this below your {{syncrepl}} statement:
-> updateref "ldap://ldapmaster.example.com/"
+> updateref "ldap://ldapprovider.example.com/"
-The {{B:chain-tls}} statement enables TLS from the slave to the ldapmaster.
+The {{B:chain-tls}} statement enables TLS from the replica to the ldapprovider.
 The DITs are exactly the same between these machines, therefore whatever user
-bound to the slave will also exist on the master. If that DN does not have
-update privileges on the master, nothing will happen.
+bound to the replica will also exist on the provider. If that DN does not have
+update privileges on the provider, nothing will happen.
-You will need to restart the slave after these {{slapd.conf}} changes.
+You will need to restart the replica after these {{slapd.conf}} changes.
 Then, if you are using {{loglevel stats}} (256), you can monitor an
-{{ldapmodify}} on the slave and the master. (If you're using {{cn=config}}
+{{ldapmodify}} on the replica and the provider. (If you're using {{cn=config}}
 no restart is required.)
-Now start an {{ldapmodify}} on the slave and watch the logs. You should expect
+Now start an {{ldapmodify}} on the replica and watch the logs. You should expect
 something like:
-> Sep 6 09:27:25 slave1 slapd[29274]: conn=11 fd=31 ACCEPT from IP=143.199.102.216:45181 (IP=143.199.102.216:389)
-> Sep 6 09:27:25 slave1 slapd[29274]: conn=11 op=0 STARTTLS
-> Sep 6 09:27:25 slave1 slapd[29274]: conn=11 op=0 RESULT oid= err=0 text=
-> Sep 6 09:27:25 slave1 slapd[29274]: conn=11 fd=31 TLS established tls_ssf=256 ssf=256
-> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=1 BIND dn="uid=user1,ou=people,dc=example,dc=com" method=128
-> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=1 BIND dn="uid=user1,ou=People,dc=example,dc=com" mech=SIMPLE ssf=0
-> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=1 RESULT tag=97 err=0 text=
-> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=2 MOD dn="uid=user1,ou=People,dc=example,dc=com"
-> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=2 MOD attr=mail
-> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=2 RESULT tag=103 err=0 text=
-> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=3 UNBIND
-> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 fd=31 closed
-> Sep 6 09:27:28 slave1 slapd[29274]: syncrepl_entry: LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY)
-> Sep 6 09:27:28 slave1 slapd[29274]: syncrepl_entry: be_search (0)
-> Sep 6 09:27:28 slave1 slapd[29274]: syncrepl_entry: uid=user1,ou=People,dc=example,dc=com
-> Sep 6 09:27:28 slave1 slapd[29274]: syncrepl_entry: be_modify (0)
-And on the master you will see this:
-> Sep 6 09:23:57 ldapmaster slapd[2961]: conn=55902 op=3 PROXYAUTHZ dn="uid=user1,ou=people,dc=example,dc=com"
-> Sep 6 09:23:57 ldapmaster slapd[2961]: conn=55902 op=3 MOD dn="uid=user1,ou=People,dc=example,dc=com"
-> Sep 6 09:23:57 ldapmaster slapd[2961]: conn=55902 op=3 MOD attr=mail
-> Sep 6 09:23:57 ldapmaster slapd[2961]: conn=55902 op=3 RESULT tag=103 err=0 text=
-Note: You can clearly see the PROXYAUTHZ line on the master, indicating the
-proper identity assertion for the update on the master. Also note the slave
-immediately receiving the Syncrepl update from the master.
+> Sep 6 09:27:25 replica1 slapd[29274]: conn=11 fd=31 ACCEPT from IP=143.199.102.216:45181 (IP=143.199.102.216:389)
+> Sep 6 09:27:25 replica1 slapd[29274]: conn=11 op=0 STARTTLS
+> Sep 6 09:27:25 replica1 slapd[29274]: conn=11 op=0 RESULT oid= err=0 text=
+> Sep 6 09:27:25 replica1 slapd[29274]: conn=11 fd=31 TLS established tls_ssf=256 ssf=256
+> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=1 BIND dn="uid=user1,ou=people,dc=example,dc=com" method=128
+> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=1 BIND dn="uid=user1,ou=People,dc=example,dc=com" mech=SIMPLE ssf=0
+> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=1 RESULT tag=97 err=0 text=
+> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=2 MOD dn="uid=user1,ou=People,dc=example,dc=com"
+> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=2 MOD attr=mail
+> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=2 RESULT tag=103 err=0 text=
+> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=3 UNBIND
+> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 fd=31 closed
+> Sep 6 09:27:28 replica1 slapd[29274]: syncrepl_entry: LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY)
+> Sep 6 09:27:28 replica1 slapd[29274]: syncrepl_entry: be_search (0)
+> Sep 6 09:27:28 replica1 slapd[29274]: syncrepl_entry: uid=user1,ou=People,dc=example,dc=com
+> Sep 6 09:27:28 replica1 slapd[29274]: syncrepl_entry: be_modify (0)
+And on the provider you will see this:
+> Sep 6 09:23:57 ldapprovider slapd[2961]: conn=55902 op=3 PROXYAUTHZ dn="uid=user1,ou=people,dc=example,dc=com"
+> Sep 6 09:23:57 ldapprovider slapd[2961]: conn=55902 op=3 MOD dn="uid=user1,ou=People,dc=example,dc=com"
+> Sep 6 09:23:57 ldapprovider slapd[2961]: conn=55902 op=3 MOD attr=mail
+> Sep 6 09:23:57 ldapprovider slapd[2961]: conn=55902 op=3 RESULT tag=103 err=0 text=
+Note: You can clearly see the PROXYAUTHZ line on the provider, indicating the
+proper identity assertion for the update on the provider. Also note the replica
+immediately receiving the Syncrepl update from the provider.
 H3: Handling Chaining Errors
@@ -683,8 +683,8 @@ H2: The Proxy Cache Engine
 {{TERM:LDAP}} servers typically hold one or more subtrees of a
 {{TERM:DIT}}. Replica (or shadow) servers hold shadow copies of
-entries held by one or more master servers. Changes are propagated
-from the master server to replica (slave) servers using LDAP Sync
+entries held by one or more provider servers. Changes are propagated
+from the provider server to replica servers using LDAP Sync
 replication. An LDAP cache is a special type of replica which holds
 entries corresponding to search filters instead of subtrees.
doc/guide/admin/replication.sdf

@@ -37,12 +37,12 @@ short, is a consumer-side replication engine that enables the
 consumer {{TERM:LDAP}} server to maintain a shadow copy of a
 {{TERM:DIT}} fragment. A syncrepl engine resides at the consumer
 and executes as one of the {{slapd}}(8) threads. It creates and maintains a
-consumer replica by connecting to the replication provider to perform
+replica by connecting to the replication provider to perform
 the initial DIT content load followed either by periodic content
 polling or by timely updates upon content changes.
 Syncrepl uses the LDAP Content Synchronization protocol (or LDAP Sync for
-short) as the replica synchronization protocol. LDAP Sync provides
+short) as the consumer synchronization protocol. LDAP Sync provides
 a stateful replication which supports both pull-based and push-based
 synchronization and does not mandate the use of a history store.
 In pull-based replication the consumer periodically
@@ -58,11 +58,11 @@ maintaining and exchanging synchronization cookies. Because the
 syncrepl consumer and provider maintain their content status, the
 consumer can poll the provider content to perform incremental
 synchronization by asking for the entries required to make the
-consumer replica up-to-date with the provider content. Syncrepl
-also enables convenient management of replicas by maintaining replica
-status. The consumer replica can be constructed from a consumer-side
+consumer up-to-date with the provider content. Syncrepl
+also enables convenient management of consumers by maintaining replication
+status. The consumer database can be constructed from a consumer-side
 or a provider-side backup at any synchronization status. Syncrepl
-can automatically resynchronize the consumer replica up-to-date
+can automatically resynchronize the consumer database to be up-to-date
 with the current provider content.
 Syncrepl supports both pull-based and push-based synchronization.
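The pull-based and persistent modes discussed above map onto syncrepl's two refresh types, refreshOnly and refreshAndPersist. An illustrative consumer-side fragment; the interval, host name, and DNs are hypothetical:

```
# Pull-based: poll the provider once an hour (interval is dd:hh:mm:ss)
syncrepl    rid=003
            provider=ldap://ldapprovider.example.com
            type=refreshOnly
            interval=00:01:00:00
            searchbase="dc=example,dc=com"
            bindmethod=simple
            binddn="cn=replicator,dc=example,dc=com"
            credentials=secret
```

Changing `type=refreshOnly` to `type=refreshAndPersist` (and dropping `interval`) keeps the search open so the provider pushes updates as they happen.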
@@ -81,7 +81,7 @@ The provider keeps track of the consumer servers that have requested
 a persistent search and sends them necessary updates as the provider
 replication content gets modified.
-With syncrepl, a consumer server can create a replica without
+With syncrepl, a consumer can create a replication agreement without
 changing the provider's configurations and without restarting the
 provider server, if the consumer server has appropriate access
 privileges for the DIT fragment to be replicated. The consumer
@@ -90,7 +90,7 @@ changes and restart.
 Syncrepl supports partial, sparse, and fractional replications. The shadow
 DIT fragment is defined by a general search criteria consisting of
-base, scope, filter, and attribute list. The replica content is
+base, scope, filter, and attribute list. The consumer content is
 also subject to the access privileges of the bind identity of the
 syncrepl replication connection.
@@ -204,13 +204,12 @@ The syncrepl engine utilizes both the present phase and the delete
 phase of the refresh synchronization. It is possible to configure
 a session log in the provider which stores the
 {{EX:entryUUID}}s of a finite number of entries deleted from a
-database. Multiple replicas share the same session log. The syncrepl
-engine uses the
-delete phase if the session log is present and the state of the
-consumer server is recent enough that no session log entries are
+database. Multiple consumers share the same session log. The syncrepl
+engine uses the delete phase if the session log is present and the state
+of the consumer server is recent enough that no session log entries are
 truncated after the last synchronization of the client. The syncrepl
 engine uses the present phase if no session log is configured for
-the replication content or if the consumer replica is too outdated
+the replication content or if the consumer is too outdated
 to be covered by the session log. The current design of the session
 log store is memory based, so the information contained in the
 session log is not persistent over multiple provider invocations.
@@ -265,9 +264,9 @@ database yielded a greater {{EX:entryCSN}} than was previously
 recorded in the suffix entry's {{EX:contextCSN}} attribute, a
 checkpoint will be immediately written with the new value.
-The consumer also stores its replica state, which is the provider's
+The consumer also stores its replication state, which is the provider's
 {{EX:contextCSN}} received as a synchronization cookie, in the
-{{EX:contextCSN}} attribute of the suffix entry. The replica state
+{{EX:contextCSN}} attribute of the suffix entry. The replication state
 maintained by a consumer server is used as the synchronization state
 indicator when it performs subsequent incremental synchronization
 with the provider server. It is also used as a provider-side
@@ -281,8 +280,8 @@ actions.
 Because a general search filter can be used in the syncrepl
 specification, some entries in the context may be omitted from the
 synchronization content. The syncrepl engine creates a glue entry
-to fill in the holes in the replica context if any part of the
-replica content is subordinate to the holes. The glue entries will
+to fill in the holes in the consumer context if any part of the
+consumer content is subordinate to the holes. The glue entries will
 not be returned in the search result unless {{ManageDsaIT}} control
 is provided.
@@ -320,7 +319,7 @@ multiple objects.
 For example, suppose you have a database consisting of 102,400 objects of 1 KB
 each. Further, suppose you routinely run a batch job to change the value of
 a single two-byte attribute value that appears in each of the 102,400 objects
-on the master. Not counting LDAP and TCP/IP protocol overhead, each time you
+on the provider. Not counting LDAP and TCP/IP protocol overhead, each time you
 run this job each consumer will transfer and process {{B:100 MB}} of data to
 process {{B:200KB of changes!}}
@@ -338,7 +337,7 @@ situations like the one described above. Delta-syncrepl works by maintaining a
 changelog of a selectable depth in a separate database on the provider. The replication consumer
 checks the changelog for the changes it needs and, as long as
 the changelog contains the needed changes, the consumer fetches the changes
-from the changelog and applies them to its database. If, however, a replica
+from the changelog and applies them to its database. If, however, a consumer
 is too far out of sync (or completely empty), conventional syncrepl is used to
 bring it up to date and replication then switches back to the delta-syncrepl
 mode.
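The delta-syncrepl scheme described above is typically built from an accesslog changelog database on the provider plus `syncdata=accesslog` on the consumer. A minimal, illustrative sketch; the suffixes, rid, and credentials are hypothetical, and directive placement should be checked against your configuration layout:

```
# Provider side (sketch): a separate database holds the changelog
database    mdb
suffix      "cn=accesslog"
rootdn      "cn=accesslog"

# On the main replicated database, log successful write operations
overlay     accesslog
logdb       cn=accesslog
logops      writes
logsuccess  true

# Consumer side (sketch): replay changes from the changelog when possible
syncrepl    rid=002
            provider=ldap://ldapprovider.example.com
            type=refreshAndPersist
            searchbase="dc=example,dc=com"
            logbase="cn=accesslog"
            logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
            syncdata=accesslog
            bindmethod=simple
            binddn="cn=replicator,dc=example,dc=com"
            credentials=secret
```

If the consumer falls outside the changelog's depth, replication falls back to plain syncrepl, exactly as the paragraph above describes.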
@@ -351,12 +350,12 @@ it to another machine.
 For configuration, please see the {{SECT:Delta-syncrepl}} section.
-H3: N-Way Multi-Master replication
+H3: N-Way Multi-Provider Replication
-Multi-Master replication is a replication technique using Syncrepl to replicate
-data to multiple provider ("Master") Directory servers.
+Multi-Provider replication is a replication technique using Syncrepl to replicate
+data to multiple provider ("Provider") Directory servers.
-H4: Valid Arguments for Multi-Master replication
+H4: Valid Arguments for Multi-Provider replication
 * If any provider fails, other providers will continue to accept updates
 * Avoids a single point of failure
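An N-way setup like the one described above typically gives each node a distinct serverID and one syncrepl stanza per peer. A minimal two-node sketch for node 1; host names, DNs, and credentials are hypothetical, and `mirrormode on` here reflects the directive OpenLDAP 2.4 uses to let a node accept direct writes alongside its syncrepl consumer role:

```
# slapd.conf sketch for node 1 of a two-provider setup
serverID    1   "ldap://ldap1.example.com"
serverID    2   "ldap://ldap2.example.com"

database    mdb
suffix      "dc=example,dc=com"

# Pull changes from the peer provider
syncrepl    rid=001
            provider=ldap://ldap2.example.com
            type=refreshAndPersist
            retry="5 5 300 +"
            searchbase="dc=example,dc=com"
            bindmethod=simple
            binddn="cn=replicator,dc=example,dc=com"
            credentials=secret

# Accept writes on this node as well
mirrormode  on

# Serve changes to the peer
overlay     syncprov
```

Node 2 mirrors this configuration with its syncrepl stanza pointing back at ldap1.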
@@ -364,21 +363,21 @@ H4: Valid Arguments for Multi-Master replication
 the network/globe.
 * Good for Automatic failover/High Availability
-H4: Invalid Arguments for Multi-Master replication
+H4: Invalid Arguments for Multi-Provider replication
-(These are often claimed to be advantages of Multi-Master replication but
+(These are often claimed to be advantages of Multi-Provider replication but
 those claims are false):
 * It has {{B:NOTHING}} to do with load balancing
 * Providers {{B:must}} propagate writes to {{B:all}} the other servers, which
 means the network traffic and write load spreads across all
-of the servers the same as for single-master.
+of the servers the same as for single-provider.
 * Server utilization and performance are at best identical for
-Multi-Master and Single-Master replication; at worst Single-Master is
+Multi-Provider and Single-Provider replication; at worst Single-Provider is
 superior because indexing can be tuned differently to optimize for the
 different usage patterns between the provider and the consumers.
-H4: Arguments against Multi-Master replication
+H4: Arguments against Multi-Provider replication
 * Breaks the data consistency guarantees of the directory model
 * {{URL:http://www.openldap.org/faq/data/cache/1240.html}}
@@ -387,18 +386,18 @@ H4: Arguments against Multi-Master replication
* Typically, a particular machine cannot distinguish between losing contact
with a peer because that peer crashed, or because the network link has failed
* If a network is partitioned and multiple clients start writing to each of the
"
mast
ers" then reconciliation will be a pain; it may be best to simply deny
"
provid
ers" then reconciliation will be a pain; it may be best to simply deny
writes to the clients that are partitioned from the single provider
For configuration, please see the {{SECT:N-Way Multi-Provider}} section below
H3: MirrorMode replication
MirrorMode is a hybrid configuration that provides all of the consistency
guarantees of single-provider replication, while also providing the high
availability of multi-provider. In MirrorMode two providers are set up to
replicate from each other (as a multi-provider configuration), but an
external frontend is employed to direct all writes to only one of
the two servers. The second provider will only be used for writes if
the first provider crashes, at which point the frontend will switch to
...
...
@@ -417,7 +416,7 @@ can be ready to take over (hot standby)
H4: Arguments against MirrorMode
* MirrorMode is not what is termed as a Multi-Provider solution. This is because
writes have to go to just one of the mirror nodes at a time
* MirrorMode can be termed as Active-Active Hot-Standby, therefore an external
server (slapd in proxy mode) or device (hardware load balancer)
...
...
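A MirrorMode pair is configured essentially like a two-node multi-provider
setup; what makes it MirrorMode is the external frontend that directs all
writes to one node at a time. A minimal sketch for one of the two mirrors
(hostnames, suffix, and credentials are illustrative assumptions):

>	serverID 1
>	syncrepl rid=001
>		provider=ldap://mirror2.example.com
>		type=refreshAndPersist
>		searchbase="dc=example,dc=com"
>		bindmethod=simple
>		binddn="cn=replicator,dc=example,dc=com"
>		credentials=secret
>		retry="60 +"
>	mirrormode on

The second mirror would use {{serverID 2}} and point its syncrepl stanza back
at mirror1, and both would load the syncprov overlay.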
@@ -453,19 +452,19 @@ push mode. Slurpd replication was deprecated in favor of Syncrepl
replication and has been completely removed from OpenLDAP 2.4.
The slurpd daemon was the original replication mechanism inherited from
UMich's LDAP and operated in push mode: the provider pushed changes to the
replicas. It was replaced for many reasons, in brief:
* It was not reliable
** It was extremely sensitive to the ordering of records in the replog
** It could easily go out of sync, at which point manual intervention was
required to resync the replica database with the provider directory
** It wasn't very tolerant of unavailable servers. If a replica went down
for a long time, the replog could grow to a size that was too large for
slurpd to process
* It only worked in push mode
* It required stopping and restarting the provider to add new replicas
* It only supported single provider replication
Syncrepl has none of those weaknesses:
...
...
@@ -480,7 +479,7 @@ Syncrepl has none of those weaknesses:
* Syncrepl can operate in either direction
* Consumers can be added at any time without touching anything on the
provider
* Multi-provider replication is supported
H2: Configuring the different replication types
...
...
@@ -492,21 +491,21 @@ H4: Syncrepl configuration
Because syncrepl is a consumer-side replication engine, the syncrepl
specification is defined in {{slapd.conf}}(5) of the consumer
server, not in the provider server's configuration file. The initial
loading of the consumer content can be performed either by starting
the syncrepl engine with no synchronization cookie or by populating
the consumer by loading an {{TERM:LDIF}} file dumped as a
backup at the provider.
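Concretely, the consumer-side specification is a {{syncrepl}} directive in the
database section of the consumer's {{slapd.conf}}(5). This minimal sketch
assumes an example provider hostname, suffix, and replication DN:

>	syncrepl rid=001
>		provider=ldap://provider.example.com:389
>		type=refreshAndPersist
>		searchbase="dc=example,dc=com"
>		bindmethod=simple
>		binddn="cn=replicator,dc=example,dc=com"
>		credentials=secret
>		retry="60 +"

Started with no synchronization cookie, this consumer performs a full initial
content load from the provider before entering the persist phase.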
When loading from a backup, it is not required to perform the initial
loading from the up-to-date backup of the provider content. The
syncrepl engine will automatically synchronize the initial consumer
to the current provider content. As a result, it is not
required to stop the provider server in order to avoid the replication
inconsistency caused by the updates to the provider content during
the content backup and loading process.
When replicating a large scale directory, especially in a bandwidth
constrained environment, it is advised to load the consumer
from a backup instead of performing a full initial load using
syncrepl.
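Such a backup-based initial load can be sketched as follows, using the
{{slapcat}}(8) and {{slapadd}}(8) tools that ship with OpenLDAP (the database
number and file name are illustrative assumptions):

>	# On the provider: dump the database contents to an LDIF file
>	slapcat -n 1 -l backup.ldif
>
>	# Copy backup.ldif to the consumer, then load it offline
>	# (with the consumer slapd stopped) and start slapd afterwards
>	slapadd -n 1 -l backup.ldif

The syncrepl engine then only needs to transfer the changes made since the
backup, rather than the entire directory.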
...
...
@@ -580,8 +579,8 @@ A more complete example of the {{slapd.conf}}(5) content is thus:
H4: Set up the consumer slapd
The syncrepl replication is specified in the database section of