From abb70f7031b06eb910555cec1fd7f18f1a284a0e Mon Sep 17 00:00:00 2001
From: Quanah Gibson-Mount <quanah@openldap.org>
Date: Thu, 24 Mar 2011 21:42:11 +0000
Subject: [PATCH] ITS#6866

---
 CHANGES                         | 1 +
 doc/guide/admin/replication.sdf | 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/CHANGES b/CHANGES
index 48c9c85e03..5ed17c3a24 100644
--- a/CHANGES
+++ b/CHANGES
@@ -24,6 +24,7 @@ OpenLDAP 2.4.25 Engineering
 Documentation
 	admin24 guide ldapi usage (ITS#6839)
 	admin24 guide conversion notes (ITS#6834)
+	admin24 guide fix drawback math for syncrepl (ITS#6866)
 	admin24 guide note manpages are definitive (ITS#6855)

 OpenLDAP 2.4.24 Release (2011/02/10)
diff --git a/doc/guide/admin/replication.sdf b/doc/guide/admin/replication.sdf
index 6baa7b292a..6a0c65a827 100644
--- a/doc/guide/admin/replication.sdf
+++ b/doc/guide/admin/replication.sdf
@@ -317,11 +317,11 @@ only the final state of the entry is significant. But this approach may
 have drawbacks when the usage pattern involves single changes to multiple
 objects.

-For example, suppose you have a database consisting of 100,000 objects of 1 KB
+For example, suppose you have a database consisting of 102,400 objects of 1 KB
 each. Further, suppose you routinely run a batch job to change the value of
-a single two-byte attribute value that appears in each of the 100,000 objects
+a single two-byte attribute value that appears in each of the 102,400 objects
 on the master. Not counting LDAP and TCP/IP protocol overhead, each time you
-run this job each consumer will transfer and process {{B:1 GB}} of data to
+run this job each consumer will transfer and process {{B:100 MB}} of data to
 process {{B:200KB of changes!}} 99.98% of the data that is transmitted and
 processed in a case like this will
--
GitLab
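The corrected figures in this patch can be sanity-checked with a few lines of Python (an editorial sketch, not part of the patch itself), using binary units (1 KB = 1024 bytes):

```python
objects = 102_400        # entries in the example database
entry_size = 1024        # 1 KB per entry
changed_bytes = 2        # two-byte attribute changed per entry

total = objects * entry_size       # bytes each consumer must transfer
changed = objects * changed_bytes  # bytes that actually changed

print(total // (1024 * 1024), "MB transferred")   # 100 MB, as the patch says
print(changed // 1024, "KB of real changes")      # 200 KB, as the patch says
print(round(100 * (1 - changed / total), 2), "% redundant")
```

With these numbers the redundant fraction works out to about 99.8%, not the 99.98% in the unchanged context line (that figure matches the old, incorrect 1 GB total); the patch as shown corrects only the object count and total size.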