Pierangelo Masarati authored · d3ca441a

    /*
     * The original code performs ( n ) normalizations
     * and ( n * ( n - 1 ) / 2 ) matches, each of which
     * hides another normalization.  The new code
     * performs the same ( n ) normalizations up front,
     * then ( n * ( n - 1 ) / 2 ) memory compares, far less
     * expensive than an entire match, assuming a match
     * is equivalent to a normalization plus a memory compare ...
     *
     * This uses far more memory than the previous code,
     * but it can greatly improve performance when big
     * chunks of data are added (a typical example is a group
     * with thousands of DN-syntax members).  On my system,
     * for members with 5-RDN DNs:
    
     *  members    orig            bvmatch (dirty)    new
     *  1000       0m38.456s       0m0.553s           0m0.608s
     *  2000       2m33.341s       0m0.851s           0m1.003s
    
     * Moreover, 100 groups with 10000 members each were
     * added in 37m27.933s (an analogous LDIF file was
     * loaded into Active Directory in 38m28.682s, BTW).
     *
     * Maybe we could switch to the new algorithm when
     * the number of values exceeds a given threshold?
     */