389 Upstream Testing Framework


Overview

Currently Red Hat Directory Server is tested using the TET framework and its tests. This framework and its tests have been enhanced over the years and offer a high level of QE coverage. For example, the acceptance suite runs thousands of tests and is effective at detecting regressions. A drawback is the complexity of the tests and of the TET framework itself: it is somewhat difficult to set up and run. Diagnosing a failure is also difficult, and requires expertise to conclude whether a failure is due to a Directory Server bug, an invalid test case, a bug in the framework, or an environment issue.

As part of a Continuous Integration project, 389 upstream testing is an effort to push and maintain the testing capability in the upstream 389 repository.

This document describes the following components:

Use case


Prerequisites

minimal version

Environment

You need to


Basic Setup and Testing

The following describes how to set up a testing environment and run a specific test.

Deploy 389 Directory Server under specific directory

The following setup script allows you to checkout, compile and deploy the current version of 389 Directory Server under a specific directory. The path used for DIR_INSTALL will be used throughout the rest of the setup and testing process (installation_prefix, etc.).

Setup Script

#!/bin/bash
PREFIX=${1:-}
DIR_SRC=$HOME/workspaces
DIR_DS_GIT=389-ds-base
DIR_SPEC_GIT=389-ds-base-spec
DIR_RPM=$HOME/rpmbuild
DIR_INSTALL=$HOME/install   # a.k.a /directory/where/389-ds/is/installed
DIR_SRC_DIR=$DIR_SRC/$DIR_DS_GIT
DIR_SRC_PKG=$DIR_SRC/$DIR_SPEC_GIT
TMP=/tmp/tempo$$
SED_SCRIPT=/tmp/script$$

#
# Checkout the source/spec
#
initialize()
{
   for i in $DIR_DS_GIT $DIR_SPEC_GIT
   do
       rm -rf $DIR_SRC/$i
   done
   cd $DIR_SRC
   git clone https://github.com/389ds/389-ds-base.git $DIR_DS_GIT
   git clone git://pkgs.fedoraproject.org/389-ds-base $DIR_SPEC_GIT
}
#
# Compile 389-DS
#
compile()
{
   cd $DIR_SRC_PKG
   cp $DIR_SRC_PKG/389-ds-base.spec     $DIR_RPM/SPECS
   cp $DIR_SRC_PKG/389-ds-base-git.sh   $DIR_RPM/SOURCES
   cp $DIR_SRC_PKG/389-ds-base-devel.README $DIR_RPM/SOURCES
   cd $DIR_SRC_DIR
   rm -f /tmp/*bz2
   TAG=HEAD sh $DIR_SRC_PKG/389-ds-base-git-local.sh /tmp
   SRC_BZ2=`ls -rt /tmp/*bz2 | tail -1`
   echo "Copy $SRC_BZ2"
   cp $SRC_BZ2 $DIR_RPM/SOURCES
   if [ -n "$PREFIX" -a -d $PREFIX ]
   then
       TARGET="--prefix=$PREFIX"
   else
       TARGET=""
   fi
   echo "Active the debug compilation"
   echo "Compilation start"
       CFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic'
       CXXFLAGS=$CFLAGS
   sed -e 's/^\%configure/CFLAGS="$CFLAGS" CXXFLAGS="$CXXFLAGS" \%configure/' $DIR_RPM/SPECS/389-ds-base.spec > $DIR_RPM/SPECS/389-ds-base.spec.new
   cp $DIR_RPM/SPECS/389-ds-base.spec.new $DIR_RPM/SPECS/389-ds-base.spec
   sleep 3
   rpmbuild -ba $DIR_RPM/SPECS/389-ds-base.spec 2>&1 | tee $DIR_RPM/build.output
}
#
# Install it on a private directory $HOME/install
#
install()
{
   cd $DIR_SRC_DIR
   CFLAGS="-g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-sign-compare"
   CXXFLAGS="$CFLAGS" $DIR_SRC_DIR/ds/configure --prefix=$DIR_INSTALL --enable-debug --with-openldap 2>&1 > $DIR_RPM/BUILD/build_install.output
   echo "Now install dirsrv"   >> $DIR_RPM/BUILD/build_install.output
   make install               >> $DIR_RPM/BUILD/build_install.output 2>&1
}
if [ ! -d $HOME/.dirsrv ]
then
     mkdir ~/.dirsrv # this is where the instance specific sysconfig files go - dirsrv-instancename
fi
# note: compile is not necessary to deploy
initialize
install

For information, with that kind of deployment you can run the usual administrative Directory Server commands. For example:

cd $DIR_INSTALL
sbin/setup-ds.pl
sbin/restart-dirsrv
sbin/ldif2db
bin/logconv.pl var/log/dirsrv/slapd-inst/access
bin/dbscan -f var/lib/dirsrv/slapd-inst/db/userRoot/id2entry.db4
etc.

The lib389 library provides interfaces to perform all of these administrative tasks.
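For illustration, the same kind of tasks can be driven from Python through lib389. The sketch below is not a definitive recipe: it assumes an already created local instance with a hypothetical server id 'inst', the constant names come from lib389._constants, and exact method signatures may differ between lib389 versions.

from lib389 import DirSrv
from lib389._constants import SER_HOST, SER_PORT, SER_SERVERID_PROP

# Attach to an existing local instance and run a few administrative operations
inst = DirSrv(verbose=False)
inst.allocate({SER_HOST: 'localhost',
               SER_PORT: 389,                 # hypothetical port
               SER_SERVERID_PROP: 'inst'})    # hypothetical server id
if inst.exists():
    inst.start(timeout=10)                    # comparable to sbin/start-dirsrv
    inst.open()                               # open the LDAP connection for online tasks
    print(inst.getProperties(['port', 'owner']))
    inst.close()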


Run a specific test using python

Open the tests you want to run (e.g. ticketxyz_test.py)

So /home/<your_login>/install, aka the “installation prefix”, aka DIR_INSTALL, is the target directory where the build was deployed (see the setup script above). You may define this directory with the $PREFIX environment variable or, as in this example, force it directly in the test case.

Both methods are valid, but setting it directly in the test case is useful if you want to run a multi-version test case, as in test ticket47788, where the two variables installation1_prefix and installation2_prefix allow you to create instances from different versions.
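For illustration, the top of such a multi-version module might look like the sketch below. The paths are hypothetical; the variable names mirror ticket47788 and SER_DEPLOYED_DIR comes from lib389._constants.

import pytest
from lib389._constants import SER_DEPLOYED_DIR

# Hypothetical prefixes pointing at two different deployed builds
installation1_prefix = '/home/myuser/install-old'
installation2_prefix = '/home/myuser/install-new'

@pytest.fixture(scope="module")
def topology(request):
    # Instance 1 is created from the first build, instance 2 from the second one
    args_instance1 = {SER_DEPLOYED_DIR: installation1_prefix}
    args_instance2 = {SER_DEPLOYED_DIR: installation2_prefix}
    # ... allocate/create the two instances with these argument dictionaries ...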

Run a specific test using eclipse IDE

Prerequisite

Run a dedicated test

Open the tests you want to run ‘dirsrvtest->tickets->ticket47490_test.py’

Run a specific test under py.test

Run the following script. If you need more detail on test processing, uncomment ‘DEBUG=-s’.

#!/bin/bash
DIR=$HOME/test
TEST=ticketxyz_test.py
mkdir -p $DIR
# checkout tests and lib389
cd $DIR
git clone https://github.com/389ds/389-ds-base.git ds
# define PYTHONPATH
export PYTHONPATH=/usr/lib64/python2.7:/usr/lib64/python2.7/plat-linux2:/usr/lib64/python2.7/lib-dynload:/usr/lib64/python2.7/site-packages:/usr/lib/python2.7/site-packages:/usr/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg-info
LIB389=$DIR/ds/src/lib389
PROJECT=$DIR/ds/dirsrvtests
DIR_PREFIX=/directory/where/389-ds/is/installed    # i.e. DIR_INSTALL ($HOME/install) from the setup script
export PYTHONPATH=$PYTHONPATH:$PROJECT:$LIB389
#DEBUG=-s
PREFIX=$DIR_PREFIX py.test -v $DEBUG $PROJECT/tickets/$TEST

Directory Server deployed with RPM

For the moment, it is not recommended to run the tests with this type of deployment because:

How to Write a Test

This document describes the basics for writing a lib389 test

How To Write a Lib389 Test


lib389 Library

Overview

lib389 is a Python-based library that offers services for performing Directory Server administrative tasks. This library is intended to be used to develop 389 upstream tests and 389 administrative CLI tools.

The library is based on an early version of https://github.com/richm/dsadmin

lib389 Design

Repos

This library is open source and is available under https://github.com/389ds/389-ds-base/tree/main/src/lib389

Methodology

The development methodology for lib389 follows the same development methodology as 389 Directory Server. The main aspects are described here.

Layout

lib389/

  __init__.py       # implements routines to do online administrative tasks      
  _replication.py   # implements replication related class (CSN/RUV)
  _constants.py     # main definitions (Directory manager, replica type, DNs for config...)
  _entry.py         # implements LDAP 'Entry' and methods 
  _ldif_conn.py     # subclass of LDIFParser. Used to translate an LDIF entry (from dse.ldif for example) into an 'Entry'
  agent.py          # implements routines to do remote offline administrative tasks
  agreement.py      # implements replica agreement services
  backend.py        # implements backend services
  brooker.py        # Brooker classes to organize ldap methods
  chaining.py       # implements chaining backend services
  changelog.py      # implements the replication changelog
  index.py          # implements index services
  logs.py           # implements logging services
  mappingTree.py    # implements mapping tree services
  plugins.py        # implements plugin operations (enable/disable)
  properties.py     # various property helper shortcut names
  replica.py        # implements replica services
  schema.py         # implements schema operations
  suffix.py         # implements suffix services (a wrapper around mapping tree)
  tasks.py          # implements task services
  tools.py          # implements routines to do local offline administrative tasks 
  utils.py          # implements miscellaneous routines

test/
    config_test.py
        It contains tests for:
          - replica
          - backend
          - suffix
    dsadmin_basic_test.py
        It contains tests for:
          - changelog
          - log level
          - mapping tree
          - misc (bind)
    dsadmin_create_remove_test.py
        - instance creation
        - instance deletion
    dsadmin_test
        - replica
        - backend
        - start/stop instance
        - ssl
        - replica agreement
    replica_test
        - test various replication objects (changelog, replica, replica agreement, ruv)
    backend_test
        - backend


Modules

DirSrv (__init__.py)


DirSrv state

A DirSrv Object can have the following states: ALLOCATED/OFFLINE/ONLINE.

The graphic below describes the transitions after the various operations.

                             __ (create)__           ___(open)___      __
                            /             \         /  (start)   \    /  \
                           /               V       /              V  /    \
  --(allocate)--> ALLOCATED                 OFFLINE             ONLINE   (all lib389 ops + LDAP(S) ops)
                          ^               /       ^              /  ^    /
                           \___(delete)__/         \___(close)__/    \__/
                                                    (stop/restart)
                                                   (backup/restore)

The online administrative tasks (LDAP operations) require that DirSrv is ONLINE. The offline administrative tasks can be issued whether DirSrv is ONLINE or OFFLINE.
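A rough illustration of these transitions is sketched below. The constant names come from lib389._constants, the port and server id are hypothetical, and exact arguments may differ between lib389 versions.

from lib389 import DirSrv
from lib389._constants import SER_HOST, SER_PORT, SER_SERVERID_PROP

inst = DirSrv(verbose=False)
inst.allocate({SER_HOST: 'localhost',           # -> ALLOCATED
               SER_PORT: 38901,                 # hypothetical port
               SER_SERVERID_PROP: 'example'})   # hypothetical server id
inst.create()                                   # -> OFFLINE: the instance exists on disk
inst.start(timeout=10)
inst.open()                                     # -> ONLINE: online (LDAP) tasks are possible
inst.stop(timeout=10)                           # -> back to OFFLINE
inst.delete()                                   # the instance is removed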

allocate(args)

Initialize a DirSrv object according to the provided args dictionary. The state changes from DIRSRV_STATE_INIT -> DIRSRV_STATE_ALLOCATED. This step is mandatory before calling the other methods of this class.

args contains the following properties of the server. (mandatory properties are in bold)

The instance will be located under SER_DEPLOYED_DIR. If SER_DEPLOYED_DIR is not specified, the instance will be stored under /.

If SER_USER_ID is not specified, the instance will run with the caller's user id. If the caller is ‘root’, it will run as the ‘DEFAULT_USER’ user.

If SER_GROUP_ID is not specified, the instance will run with the caller's group id. If the caller is ‘root’, it will run as the ‘DEFAULT_USER’ group.


exists()

Returns True if the instance exists, otherwise False.


list([all])

Returns a list of dictionaries. For a created instance that is on the local file system (e.g. /etc/dirsrv/slapd-\*), there exists a file describing its properties (environment):

<prefix>/etc/sysconfig/dirsrv-<serverid>
or
$HOME/.dirsrv/dirsrv-<serverid>

A dictionary is created with the following properties:

If all=True it builds a list of dictionaries for all created instances. Else (default), the list will only contain the dictionary of the calling instance.


create()

Creates an instance with the parameters set in dirsrv (see allocate). The DirSrv state must be DIRSRV_STATE_ALLOCATED before calling this function. Its final state will be DIRSRV_STATE_OFFLINE.


upgrade(upgradeMode)

Upgrades all the instances that coexist with this DirSrv. This is the same as running “setup-ds.pl --update”.


delete()

Deletes the instance with the parameters set in dirsrv (see allocate). If the instance does not exist it raises TBD.


open()

It opens an LDAP connection to dirsrv so that online administrative tasks are possible.


close(None)

It closes the LDAP connection to dirsrv. Online administrative tasks are no longer possible upon completion.


start([timeout])

It starts the instance dirsrv. If the instance is already running, it does nothing.


stop([timeout])

It stops the instance dirsrv. If the instance is already stopped, it does nothing.


restart([timeout])

It restarts the instance dirsrv. If the instance is already stopped, it just starts it.


getDir()

Get the full system path to the local data directory (ds/dirsrvtests/data).

clearTmpDir(__file__)

Removes all the files from the tmp dir (ds/dirsrvtests/tmp/). This should be called in the setup phase of the test script.


getEntry(base, scope, filter, [attrlist])

Wrapper around SimpleLDAPObject.search. It is common to just get one entry.


getProperties([properties])

Returns a dictionary of properties of the server. If no properties are specified, it returns all the properties.

Supported properties are:

Property name   Server attribute name
pid             N/A
port            nsslapd-port
sport           nsslapd-secureport
version         TBD
owner           user/group id
db-*            DB related properties (cache, checkpoint, txn batch, ...) TBD
db-stats        statistics from:
                cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config
                cn=monitor,cn=ldbm database,cn=plugins,cn=config
conn-*          Connection related properties (idle, ioblock, thread/conn, max bersize) TBD
pwd-*           Password policy properties (retry, lock, ...)
security-*      Security properties (ciphers, client/server auth., ...)


setProperties(properties)

TBD


checkBackupFS()

Return the file name of the backup file. If it does not exist it returns None


backupFS()

It creates a full instance backup file under /tmp/slapd-.bck/backup\_HHMMSS.tar.gz and returns the archive file name.

The backups are stored under BACKUPDIR environment variable (by default /tmp).

If such a file already exists, it assumes it is a valid backup and returns its name. The ‘dirsrv’ instance must be stopped prior to the call, otherwise the backup file may be corrupted.


restoreFS()

Restore a directory from a backup file


clearBackupFS(backup_file)

Removes a backup file, or all backups of a given instance.
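A typical backup/restore cycle, sketched for a standalone instance object named 'standalone'. The restoreFS() argument is an assumption (see restoreFS() above); exact signatures may differ between lib389 versions.

# Save the instance state before a destructive test (instance must be stopped)
standalone.stop(timeout=10)
backup = standalone.checkBackupFS()
if not backup:
    backup = standalone.backupFS()       # archive stored under $BACKUPDIR (default /tmp)
standalone.start(timeout=10)

# ... run operations that modify the instance ...

# Restore the saved state and remove the archive
standalone.stop(timeout=10)
standalone.restoreFS(backup)             # argument assumed
standalone.clearBackupFS(backup)
standalone.start(timeout=10)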


Replica


create_repl_manager([repl_manager_dn], [repl_manager_pw])

Creates an entry that will be used to bind as the replication manager. The entry is configured with no idle timeout (nsIdleTimeout: 0) and a far-future password expiration time (passwordExpirationTime).

Example:

create_repl_manager()

dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
passwordExpirationTime: 20381010000000Z
sn: bind dn pseudo user
nsIdleTimeout: 0
userPassword:: e1NTSEF9aGxLRFptSVY2cXlvRmV0S0ZCOS84cFBNY1RaeXFkV
 DZzNXRFQlE9PQ==
creatorsName: cn=directory manager
modifiersName: cn=directory manager
modifyTimestamp: 20131121131644Z


changelog([dbname])

Adds the replication changelog entry (cn=changelog5,cn=config) if it does not already exist, then returns the entry. This entry specifies the directory where the changelog database files will be stored; the directory name is in the attribute nsslapd-changelogdir.

If ‘changelog()’ was called when configuring the first supplier replica, it is not necessary to call it again when configuring the other supplier replicas (if any), unless we want their changelog to go to another directory.

Example:

self.supplier.replica.changelog()

dn: cn=changelog5,cn=config
objectClass: top
objectClass: extensibleobject
cn: changelog5
nsslapd-changelogdir: \<install\>/var/lib/dirsrv/slapd-supplier/changelogdb


list([suffix], [replica_dn])

Lists and returns the replicas under the mapping tree (cn=mapping tree,cn=config). If ‘suffix’ is provided, it returns the replica (as a list of entries) that is configured for that ‘suffix’. If ‘replica_dn’ is specified, it returns the replica with that DN.

If ‘suffix’ and ‘replica_dn’ are specified, it uses ‘replica_dn’.

Example:

self.replica.list()

dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
cn: replica
nsDS5Flags: 1
objectClass: top
objectClass: nsds5replica
objectClass: extensibleobject
nsDS5ReplicaType: 3
nsDS5ReplicaRoot: dc=example,dc=com
nsds5ReplicaLegacyConsumer: off
nsDS5ReplicaId: 1
nsDS5ReplicaBindDN: cn=replrepl,cn=config
nsState:: AQAAAAAAAABcCo5SAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
nsDS5ReplicaName: 284aec0a-52af11e3-91fd8ff3-240cb6d3
nsds5ReplicaChangeCount: 11
nsds5replicareapactive: 0
dn: cn=replica,cn=dc\3Dredhat\2Cdc\3Dcom,cn=mapping tree,cn=config
cn: replica
nsDS5Flags: 1
objectClass: top
objectClass: nsds5replica
objectClass: extensibleobject
nsDS5ReplicaType: 3
nsDS5ReplicaRoot: dc=redhat,dc=com
nsds5ReplicaLegacyConsumer: off
nsDS5ReplicaId: 1
nsDS5ReplicaBindDN: cn=replrepl,cn=config
nsState:: AQAAAAAAAABcCo5SAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
nsDS5ReplicaName: 284aec0a-52af11e3-91fd8ff3-3d6bc042
nsds5ReplicaChangeCount: 11
nsds5replicareapactive: 0

or

self.replica.list('dc=example,dc=com')

dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
cn: replica
nsDS5Flags: 1
objectClass: top
objectClass: nsds5replica
objectClass: extensibleobject
nsDS5ReplicaType: 3
nsDS5ReplicaRoot: dc=example,dc=com
nsds5ReplicaLegacyConsumer: off
nsDS5ReplicaId: 1
nsDS5ReplicaBindDN: cn=replrepl,cn=config
nsState:: AQAAAAAAAABcCo5SAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
nsDS5ReplicaName: 284aec0a-52af11e3-91fd8ff3-240cb6d3
nsds5ReplicaChangeCount: 11
nsds5replicareapactive: 0


create(suffix, role, [rid], [args])

Create a replica entry on an existing suffix.


delete(suffix)

Deletes the replica related to the provided suffix. If this replica role was REPLICAROLE_HUB or REPLICAROLE_CONSUMER, it also deletes the changelog associated with that replica. If replication agreements exist below that replica, they are deleted as well.


enableReplication(suffix, role, [replicaId], [binddn])

Enables replication for a given suffix. If the role is REPLICAROLE_SUPPLIER or REPLICAROLE_HUB, it also creates the changelog. If the entry “cn=replrepl,cn=config” (default replication manager) does not exist, it creates it.
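For example, enabling replication between a supplier and a consumer for a suffix might be sketched as follows. 'supplier' and 'consumer' are assumed to be already created DirSrv instances, and constants such as REPLICAROLE_SUPPLIER and REPLICAROLE_CONSUMER come from lib389._constants; exact names may vary between versions.

SUFFIX = 'dc=example,dc=com'

# Enable replication on both sides; on the supplier this also creates the changelog
# and the default replication manager entry (cn=replrepl,cn=config)
supplier.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_SUPPLIER, replicaId=1)
consumer.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_CONSUMER)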


disableReplication(suffix)

This is a wrapper of the ‘delete’ function. See delete function.


getProperties([suffix], [replica_dn], [replica_entry], [properties])

Returns a dictionary containing the requested property values of the replica. If ‘properties’ is missing, it returns all the supported properties.

At least one of the parameters suffix/replica_dn/replica_entry needs to be specified. It uses first (if specified) ‘replica_entry’, then ‘replica_dn’, then ‘suffix’.

Supported properties are:

Property name    Replica attribute name
legacy           nsds5replicalegacyconsumer [ off ]
binddn           nsds5replicabinddn [ REPLICATION_BIND_DN in constants.py ]
referral         nsds5ReplicaReferral
purge-delay      nsds5ReplicaPurgeDelay
purge-interval   nsds5replicatombstonepurgeinterval


setProperties([suffix], [replica_dn], [replica_entry], properties)

Sets the properties defined in ‘properties’ in the replica entry, using the corresponding RHDS attribute names.

Some properties have a default value, shown in brackets below.

The property name may be prefixed in order to specify the operation:

Supported properties are:

Property name    Replica attribute name
legacy           nsds5replicalegacyconsumer [ off ]
binddn           nsds5replicabinddn [ REPLICATION_BIND_DN in constants.py ]
referral         nsds5ReplicaReferral
purge-delay      nsds5ReplicaPurgeDelay
purge-interval   nsds5replicatombstonepurgeinterval


ruv(suffix, [tryrepl])

Returns the replica update vector for the given suffix. It first tries to retrieve the RUV tombstone entry stored in the replica database. If it cannot retrieve it and ‘tryrepl’ is True, it tries to retrieve the in-memory RUV stored in the replica entry (e.g. cn=replica,cn=,cn=mapping tree,cn=config).


Replication Agreements


status(agreement_dn)

Return a formatted string with the replica agreement status

Example:

print topo.supplier.agreement.status(replica_agreement_dn)

Status for meTo_localhost.localdomain:50389 agmt localhost.localdomain:50389
Update in progress: TRUE
Last Update Start: 20131121132756Z
Last Update End: 0
Num. Changes Sent: 1:10/0
Num. changes Skipped: None
Last update Status: 0 Replica acquired successfully: Incremental update started
Init in progress: None
Last Init Start: 0
Last Init End: 0
Last Init Status: None
Reap Active: 0


status_total_update(agreement_dn)

Returns a tuple with the done/errors status:


schedule(agreement_dn, [interval])

Schedule the replication agreement

Example:

topo.supplier.agreement.schedule(agreement_dn)          # to start the replication agreement
topo.supplier.agreement.schedule(agreement_dn, 'stop')  # to stop the replication agreement
topo.supplier.agreement.schedule(agreement_dn, '1800-1900 01234')  # to schedule the replication agreement all week days from 6PM-7PM


resume(agmtdn, [interval])

Resumes a replication agreement paused with the “pause” method. It tries to enable the replication agreement. If that fails (not implemented in all versions), it uses schedule() with the interval ‘0000-2359 0123456’.


pause(agmtdn, [interval])

Pauses this replication agreement. The agreement will send no more changes; use the resume() method to “unpause” it. It tries to disable the replication agreement. If that fails (not implemented in all versions), it uses schedule() with the interval ‘2358-2359 0’.


getProperties(agreement_dn, [properties])

Returns a dictionary of the requested properties. If ‘properties’ is missing, it returns all the properties.

Supported properties are:

Property name                    Replication Agreement attribute name
schedule                         nsds5replicaupdateschedule
fractional-exclude-attrs-inc     nsDS5ReplicatedAttributeList
fractional-exclude-attrs-total   nsDS5ReplicatedAttributeListTotal
fractional-strip-attrs           nsds5ReplicaStripAttrs
transport-prot                   nsds5replicatransportinfo
consumer-port                    nsds5replicaport
consumer-total-init              nsds5BeginReplicaRefresh


setProperties(agreement_dn, properties)

Checks that the properties defined in ‘properties’ are valid and sets them in the replication agreement entry, using the corresponding RHDS attribute names.

The property name may be prefixed in order to specify the operation:

Some properties have a default value, shown below.

Supported properties are:

Property name                    Replication Agreement attribute name
schedule                         nsds5replicaupdateschedule
fractional-exclude-attrs-inc     nsDS5ReplicatedAttributeList
fractional-exclude-attrs-total   nsDS5ReplicatedAttributeListTotal
fractional-strip-attrs           nsds5ReplicaStripAttrs
transport-prot                   nsds5replicatransportinfo
consumer-port                    nsds5replicaport
consumer-total-init              nsds5BeginReplicaRefresh

Example:

entry = Entry(dn_agreement)
args = {'transport-prot': 'LDAP',
        'consumer-port' : 10389}
try:
  setProperties(entry, args)
except:
  pass


list([suffix], [consumer_host], [consumer_port], [agmtdn])

Returns the search result of the replica agreement(s) under the replica (replicaRoot is ‘suffix’).

Either ‘suffix’ or ‘agmtdn’ need to be specified. ‘consumer_host’ and ‘consumer_port’ are either not specified or specified both.

If ‘agmtdn’ is specified, it returns the search result entry of that replication agreement. Else, if consumer host/port are specified, it returns the replication agreements toward that consumer host:port. Finally, if neither ‘agmtdn’ nor consumer host/port are specified, it returns all the replication agreements under the replica (replicaRoot is ‘suffix’).


create(consumer, suffix, [binddn], [bindpw], [cn_format], [description_format], [timeout], [auto_init], [bindmethod], [starttls], [args])

Create a replication agreement from self to consumer and returns its DN

Example:

repl_agreement = supplier.agreement.create(consumer, SUFFIX, binddn=defaultProperties[REPLICATION_BIND_DN], bindpw=defaultProperties[REPLICATION_BIND_PW])


init([suffix], [consumer_host], [consumer_port], [agmtdn])

Trigger a total update of the consumer replica. If ‘agmtdn’ is specified it triggers the total update of this replica.

If ‘agmtdn’ is not specified, then ‘suffix’, ‘consumer_host’ and ‘consumer_port’ are mandatory. It triggers total update of replica agreement under replica ‘suffix’ toward consumer ‘host’:’port’


wait_total_update([suffix], [consumer_host], [consumer_port], [agmtdn])

Waits for the completion of the total update, or for an error condition, of the selected replication agreement.

If ‘agmtdn’ is specified, it applies to that replication agreement.

If ‘agmtdn’ is not specified, then ‘suffix’, ‘consumer_host’ and ‘consumer_port’ are mandatory; the replication agreement under replica ‘suffix’ toward consumer ‘host’:‘port’ is selected.
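Putting create(), init() and wait_total_update() together, a total initialization of a consumer might be sketched as follows. SUFFIX, defaultProperties, REPLICATION_BIND_DN and REPLICATION_BIND_PW are as in the create() example above; the consumer's host/port attributes are assumptions.

# Create the agreement, start a total update and wait for its completion
agmt_dn = supplier.agreement.create(consumer, SUFFIX,
                                    binddn=defaultProperties[REPLICATION_BIND_DN],
                                    bindpw=defaultProperties[REPLICATION_BIND_PW])
supplier.agreement.init(SUFFIX, consumer.host, consumer.port)   # host/port attributes assumed
supplier.agreement.wait_total_update(agmtdn=agmt_dn)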


changes([suffix], [consumer_host], [consumer_port], [agmtdn])

Returns a tuple with

If ‘agmtdn’ is specified it reads the info from this entry.

If ‘agmtdn’ is not specified, then ‘suffix’, ‘consumer_host’ and ‘consumer_port’ are mandatory. It retrieves the replica agreement under replica ‘suffix’ toward consumer ‘host’:’port’, and reads the info from it


Logs


setProperties(type, args)

Sets the properties (if valid) for the given logging type.

Supported properties are

Property name    Log attribute name (type = access|error|audit)
max-logs         nsslapd-<type>log-maxlogsperdir
max-size         nsslapd-<type>log-maxlogsize
max-diskspace    nsslapd-<type>log-logmaxdiskspace
min-freespace    nsslapd-<type>log-logminfreediskspace
rotation-time    nsslapd-<type>log-logrotationtime
TBC


getProperties(type, [args])

Returns in a dictionary (prop:value) the requested set of properties for the logging type. If ‘args’ is missing, it returns all the properties for the logging type.

Supported properties are

Property name    Log attribute name (type = access|error|audit)
max-logs         nsslapd-<type>log-maxlogsperdir
max-size         nsslapd-<type>log-maxlogsize
max-diskspace    nsslapd-<type>log-logmaxdiskspace
min-freespace    nsslapd-<type>log-logminfreediskspace
rotation-time    nsslapd-<type>log-logrotationtime
TBC


Suffix


list()

Returns the list of suffix DNs for which a mapping tree entry exists.


toBackend(suffix)

It returns the backend entry that stores the provided suffix


getParent(suffix)

Returns the DN of the suffix that is the parent of the provided ‘suffix’. If ‘suffix’ has no parent, it returns None.
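A short sketch combining these calls, assuming a standalone DirSrv object that exposes the suffix service as standalone.suffix:

# Walk the configured suffixes and print where each one is stored
for dn in standalone.suffix.list():
    backend_entry = standalone.suffix.toBackend(dn)
    parent = standalone.suffix.getParent(dn)
    print(dn, backend_entry, parent)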


MappingTree


list([suffix], [benamebase])

Returns a search result of the mapping tree entries with all their attributes.

If ‘suffix’ and/or ‘benamebase’ are specified, it uses ‘benamebase’ first, then ‘suffix’.

If neither ‘suffix’ nor ‘benamebase’ is specified, it returns all the mapping tree entries.


create(suffix, benamebase, [parent])

Creates a mapping tree entry (under “cn=mapping tree,cn=config”) for the ‘suffix’ that is stored in the ‘benamebase’ backend. The ‘benamebase’ backend must exist before creating the mapping tree entry. If a ‘parent’ is provided, it means that we are creating a sub-suffix mapping tree.
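For example, adding a new suffix could be sketched as follows: the backend is created first with backend.create() (described later in this document), then the mapping tree entry referencing it. The property key BACKEND_NAME and the attribute names standalone.backend/standalone.mappingtree are assumptions about how lib389 exposes these services.

NEW_SUFFIX = 'dc=example,dc=org'
NEW_BACKEND = 'exampleOrgRoot'

# The backend must exist before the mapping tree entry that references it
standalone.backend.create(NEW_SUFFIX, {BACKEND_NAME: NEW_BACKEND})   # BACKEND_NAME key assumed
standalone.mappingtree.create(NEW_SUFFIX, NEW_BACKEND)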


delete([suffix], [benamebase], [name])

Deletes a mapping tree entry (under “cn=mapping tree,cn=config”) for the ‘suffix’ that is stored in the ‘benamebase’ backend. The ‘benamebase’ backend is not changed by the mapping tree deletion.

If ‘name’ is specified, it is used to retrieve the mapping tree to delete. Else, if ‘suffix’/‘benamebase’ are specified, both are used to retrieve the mapping tree to delete.


getProperties([suffix], [benamebase], [name], [properties])

Returns a dictionary of the requested properties. If ‘properties’ is missing, it returns all the properties.

The returned properties are those of the mapping tree for the ‘suffix’ that is stored in the ‘benamebase’ backend.

If ‘name’ is specified, it is used to retrieve the mapping tree. Else, if ‘suffix’/‘benamebase’ are specified, both are used to retrieve the mapping tree.

If ‘name’, ‘benamebase’ and ‘suffix’ are all missing, it raises an exception.

Supported properties are:

Property name         Mapping Tree attribute name
state                 nsslapd-state
backend               nsslapd-backend
referral              nsslapd-referral
chain-plugin-path     nsslapd-distribution-plugin
chain-plugin-fct      nsslapd-distribution-funct
chain-update-policy   nsslapd-distribution-root-update


setProperties([suffix], [benamebase], [name], properties)

Sets the requested properties if they are valid. The property name (see getProperties for the supported properties) may be prefixed in order to specify the operation:

The properties are those of the mapping tree for the ‘suffix’ that is stored in the ‘benamebase’ backend.

If ‘name’ is specified, it is used to retrieve the mapping tree. Else, if ‘suffix’/‘benamebase’ are specified, both are used to retrieve the mapping tree.

If ‘name’, ‘benamebase’ and ‘suffix’ are all missing, it raises an exception.


toSuffix([entry], [name])

Returns, for a given mapping tree entry, its suffix values. The suffix values are identical from an LDAP point of view, but may be surrounded by quotes or contain ‘\’ escape characters.


Backend


list([suffix], [backend_dn], [benamebase])

Returns a search result of the backend entries with all their attributes.

If ‘suffix’/‘backend_dn’/‘benamebase’ are specified, it uses ‘backend_dn’ first, then ‘suffix’, then ‘benamebase’.

If none of ‘suffix’, ‘backend_dn’ and ‘benamebase’ is specified, it returns all the backend entries.


create(suffix, [properties])

Creates a backend entry and returns its DN. If the properties ‘chain-bind-pwd’, ‘chain-bind-dn’ and ‘chain-urls’ are specified, the backend is a chained backend. A chained backend is created under ‘cn=chaining database,cn=plugins,cn=config’. A local backend is created under ‘cn=ldbm database,cn=plugins,cn=config’.


delete([suffix], [backend_dn], [benamebase])

Deletes the backend entry with the following steps:

If a mapping tree entry uses this backend (nsslapd-backend), it raises UnwillingToPerformError.

If ‘suffix’/‘backend_dn’/‘benamebase’ are specified, it uses ‘backend_dn’ first, then ‘suffix’, then ‘benamebase’.

If none of ‘suffix’, ‘backend_dn’ and ‘benamebase’ is specified, it raises InvalidArgumentError.


getProperties([suffix], [backend_dn], [benamebase], [properties])

Returns a dictionary of the requested properties. If ‘properties’ is missing, it returns all the properties.

If ‘suffix’/‘backend_dn’/‘benamebase’ are specified, it uses ‘backend_dn’ first, then ‘suffix’, then ‘benamebase’. At least one of ‘suffix’/‘backend_dn’/‘benamebase’ must be specified.

Supported properties are:

Property name        Backend attribute name
entry-cache-size     nsslapd-cachememsize
entry-cache-number   nsslapd-cachesize
dn-cache-size        nsslapd-dncachememsize
read-only            nsslapd-readonly
require-index        nsslapd-require-index
suffix               nsslapd-suffix (read only)
directory            nsslapd-directory (once set it is read only)
db-deadlock          nsslapd-db-deadlock-policy
chain-bind-dn        nsmultiplexorbinddn
chain-bind-pwd       nsmultiplexorcredentials
chain-urls           nsfarmserverurl
stats                ** (read only)

stats returns the statistics related to this backend that are available under “cn=monitor,cn=<benamebase>,cn=ldbm database,cn=plugins,cn=config”.


setProperties([suffix], [backend_dn], [benamebase], properties)

Sets backend entry properties as defined in ‘properties’. If all the properties are valid it updates the backend entry, else it raises an exception. The supported properties are described in getProperties().

The property name may be prefixed in order to specify the operation:

If ‘suffix’/‘backend_dn’/‘benamebase’ are specified, it uses ‘backend_dn’ first, then ‘suffix’, then ‘benamebase’. At least one of ‘suffix’/‘backend_dn’/‘benamebase’ must be specified.


toSuffix(backend_dn)

Returns the mapping tree entry of the suffix that is stored in ‘backend_dn’.


Index


list([suffix], [benamebase], [system])

Returns a search result of the indexes for a given ‘suffix’ (or the ‘benamebase’ that stores that suffix).

If ‘suffix’ and ‘benamebase’ are specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD.

If ‘system’ is specified and is True, it returns the index entries under “cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config”. Else it returns the index entries under ‘cn=index,cn=**,cn=ldbm database,cn=plugins,cn=config’.


create([suffix], [benamebase], attrname, properties)

Creates a new index entry for a given ‘suffix’ (or the ‘benamebase’ that stores that suffix).

If ‘suffix’ and ‘benamebase’ are specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD.

If the ‘attrname’ index already exists (system or not), it raises TBD.
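As an illustration, adding an equality/presence index and rebuilding it might look like the sketch below. The property names are those listed under getProperties() further down, the reindex task is described in the Tasks chapter, and the accepted value types (booleans vs strings) are an assumption.

# Add an equality + presence index on 'telephoneNumber', then reindex the attribute
standalone.index.create(suffix=SUFFIX, attrname='telephoneNumber',
                        properties={'equality-indexed': True,
                                    'presence-indexed': True})
standalone.tasks.reindex(suffix=SUFFIX, attrname='telephoneNumber')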


delete([suffix], [benamebase], [attrname])

Deletes an index entry for a given ‘suffix’ (or the ‘benamebase’ that stores that suffix).

If ‘suffix’ and ‘benamebase’ are specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD.

If ‘attrname’ is provided and the index does not exist (system or not), it raises TBD. If ‘attrname’ is not provided, it deletes all the indexes (system or not) under the suffix.


getProperties([suffix], [benamebase], attrname, [properties])

Returns a dictionary containing the index properties. The index is for a given ‘suffix’ (or the ‘benamebase’ that stores that suffix). If ‘properties’ is missing, it returns all the properties.

If ‘suffix’ and ‘benamebase’ are specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD.

If the ‘attrname’ index does not exist (system or not), it raises TBD.

Supported properties are:

Property name        Index attribute name/value
equality-indexed     nsIndexType: eq
presence-indexed     nsIndexType: pres
approx-indexed       nsIndexType: approx
subtree-indexed      nsIndexType: subtree
substring-indexed    nsIndexType: sub
system               nsSystemIndex (read only)
matching-rules       nsMatchingRule


setProperties([suffix], [benamebase], attrname, properties)

Sets the properties defined in ‘properties’. If all the properties are valid it updates the index entry, else it raises an exception. The supported properties are described in getProperties().

If ‘suffix’ and ‘benamebase’ are specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD.

If the ‘attrname’ index does not exist (system or not), it raises TBD.

Supported properties are:

Property name        Index attribute name/value
equality-indexed     nsIndexType: eq
presence-indexed     nsIndexType: pres
approx-indexed       nsIndexType: approx
subtree-indexed      nsIndexType: subtree
substring-indexed    nsIndexType: sub
system               nsSystemIndex (read only)


Tasks


export([suffix], [benamebase], ldif_output, [args])

Exports a given ‘suffix’ (or the ‘benamebase’ that stores that suffix) in LDIF format. It uses an internal task to achieve this request.

If ‘suffix’ and ‘benamebase’ are specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD.

‘ldif_output’ is the output file of the export.


import([suffix], [benamebase], ldif_input, [args])

Imports a given ‘suffix’ (or the ‘benamebase’ that stores that suffix) from an LDIF file. It uses an internal task to achieve this request.

If ‘suffix’ and ‘benamebase’ are specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD.

‘ldif_input’ is the input file.
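Sketch of an export/import round trip. In lib389 these tasks are usually exposed as exportLDIF()/importLDIF() (since ‘import’ is a reserved word in Python); the method and argument names used below are assumptions, and the LDIF path is just an example.

ldif_file = '/tmp/example_export.ldif'

# Export the suffix to LDIF, then re-import it (both run as internal tasks)
standalone.tasks.exportLDIF(suffix=SUFFIX, output_file=ldif_file)
standalone.tasks.importLDIF(suffix=SUFFIX, input_file=ldif_file)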


reindex([suffix], [benamebase], attrname, [args])

Reindexes a ‘suffix’ (or the ‘benamebase’ that stores that suffix) for a given ‘attrname’. It uses an internal task to achieve this request.

If ‘suffix’ and ‘benamebase’ are specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD.


db2bak(backup_dir, args)

Perform a backup by creating a db2bak task


bak2db(bename, backup_dir, args)

Restore a backup by creating a bak2db task


fixupMemberOf([suffix], [benamebase], [filt], [args])

Triggers a memberOf fixup task on ‘suffix’ (or the ‘benamebase’ that stores that suffix), fixing the ‘memberof’ attribute of the entries that belong to groups. It uses an internal task to achieve this request.

If ‘suffix’ and ‘benamebase’ are specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD.

‘filt’ is a filter that selects the entries (under ‘suffix’) that need to be evaluated/fixed. If missing, the default value is ”(|(objectclass=inetuser)(objectclass=inetadmin))”.


fixupTombstones(be_name, args)

Trigger a tombstone fixup task on the specified backend


Schema


list([args])

Returns the search result on the suffix ‘cn=schema’. The returned attributes are specified in the ‘args’ list. By default it returns SCHEMA_OBJECTCLASSES, SCHEMA_ATTRIBUTES and SCHEMA_CSN.

Property name          Schema attribute name
SCHEMA_OBJECTCLASSES   objectclasses
SCHEMA_ATTRIBUTES      attributes
SCHEMA_CSN             nsSchemaCSN


create(propname, value)

Update the schema doing a MODIFY/ADD of the attribute specified by ‘propname’ with ‘value’.

Property name          Schema attribute name
SCHEMA_OBJECTCLASSES   objectclasses
SCHEMA_ATTRIBUTES      attributes


delete(propname, value)

Update the schema doing a MODIFY/DEL of the attribute specified by ‘propname’ with ‘value’.

Property name          Schema attribute name
SCHEMA_OBJECTCLASSES   objectclasses
SCHEMA_ATTRIBUTES      attributes
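For example, adding and then removing a custom attribute type might be sketched as follows. The OID and attribute name are purely illustrative, SCHEMA_ATTRIBUTES comes from lib389._constants, and 'standalone' is an ONLINE DirSrv object as in the other examples.

ATTR_DEF = ("( 2.16.840.1.113730.3.1.999999 NAME 'myTestAttr' "
            "SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )")

# MODIFY/ADD then MODIFY/DEL of the attribute definition under cn=schema
standalone.schema.create(SCHEMA_ATTRIBUTES, ATTR_DEF)
standalone.schema.delete(SCHEMA_ATTRIBUTES, ATTR_DEF)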


Chaining


list([suffix], [type])

Returns a search result of the local or chained backend(s) entries with all their attributes.

If ‘suffix’ is specified, only the backend for that suffix is returned. Else, it returns all the backends (according to the type).

If ‘type’ is missing, by default its value is CHAINING_FARM. If ‘type’ is CHAINING_FARM, it is equivalent to backend.list(suffix). If ‘type’ is CHAINING_MUX, it returns all the backend entries under ‘cn=chaining database,cn=plugins,cn=config’.


create(suffix, [type], binddn, [bindpw], [urls])

Creates a local or chained backend for the suffix. If ‘type’ is missing, by default its value is CHAINING_FARM. If ‘type’ is CHAINING_FARM, it creates a local backend (under cn=ldbm database,cn=plugins,cn=config), else (CHAINING_MUX) it creates a chained backend (under cn=chaining database,cn=plugins,cn=config).

If this is a local backend (CHAINING_FARM), it adds a ‘proxy’ allow ACI at the suffix level:

(targetattr = "*")(version 3.0; acl "Proxied authorization for database links"; allow (proxy) userdn = "ldap:///<binddn>";)

If this is a local backend (CHAINING_FARM) and ‘bindpw’ is specified, it creates the proxy entry binddn/bindpw.

If this is a chained backend (CHAINING_MUX), then ‘bindpw’ and ‘urls’ are mandatory.


delete(suffix, [type])

Deletes a local or chained backend entry implementing the suffix.

If ‘type’ is missing, by default its value is CHAINING_FARM. If ‘type’ is CHAINING_FARM, it is equivalent to backend.delete(suffix). If ‘type’ is CHAINING_MUX, it deletes the entry cn=<bebasename>,cn=chaining database,cn=plugins,cn=config where <bebasename> is the backend common name.

If a mapping tree entry uses this backend (nsslapd-backend), it raises TBD.


getProperties([properties])

Returns a dictionary with the requested chaining plugin properties. If ‘properties’ is not specified, it returns all the properties. If the properties are not set at the server level, the default returned value is off.

Supported properties are

Property name         Chaining plugin attribute
proxy-authorization   nsTransmittedControl
loop-detection        nsTransmittedControl


setProperties(properties)

Set (if they are valid) the properties in the ‘chaining plugin’.

Supported properties are

Property name         Chaining plugin attribute
proxy-authorization   nsTransmittedControl
loop-detection        nsTransmittedControl


Server


getProperties([properties])

Returns a dictionary of properties of the server. If no properties are specified, it returns all the properties.

Supported properties are:

Property name   Server attribute name
pid             N/A
port            nsslapd-port
sport           nsslapd-secureport
version         TBD
owner           user/group id
dbstats         statistics from:
                cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config
                cn=monitor,cn=ldbm database,cn=plugins,cn=config


389 Upstream Test Suites

Overview

The 389 upstream test suites are tests located inside the 389 Directory Server source code. In a continuous integration effort, these tests will be used in the non-regression test phase of the continuous integration. The tests are written in Python and are based on lib389. The tests are organized into individual bug test cases (tickets) and functional test suites. In a first phase, only ticket test cases are described.

Design

Repos

The 389 upstream tests will be pushed to the 389 Directory Server repository https://github.com/389ds/389-ds-base.git

Layout

The test layout is:

ds/dirsrvtests
    data/
       # location to store static files used by individual tests, things like LDIF files, etc.
    tickets/
       # Contains the test cases for ticket xxx
       ticketxxx_test.py  
       ticketyyy_test.py
       ...
       ticketzzz_test.py
       finalizer.py       # this module contains the cleanup function to remove the created instances
    suites/               
       # functional tests
       aci.py
       replication.py
       ...
       index.py
    tmp/
       # location used to store exported LDIF files, backups, etc.

Ticket Test Suites

Test suites for a given ticket are stored in a single Python module named ticketxxx_test.py. The name contains _test so that the module will be selected by the test scheduler (nose, py.test, ...). The test suite creates/reinitializes the instances it needs for its test purpose and stops the instances at the end of the test suite. A test suite is a series of test functions named test_ticketxxx_<no>. The string 'test' in the function name is required so that the test scheduler will select them, and they are run in the order they appear in the module.

A ticket test suite looks like:

installation_prefix = None
<local helper functions>
@pytest.fixture(scope="module")
def topology(request):
     if installation_prefix:
          <add 'PREFIX' to the creation of all instances>
     <creation of the test topology>
def test_ticketxxx_one(topology):
     <test case one>
...
def test_ticketxxx_NNN(topology):
     <test case NNN>
def test_ticketxxx_final(topology):
     <stop the instances in the topology>
def run_isolated():
     installation_prefix = /directory/where/389-ds/is/deployed (DIR_INSTALL)
     topo = topology(True)
     test_ticketxxx_one(topo)
     ...
     test_ticketxxx_NNN(topo)
     test_ticketxxx_final(topo)
if __name__ == '__main__':
    run_isolated()

Topology Fixture

see ‘Fixture’ in Test framework chapter

Test case

A test case contains assert statements. The test case will be reported PASSED or FAILED depending on whether the asserts succeed or not.

Run_isolated

It is a way to run the test from a plain Python script rather than under a test scheduler.

How to run a single test

How to deploy Directory Server on a specific directory

How to run under eclipse

Prerequisite

Run a dedicated test

Open the tests you want to run ‘dirsrvtest->tickets->ticket47490_test.py’


Test framework

Overview

To run the 389 upstream tests we are using py.test (http://pytest.org/). This framework offers two features that we are using:

Selection of the tests

Auto discovery

This is achieved by running the following command: PREFIX=/directory/where/389-ds/is/deployed py.test -v

The test modules are named like ticketxxx_test.py or fractional_replication_test.py. Because the name contains the ‘test’ pattern, py.test selects the module as a test module.

The test module contains test suites implemented as functions named like test_ticketxxx_<no> or test_fractional_replication_<no>. Each function will be called by py.test in the order it appears in the module, independently of the test result of the previous function.

Specified test

This is achieved by running the following command: PREFIX=/directory/where/389-ds/is/deployed py.test -v $SPECIFIED_TEST

The test module will be executed even if its name does not contain the ‘test’ pattern.

We will use this ability to run a finalizer module. This module will be executed after the auto discovery mode, so after all tests have completed. Its job is to clean up the environment (remove instances/backups, ...).

Fixture

Each test suite will contain a topology fixture (module scope) that creates or reinitializes the topology needed to support the tests (standalone, MMR, ...).

Each test in the suite takes the topology as an argument.

The algorithm is:

        At the beginning, some instances may already exist.
        Backups of the instances may also exist.

        Principle:
            If the instances exist:
                restart them
            If backups exist for all instances:
                restore them from the backups
            else:
                remove the instances and backups
                create the instances
                initialize the topology
                create the backups
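A simplified sketch of such a fixture, following the algorithm above for a single standalone instance. The method names are those documented in the DirSrv chapter; the port, server id and the restoreFS() argument are assumptions, and real ticket tests use the constants defined in lib389._constants for host/port/serverid.

import pytest
from lib389 import DirSrv
from lib389._constants import SER_HOST, SER_PORT, SER_SERVERID_PROP

@pytest.fixture(scope="module")
def topology(request):
    standalone = DirSrv(verbose=False)
    standalone.allocate({SER_HOST: 'localhost',
                         SER_PORT: 38901,              # hypothetical port
                         SER_SERVERID_PROP: 'standalone'})
    backup = standalone.checkBackupFS()
    if standalone.exists() and backup:
        # An instance and a backup exist: restore the known-good state
        standalone.stop(timeout=10)
        standalone.restoreFS(backup)                   # argument assumed, see restoreFS()
    elif standalone.exists():
        # The instance exists but there is no backup: recreate it from scratch
        standalone.delete()
        standalone.create()
    else:
        standalone.create()
    standalone.start(timeout=10)
    standalone.open()                                  # ONLINE: ready for the tests
    if not backup:
        # First run: save the freshly created topology for later runs
        standalone.stop(timeout=10)
        standalone.backupFS()
        standalone.start(timeout=10)
    return standalone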

Silent mode vs debug mode

By default py.test runs silently, reporting PASSED or FAILED depending on whether an assert fails in the test.

It is possible to have more detail on the execution of the test with the following command: PREFIX=/directory/where/389-ds/is/deployed py.test -v -s
