Currently, Red Hat Directory Server is tested using the TET framework and its tests. This framework and its tests have been enhanced over the years and offer a high level of QE coverage. For example, the acceptance suite runs thousands of tests and is convenient for detecting regressions. A drawback is the complexity of the tests and of the TET framework itself: it is somewhat difficult to set up and run. Diagnosing a failure is also difficult, and requires expertise to conclude whether a failure is due to a Directory Server bug, an invalid test case, a bug in the framework, or an environment issue.
As part of a Continuous Integration project, 389 upstream testing is an effort to push and maintain the testing capability in the upstream 389 repository.
This document describes the following components:
If you launch tests with py.test, version 2.3 or later is required.
yum install pytest
Before launching the tests, you need to check that localhost.localdomain is the first hostname in /etc/hosts. The scripts check that localhost.localdomain maps to the IP address. A limitation in setup-ds.pl and setup-ds-admin.pl requires the following setting:
# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost localhost4 localhost4.localdomain4
::1 localhost.localdomain localhost localhost6 localhost6.localdomain6
...
You need to install python-ldap and pytest
yum install python-ldap pytest
The following describes how to set up a testing environment and run a specific test.
The following setup script allows you to check out, compile, and deploy the current version of 389 Directory Server under a specific directory. The path used for DIR_INSTALL will be used throughout the rest of the setup and testing process (installation_prefix, etc.).
Setup Script
#!/bin/bash
PREFIX=${1:-}
DIR_SRC=$HOME/workspaces
DIR_DS_GIT=389-ds-base
DIR_SPEC_GIT=389-ds-base-spec
DIR_RPM=$HOME/rpmbuild
DIR_INSTALL=$HOME/install # a.k.a /directory/where/389-ds/is/installed
DIR_SRC_DIR=$DIR_SRC/$DIR_DS_GIT
DIR_SRC_PKG=$DIR_SRC/$DIR_SPEC_GIT
TMP=/tmp/tempo$$
SED_SCRIPT=/tmp/script$$
#
# Checkout the source/spec
#
initialize()
{
for i in $DIR_DS_GIT $DIR_SPEC_GIT
do
rm -rf $DIR_SRC/$i
mkdir $DIR_SRC/$i
done
cd $DIR_SRC_DIR
# clone into "ds" so that later steps find $DIR_SRC_DIR/ds/configure
git clone https://github.com/389ds/389-ds-base.git ds
cd $DIR_SRC_PKG
# clone into the current (empty) directory so the spec files land in $DIR_SRC_PKG
git clone git://pkgs.fedoraproject.org/389-ds-base .
}
#
# Compile 389-DS
#
compile()
{
cd $DIR_SRC_PKG
cp $DIR_SRC_PKG/389-ds-base.spec $DIR_RPM/SPECS
cp $DIR_SRC_PKG/389-ds-base-git.sh $DIR_RPM/SOURCES
cp $DIR_SRC_PKG/389-ds-base-devel.README $DIR_RPM/SOURCES
cd $DIR_SRC_DIR
rm -f /tmp/*bz2
TAG=HEAD sh $DIR_SRC_PKG/389-ds-base-git-local.sh /tmp
SRC_BZ2=`ls -rt /tmp/*bz2 | tail -1`
echo "Copy $SRC_BZ2"
cp $SRC_BZ2 $DIR_RPM/SOURCES
if [ -n "$PREFIX" -a -d "$PREFIX" ]
then
TARGET="--prefix=$PREFIX"
else
TARGET=""
fi
echo "Activate the debug compilation"
echo "Compilation start"
CFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic'
CXXFLAGS=$CFLAGS
sed -e 's/^\%configure/CFLAGS="$CFLAGS" CXXFLAGS="$CXXFLAGS" \%configure/' $DIR_RPM/SPECS/389-ds-base.spec > $DIR_RPM/SPECS/389-ds-base.spec.new
cp $DIR_RPM/SPECS/389-ds-base.spec.new $DIR_RPM/SPECS/389-ds-base.spec
sleep 3
rpmbuild -ba $DIR_RPM/SPECS/389-ds-base.spec 2>&1 | tee $DIR_RPM/build.output
}
#
# Install it on a private directory $HOME/install
#
install()
{
cd $DIR_SRC_DIR
CFLAGS="-g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-sign-compare"
CXXFLAGS="$CFLAGS" $DIR_SRC_DIR/ds/configure --prefix=$DIR_INSTALL --enable-debug --with-openldap > $DIR_RPM/BUILD/build_install.output 2>&1
echo "Now install dirsrv" >> $DIR_RPM/BUILD/build_install.output
make install >> $DIR_RPM/BUILD/build_install.output 2>&1
}
if [ ! -d $HOME/.dirsrv ]
then
mkdir ~/.dirsrv # this is where the instance specific sysconfig files go - dirsrv-instancename
fi
# note: compile is not necessary to deploy
initialize
install
For information: with that kind of deployment you can run the usual Directory Server administrative commands. For example:
cd $DIR_INSTALL
sbin/setup-ds.pl
sbin/restart-dirsrv
sbin/ldif2db
bin/logconv.pl var/log/dirsrv/slapd-inst/access
bin/dbscan -f var/lib/dirsrv/slapd-inst/db/userRoot/id2entry.db4
etc.
The lib389 library provides interfaces to perform all administrative tasks.
Open the tests you want to run (e.g. ticketxyz_test.py)
Save and run the following script
Test Script
#!/bin/bash
DIR=$HOME/test
TEST=ticketxyz_test.py
mkdir $DIR
# checkout tests and lib389
cd $DIR
git clone https://github.com/389ds/389-ds-base.git ds
# define PYTHONPATH
export PYTHONPATH=/usr/lib64/python2.7:/usr/lib64/python2.7/plat-linux2:/usr/lib64/python2.7/lib-dynload:/usr/lib64/python2.7/site-packages:/usr/lib/python2.7/site-packages:/usr/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg-info
LIB389=$DIR/ds/src/lib389
PROJECT=$DIR/ds/dirsrvtests
DIR_PREFIX=/directory/where/389-ds/is/installed  # this would be DIR_INSTALL ($HOME/install) from the setup script
export PYTHONPATH=$PYTHONPATH:$PROJECT:$LIB389
PREFIX=$DIR_PREFIX python -v $PROJECT/tickets/$TEST
For example, if you have already cloned Directory Server and lib389, you may want to try running an existing test case, like the one for ticket 47560:
#!/bin/ksh
export PYTHONPATH=/home/<your_login>/workspaces/tests-389/framework/lib389:/home/<your_login>/workspaces/389-main-branch/ds/dirsrvtests/tickets
echo "diff /tmp/ticket47560_test.py /home/<your_login>/workspaces/389-main-branch/ds/dirsrvtests/tickets/ticket47560_test.py"
diff /tmp/ticket47560_test.py /home/<your_login>/workspaces/389-main-branch/ds/dirsrvtests/tickets/ticket47560_test.py
python /tmp/ticket47560_test.py
You can see that the launched test case is under /tmp/ticket47560_test.py; this is because you need to modify it slightly for the fixture and the prefix. The output of the diff is:
diff /tmp/ticket47560_test.py /home/<your_login>/workspaces/389-main-branch/ds/dirsrvtests/tickets/ticket47560_test.py
26c26
< #@pytest.fixture(scope="module")
---
> @pytest.fixture(scope="module")
296c296
< installation_prefix = '/home/<your_login>/install'
---
> installation_prefix = None
So /home/<your_login>/install, a.k.a. the “installation prefix”, a.k.a. DIR_INSTALL, is the target directory where we deployed a build (see the setup script). You may define this directory with the $PREFIX environment variable or, as in this example, force it directly into the test case.
Both methods are valid, but setting it directly in the test case is of interest if you want to run a multi-version test case, as in test ticket47788, where the two variables installation1_prefix and installation2_prefix allow you to create instances in different versions.
Install Eclipse (http://www.eclipse.org)
yum install eclipse-platform
yum search eclipse  # shows many useful plugins
Then do the following steps:
cd $HOME/test
mkdir 389-ds
mkdir lib389
cd $HOME/test/389-ds
git clone https://github.com/389ds/389-ds-base.git
Launch eclipse:
- in ’select workspace’, enter $HOME/test
- File->New->Project, select PyDev Project, Project name ’lib389’ (Directory will be $HOME/test/389-ds/ds/src/lib389)
- File->New->Project, select PyDev Project, Project name ’dirsrvtest’ (Directory will be $HOME/test/389-ds/ds/dirsrvtests); in ’Referenced projects’ select lib389
Open the tests you want to run ‘dirsrvtest->tickets->ticket47490_test.py’
Run the following script. If you need more detail on test processing, uncomment ‘DEBUG=-s’.
#!/bin/bash
DIR=$HOME/test
TEST=ticketxyz_test.py
mkdir $DIR
# checkout tests and lib389
cd $DIR
git clone https://github.com/389ds/389-ds-base.git ds
# define PYTHONPATH
export PYTHONPATH=/usr/lib64/python2.7:/usr/lib64/python2.7/plat-linux2:/usr/lib64/python2.7/lib-dynload:/usr/lib64/python2.7/site-packages:/usr/lib/python2.7/site-packages:/usr/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg-info
LIB389=$DIR/ds/src/lib389
PROJECT=$DIR/ds/dirsrvtests
DIR_PREFIX=/directory/where/389-ds/is/installed  # DIR_INSTALL ($HOME/install) from the setup script
export PYTHONPATH=$PYTHONPATH:$PROJECT:$LIB389
#DEBUG=-s
PREFIX=$DIR_PREFIX py.test -v $DEBUG $PROJECT/tickets/$TEST
For the moment, it is not recommended to run the tests with this type of deployment.
This document describes the basics of writing a lib389 test.
lib389 is a Python-based library that offers services to perform Directory Server administrative tasks. This library is intended to be used to develop 389 upstream tests and 389 administrative CLIs.
The library is based on an early version of https://github.com/richm/dsadmin
This library is open source and is available at https://github.com/389ds/389-ds-base/tree/main/src/lib389
The development methodology for lib389 follows the same development methodology as 389 Directory Server. The main aspects are described here.
lib389/
__init__.py # implements routines to do online administrative tasks
_replication.py # implements replication related class (CSN/RUV)
_constants.py # main definitions (Directory manager, replica type, DNs for config...)
_entry.py # implements LDAP 'Entry' and methods
_ldif_conn.py # subclass of LDIFParser, used to translate an LDIF entry (e.g. from dse.ldif) into an 'Entry'
agent.py # implements routines to do remote offline administrative tasks
agreement.py # implements replica agreement services
backend.py # implements backend services
brooker.py # Brooker classes to organize ldap methods
chaining.py # implements chaining backend services
changelog.py # implements the replication changelog
index.py # implements index services
logs.py # implements logging services
mappingTree.py # implements mapping tree services
plugins.py # implements plugin operations(enable/disable)
properties.py # Various property helper short cut names
replica.py # implements replica services
schema.py # implements schema operations
suffix.py # implements suffix services (a wrapper around mapping tree)
tasks.py # implements task services
tools.py # implements routines to do local offline administrative tasks
utils.py # implements miscellaneous routines
test/
config_test.py
It contains tests for:
- replica
- backend
- suffix
dsadmin_basic_test.py
It contains tests for:
- changelog
- log level
- mapping tree
- misc (bind)
dsadmin_create_remove_test.py
- instance creation
- instance deletion
dsadmin_test
- replica
- backend
- start/stop instance
- ssl
- replica agreement
replica_test
- test various replication objects (changelog, replica, replica agreement, ruv)
backend_test
- backend
A DirSrv Object can have the following states: ALLOCATED/OFFLINE/ONLINE.
The graphic below describes the transitions after the various operations.
__ (create)__ ___(open)___ __
/ \ / (start) \ / \
/ V / V / \
--(allocate)--> ALLOCATED OFFLINE ONLINE (all lib389 ops + LDAP(S) ops)
^ / ^ / ^ /
\___(delete)__/ \___(close)__/ \__/
(stop/restart)
(backup/restore)
The online administrative tasks (LDAP operations) require that DirSrv is ONLINE. The offline administrative tasks can be issued whether DirSrv is ONLINE or OFFLINE.
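The transitions above can be modeled as a small lookup table. The following is an illustrative sketch only (plain Python, not lib389 code; lib389 represents these states with DIRSRV_STATE_* constants):

```python
# Illustrative model of the DirSrv state transitions shown in the diagram
# above (not actual lib389 code).
TRANSITIONS = {
    ("INIT", "allocate"): "ALLOCATED",
    ("ALLOCATED", "create"): "OFFLINE",
    ("OFFLINE", "delete"): "ALLOCATED",
    ("OFFLINE", "open"): "ONLINE",
    ("OFFLINE", "start"): "ONLINE",
    ("OFFLINE", "restart"): "ONLINE",   # restart on a stopped instance just starts it
    ("ONLINE", "close"): "OFFLINE",
    ("ONLINE", "stop"): "OFFLINE",
    ("ONLINE", "restart"): "ONLINE",
}

def next_state(state, operation):
    """Return the state reached by applying 'operation', or raise on an invalid move."""
    try:
        return TRANSITIONS[(state, operation)]
    except KeyError:
        raise ValueError("%s is not valid in state %s" % (operation, state))
```

An operation that is not listed for the current state is invalid, which mirrors the diagram: for example, open() cannot be issued on an instance that was allocated but never created.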
Initializes a DirSrv object according to the provided args dictionary. The state changes from DIRSRV_STATE_INIT to DIRSRV_STATE_ALLOCATED. This step is mandatory before calling the other methods of this class.
args contains the following properties of the server (mandatory properties are in bold).
The instance will be located under SER_DEPLOYED_DIR. If SER_DEPLOYED_DIR is not specified, the instance will be stored under /.
If SER_USER_ID is not specified, the instance will run with the caller’s user id. If the caller is ‘root’, it will run as the ‘DEFAULT_USER’ user
If SER_GROUP_ID is not specified, the instance will run with the caller’s group id. If the caller is ‘root’, it will run as the ‘DEFAULT_USER’ group
If the instance exists it returns True, else it returns False
Returns a list of dictionaries. For a created instance that is on the local file system (e.g.
<prefix>/etc/sysconfig/dirsrv-<serverid>
or
$HOME/.dirsrv/dirsrv-<serverid>
A dictionary is created with the following properties:
If all=True, it builds a list of dictionaries for all created instances. Else (the default), the list will only contain the dictionary of the calling instance
Creates an instance with the parameters set in dirsrv (see allocate). The DirSrv state must be DIRSRV_STATE_ALLOCATED before calling this function. Its final state will be DIRSRV_STATE_OFFLINE
Upgrades all the instances that coexist with this DirSrv. This is the same as running “setup-ds.pl --update”
Deletes the instance with the parameters set in dirsrv (see allocate). If the instance does not exist, it raises TBD.
It opens an LDAP connection to dirsrv so that online administrative tasks are possible
It closes the ldap connection to dirsrv. Online administrative tasks are no longer possible upon completion.
It starts the instance dirsrv. If the instance is already running, it does nothing.
It stops the instance dirsrv. If the instance is already stopped, it does nothing.
It restarts the instance dirsrv. If the instance is already stopped, it just starts it.
Get the full system path to the local data directory(ds/dirsrvtests/data)
Removes all the files from the tmp dir (ds/dirsrvtests/tmp/). This should be called in the setup phase of the test script.
Wrapper around SimpleLDAPObject.search. It is common to just get one entry.
Returns a dictionary of properties of the server. If no properties are specified, it returns all the properties
Supported properties are:
Property name | server attribute name |
---|---|
pid | N/A |
port | nsslapd-port |
sport | nsslapd-secureport |
version | TBD |
owner | user/group id |
db-* | DB related properties (cache, checkpoint, txn batch…) TBD |
db-stats | statistics from: cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config cn=monitor,cn=ldbm database,cn=plugins,cn=config |
conn-* | Connection related properties (idle, ioblock, thread/conn, max bersize) TBD |
pwd-* | Password policy properties (retry, lock,…) |
security-* | Security properties (ciphers, client/server auth., ) |
TBD
Returns the file name of the backup file. If it does not exist, it returns None
It creates a full instance backup file under /tmp/slapd-
The backups are stored under the BACKUPDIR environment variable (by default /tmp).
If such a file already exists, it assumes it is a valid backup and returns its name. The instance ‘dirsrv’ must be stopped prior to the call, otherwise the backup file may be corrupted
self.changelogdir: directory where the changelog is stored (e.g. /var/lib/dirsrv/slapd-supplier/changelogdb)
Restore a directory from a backup file
Removes a backup_file, or all backups of a given instance
Creates an entry that will be used to bind as replica manager. The entry properties will be: no idle timeout (nsIdleTimeout=0) and a far-future password expiration (passwordExpirationTime).
Example:
create_repl_manager()
dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
passwordExpirationTime: 20381010000000Z
sn: bind dn pseudo user
nsIdleTimeout: 0
userPassword:: e1NTSEF9aGxLRFptSVY2cXlvRmV0S0ZCOS84cFBNY1RaeXFkV
DZzNXRFQlE9PQ==
creatorsName: cn=directory manager
modifiersName: cn=directory manager
modifyTimestamp: 20131121131644Z
Adds the replication changelog entry (cn=changelog5,cn=config), if it does not already exist. Then it returns the entry. This entry specifies the directory where the changelog’s database file will be stored. The directory name is in the attribute nsslapd-changelogdir.
If ‘changelog()’ was called when configuring the first supplier replica, it is not necessary to call it again when configuring the other supplier replicas (if any), unless we want their changelog to go to another directory.
Example:
self.supplier.replica.changelog()
dn: cn=changelog5,cn=config
objectClass: top
objectClass: extensibleobject
cn: changelog5
nsslapd-changelogdir: \<install\>/var/lib/dirsrv/slapd-supplier/changelogdb
Lists and returns the replicas under the mapping tree (cn=mapping tree,cn=config). If ‘suffix’ is provided, it returns the replica (in a list of entry) that is configured for that ‘suffix’. If ‘replica_dn’ is specified it returns the replica with that DN.
If ‘suffix’ and ‘replica_dn’ are specified, it uses ‘replica_dn’.
Example:
self.replica.list()
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
cn: replica
nsDS5Flags: 1
objectClass: top
objectClass: nsds5replica
objectClass: extensibleobject
nsDS5ReplicaType: 3
nsDS5ReplicaRoot: dc=example,dc=com
nsds5ReplicaLegacyConsumer: off
nsDS5ReplicaId: 1
nsDS5ReplicaBindDN: cn=replrepl,cn=config
nsState:: AQAAAAAAAABcCo5SAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
nsDS5ReplicaName: 284aec0a-52af11e3-91fd8ff3-240cb6d3
nsds5ReplicaChangeCount: 11
nsds5replicareapactive: 0
dn: cn=replica,cn=dc\3Dredhat\2Cdc\3Dcom,cn=mapping tree,cn=config
cn: replica
nsDS5Flags: 1
objectClass: top
objectClass: nsds5replica
objectClass: extensibleobject
nsDS5ReplicaType: 3
nsDS5ReplicaRoot: dc=redhat,dc=com
nsds5ReplicaLegacyConsumer: off
nsDS5ReplicaId: 1
nsDS5ReplicaBindDN: cn=replrepl,cn=config
nsState:: AQAAAAAAAABcCo5SAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
nsDS5ReplicaName: 284aec0a-52af11e3-91fd8ff3-3d6bc042
nsds5ReplicaChangeCount: 11
nsds5replicareapactive: 0
or
self.replica.list('dc=example,dc=com')
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
cn: replica
nsDS5Flags: 1
objectClass: top
objectClass: nsds5replica
objectClass: extensibleobject
nsDS5ReplicaType: 3
nsDS5ReplicaRoot: dc=example,dc=com
nsds5ReplicaLegacyConsumer: off
nsDS5ReplicaId: 1
nsDS5ReplicaBindDN: cn=replrepl,cn=config
nsState:: AQAAAAAAAABcCo5SAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
nsDS5ReplicaName: 284aec0a-52af11e3-91fd8ff3-240cb6d3
nsds5ReplicaChangeCount: 11
nsds5replicareapactive: 0
Create a replica entry on an existing suffix.
Deletes a replica related to the provided suffix. If this replica role was REPLICAROLE_HUB or REPLICAROLE_CONSUMER, it also deletes the changelog associated with that replica. If replication agreements exist below that replica, they are deleted as well.
Enables replication for a given suffix. If the role is REPLICAROLE_SUPPLIER or REPLICAROLE_HUB, it also creates the changelog. If the entry “cn=replrepl,cn=config” (the default replication manager) does not exist, it creates it.
This is a wrapper around the ‘delete’ function. See the delete function.
Returns a dictionary containing the requested property values of the replica. If ‘properties’ is missing, it returns all the supported properties
At least one of the parameters suffix/replica_dn/replica_entry needs to be specified. It uses first (if specified) ‘replica_entry’, then ‘replica_dn’, then ‘suffix’
Supported properties are:
Property name | Replica attribute name |
---|---|
legacy | nsds5replicalegacyconsumer [ off ] |
binddn | nsds5replicabinddn [ REPLICATION_BIND_DN in constants.py ] |
referral | nsds5ReplicaReferral |
purge-delay | nsds5ReplicaPurgeDelay |
purge-interval | nsds5replicatombstonepurgeinterval |
Sets the properties defined in ‘properties’ in the replica entry with the corresponding RHDS attribute name.
Some properties have a default value, described in italics below
The property name may be prefixed in order to specify the operation:
<propname>: <value> => MOD/REPLACE <value>. If <value> = "" (empty), then the related attribute is deleted (MOD/DEL).
Supported properties are:
Property name | Replica attribute name |
---|---|
legacy | nsds5replicalegacyconsumer [ off ] |
binddn | nsds5replicabinddn [ REPLICATION_BIND_DN in constants.py ] |
referral | nsds5ReplicaReferral |
purge-delay | nsds5ReplicaPurgeDelay |
purge-interval | nsds5replicatombstonepurgeinterval |
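As an illustration of the MOD/REPLACE vs. MOD/DEL rule above, the property table can be applied mechanically. The helper below is a hypothetical sketch, not the lib389 implementation; the numeric constants mirror python-ldap's MOD_DELETE/MOD_REPLACE values:

```python
# Hypothetical sketch (not lib389 code): translate the property dictionary
# described above into LDAP modify operations.
MOD_DELETE, MOD_REPLACE = 1, 2  # same numeric values as python-ldap's constants

# Subset of the property table above.
PROP_TO_ATTR = {
    'legacy': 'nsds5replicalegacyconsumer',
    'binddn': 'nsds5replicabinddn',
    'referral': 'nsds5ReplicaReferral',
    'purge-delay': 'nsds5ReplicaPurgeDelay',
    'purge-interval': 'nsds5replicatombstonepurgeinterval',
}

def to_mods(properties):
    """Build a modlist: "" deletes the attribute, anything else replaces it."""
    mods = []
    for prop, value in properties.items():
        attr = PROP_TO_ATTR[prop]  # unknown property names raise KeyError
        if value == "":
            mods.append((MOD_DELETE, attr, None))
        else:
            mods.append((MOD_REPLACE, attr, value))
    return mods
```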
Returns a replica update vector (RUV) for the given suffix. It first tries to retrieve the RUV tombstone entry stored in the replica database. If it cannot retrieve it and ‘tryrepl’ is True, it tries to retrieve the in-memory RUV stored in the replica (e.g. cn=replica,cn=
Returns a formatted string with the replica agreement status
Example:
print topo.supplier.agreement.status(replica_agreement_dn)
Status for meTo_localhost.localdomain:50389 agmt localhost.localdomain:50389
Update in progress: TRUE
Last Update Start: 20131121132756Z
Last Update End: 0
Num. Changes Sent: 1:10/0
Num. changes Skipped: None
Last update Status: 0 Replica acquired successfully: Incremental update started
Init in progress: None
Last Init Start: 0
Last Init End: 0
Last Init Status: None
Reap Active: 0
Returns tuple with done/errors status:
Schedule the replication agreement
Example:
topo.supplier.agreement.schedule(agreement_dn) # to start the replication agreement
topo.supplier.agreement.schedule(agreement_dn, 'stop') # to stop the replication agreement
topo.supplier.agreement.schedule(agreement_dn, '1800-1900 01234') # to schedule the replication agreement all week days from 6PM-7PM
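The custom schedule strings above follow an ‘HHMM-HHMM days’ pattern. The check below is an illustrative assumption about that syntax, not lib389 validation code (schedule() also accepts keywords such as 'stop', as shown above):

```python
import re

# Illustrative format check for the 'HHMM-HHMM days' schedule strings used
# above (an assumption about the accepted syntax, not lib389 code).
SCHEDULE_RE = re.compile(
    r'^([01][0-9]|2[0-3])[0-5][0-9]'   # start time, HHMM
    r'-([01][0-9]|2[0-3])[0-5][0-9]'   # end time, HHMM
    r' [0-6]{1,7}$'                    # day digits
)

def looks_like_schedule(value):
    """Return True if 'value' matches the HHMM-HHMM days pattern."""
    return SCHEDULE_RE.match(value) is not None
```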
Resumes a paused replication agreement (paused with the “pause” method). It tries to enable the replica agreement. If that fails (not implemented in all versions), it uses schedule() with the interval ‘0000-2359 0123456’
Pauses this replication agreement. The agreement will send no more changes. Use the resume() method to “unpause”. It tries to disable the replica agreement. If that fails (not implemented in all versions), it uses schedule() with the interval ‘2358-2359 0’
Returns a dictionary of the requested properties. If ‘properties’ is missing, it returns all the properties.
Supported properties are:
Property name | Replication Agreement attribute name |
---|---|
schedule | nsds5replicaupdateschedule |
fractional-exclude-attrs-inc | nsDS5ReplicatedAttributeList |
fractional-exclude-attrs-total | nsDS5ReplicatedAttributeListTotal |
fractional-strip-attrs | nsds5ReplicaStripAttrs |
transport-prot | nsds5replicatransportinfo |
consumer-port | nsds5replicaport |
consumer-total-init | nsds5BeginReplicaRefresh |
Checks that the properties defined in ‘properties’ are valid and sets the replica agreement entry with the corresponding RHDS attribute name.
The property name may be prefixed in order to specify the operation:
Some properties have default value, described in italic below
Supported properties are:
Property name | Replication Agreement attribute name |
---|---|
schedule | nsds5replicaupdateschedule |
fractional-exclude-attrs-inc | nsDS5ReplicatedAttributeList |
fractional-exclude-attrs-total | nsDS5ReplicatedAttributeListTotal |
fractional-strip-attrs | nsds5ReplicaStripAttrs |
transport-prot | nsds5replicatransportinfo |
consumer-port | nsds5replicaport |
consumer-total-init | nsds5BeginReplicaRefresh |
Example:
entry = Entry(dn_agreement)
args = {'transport-prot': 'LDAP',
        'consumer-port': 10389}
try:
    setProperties(entry, args)
except:
    pass
Returns the search result of the replica agreement(s) under the replica (replicaRoot is ‘suffix’).
Either ‘suffix’ or ‘agmtdn’ needs to be specified. ‘consumer_host’ and ‘consumer_port’ must either both be specified or both omitted.
If ‘agmtdn’ is specified, it returns the search result entry of that replication agreement. Else, if consumer host/port are specified, it returns the replica agreements toward that consumer host:port. Finally, if neither ‘agmtdn’ nor consumer host/port are specified, it returns all the replica agreements under the replica (replicaRoot is ‘suffix’).
Creates a replication agreement from self to consumer and returns its DN
@param args - dict of further optional values. Allowed keys: schedule, fractional-exclude-attrs-inc, fractional-exclude-attrs-total, fractional-strip-attrs, winsync
Example:
repl_agreement = supplier.agreement.create(consumer, SUFFIX, binddn=defaultProperties[REPLICATION_BIND_DN], bindpw=defaultProperties[REPLICATION_BIND_PW])
Trigger a total update of the consumer replica. If ‘agmtdn’ is specified it triggers the total update of this replica.
If ‘agmtdn’ is not specified, then ‘suffix’, ‘consumer_host’ and ‘consumer_port’ are mandatory. It triggers total update of replica agreement under replica ‘suffix’ toward consumer ‘host’:’port’
Wait for the completion of the total update or an error condition of the selected replica agreement.
If ‘agmtdn’ is specified it triggers the total update of this replica.
If ‘agmtdn’ is not specified, then ‘suffix’, ‘consumer_host’ and ‘consumer_port’ are mandatory. It triggers total update of replica agreement under replica ‘suffix’ toward consumer ‘host’:’port’
Returns a tuple with:
If ‘agmtdn’ is specified it reads the info from this entry.
If ‘agmtdn’ is not specified, then ‘suffix’, ‘consumer_host’ and ‘consumer_port’ are mandatory. It retrieves the replica agreement under replica ‘suffix’ toward consumer ‘host’:’port’, and reads the info from it
Sets the properties (if valid) for the logging type.
Supported properties are
Property name | type (type = access|error|audit) |
---|---|
max-logs | nsslapd-typelog-maxlogsperdir |
max-size | nsslapd-typelog-maxlogsize |
max-diskspace | nsslapd-typelog-logmaxdiskspace |
min-freespace | nsslapd-typelog-logminfreediskspace |
rotation-time | nsslapd-typelog-logrotationtime |
TBC |
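Since type substitutes directly into the attribute names, a property lookup can be sketched as follows (hypothetical helper, not lib389 code; the attribute patterns come from the table above):

```python
# Illustrative helper (not lib389 code): build the real attribute name by
# substituting the log type into the patterns from the table above.
LOG_PROP_PATTERNS = {
    'max-logs': 'nsslapd-%slog-maxlogsperdir',
    'max-size': 'nsslapd-%slog-maxlogsize',
    'max-diskspace': 'nsslapd-%slog-logmaxdiskspace',
    'min-freespace': 'nsslapd-%slog-logminfreediskspace',
    'rotation-time': 'nsslapd-%slog-logrotationtime',
}

def log_attribute(logtype, prop):
    """logtype is one of 'access', 'error', 'audit'."""
    if logtype not in ('access', 'error', 'audit'):
        raise ValueError("unknown log type: %s" % logtype)
    return LOG_PROP_PATTERNS[prop] % logtype
```

For example, for the access log, max-logs maps to the nsslapd-accesslog-maxlogsperdir attribute.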
Returns, in a dictionary (prop:value), the requested set of properties for the logging type. If ‘args’ is missing, it returns all the properties for the logging type.
Supported properties are
Property name | type (type = access|error|audit) |
---|---|
max-logs | nsslapd-typelog-maxlogsperdir |
max-size | nsslapd-typelog-maxlogsize |
max-diskspace | nsslapd-typelog-logmaxdiskspace |
min-freespace | nsslapd-typelog-logminfreediskspace |
rotation-time | nsslapd-typelog-logrotationtime |
TBC |
It returns the list of suffix DNs for which a mapping tree entry exists
It returns the backend entry that stores the provided suffix
It returns the DN of a suffix that is the parent of the provided ‘suffix’. If ‘suffix’ has no parent, it returns None
Returns a search result of the mapping tree entries with all their attributes
If ‘suffix’/’benamebase’ are specified, it uses ‘benamebase’ first, then ‘suffix’.
If neither ‘suffix’ nor ‘benamebase’ is specified, it returns all the mapping tree entries
Creates a mapping tree entry (under “cn=mapping tree,cn=config”) for the ‘suffix’ that is stored in the ‘benamebase’ backend. The ‘benamebase’ backend must exist before creating the mapping tree entry. If a ‘parent’ is provided, it means that we are creating a sub-suffix mapping tree.
Deletes a mapping tree entry (under “cn=mapping tree,cn=config”) for the ‘suffix’ that is stored in the ‘benamebase’ backend. The ‘benamebase’ backend is not changed by the mapping tree deletion.
If ‘name’ is specified, it uses it to retrieve the mapping tree to delete. Else, if ‘suffix’/’benamebase’ are specified, it uses both to retrieve the mapping tree to delete
Returns a dictionary of the requested properties. If ‘properties’ is missing, it returns all the properties.
The returned properties are those of the ‘suffix’ that is stored in the ‘benamebase’ backend.
If ‘name’ is specified, it uses it to retrieve the mapping tree. Else, if ‘suffix’/’benamebase’ are specified, it uses both to retrieve the mapping tree
If ‘name’, ‘benamebase’ and ‘suffix’ are all missing, it raises an exception
Supported properties are:
Property name | Mapping Tree attribute name |
---|---|
state | nsslapd-state |
backend | nsslapd-backend |
referral | nsslapd-referral |
chain-plugin-path | nsslapd-distribution-plugin |
chain-plugin-fct | nsslapd-distribution-funct |
chain-update-policy | nsslapd-distribution-root-update |
Set the requested properties if they are valid. The property name (see getProperties for the supported properties), may be prefixed in order to specify the operation:
The properties are those of the ‘suffix’ that is stored in the ‘benamebase’ backend.
If ‘name’ is specified, it uses it to retrieve the mapping tree. Else, if ‘suffix’/’benamebase’ are specified, it uses both to retrieve the mapping tree
If ‘name’, ‘benamebase’ and ‘suffix’ are all missing, it raises an exception
Returns, for a given mapping tree entry, the suffix values. The suffix values are identical from an LDAP point of view. Suffix values may be surrounded by double quotes or contain ‘\’ escape characters.
Returns a search result of the backend(s) entries with all their attributes
If ‘suffix’/’backend_dn’/’benamebase’ are specified, it uses ‘backend_dn’ first, then ‘suffix’, then ‘benamebase’.
If none of ‘suffix’, ‘backend_dn’ and ‘benamebase’ is specified, it returns all the backend entries
Creates a backend entry and returns its DN. If the properties ‘chain-bind-pwd’, ‘chain-bind-dn’ and ‘chain-urls’ are specified, the backend is a chained backend. A chaining backend is created under ‘cn=chaining database,cn=plugins,cn=config’. A local backend is created under ‘cn=ldbm database,cn=plugins,cn=config’
Deletes the backend entry with the following steps:
If a mapping tree entry uses this backend (nsslapd-backend), it raises UnwillingToPerformError
If ‘suffix’/’backend_dn’/’benamebase’ are specified, it uses ‘backend_dn’ first, then ‘suffix’, then ‘benamebase’.
If none of ‘suffix’, ‘backend_dn’ and ‘benamebase’ is specified, it raises InvalidArgumentError
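The same ‘backend_dn’ first, then ‘suffix’, then ‘benamebase’ precedence applies to several backend methods. A sketch of that selection logic (illustrative only; the function name and return value are hypothetical):

```python
# Illustrative sketch (not lib389 code) of the selection precedence
# described above: backend_dn first, then suffix, then benamebase.
def resolve_backend_selector(suffix=None, backend_dn=None, benamebase=None):
    """Return a (kind, value) pair naming the selector that wins."""
    if backend_dn is not None:
        return ('backend_dn', backend_dn)
    if suffix is not None:
        return ('suffix', suffix)
    if benamebase is not None:
        return ('benamebase', benamebase)
    # lib389 raises InvalidArgumentError in this case
    raise ValueError("one of suffix, backend_dn or benamebase is required")
```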
Returns a dictionary of the requested properties. If ‘properties’ is missing, it returns all the properties.
If ‘suffix’/’backend_dn’/’benamebase’ are specified. It uses ‘backend_dn’ first, then ‘suffix’, then ‘benamebase’. At least one of ‘suffix’/’backend_dn’/’benamebase’ must be specified.
Supported properties are:
Property name | Backend attribute name |
---|---|
entry-cache-size | nsslapd-cachememsize |
entry-cache-number | nsslapd-cachesize |
dn-cache-size | nsslapd-dncachememsize |
read-only | nsslapd-readonly |
require-index | nsslapd-require-index |
suffix | nsslapd-suffix (read only) |
directory | nsslapd-directory (once set it is read only) |
db-deadlock | nsslapd-db-deadlock-policy |
chain-bind-dn | nsmultiplexorbinddn |
chain-bind-pwd | nsmultiplexorcredentials |
chain-urls | nsfarmserverurl |
stats | * |
stats returns the statistics related to this backend that are available under “cn=monitor,cn=<benamebase>,cn=ldbm database,cn=plugins,cn=config”
Sets backend entry properties as defined in ‘properties’. If all the properties are valid, it updates the backend entry; else it raises an exception. The supported properties are described in getProperties().
The property name may be prefixed in order to specify the operation:
If ‘suffix’/’backend_dn’/’benamebase’ are specified. It uses ‘backend_dn’ first, then ‘suffix’, then ‘benamebase’. At least one of ‘suffix’/’backend_dn’/’benamebase’ must be specified.
Returns the mapping tree entry of the suffix that is stored in ‘backend_dn’.
Returns a search result of the indexes for a given ‘suffix’ (or ‘benamebase’ that stores that suffix).
If ‘suffix’ and ‘benamebase’ are both specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD
If ‘system’ is specified and is True, it returns index entries under “cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config”. Else it returns index entries under ‘cn=index,cn=*
Create a new index entry for a given ‘suffix’ (or ‘benamebase’ that stores that suffix).
If ‘suffix’ and ‘benamebase’ are both specified, it uses ‘benamebase’ first, else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD
If the ‘attrname’ index already exists (system or not), it raises TBD
Delete an index entry for a given ‘suffix’ (or ‘benamebase’ that stores that suffix).
If ‘suffix’ and ‘benamebase’ are specified, it uses ‘benamebase’ first else ‘suffix’. If both ‘suffix’ and ‘benamebase’ are missing it raise TBD
If the ‘attrname’ is provided and index does not exist (system or not), it raises TBD. If ‘attrname’ is not provided, it deletes all the indexes (system or not) under the suffix
returns a dictionary containing the index properties. The index is for a given ‘suffix’ (or ‘benamebase’ that stores that suffix). If ‘properties’ is missing it returns all the properties.
If both ‘suffix’ and ‘benamebase’ are specified, ‘benamebase’ takes precedence. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD
If the ‘attrname’ index does not exist (system or not), it raises TBD
Supported properties are:
Property name | Index attribute name/value |
---|---|
equality-indexed | nsIndexType: eq |
presence-indexed | nsIndexType: pres |
approx-indexed | nsIndexType: approx |
subtree-indexed | nsIndexType: subtree |
substring-indexed | nsIndexType: sub |
system | nsSystemIndex (read only) |
matching-rules | nsMatchingRule |
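A minimal sketch of how the table above maps property names to nsIndexType values; `index_types` and the dictionary name are illustrative, not lib389 API:

```python
# Mapping from the property names in the table above to nsIndexType values
INDEX_TYPE_BY_PROPERTY = {
    'equality-indexed': 'eq',
    'presence-indexed': 'pres',
    'approx-indexed': 'approx',
    'subtree-indexed': 'subtree',
    'substring-indexed': 'sub',
}

def index_types(properties):
    """Translate a {property: bool} dict into the nsIndexType values
    that would be written to the index entry; unknown properties
    raise an exception, mirroring the documented behavior."""
    unknown = set(properties) - set(INDEX_TYPE_BY_PROPERTY)
    if unknown:
        raise ValueError("unsupported properties: %s" % ', '.join(sorted(unknown)))
    return [INDEX_TYPE_BY_PROPERTY[p] for p, on in sorted(properties.items()) if on]

print(index_types({'equality-indexed': True, 'substring-indexed': True}))
# -> ['eq', 'sub']
```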
Set the properties defined in ‘properties’. If all the properties are valid, the index entry is updated; otherwise an exception is raised. The supported properties are described in getProperties().
If both ‘suffix’ and ‘benamebase’ are specified, ‘benamebase’ takes precedence. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD
If the ‘attrname’ index does not exist (system or not), it raises TBD
Supported properties are:
Property name | Index attribute name/value |
---|---|
equality-indexed | nsIndexType: eq |
presence-indexed | nsIndexType: pres |
approx-indexed | nsIndexType: approx |
subtree-indexed | nsIndexType: subtree |
substring-indexed | nsIndexType: sub |
system | nsSystemIndex (read only) |
Export a given ‘suffix’ (or ‘benamebase’ that stores that suffix) to LDIF format. It uses an internal task to achieve this request.
If both ‘suffix’ and ‘benamebase’ are specified, ‘benamebase’ takes precedence. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD
‘ldif_output’ is the output file of the export
Import a given ‘suffix’ (or ‘benamebase’ that stores that suffix) from an LDIF file. It uses an internal task to achieve this request.
If both ‘suffix’ and ‘benamebase’ are specified, ‘benamebase’ takes precedence. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD
‘ldif_input’ is the input file
Reindex a ‘suffix’ (or ‘benamebase’ that stores that suffix) for a given ‘attrname’. It uses an internal task to achieve this request.
If both ‘suffix’ and ‘benamebase’ are specified, ‘benamebase’ takes precedence. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD
Perform a backup by creating a db2bak task
Restore a backup by creating a bak2db task
Trigger a fixup task on ‘suffix’ (or ‘benamebase’ that stores that suffix) that fixes the ‘memberOf’ attribute of the entries that are members of groups. It uses an internal task to achieve this request.
If both ‘suffix’ and ‘benamebase’ are specified, ‘benamebase’ takes precedence. If both ‘suffix’ and ‘benamebase’ are missing, it raises TBD
‘filt’ is a filter that selects all the entries (under ‘suffix’) that need to be evaluated/fixed. If missing, the default value is ”(|(objectclass=inetuser)(objectclass=inetadmin))”
Trigger a tombstone fixup task on the specified backend
Returns the search result on the suffix ‘cn=schema’. The returned attributes are specified in the ‘args’ list. By default it returns SCHEMA_OBJECTCLASSES, SCHEMA_ATTRIBUTES and SCHEMA_CSN.
property name | schema attribute name |
---|---|
SCHEMA_OBJECTCLASSES | objectclasses |
SCHEMA_ATTRIBUTES | attributes |
SCHEMA_CSN | nsSchemaCSN |
Update the schema by performing a MODIFY/ADD of the attribute specified by ‘propname’ with ‘value’.
property name | schema attribute name |
---|---|
SCHEMA_OBJECTCLASSES | objectclasses |
SCHEMA_ATTRIBUTES | attributes |
Update the schema by performing a MODIFY/DEL of the attribute specified by ‘propname’ with ‘value’.
property name | schema attribute name |
---|---|
SCHEMA_OBJECTCLASSES | objectclasses |
SCHEMA_ATTRIBUTES | attributes |
Returns a search result of the local or chained backend(s) entries with all their attributes.
If ‘suffix’ is specified, only that suffix’s backend is returned. Otherwise it returns all the backends of the given type.
If type is missing, by default its value is CHAINING_FARM. If type is CHAINING_FARM, it is equivalent to backend.list(suffix). If type is CHAINING_MUX, it returns all the backend entries under ‘cn=chaining database,cn=plugins,cn=config’
Create a local or chained backend for the suffix. If type is missing, by default its value is CHAINING_FARM. If type is CHAINING_FARM, it creates a local backend (under cn=ldbm database,cn=plugins,cn=config), else (CHAINING_MUX) it creates a chained backend (under cn=chaining database,cn=plugins,cn=config).
If this is a local backend (CHAINING_FARM), it adds an ACI allowing ‘proxy’ rights at the suffix level:
(targetattr = "*")(version 3.0; acl "Proxied authorization for database links"; allow (proxy) userdn = "ldap:///<binddn>";)
If this is a local backend (CHAINING_FARM) and bindpw is specified, it creates the proxy entry: binddn/bindpw
If this is a chained backend (CHAINING_MUX), then bindpw and urls are mandatory
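The container choice and the mandatory-argument rule can be sketched as follows; the constant values and the `backend_container` helper are illustrative (lib389 defines its own CHAINING_* constants):

```python
CHAINING_FARM = 'farm'   # illustrative values; lib389 has its own constants
CHAINING_MUX = 'mux'

LDBM_PLUGIN_DN = "cn=ldbm database,cn=plugins,cn=config"
CHAINING_PLUGIN_DN = "cn=chaining database,cn=plugins,cn=config"

def backend_container(chain_type=CHAINING_FARM, bindpw=None, urls=None):
    """Return the plugin container DN the new backend entry goes under,
    enforcing the documented mandatory arguments for a chained backend."""
    if chain_type == CHAINING_MUX:
        if not (bindpw and urls):
            raise ValueError("bindpw and urls are mandatory for CHAINING_MUX")
        return CHAINING_PLUGIN_DN
    # default: local backend under the ldbm plugin
    return LDBM_PLUGIN_DN

print(backend_container())
# -> cn=ldbm database,cn=plugins,cn=config
```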
Delete the local or chained backend entry that implements ‘suffix’.
If type is missing, by default its value is CHAINING_FARM. If type is CHAINING_FARM, it is equivalent to backend.delete(suffix). If type is CHAINING_MUX, it deletes the entry cn=<bebasename>,cn=chaining database,cn=plugins,cn=config where <bebasename> is the backend common name.
If a mapping tree entry uses this backend (nsslapd-backend), it raises TBD
Returns a dictionary with the requested chaining plugin properties. If ‘properties’ is not specified, it returns all the properties. If a property is not set at the server level, the default returned value is ‘off’
Supported properties are
property name | chaining plugin attribute |
---|---|
proxy-authorization | nsTransmittedControl |
loop-detection | nsTransmittedControl |
Set (if they are valid) the properties in the ‘chaining plugin’.
Supported properties are
property name | chaining plugin attribute |
---|---|
proxy-authorization | nsTransmittedControl |
loop-detection | nsTransmittedControl |
Returns a dictionary of properties of the server. If no properties are specified, it returns all the properties
Supported properties are:
Property name | server attribute name |
---|---|
pid | N/A |
port | nsslapd-port |
sport | nsslapd-secureport |
version | TBD |
owner | user/group id |
dbstats | statistics from cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config and cn=monitor,cn=ldbm database,cn=plugins,cn=config |
389 upstream test suites are tests located inside the 389 Directory Server source code. As part of a continuous integration effort, those tests will be used in the non-regression test phase of continuous integration. The tests are written in Python and are based on lib389. They are organized into individual bug test cases (tickets) and into functional test suites. In a first phase, only ticket test cases are described.
The 389 upstream tests will be pushed to the 389 Directory Server repository https://github.com/389ds/389-ds-base.git
The tests layout is
ds/dirsrvtests
data/
# location to store static files used by individual tests, such as LDIF files
tickets/
# Contains the test cases for ticket xxx
ticketxxx_test.py
ticketyyy_test.py
...
ticketzzz_test.py
finalizer.py # this module contains the cleanup function to remove the created instances
suites/
# functional tests
aci.py
replication.py
...
index.py
tmp/
# location used to exported ldif files, backups, etc
The test suite for a given ticket is stored in a single Python module named ticketxxx_test.py. The name contains _test so that the module will be selected by a test scheduler (nose, py.test…). The test suite creates/reinitializes the instances it needs for its test purpose and stops the instances at the end of the suite. A test suite is a series of test functions named test_ticketxxx_<no>. The string test in the function names is required so that the test scheduler will select them; it runs them in the order they appear in the module.
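The naming convention can be checked with two simple patterns; `is_ticket_test_module` and `is_ticket_test_function` are illustrative helpers, not part of the framework:

```python
import re

# A ticket module is named ticketxxx_test.py, so that the
# 'test' pattern makes schedulers like py.test or nose collect it.
MODULE_RE = re.compile(r'^ticket\w+_test\.py$')
# A test function starts with test_ticket so it is collected too.
FUNC_RE = re.compile(r'^test_ticket\w+')

def is_ticket_test_module(filename):
    return bool(MODULE_RE.match(filename))

def is_ticket_test_function(name):
    return bool(FUNC_RE.match(name))

print(is_ticket_test_module('ticket47490_test.py'))   # True
print(is_ticket_test_function('run_isolated'))        # False
```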
A ticket test suite looks like:
installation_prefix = None

<local helper functions>

@pytest.fixture(scope="module")
def topology(request):
    if installation_prefix:
        <add 'PREFIX' to the creation of all instances>
    <creation of the test topology>

def test_ticketxxx_one(topology):
    <test case one>

...

def test_ticketxxx_NNN(topology):
    <test case NNN>

def test_ticketxxx_final(topology):
    <stop the instances in the topology>

def run_isolated():
    global installation_prefix
    installation_prefix = '/directory/where/389-ds/is/deployed'  # DIR_INSTALL
    topo = topology(True)
    test_ticketxxx_one(topo)
    ...
    test_ticketxxx_NNN(topo)
    test_ticketxxx_final(topo)

if __name__ == '__main__':
    run_isolated()
See ‘Fixture’ in the Test framework chapter.
A test case contains assert statements. The test case is reported PASSED or FAILED depending on whether its asserts succeed.
run_isolated() is a way to run the tests as a plain Python script rather than under a test scheduler.
Setup Script - Set up a lib389 testing environment
Test Script - Run a specific test
Then do the following steps
cd $HOME/test
mkdir 389-ds
mkdir lib389
cd $HOME/test/389-ds
git clone https://github.com/389ds/389-ds-base.git
Launch Eclipse and, in ‘Select workspace’, enter $HOME/test. Then:
File->New->Project, select ‘PyDev Project’, project name ‘lib389’ (directory will be $HOME/test/389-ds/ds/src/lib389)
File->New->Project, select ‘PyDev Project’, project name ‘dirsrvtest’ (directory will be $HOME/test/389-ds/ds/dirsrvtests); in ‘Referenced projects’ select lib389
Open the tests you want to run ‘dirsrvtest->tickets->ticket47490_test.py’
To run 389 upstream tests we are using py.test (http://pytest.org/). This framework offers two features that we use:
This is achieved by running the following command: PREFIX=/directory/where/389-ds/is/deployed py.test -v
The test modules are named like ticketxxx_test.py or fractional_replication_test.py. Because the name contains the ‘test’ pattern, py.test selects the module as a test module.
The test module will contain test suites implemented as functions named like test_ticketxxx_<no>
This is achieved by running the following command: PREFIX=/directory/where/389-ds/is/deployed py.test -v $SPECIFIED_TEST
The test module will be executed even if its name does not contain the ‘test’ pattern.
We use this ability to run a finalizer module. This module is executed after the auto-discovery mode, that is, after all the tests have completed. Its job is to clean up the environment (remove instances/backups…)
Each test suite will contain a topology fixture (module scope) that creates or reinitializes the topology the tests need (standalone, MMR…)
Each test in the suite takes the topology fixture as an argument.
The algorithm is:
At the beginning, some instances may already exist.
Backups of the instances may also exist.
Principle:
If instances exist:
    restart them
If backups exist for all the instances:
    restore them from the backups
else:
    remove the instances and backups
    create the instances
    initialize the topology
    create the backups
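The algorithm above can be sketched in pure Python; `Inst` and its `exists`/`has_backup` flags are illustrative stand-ins for the real lib389 instance checks, and the function returns the action sequence for illustration:

```python
def setup_topology(instances):
    """Sketch of the topology fixture logic: restart existing
    instances, restore from backups when they all have one,
    otherwise rebuild everything from scratch."""
    actions = []
    if all(i.exists for i in instances):
        actions.append('restart')
    if all(i.has_backup for i in instances):
        actions.append('restore')
    else:
        actions += ['remove', 'create', 'init_topology', 'backup']
    return actions

class Inst:
    """Minimal stand-in for a lib389 DirSrv instance."""
    def __init__(self, exists, has_backup):
        self.exists, self.has_backup = exists, has_backup

print(setup_topology([Inst(True, True), Inst(True, True)]))
# -> ['restart', 'restore']
```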
By default py.test runs quietly, reporting each test as PASSED or FAILED depending on whether an assert fails in the test.
It is possible to get more detail on the execution of the tests with the following command: PREFIX=/directory/where/389-ds/is/deployed py.test -v -s