lgtunit
The lgtunit tool provides testing support for Logtalk. It can also
be used for testing plain Prolog code and Prolog module code.
This tool is inspired by the architecture of the xUnit frameworks and by the
work of Joachim Schimpf (ECLiPSe library test_util) and Jan
Wielemaker (SWI-Prolog plunit package).
Tests are defined in objects, which represent a test set or test suite. In simple cases, we usually define a single object containing the tests. But it is also possible to use parametric test objects or multiple objects defining parametrizable tests or test subsets for testing more complex units and facilitating test maintenance. Parametric test objects are especially useful to test multiple implementations of the same protocol using a single set of tests by passing the implementation object as a parameter value.
Main files
The lgtunit.lgt
source file implements a framework for defining and
running unit tests in Logtalk. The lgtunit_messages.lgt
source file
defines the default translations for the messages printed when running
unit tests. These messages can be intercepted to customize output, e.g.
to make it less verbose, or for integration with e.g. GUI IDEs and
continuous integration servers.
Other files that are part of this tool provide support for alternative output formats of test results and are discussed below.
API documentation
This tool's API documentation is available at:
Loading
This tool can be loaded using the query:
| ?- logtalk_load(lgtunit(loader)).
Testing
To test this tool, load the tester.lgt
file:
| ?- logtalk_load(lgtunit(tester)).
Writing and running tests
In order to write your own unit tests, define objects extending the
lgtunit
object. You may start by copying the tests-sample.lgt
file (at the root of the Logtalk distribution) to a tests.lgt
file
in your project directory and edit it to add your tests:
:- object(tests,
extends(lgtunit)).
% test definitions
...
:- end_object.
The section on test dialects below describes in
detail how to write tests. See the tests
top directory for examples
of actual unit tests. Other sources of examples are the library
and
examples
directories.
The tests must be term-expanded by the lgtunit
object by compiling
the source files defining the test objects using the option
hook(lgtunit)
. For example:
| ?- logtalk_load(lgtunit(loader)),
logtalk_load(tests, [hook(lgtunit)]).
As the term-expansion mechanism applies to all the contents of a source
file, the source files defining the test objects should preferably not
contain entities other than the test objects. Additional code necessary
for the tests should go to separate files. In general, the tests
themselves can be compiled in optimized mode. Assuming that's the
case, also use the optimize(on)
compiler option for faster tests
execution.
The term-expansion performed by the lgtunit
object sets the test
object source_data
flag to on
and the
context_switching_calls
flag to allow
for code coverage and
debugging support. But these settings can always be overridden in the
test objects.
The tester-sample.lgt
file (at the root of the Logtalk distribution)
exemplifies how to compile and load the lgtunit tool, the source code
under testing, and the unit tests, and how to automatically run all the tests
after loading:
:- initialization((
% minimize compilation reports to the essential ones (errors and warnings)
set_logtalk_flag(report, warnings),
% load any necessary library files for your application; for example
logtalk_load(basic_types(loader)),
% load the unit test tool
logtalk_load(lgtunit(loader)),
% load your application files (e.g. "source.lgt") enabling support for
% code coverage, which requires compilation in debug mode and collecting
% source data information; if code coverage is not required, remove the
% "debug(on)" option for faster execution
logtalk_load(source, [source_data(on), debug(on)]),
% compile the unit tests file expanding it using "lgtunit" as the hook
% object to preprocess the tests; if you have failing tests, add the
% option debug(on) to debug them (see "tools/lgtunit/NOTES.md" for
% debugging advice); tests should be loaded after the code being tested
% is loaded to avoid warnings such as references to unknown entities
logtalk_load(tests, [hook(lgtunit)]),
% run all the unit tests; assuming your tests object is named "tests"
tests::run
)).
You may copy this sample file to a tester.lgt
file in your project
directory and edit it to load your project and tests files. The
logtalk_tester
testing automation script defaults to looking for test
driver files named tester.lgt or tester.logtalk (if you have
work-in-progress test sets that you don't want to run by default, simply
use a different file name, such as tester_wip.lgt; you can still run
them in an automated way by using logtalk_tester -n tester_wip).
Debugged test sets should preferably be compiled in optimized mode, especially when they contain deterministic tests and when using the utility benchmarking predicates.
Assuming a tester.lgt
driver file as exemplified above, the tests
can be run by simply loading this file:
| ?- logtalk_load(tester).
Assuming your test object is named tests
, you can re-run the tests
by typing:
| ?- tests::run.
You can also re-run a single test (or a list of tests) using the
run/1
predicate:
| ?- tests::run(test_identifier).
When testing complex units, it is often desirable to split the tests among several test objects or to use parametric test objects so that the same tests can be run with different parameters (e.g. different data sets or alternative implementations of the same protocol). In this case, you can run all test subsets using the goal:
| ?- lgtunit::run_test_sets([test_set_1, test_set_2, ...]).
where the run_test_sets/1
predicate argument is a list of two or
more test object identifiers. This predicate makes it possible to get a
single code coverage report that takes into account all the tests.
It's also possible to automatically run loaded tests when using the
make
tool by calling the goal that runs the tests from a definition
of the hook predicate logtalk_make_target_action/1
. For example, by
adding to the tests tester.lgt
driver file the following code:
% integrate the tests with logtalk_make/1
:- multifile(logtalk_make_target_action/1).
:- dynamic(logtalk_make_target_action/1).
logtalk_make_target_action(check) :-
tests::run.
Alternatively, you can define the predicate make/1
inside the test
set object. For example:
:- object(tests, extends(lgtunit)).
make(check).
...
:- end_object.
This clause will cause all tests to be run when calling the
logtalk_make/1
predicate with the target check
(or its top-level
shortcut, {?}
). The other possible target is all
(with top-level
shortcut {*}
).
Note that you can have multiple test driver files. For example, one driver file that runs the tests collecting code coverage data and a quicker driver file that skips code coverage and compiles the code to be tested in optimized mode.
Automating running tests
You can use the scripts/logtalk_tester.sh
Bash shell script or the
scripts/logtalk_tester.ps1
PowerShell script for automating running
unit tests (e.g. from a CI/CD pipeline). For example, assuming your
current directory (or sub-directories) contain one or more
tester.lgt
files:
$ logtalk_tester -p gnu
The only required argument is the identifier of the backend Prolog
system. For other options, see the scripts/NOTES.md
file or type:
$ logtalk_tester -h
On POSIX systems, assuming Logtalk was installed using one of the provided installers or installation scripts, you can also access extended documentation by consulting the script man page:
$ man logtalk_tester
Both scripts support the same set of options, but the option for passing additional arguments to the tests uses a different syntax. For example:
$ logtalk_tester -p gnu -- foo bar baz
PS> logtalk_tester -p gnu -a foo,bar,baz
Alternatively, an HTML version of this man page can be found at:
https://logtalk.org/man/logtalk_tester.html
The logtalk_tester.ps1 PowerShell script timeout option requires Git for
Windows to be installed, as it relies on the GNU timeout command bundled
with it.
As an alternative to using the logtalk_tester.ps1 PowerShell script,
the Bash shell version of the automation script can also be used on
Windows operating systems with selected backends by using the Bash shell
included in the Git for Windows installer. That requires defining a
.profile
file setting the paths to the Logtalk scripts and the
Prolog backend executables. For example:
$ cat ~/.profile
# YAP
export PATH="/C/Program Files/Yap64/bin":$PATH
# GNU Prolog
export PATH="/C/GNU-Prolog/bin":$PATH
# SWI-Prolog
export PATH="/C/Program Files/swipl/bin":$PATH
# ECLiPSe
export PATH="/C/Program Files/ECLiPSe 7.0/lib/x86_64_nt":$PATH
# SICStus Prolog
export PATH="/C/Program Files/SICStus Prolog VC16 4.6.0/bin":$PATH
# Logtalk
export PATH="$LOGTALKHOME/scripts":"$LOGTALKHOME/integration":$PATH
The Git for Windows installer also includes GNU coreutils
and its
timeout
command, which is used by the logtalk_tester
script
-t
option.
Note that some tests may give different results when run from within the Bash shell compared with running the tests manually using a Windows GUI version of the Prolog backend. Some backends may also not be usable for automated testing due to the way they are made available as Windows applications.
Additional advice on testing and on automating testing using continuous integration servers can be found at:
Parametric test objects
Parameterized unit tests can be easily defined by using parametric test
objects. A typical example is testing multiple implementations of the
same protocol. In this case, we can use a parameter to pass the specific
implementation being tested. For example, assume that we want to run the
same set of tests for implementations of the library random_protocol
protocol. We can write:
:- object(tests(_RandomObject_),
extends(lgtunit)).
:- uses(_RandomObject_, [
random/1, between/3, member/2,
...
]).
test(between_3_in_interval) :-
between(1, 10, Random),
1 =< Random, Random =< 10.
...
:- end_object.
We can then test a specific implementation by instantiating the parameter. For example:
| ?- tests(fast_random)::run.
Or use the lgtunit::run_test_sets/1
predicate to test all the
implementations:
| ?- lgtunit::run_test_sets([
tests(backend_random),
tests(fast_random),
tests(random)
]).
Test dialects
Multiple test dialects are supported by default. See the next section on how to define your own test dialects. In all dialects, a ground callable term, usually an atom, is used to uniquely identify a test. This simplifies reporting failed tests and running tests selectively. An error message is printed if invalid or duplicated test identifiers are found. These errors must be corrected, otherwise the reported test results can be misleading. Ideally, tests should have descriptive names that clearly state the purpose of the test and what is being tested.
Unit tests can be written using any of the following predefined dialects:
test(Test) :- Goal.
This is the simplest dialect, allowing the specification of tests
that are expected to succeed. The argument of the test/1
predicate
is the test identifier, which must be unique. A more versatile dialect
is:
succeeds(Test) :- Goal.
deterministic(Test) :- Goal.
fails(Test) :- Goal.
throws(Test, Ball) :- Goal.
throws(Test, Balls) :- Goal.
This is a straightforward dialect. For succeeds/1
tests, Goal
is
expected to succeed. For deterministic/1
tests, Goal
is expected
to succeed once without leaving a choice-point. For fails/1
tests,
Goal
is expected to fail. For throws/2
tests, Goal
is
expected to throw the exception term Ball
or one of the exception
terms in the list Balls
. The specified exception must subsume the
actual exception for the test to succeed.
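As a brief, hedged illustration of these four forms, here is a sketch using standard built-in predicates, chosen purely for illustration (the test names are arbitrary):

succeeds(atom_chars_2_decomposes) :-
    atom_chars(abc, [a, b, c]).

deterministic(atom_length_2_length) :-
    atom_length(abcdef, 6).

fails(atom_chars_2_wrong_chars) :-
    atom_chars(abc, [a, b]).

throws(atom_chars_2_unbound_arguments, error(instantiation_error, _)) :-
    atom_chars(_, _).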
A more expressive alternative test dialect is:
test(Test, Outcome) :- Goal.
The possible values of the outcome argument are:
- true - The test is expected to succeed.
- true(Assertion) - The test is expected to succeed and satisfy the Assertion goal.
- deterministic - The test is expected to succeed once without leaving a choice-point.
- deterministic(Assertion) - The test is expected to succeed once without leaving a choice-point and satisfy the Assertion goal.
- subsumes(Expected, Result) - The test is expected to succeed binding Result to a term that is subsumed by the Expected term.
- variant(Term1, Term2) - The test is expected to succeed binding Term1 to a term that is a variant of the Term2 term.
- exists(Assertion) - A solution exists for the test goal that satisfies the Assertion goal.
- all(Assertion) - All test goal solutions satisfy the Assertion goal.
- fail - The test is expected to fail.
- false - The test is expected to fail.
- error(Error) - The test is expected to throw the exception term error(ActualError, _) where ActualError is subsumed by Error.
- errors(Errors) - The test is expected to throw an exception term error(ActualError, _) where ActualError is subsumed by an element of the list Errors.
- ball(Ball) - The test is expected to throw the exception term ActualBall where ActualBall is subsumed by Ball.
- balls(Balls) - The test is expected to throw an exception term ActualBall where ActualBall is subsumed by an element of the list Balls.
In the case of the true(Assertion)
, deterministic(Assertion)
,
and all(Assertion)
outcomes, a message that includes the assertion
goal is printed for assertion failures and errors to help to debug
failed unit tests. Same for the subsumes(Expected, Result)
and
variant(Term1, Term2)
assertions. Note that this message is only
printed when the test goal succeeds as its failure will prevent the
assertion goal from being called. This allows distinguishing between
test goal failure and assertion failure.
Note that the all(Assertion)
outcome simplifies pinpointing which
test goal solution failed the assertion. See also the section below on
testing non-deterministic predicates.
The fail
and false
outcomes are best reserved for cases where
there is a single test goal. With multiple test goals, the test will
succeed when any of those goals fails.
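As a brief sketch of some of these outcomes in use, again testing a standard built-in predicate purely for illustration:

test(sort_2_removes_duplicates, true(Sorted == [1, 2, 3])) :-
    sort([3, 1, 2, 1], Sorted).

test(sort_2_deterministic, deterministic) :-
    sort([3, 1, 2], _).

test(sort_2_instantiation_error, error(instantiation_error)) :-
    sort(_, _).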
Some tests may require individual condition, setup, or cleanup goals. In this case, the following alternative test dialect can be used:
test(Test, Outcome, Options) :- Goal.
The currently supported options are (unrecognized options are ignored):
- condition(Goal) - Condition for deciding if the test should be run or skipped (default goal is true).
- setup(Goal) - Setup goal for the test (default goal is true).
- cleanup(Goal) - Cleanup goal for the test (default goal is true).
- flaky - Declare the test as a flaky test.
- note(Term) - Annotation to print (between parentheses by default) after the test result (default is ''); the annotation term can share variables with the test goal, which can be used to pass additional information about the test result.
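For example, a sketch of a test/3 test combining per-test setup and cleanup goals; the create_data_file/0, delete_data_file/0, and read_data_file/1 predicates are illustrative helpers assumed to be defined elsewhere in the test object:

test(read_data_file_1_contents, true(Terms == [a, b, c]), [setup(create_data_file), cleanup(delete_data_file)]) :-
    read_data_file(Terms).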
Also supported is QuickCheck testing where random tests are automatically generated and run given a predicate mode template with type information for each argument (see the section below for more details):
quick_check(Test, Template, Options).
quick_check(Test, Template).
The valid options are the same as for the test/3
dialect plus all
the supported QuickCheck specific options (see the QuickCheck section
below for details).
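For example, a hedged sketch of the quick_check/2 test dialect inside a test object, checking that reversing a list of integers twice still yields a list of integers; the reverse_twice/2 property predicate is an auxiliary definition local to the test object and list::reverse/2 is assumed to be available from the Logtalk list library:

quick_check(list_reverse_2_type_preserving, reverse_twice(+list(integer), -list(integer))).

reverse_twice(List, ReversedTwice) :-
    list::reverse(List, Reversed),
    list::reverse(Reversed, ReversedTwice).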
For examples of how to write unit tests, check the tests
folder or
the testing
example in the examples
folder in the Logtalk
distribution. Most of the provided examples also include unit tests,
some of them with code coverage.
User-defined test dialects
Additional test dialects can be easily defined by extending the
lgtunit
object and by term-expanding the new dialect into one of the
default dialects. As an example, suppose that you want a dialect where
you can simply write a file with tests defined by clauses using the
format:
test_identifier :-
test_goal.
First, we define an expansion for this file into a test object:
:- object(simple_dialect,
implements(expanding)).
term_expansion(begin_of_file, [(:- object(tests,extends(lgtunit)))]).
term_expansion((Head :- Body), [(test(Head) :- Body)]).
term_expansion(end_of_file, [(:- end_object)]).
:- end_object.
Then we can use this hook object to expand and run tests written in this
dialect by using a tester.lgt
driver file with contents such as:
:- initialization((
set_logtalk_flag(report, warnings),
logtalk_load(lgtunit(loader)),
logtalk_load(library(hook_flows_loader)),
logtalk_load(simple_dialect),
logtalk_load(tests, [hook(hook_pipeline([simple_dialect,lgtunit]))]),
tests::run
)).
The hook pipeline first applies our simple_dialect
expansion
followed by the default lgtunit
expansion. This solution allows
other hook objects (e.g. required by the code being tested) to also be
used by updating the pipeline.
QuickCheck
QuickCheck was originally developed for Haskell. Implementations for
several other programming languages soon followed. QuickCheck provides
support for property-based testing. The idea is to express properties
that predicates must comply with and automatically generate tests for
those properties. The lgtunit
tool supports both quick_check/2-3
test dialects, as described above, and quick_check/1-3
public
predicates for interactive use:
quick_check(Template, Result, Options).
quick_check(Template, Options).
quick_check(Template).
The following options are supported:
- n/1: number of random tests that will be generated and run (default is 100)
- s/1: maximum number of shrink operations when a counter-example is found (default is 64)
- ec/1: boolean option deciding if type edge cases are tested before generating random tests (default is true)
- rs/1: starting seed to be used when generating the random tests (no default)
- pc/1: pre-condition closure for generated tests (extended with the test arguments; no default)
- l/1: label closure for classifying the generated tests (extended with the test arguments plus the label argument; no default)
- v/1: boolean option for verbose reporting of generated random tests (default is false)
- pb/2: progress bar option for executed random tests when the verbose option is false (first argument is a boolean, default is false; second argument is the tick number, a positive integer)
The quick_check/1
predicate uses the default option values. The
quick_check/1-2
predicates print the test results and are thus
better reserved for testing at the top-level interpreter. The
quick_check/3
predicate returns results in reified form:
passed(SequenceSeed, Discarded, Labels)
failed(Goal, SequenceSeed, TestSeed)
error(Error, Goal, SequenceSeed, TestSeed)
broken(Why, Culprit)
The broken(Why, Culprit)
result only occurs when the user-defined
testing setup is broken. For example, a non-callable template (e.g. a
non-existing predicate), an invalid option, a problem with the
pre-condition closure or with the label closure (e.g. a pre-condition
that always fails or a label that fails to classify a generated test),
or errors/failures when generating tests (e.g. due to an unknown type
being used in the template or a broken custom type arbitrary value
generator).
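For example, a minimal sketch of calling the reified quick_check/3 predicate from your own tooling and dispatching on the result term; the report_quick_check_failure/2 predicate is illustrative:

check_property(Template) :-
    lgtunit::quick_check(Template, Result, [n(25)]),
    (   Result = passed(_SequenceSeed, _Discarded, _Labels) ->
        true
    ;   % failed, error, or broken result: report and fail
        report_quick_check_failure(Template, Result),
        fail
    ).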
The Goal
argument is the random test that failed.
The SequenceSeed
argument is the starting seed used to generate the
sequence of random tests. The TestSeed
is the seed used to generate
the test that failed. Both seeds should be regarded as opaque terms.
When the test seed is equal to the sequence seed, the
failure or error occurred while testing only type edge cases. See below
how to use the seeds when testing bug fixes.
The Discarded
argument returns the number of generated tests that
were discarded for failing to comply with a pre-condition specified using the
pc/1
option. This option is especially useful when constraining or
enforcing a relation between the generated arguments and is often used
as an alternative to defining a custom type. For example, if we define the
following predicate:
condition(I) :-
between(0, 127, I).
we can then use it to filter the generated tests:
| ?- lgtunit::quick_check(integer(+byte), [pc(condition)]).
% 100 random tests passed, 94 discarded
% starting seed: seed(416,18610,17023)
yes
The Labels
argument returns a list of pairs Label-N
where N
is the number of generated tests that are classified as Label
by a
closure specified using the l/1
option. For example, assuming the
following predicate definition:
label(I, Label) :-
( I mod 2 =:= 0 ->
Label = even
; Label = odd
).
we can try:
| ?- lgtunit::quick_check(integer(+byte), [l(label), n(10000)]).
% 10000 random tests passed, 0 discarded
% starting seed: seed(25513,20881,16407)
% even: 5037/10000 (50.370000%)
% odd: 4963/10000 (49.630000%)
yes
The label statistics are key to verify that the generated tests provide
the necessary coverage. The labelling predicates can return a single
test label or a list of test labels. Labels should be ground and are
typically atoms. To examine the generated tests themselves, you can use
the verbose option, v/1
. For example:
| ?- lgtunit::quick_check(integer(+integer), [v(true), n(7), pc([I]>>(I>5))]).
% Discarded: integer(0)
% Passed: integer(786)
% Passed: integer(590)
% Passed: integer(165)
% Discarded: integer(-412)
% Passed: integer(440)
% Discarded: integer(-199)
% Passed: integer(588)
% Discarded: integer(-852)
% Discarded: integer(-214)
% Passed: integer(196)
% Passed: integer(353)
% 7 random tests passed, 5 discarded
% starting seed: seed(23671,3853,29824)
yes
When a counter-example is found, the verbose option also prints the shrink steps. For example:
| ?- lgtunit::quick_check(atom(+atomic), [v(true), ec(false)]).
% Passed: atom('dyO=Xv_MX-3b/U4KH U')
* Failure: atom(-198)
* Shrinked: atom(-99)
* Shrinked: atom(-49)
* Shrinked: atom(-24)
* Shrinked: atom(-12)
* Shrinked: atom(-6)
* Shrinked: atom(-3)
* Shrinked: atom(-1)
* Shrinked: atom(0)
* quick check test failure (at test 2 after 8 shrinks):
* atom(0)
* starting seed: seed(3172,9814,20125)
* test seed: seed(7035,19506,18186)
no
The template can be a (::)/2
, (<<)/2
, or (:)/2
qualified
callable term. When the template is an unqualified callable term, it
will be used to construct a goal to be called in the context of the
sender using the (<<)/2
debugging control construct. Another
simple example by passing a template that will trigger a failed test (as
the random::random/1
predicate always returns non-negative floats):
| ?- lgtunit::quick_check(random::random(-negative_float)).
* quick check test failure (at test 1 after 0 shrinks):
* random::random(0.09230089279334841)
* starting seed: seed(3172,9814,20125)
* test seed: seed(3172,9814,20125)
no
When QuickCheck exposes a bug in the tested code, we can use the
reported counter-example to help diagnose it and fix it. As tests are
randomly generated, we can use the starting seed reported with the
counter-example to confirm the bug fix by calling the
quick_check/2-3
predicates with the rs(Seed)
option. For
example, assume the following broken predicate definition:
every_other([], []).
every_other([_, X| L], [X | R]) :-
every_other(L, R).
The predicate is supposed to construct a list by taking every other element of an input list. Cursory testing may fail to notice the bug:
| ?- every_other([1,2,3,4,5,6], List).
List = [2, 4, 6]
yes
But QuickCheck will report a bug for lists with an odd number of elements, using a simple property that verifies that the predicate always succeeds and returns a list of integers:
| ?- lgtunit::quick_check(every_other(+list(integer), -list(integer))).
* quick check test failure (at test 2 after 0 shrinks):
* every_other([0],A)
* starting seed: seed(3172,9814,20125)
* test seed: seed(3172,9814,20125)
no
We could fix this particular bug by rewriting the predicate:
every_other([], []).
every_other([H| T], L) :-
every_other(T, H, L).
every_other([], X, [X]).
every_other([_| T], X, [X| L]) :-
every_other(T, L).
By retesting with the same test seed that uncovered the bug, the same random test that found the bug will be generated and run again:
| ?- lgtunit::quick_check(
every_other(+list(integer), -list(integer)),
[rs(seed(3172,9814,20125))]
).
% 100 random tests passed, 0 discarded
% starting seed: seed(3172,9814,20125)
yes
Still, after verifying the bug fix, it is also a good idea to re-run the tests using the sequence seed instead, as bug fixes sometimes cause regressions elsewhere.
When retesting using the logtalk_tester
automation script, the
starting seed can be set using the -r
option. For example:
$ logtalk_tester -r "seed(3172,9814,20125)"
We could now move on to other properties that the predicate should comply with (e.g. all elements of the output list being present in the input list). Often, both traditional unit tests and QuickCheck tests are used, complementing each other to ensure the required code coverage.
Another example using a Prolog module predicate:
| ?- lgtunit::quick_check(
pairs:pairs_keys_values(
+list(pair(atom,integer)),
-list(atom),
-list(integer)
)
).
% 100 random tests passed, 0 discarded
% starting seed: seed(3172,9814,20125)
yes
As illustrated by the examples above, properties are expressed using predicates. In the simplest cases, that can be the predicate being tested itself. But, in general, it will be an auxiliary predicate calling the predicate or predicates being tested and checking properties that the results must comply with.
The QuickCheck test dialects and predicates take as argument the mode
template for a property, generate random values for each input argument
based on the type information, and check each output argument. For
common types, the implementation tries first (by default) common edge
cases (e.g. empty atom, empty list, or zero) before generating arbitrary
values. When the output arguments check fails, the QuickCheck
implementation tries (by default) up to 64 shrink operations of the
counter-example to report a simpler case to help debugging the failed
test. Edge cases, generation of arbitrary terms, and term shrinking
make use of the library arbitrary
category via the type
object
(both entities can be extended by the user by defining clauses for
multifile predicates).
The mode template syntax is the same used in the info/2
predicate
directives with an additional notation, {}/1
, for passing argument
values as-is instead of generating random values for these arguments.
For example, assume that we want to verify the type::valid/2
predicate, which takes a type as its first argument. Randomly generating
types would be cumbersome at best, but the main problem is that we
need to generate random values for the second argument according to the
first argument. Using the {}/1
notation we can solve this problem
for any specific type, e.g. integer, by writing:
| ?- lgtunit::quick_check(type::valid({integer}, +integer)).
We can also test all (ground, i.e. non-parametric) types with arbitrary value generators by writing:
| ?- forall(
(type::type(Type), ground(Type), type::arbitrary(Type)),
lgtunit::quick_check(type::valid({Type}, +Type))
).
You can find the list of the basic types supported for use in the
template in the API documentation for the library entities type
and
arbitrary
. Note that other library entities, including third-party
or your own, can contribute with additional type definitions as both
type
and arbitrary
entities are user extensible by defining
clauses for their multifile predicates.
The user can define new types to use in the property mode templates of
QuickCheck tests by defining clauses for the type
library object and the arbitrary
library category multifile
predicates. QuickCheck will use the latter to generate arbitrary input
arguments and the former to verify output arguments. As a toy example,
assume that the property mode template has an argument of type bit
with possible values 0
and 1
. We would then need to define:
:- multifile(type::type/1).
type::type(bit).
:- multifile(type::check/2).
type::check(bit, Term) :-
once((Term == 0; Term == 1)).
:- multifile(arbitrary::arbitrary/1).
arbitrary::arbitrary(bit).
:- multifile(arbitrary::arbitrary/2).
arbitrary::arbitrary(bit, Arbitrary) :-
random::member(Arbitrary, [0, 1]).
Skipping tests
A test object can define the condition/0
predicate (which defaults
to true
) to test if some necessary condition for running the tests
holds. The tests are skipped if the call to this predicate fails or
generates an error.
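For example, to run a test set only when the backend provides unbounded integer arithmetic, the test object could define:

condition :-
    current_prolog_flag(bounded, false).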
Individual tests that for some reason should be unconditionally skipped
can have the test clause head prefixed with the (-)/1
operator. For
example:
- test(not_yet_ready) :-
...
In this case, it's a good idea to use the test/3
dialect with a
note/1
option that briefly explains why the test is being skipped.
For example:
- test(xyz_reset, true, [note('Feature xyz reset not yet implemented')]) :-
...
The number of skipped tests is reported together with the numbers of
passed and failed tests. To skip a test depending on some condition, use
the test/3
dialect and the condition/1
option. For example:
test(test_id, true, [condition(current_prolog_flag(bounded,true))]) :-
...
The test is skipped if the condition goal fails or generates an error. The conditional compilation directives can also be used in alternative but note that in this case there will be no report on the number of skipped tests.
Selecting tests
While debugging an application, we often want to temporarily run just a
selection of relevant tests. This is especially useful when running all
the tests would slow down and distract from testing fixes for a specific
issue. This can be accomplished by prefixing the clause heads of the
selected tests with the (+)/1
operator. For example:
:- object(tests,
extends(lgtunit)).
cover(ack).
test(ack_1, true(Result == 11)) :-
ack::ack(2, 4, Result).
+ test(ack_2, true(Result == 61)) :-
ack::ack(3, 3, Result).
test(ack_3, true(Result == 125)) :-
ack::ack(3, 4, Result).
:- end_object.
In this case, only the ack_2 test
would run. Just be careful to remove
all (+)/1
test prefixes when done debugging the issue that prompted
you to run just the selected tests. Afterwards, be sure to run all the tests
to ensure there are no regressions introduced by your fixes.
Checking test goal results
Checking test goal results can be performed using the test/2-3
supported outcomes such as true(Assertion)
and
deterministic(Assertion)
. For example:
test(compare_3_order_less, deterministic(Order == (<))) :-
compare(Order, 1, 2).
For the other test dialects, checking test goal results can be performed
by calling the assertion/1-2
utility predicates or by writing the
checking goals directly in the test body. For example:
test(compare_3_order_less) :-
compare(Order, 1, 2),
^^assertion(Order == (<)).
or:
succeeds(compare_3_order_less) :-
compare(Order, 1, 2),
Order == (<).
Using assertions is, however, preferable to directly checking test results in the test body, as it facilitates debugging by printing the unexpected results when the assertions fail.
The assertion/1-2
utility predicates are also useful for the
test/2-3
dialects when we want to check multiple assertions in the
same test. For example:
test(dictionary_clone_4_01, true) :-
as_dictionary([], Dictionary),
clone(Dictionary, DictionaryPairs, Clone, ClonePairs),
empty(Clone),
^^assertion(original_pairs, DictionaryPairs == []),
^^assertion(clone_pairs, ClonePairs == []).
Ground results can be compared using the standard ==/2
term equality
built-in predicate. Non-ground results can be compared using the
variant/2
predicate provided by lgtunit
. The standard
subsumes_term/2
built-in predicate can be used when testing a
compound term structure while abstracting some of its arguments.
Floating-point numbers can be compared using the =~=/2
,
approximately_equal/3
, essentially_equal/3
, and
tolerance_equal/4
predicates provided by lgtunit
. Using the
=/2
term unification built-in predicate is almost always an error as
it would mask test goals failing to bind output arguments. The
lgtunit
tool implements a linter check for the use of unification
goals in test outcome assertions. In the rare cases that a unification
goal is intended, wrapping the (=)/2
goal using the {}/1
control
construct avoids the linter warning.
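For example, a sketch of a floating-point assertion using the (=~=)/2 close-equality predicate, assuming the uses/2 directive importing the operator and predicate shown in the utility predicates section below:

test(float_addition_close_equality, true(Sum =~= 0.3)) :-
    Sum is 0.1 + 0.2.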
When the meta-argument of the assertion/1-2
predicates is a call to a
local predicate (in the tests object), you need to call them using the
(::)/2
message-sending control construct instead of the (^^)/2
super call control construct. This is necessary as super calls
preserve the sender and the tests are implicitly run by the
lgtunit
object sending a message to the tests object. For example:
:- uses(lgtunit, [
assertion/1
]).
test(my_test_id, true) :-
foo(X, Y),
assertion(consistent(X, Y)).
consistent(X, Y) :-
...
In this case, the sender is the tests object and the assertion/1
meta-predicate will call the local consistent/2
predicate in the
expected context.
Testing local predicates
The (<<)/2
debugging control construct can be used to access and
test object local predicates (i.e. predicates without a scope
directive). In this case, make sure that the context_switching_calls
compiler flag is set to allow
for those objects. This is seldom
required, however, as local predicates are usually auxiliary predicates
called by public predicates and thus tested when testing those public
predicates. The code coverage support can pinpoint any local predicate
clause that is not being exercised by the tests.
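For example, a hedged sketch testing a local helper predicate; both the paths object and its split_path/2 local predicate are illustrative:

test(split_path_2_local_helper, true(Components == [usr, local, bin])) :-
    paths<<split_path('usr/local/bin', Components).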
Testing non-deterministic predicates
For testing non-deterministic predicates (with a finite and manageable
number of solutions), you can wrap the test goal using the standard
findall/3
predicate to collect all solutions and check against the
list of expected solutions. When the expected solutions are a set, use
the standard setof/3 predicate instead.
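For example, a sketch collecting all solutions of the library list::member/2 predicate (assuming the list library is loaded):

test(member_2_all_solutions, true(Solutions == [1, 2, 3])) :-
    findall(X, list::member(X, [1, 2, 3]), Solutions).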
If you want to check that all solutions of a non-deterministic predicate
satisfy an assertion, use the test/2
or test/3
test dialect with
the all(Assertion)
outcome. For example:
test(atom_list, all(atom(Item))) :-
member(Item, [a, b, c]).
See also the next section on testing generators.
If you want to check that a solution exists for a non-deterministic
predicate that satisfies an assertion, use the test/2
or test/3
test dialect with the exists(Assertion)
outcome. For example:
test(at_least_one_atom, exists(atom(Item))) :-
member(Item, [1, foo(2), 3.14, abc, 42]).
Testing generators
To test all solutions of a predicate that acts as a generator, we can
use either the all/1
outcome or the forall/2
predicate as the
test goal with the assertion/2
predicate called to report details on
any solution that fails the test. For example:
test(test_solution_generator, all(test(X,Y,Z))) :-
generator(X, Y, Z).
or:
:- uses(lgtunit, [assertion/2]).
...
test(test_solution_generator_2) :-
forall(
generator(X, Y, Z),
assertion(generator(X), test(X,Y,Z))
).
While using the all/1
outcome results in a more compact test
definition, using the forall/2
predicate allows customizing the
assertion description. In the example above, we use the generator(X)
description instead of the test(X,Y,Z)
description implicit when we
use the all/1
outcome.
Testing input/output predicates
Extensive support for testing input/output predicates is provided, based on similar support found in the Prolog conformance testing framework written by Péter Szabó and Péter Szeredi.
Two sets of predicates are provided, one for testing text input/output
and one for testing binary input/output. In both cases, temporary files
(possibly referenced by a user-defined alias) are used. The predicates
allow setting, checking, and cleaning text/binary input/output. These
predicates are declared protected and thus called using the ^^ super call
control construct.
As an example of testing an input predicate, consider the standard
get_char/1
predicate. This predicate reads a single character (atom)
from the current input stream. Some tests for basic functionality could
be:
test(get_char_1_01, true(Char == 'q')) :-
^^set_text_input('qwerty'),
get_char(Char).
test(get_char_1_02, true(Assertion)) :-
^^set_text_input('qwerty'),
get_char(_Char),
^^text_input_assertion('werty', Assertion).
As you can see in the above example, the testing pattern consists of
setting the input for the predicate being tested, calling it, and then
checking the results. It is also possible to work with streams other
than the current input/output streams by using the lgtunit
predicate
variants that take a stream alias as argument. For example, when testing
the standard get_char/2
predicate, we could write:
test(get_char_2_01, true(Char == 'q')) :-
^^set_text_input(in, 'qwerty'),
get_char(in, Char).
test(get_char_2_02, true(Assertion)) :-
^^set_text_input(in, 'qwerty'),
get_char(in, _Char),
^^text_input_assertion(in, 'werty', Assertion).
Testing output predicates follows a similar pattern by using instead the
set_text_output/1-2
and text_output_assertion/2-3
predicates.
For example:
test(put_char_2_02, true(Assertion)) :-
^^set_text_output(out, 'qwert'),
put_char(out, y),
^^text_output_assertion(out, 'qwerty', Assertion).
The set_text_output/1
predicate diverts only the standard output
stream (to a temporary file) using the standard set_output/1
predicate. Most backend Prolog systems also support writing to the de
facto standard error stream. But there's no standard solution to divert
this stream. However, several systems provide a set_stream/2 or
similar predicate that can be used for stream redirection. For example,
assume that you want to test a backend Prolog system warning, written to
user_error, when an initialization/1 directive fails.
A hypothetical test could be:
test(singletons_warning, true(Assertion)) :-
^^set_text_output(''),
current_output(Stream),
set_stream(Stream, alias(user_error)),
consult(broken_file),
^^text_output_assertion('WARNING: initialization/1 directive failed', Assertion).
For testing binary input/output predicates, equivalent testing predicates are provided. There is also a small set of helper predicates for dealing with stream handles and stream positions. For testing with files instead of streams, testing predicates are provided that allow creating text and binary files with given contents and check text and binary files for expected contents.
For more practical examples, check the included tests for Prolog standard conformance of built-in input/output predicates.
Suppressing tested predicates output
Sometimes predicates being tested output text or binary data that at
best clutters testing logs and at worst can interfere with parsing of
test logs. If that output itself is not under testing, you can suppress
it by using the goals ^^suppress_text_output
or
^^suppress_binary_output
at the beginning of the tests. For example:
test(proxies_04, true(Color == yellow)) :-
^^suppress_text_output,
{circle('#2', Color)}::print.
The suppress_text_output/0
and suppress_binary_output/0
predicates work by redirecting standard output to the operating-system
null device. But the application may also output to e.g. user_error
and other streams. If this output must also be suppressed, several
alternatives are described next.
Output of expected warnings can be suppressed by turning off the corresponding linter flags. In this case, it is advisable to restrict the scope of the flag value changes as much as possible.
Output of expected compiler errors can be suppressed by defining
suitable clauses for the logtalk::message_hook/4
hook predicate. For
example:
:- multifile(logtalk::message_hook/4).
:- dynamic(logtalk::message_hook/4).
% ignore expected domain error
logtalk::message_hook(compiler_error(_,_,error(domain_error(foo,bar),_)), error, core, _).
In this case, it is advisable to restrict the scope of the clauses as
much as possible to exact exception terms. For the exact message terms,
see the core_messages
category source file. Defining this hook
predicate can also be used to suppress all messages from a given
component. For example:
:- multifile(logtalk::message_hook/4).
:- dynamic(logtalk::message_hook/4).
logtalk::message_hook(_Message, _Kind, code_metrics, _Tokens).
Note that there's no portable solution to suppress all output.
However, several systems provide a set_stream/2
or similar predicate
that can be used for stream redirection. Check the documentation of the
backend Prolog systems you're using for details.
Tests with timeout limits
There's no portable way to call a goal with a timeout limit. However, some backend Prolog compilers provide this functionality:
- B-Prolog: time_out/3 built-in predicate
- ECLiPSe: timeout/3 and timeout/7 library predicates
- XVM: call_with_timeout/2-3 built-in predicates
- SICStus Prolog: time_out/3 library predicate
- SWI-Prolog: call_with_time_limit/2 library predicate
- Trealla Prolog: call_with_time_limit/2 and time_out/3 library predicates
- XSB: timed_call/2 built-in predicate
- YAP: time_out/3 library predicate
Logtalk provides a timeout
portability library implementing a simple
abstraction for those backend Prolog compilers.
The logtalk_tester
automation script accepts a timeout option that
can be used to set a limit per test set.
Setup and cleanup goals
A test object can define setup/0
and cleanup/0
goals. The
setup/0
predicate is called, when defined, before running the object
unit tests. The cleanup/0
predicate is called, when defined, after
running all the object unit tests. The tests are skipped when the setup
goal fails or throws an error. For example:
cleanup :-
this(This),
object_property(This, file(_,Directory)),
atom_concat(Directory, serialized_objects, File),
catch(ignore(os::delete_file(File)), _, true).
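A setup/0 definition follows the same pattern. For example, a sketch that creates a scratch directory next to the test object source file, assuming the os library is loaded and its make_directory/1 predicate:

setup :-
    % find the directory of the test object source file
    this(This),
    object_property(This, file(_, Directory)),
    % create a scratch directory used by the tests
    atom_concat(Directory, tmp, Path),
    os::make_directory(Path).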
Per test setup and cleanup goals can be defined using the test/3
dialect and the setup/1
and cleanup/1
options. The test is
skipped when the setup goal fails or throws an error. Note that a broken
test cleanup goal doesn't affect the test but may adversely affect any
following tests. Variables in the setup and cleanup goals are shared
with the test body.
Test annotations
It's possible to define per unit and per test annotations to be printed
after the test results or when tests are skipped. This is particularly
useful when some units or some unit tests may be run while still being
developed. Annotations can be used to pass additional information to a
user reviewing test results. By intercepting the unit test framework
message printing calls (using the message_hook/4
hook predicate),
test automation scripts and integrating tools can also access these
annotations.
Units can define a global annotation using the predicate note/1
. To
define per test annotations, use the test/3
dialect and the
note/1
option. For example, you can inform why a test is being
skipped by writing:
- test(foo_1, true, [note('Waiting for Deep Thought answer')]) :-
...
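A unit-level annotation, printed with the test set results, is defined as a simple note/1 fact. For example:

note('Tests assume a case-sensitive file system').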
Another common use is to return the execution time of one of the test sub-goals. For example:
test(foobar, true, [note(bar(seconds-Time))]) :-
foo(...),
benchmark(bar(...), Time).
Annotations are written, by default, between parentheses, after and on the same line as the test results.
Test execution times and memory usage
Individual test CPU and wall execution times (in seconds) are reported
by default when running the tests. Total CPU and wall execution times
for passed and failed tests are reported after the tests complete.
The starting and ending date and time of a test set run are also
reported by default. The lgtunit
object also provides several public
benchmarking predicates that can be useful for e.g. reporting test
sub-goals execution times using either CPU or wall clocks. When running
multi-threaded code, the CPU time may or may not include all threads CPU
time depending on the backend.
Test memory usage is not reported by default due to the lack of a
portable solution to access memory data. However, several backend Prolog
systems provide a statistics/2
or similar predicate that can be used
for a custom solution. Depending on the system, individual keys may be
provided for each memory area (heap, trail, atom table, …).
Aggregating keys may also be provided. As a hypothetical example,
assume you're running Logtalk with a backend providing a
statistics/2
predicate with a memory_used
key:
test(ack_3, true(Result == 125), [note(memory-Memory)]) :-
statistics(memory_used, Memory0),
ack::ack(3, 4, Result),
statistics(memory_used, Memory1),
Memory is Memory1 - Memory0.
Consult the documentation of the backend Prolog systems for actual details.
Working with test data files
Frequently tests make use of test data files that are usually stored in
the test set directory or in sub-directories. These data files are
referenced using their relative paths. But to allow the tests to run
independently of the Logtalk process current directory, the relative
paths often must be expanded into an absolute path before being passed
to the predicates being tested. The file_path/2
protected predicate
can be used in the test definitions to expand the relative paths. For
example:
% check that the encoding/1 option is accepted
test(lgt_unicode_open_4_01, true) :-
^^file_path(sample_utf_8, Path),
open(Path, write, Stream, [encoding('UTF-8')]),
close(Stream).
The absolute path is computed relative to the path of self, i.e. relative to the path of the test object that received the message that runs the tests.
It's also common for tests to create temporary files and directories
that should be deleted after the tests complete. The clean_file/1
and clean_directory/1
protected predicates can be used for this
purpose. For example, assuming that the tests create a foo.txt
text
file and a tmp
directory in the same directory of the tests object:
cleanup :-
^^clean_file('foo.txt'),
^^clean_directory('tmp').
Similar to the file_path/2
predicate, relative paths are interpreted
as relative to the path of the test object. The clean_file/1 predicate also
closes any open stream connected to the file before deleting it.
Flaky tests
Flaky tests are tests that pass or fail non-deterministically, usually
due to external conditions (e.g. computer or network load). Thus, flaky
tests often don't result from bugs in the code being tested itself but
from test execution conditions that are not predictable. The flaky/0
test option declares a test to be flaky. For example:
test(foo, true, [flaky]) :-
...
For backwards compatibility, the note/1
annotation can also be used
to alert that a test failure is for a flaky test when its argument is an
atom containing the sub-atom flaky
.
The testing automation support outputs the text [flaky]
when
reporting failed flaky tests. Moreover, the logtalk_tester
automation script will ignore failed flaky tests when setting its exit
status.
Mocking
Sometimes the code being tested performs complex tasks that are not feasible or desirable when running tests. For example, the code may perform a login operation requiring the user to provide a username and a password using some GUI widget. In this case, the tests may require the login operation to still be performed but using canned data (also simplifying testing automation). That is, we want to mock (as in imitate) the login procedure. Ideally, this should be accomplished without requiring any changes to the code being tested. Logtalk provides two solutions that can be used for mocking: term-expansion and hot patching. A third solution is possible if the code we want to mock uses the message printing mechanism.
Using the term-expansion mechanism, we would define a hook object that expands the login predicate into a fact:
:- object(mock_login,
implements(expanding)).
term_expansion((login(_, _) :- _), login(jdoe, test123)).
:- end_object.
The tests driver file would then load the application object responsible for user management using this hook object:
:- initialization((
...,
logtalk_load(mock_login),
logtalk_load(user_management, [hook(mock_login)]),
...
)).
Using hot patching, we would define a complementing category patching the object that defines the login predicate:
:- category(mock_login,
complements(user_management)).
login(jdoe, test123).
:- end_category.
The tests driver file would then set the complements
flag to
allow
and load the patch after loading application code:
:- initialization((
...,
set_logtalk_flag(complements, allow),
logtalk_load(application),
logtalk_load(mock_login),
...
)).
There are pros and cons for each solution. Term-expansion works by defining hook objects that are used at compile time, while hot patching happens at runtime. Complementing categories can also be dynamically created, stacked, and abolished. Hot patching disables static binding optimizations, but that's usually not a problem as the code being tested is often compiled in debug mode to collect code coverage data. Two advantages of the term-expansion solution are that it allows defining conditions for expanding terms and goals and that it can replace both predicate definitions and predicate calls. Limitations in the current Prolog standards prevent patching the callers of the local predicates being patched. But often both solutions can be used, with the choice depending on code clarity and user preference. See the Handbook sections on term-expansion and hot patching for more details on these mechanisms.
In those cases where the code we want to mock uses the message printing
mechanism, the solution is to intercept and rewrite the messages being
printed and/or the questions being asked using the
logtalk::message_hook/4
and logtalk::question_hook/6
hook
predicates.
Debugging messages in tests
Sometimes it is useful to write debugging or logging messages from tests when running them manually. But those messages are better suppressed when running the tests in automated mode. A common solution is to use debug meta-messages. For example:
:- uses(logtalk, [
print_message(debug, my_app, Message) as dbg(Message)
]).
test(some_test_id, ...) :-
...,
dbg('Some intermediate value'-Value),
...,
dbg([Stream]>>custom_print_goal(Stream, ...)),
...
The messages are only printed (and the user-defined printing goals are
only called) when the debug
flag is turned on. Note that this
doesn't require compiling the tests in debug mode: you simply toggle the
flag to toggle the debug messages. Also note that the
print_message/3
goals are suppressed by the compiler when compiling with
the optimize
flag turned on.
Debugging failed tests
Debugging of failed unit tests is simplified by using test assertions as
the reason for the assertion failures is printed out. Thus, use
preferably the test/2-3
dialects with true(Assertion)
,
deterministic(Assertion)
, subsumes(Expected, Result)
, or
variant(Term1, Term2)
outcomes. If a test checks multiple
assertions, you can use the predicate assertion/2
in the test body.
In the case of QuickCheck tests, the v(true)
verbose option can be
used to print the generated test case that failed if necessary.
If the assertion failures don't provide enough information, you can use
the debugger
tool to debug failed unit tests. Start by compiling the
unit test objects and the code being tested in debug mode. Load the
debugger and trace the test that you want to debug. For example,
assuming your tests are defined in a tests
object and that the
identifier of the test to be debugged is test_foo
:
| ?- logtalk_load(debugger(loader)).
...
| ?- debugger::trace.
...
| ?- tests::run(test_foo).
...
You can also compile the code and the tests in debug mode but without
using the hook/1
compiler option for the tests compilation. Assuming
that the context_switching_calls
flag is set to allow
, you can
then use the (<<)/2
debugging control construct to debug the tests.
For example, assuming that the identifier of the test to be debugged is
test_foo
and that you used the test/1
dialect:
| ?- logtalk_load(debugger(loader)).
...
| ?- debugger::trace.
...
| ?- tests<<test(test_foo).
...
In the more complicated cases, it may be worth defining
loader_debug.lgt
and tester_debug.lgt
driver files that load
code and tests in debug mode and also load the debugger.
Code coverage
If you want entity predicate clause coverage information to be collected
and printed, you will need to compile the entities that you're testing
using the flags debug(on)
and source_data(on)
. Be aware,
however, that compiling in debug mode results in a performance penalty.
A single test object may include tests for one or more entities
(objects, protocols, and categories). The entities being tested by a
unit test object for which code coverage information should be collected
must be declared using the cover/1
predicate. For example, to
collect code coverage data for the objects foo
and bar
include
in the tests object the two clauses:
cover(foo).
cover(bar).
Code coverage is listed using the predicate clause indexes (counting
from one). For example, using the points
example in the Logtalk
distribution:
% point: default_init_option/1 - 2/2 - (all)
% point: instance_base_name/1 - 1/1 - (all)
% point: move/2 - 1/1 - (all)
% point: position/2 - 1/1 - (all)
% point: print/0 - 1/1 - (all)
% point: process_init_option/1 - 1/2 - [1]
% point: position_/2 - 0/0 - (all)
% point: 7 out of 8 clauses covered, 87.500000% coverage
The numbers after the predicate indicators represent the clauses
covered and the total number of clauses. E.g. for the
process_init_option/1
predicate, the tests cover 1 out of 2 clauses.
After these numbers, we either get (all)
telling us that all clauses
are covered or a list of indexes for the covered clauses. E.g. only the
first clause for the process_init_option/1
predicate, [1]
.
Summary clause coverage numbers are also printed for entities and for
clauses across all entities.
In the printed predicate clause coverage information, you may get a total number of clauses smaller than the number of covered clauses. This results from the use of dynamic predicates with clauses asserted at runtime. You may easily identify dynamic predicates in the results as their clauses often have an initial count equal to zero.
The list of indexes of the covered predicate clauses can be quite long. Some backend Prolog compilers provide a flag or a predicate to control the depth of printed terms that can be useful:
- CxProlog: write_depth/2 predicate
- ECLiPSe: print_depth flag
- XVM 3.2.0 or later: answer_write_options flag
- SICStus Prolog: toplevel_print_options flag
- SWI-Prolog 7.1.10 or earlier: toplevel_print_options flag
- SWI-Prolog 7.1.11 or later: answer_write_options flag
- Trealla Prolog: answer_write_options flag
- XSB: set_file_write_depth/1 predicate
- YAP: write_depth/2-3 predicates
Code coverage is only available when testing Logtalk code. But Prolog
modules can often be compiled as Logtalk objects and plain Prolog code
may be wrapped in a Logtalk object. For example, assuming a
module.pl
module file, we can compile and load the module as an
object by simply calling:
| ?- logtalk_load(module).
...
The module exported predicates become object public predicates. For a
plain Prolog file, say plain.pl
, we can define a Logtalk object that
wraps the code using an include/1
directive:
:- object(plain).
:- include('plain.pl').
:- end_object.
The object can also declare the top Prolog predicates as public to
simplify writing the tests. As an alternative, we can use the
object_wrapper_hook
provided by the hook_objects
library:
| ?- logtalk_load(hook_objects(loader)).
...
| ?- logtalk_load(plain, [hook(object_wrapper_hook)]).
...
These workarounds may thus allow generating code coverage data also for
Prolog code by defining tests that use the (<<)/2
debugging control
construct to call the Prolog predicates.
See also the section below on exporting code coverage results to XML files, which can be easily converted and published as e.g. HTML reports.
Utility predicates
The lgtunit
tool provides several public utility predicates to
simplify writing unit tests and for general use:
- variant(Term1, Term2) - To check when two terms are a variant of each other (e.g. to check expected test results against actual results when they contain variables).
- assertion(Goal) - To generate an exception in case the goal argument fails or throws an error.
- assertion(Description, Goal) - To generate an exception in case the goal argument fails or throws an error (the first argument allows assertion failures to be distinguished when using multiple assertions).
- approximately_equal(Number1, Number2) - For number approximate equality using the epsilon arithmetic constant value.
- approximately_equal(Number1, Number2, Epsilon) - For number approximate equality. Weaker equality than essential equality.
- essentially_equal(Number1, Number2, Epsilon) - For number essential equality. Stronger equality than approximate equality.
- tolerance_equal(Number1, Number2, RelativeTolerance, AbsoluteTolerance) - For number equality within tolerances.
- Number1 =~= Number2 - For number (or list of numbers) close equality (usually floating-point numbers).
- benchmark(Goal, Time) - For timing a goal.
- benchmark_reified(Goal, Time, Result) - Reified version of benchmark/2.
- benchmark(Goal, Repetitions, Time) - For finding the average time to prove a goal.
- benchmark(Goal, Repetitions, Clock, Time) - For finding the average time to prove a goal using a cpu or a wall clock.
- deterministic(Goal) - For checking that a predicate succeeds without leaving a choice-point.
- deterministic(Goal, Deterministic) - Reified version of the deterministic/1 predicate.
The assertion/1-2
predicates can be used in the body of tests where using two or more
assertions is convenient. They can also be used in the body of tests
written using the test/1
, succeeds/1
, and deterministic/1
dialects to help differentiate between the test goal and the checking
of its results, and to provide more informative test failure messages.
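For example, the following test/1 test (a sketch using the standard atom_concat/3 built-in predicate) combines the test goal with assertion/2 calls that check its results, assuming a uses/2 directive for lgtunit's assertion/2 such as the one shown below:
test(atom_concat_3_deconstruct) :-
    % decompose an atom and check the results with named assertions
    atom_concat(Prefix, ether, together),
    assertion(prefix, Prefix == tog),
    assertion(roundtrip, atom_concat(tog, ether, together)).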
When the assertion, benchmarking, and deterministic meta-predicates call
a local predicate of the tests object, you must call them using an
implicit or explicit message instead of using a super call. For
example, to use an implicit message to call the assertion/1-2
meta-predicates, add the following directive to the tests object:
:- uses(lgtunit, [assertion/1, assertion/2]).
The reason this is required is that meta-predicate goal arguments are
always called in the context of the sender, which would be the
lgtunit
object in the case of a (^^)/2
call (as it preserves both self and sender, and the tests are
internally run by a message sent from the lgtunit
object to the tests object).
As the benchmark/2-4
predicates are meta-predicates, turning on the
optimize
compiler flag is advised to avoid runtime compilation of
the meta-argument, which would add an overhead to the timing results.
But this advice conflicts with collecting code coverage data, which
requires compilation in debug mode. The solution is to use separate test
objects for benchmarking and for code coverage. Note that the CPU and
wall execution times (in seconds) for each individual test are reported
by default when running the tests.
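As a sketch, a test using the benchmark/4 predicate (called here via an explicit message to lgtunit) to check the average time of a goal over a number of repetitions; the timed goal and the time bound are illustrative, and the latter is machine-dependent:
test(atom_codes_2_average_time) :-
    % average cpu time, in seconds, over 10000 repetitions;
    % the 0.001 bound is only an illustration
    lgtunit::benchmark(atom_codes(benchmark, _), 10000, cpu, Time),
    Time < 0.001.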
The (=~=)/2
predicate is typically used by adding the following
directive to the object (or category) calling it:
:- uses(lgtunit, [
op(700, xfx, =~=), (=~=)/2
]).
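For example, given the directive above, a test comparing a computed floating-point value with its expected approximation (the computation is just an illustration):
test(pi_estimate) :-
    % 4 * atan(1) evaluates to an approximation of pi
    Pi is 4.0 * atan(1.0),
    Pi =~= 3.141592653589793.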
Consult the lgtunit
object API documentation for more details on
these predicates.
Exporting test results in xUnit XML format
To output test results in the xUnit XML format (from JUnit; see e.g.
https://github.com/windyroad/JUnit-Schema or
https://llg.cubic.org/docs/junit/), simply load the xunit_output.lgt
file before running the tests. This file defines an object,
xunit_output
, that intercepts and rewrites unit test execution
messages, converting them to the xUnit XML format.
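For example, assuming the default library aliases, the object can be loaded before loading and running the tests using the query:
| ?- logtalk_load(lgtunit(xunit_output)).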
To export the test results to a file using the xUnit XML format, simply
load the xunit_report.lgt
file before running the tests. A file
named xunit_report.xml
will be created in the same directory as the
object defining the tests. When running a set of test suites as a single
unified suite (using the run_test_sets/1
predicate), the single
xUnit report is created in the directory of the first test suite object
in the set.
To use the xUnit.net v2 XML format instead
(https://xunit.net/docs/format-xml-v2), load either the
xunit_net_v2_output.lgt
file or the xunit_net_v2_report.lgt
file.
When using the logtalk_tester
automation script, use either the
-f xunit
option or the -f xunit_net_v2
option to generate the
xunit_report.xml
files in the test set directories.
There are several third-party xUnit report converters that can generate HTML files for easy browsing. For example:
https://docs.qameta.io/allure-report/ (supports multiple reports)
https://github.com/Zir0-93/xunit-to-html (supports multiple test sets in a single report)
Exporting test results in the TAP output format
To output test results in the TAP (Test Anything Protocol) format,
simply load the tap_output.lgt
file before running the tests. This
file defines an object, tap_output
, that intercepts and rewrites
unit test execution messages, converting them to the TAP output format.
To export the test results to a file using the TAP (Test Anything
Protocol) output format, load instead the tap_report.lgt
file before
running the tests. A file named tap_report.txt
will be created in
the same directory as the object defining the tests.
When using the logtalk_tester
automation script, use the -f tap
option to generate the tap_report.txt
files in the test set
directories.
When using the test/3
dialect with the TAP format, a note/1
option whose argument is an atom starting with a TODO
or todo
word results in a test report with a TAP TODO directive.
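For example, the following test/3 sketch (with an illustrative note text) would be reported with a TAP TODO directive:
test(unicode_atom_length, true, [note('TODO: extend to supplementary plane characters')]) :-
    % the note option is only used for reporting; the test runs normally
    atom_length(abc, Length),
    Length == 3.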
When running a set of test suites as a single unified suite, the single TAP report is created in the directory of the first test suite object in the set.
There are several third-party TAP report converters that can generate HTML files for easy browsing.
Generating Allure reports
A logtalk_allure_report.sh
Bash shell script and a
logtalk_allure_report.ps1
PowerShell script are provided for
generating Allure reports
(version 2.26.0 or later required). This requires exporting test results
in the xUnit XML format. A simple usage example (assuming a current
directory containing tests):
$ logtalk_tester -p gnu -f xunit
$ logtalk_allure_report
$ allure open
The logtalk_allure_report
script supports command-line options to
pass the tests directory (i.e. the directory where the
logtalk_tester
script was run), the directory where all the xUnit report files are
collected for generating the report, the directory where
the report is to be saved, and the report title (see the script man page
or type logtalk_allure_report -h
). The script also supports saving
the history of past test runs. In this case, a persistent location for
both the results and report directories must be used.
It's also possible to use the script just to collect the xUnit report
files generated by lgtunit
and delegate the actual generation of the
report to e.g. an Allure Docker container or a Jenkins plug-in.
In this case, we would use the logtalk_allure_report
script option
to only perform the preprocessing step:
$ logtalk_allure_report -p
The scripts also support passing environment pairs, which are displayed in the environment pane of the generated Allure reports. This feature can be used to pass e.g. the backend name and the backend version or git commit hash. The option syntax differs, however, between the two scripts. For example, using the Bash script:
$ logtalk_allure_report -- Backend='GNU Prolog' Version=1.5.0
Or:
$ logtalk_allure_report -- Project='Deep Thought' Commit=`git rev-parse --short HEAD`
In the case of the PowerShell script, the pairs are passed comma separated inside a string:
PS> logtalk_allure_report -e "Backend='GNU Prolog',Version=1.5.0"
Or:
PS> logtalk_allure_report -e "Project='Deep Thought',Commit=bf166b6"
To show tests run trends in the report (e.g. when running the tests for each application source code commit), save the processed test results and the report data to permanent directories. For example:
$ logtalk_allure_report \
-i "$HOME/my_project/allure-results" \
-o "$HOME/my_project/allure-report"
$ allure open "$HOME/my_project/allure-report"
Note that Allure cleans the report directory when generating a new report. Be careful to always specify a dedicated directory to prevent accidental data loss.
The generated reports can include links to the tests source code.
This requires using the logtalk_tester
shell script option that
allows passing the base URL for those links. This option needs to be
used together with the option to suppress the tests directory prefix so
that the links can be constructed by appending the tests file relative
path to the base URL. For example, assuming that you want to generate a
report for the tests included in the Logtalk distribution when using the
GNU Prolog backend:
$ cd $LOGTALKUSER
$ logtalk_tester \
-p gnu \
-f xunit \
-s "$LOGTALKUSER" \
-u "https://github.com/LogtalkDotOrg/logtalk3/tree/3e4ea295986fb09d0d4aade1f3b4968e29ef594e"
The use of a git hash in the base URL ensures that the generated links will always show the exact versions of the tests that were run. The links include the line number for the tests in the tests files (assuming that the git repo is hosted on a Bitbucket, GitHub, or GitLab server). But note that not all supported backends provide accurate line numbers.
It's also possible to generate single-file reports. For example:
$ logtalk_allure_report -s -t "My Amazing Tests Report"
There are some caveats that users must be aware of when generating
Allure reports. First, Allure expects test names to be unique across
different test sets. If there are two tests with the same name in two
different test sets, only one of them will be reported. Second, when
using the xunit
format, dates are reported as MM/DD/YYYY. Finally, when using
the xunit_net_v2
format, tests are reported in a random order
instead of their run order and dates are displayed as "unknown" in the
overview page.
Exporting code coverage results in XML format
To export code coverage results in XML format, load the
coverage_report.lgt
file before running the tests. A file named
coverage_report.xml
will be created in the same directory as the
object defining the tests.
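For example, assuming the default library aliases, the file can be loaded before loading and running the tests using the query:
| ?- logtalk_load(lgtunit(coverage_report)).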
The XML file can be opened in most web browsers (with the notable
exception of Google Chrome) by copying to the same directory the
coverage_report.dtd
and coverage_report.xsl
files found in the
tools/lgtunit
directory (when using the logtalk_tester
script,
these two files are copied automatically). Alternatively, an XSLT
processor can be used to generate an XHTML file instead of relying on a
web browser for the transformation. For example, using the popular
xsltproc
processor:
$ xsltproc -o coverage_report.html coverage_report.xml
On Windows operating systems, this processor can be installed using e.g. Chocolatey. On POSIX operating systems (e.g. Linux, macOS, ...), use the system package manager to install it if necessary.
The coverage report can include links to the source code when hosted on
Bitbucket, GitHub, or GitLab. This requires passing the base URL as the
value for the url
XSLT parameter. The exact syntax depends on the
XSLT processor, however. For example:
$ xsltproc \
--stringparam url https://github.com/LogtalkDotOrg/logtalk3/blob/master \
-o coverage_report.html coverage_report.xml
Note that the base URL should preferably be a permanent link (i.e. it
should include the commit SHA1) so that the links to source code files
and lines remain valid if the source code is later updated. It's also
necessary to suppress the local path prefix in the generated
coverage_report.xml
file. For example:
$ logtalk_tester -c xml -s $HOME/logtalk/
Alternatively, you can pass the local path prefix to be suppressed to
the XSLT processor (note that the logtalk_tester
script suppresses
the $HOME
prefix by default):
$ xsltproc \
--stringparam prefix logtalk/ \
--stringparam url https://github.com/LogtalkDotOrg/logtalk3/blob/master \
-o coverage_report.html coverage_report.xml
If you are using Bitbucket, GitHub, or GitLab hosted on your own
servers, the url
parameter may not contain a bitbucket
,
github
, or gitlab
string. In this case, you can use the XSLT
parameter host
to indicate which service you are running. For example:
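The following sketch uses xsltproc with a self-hosted GitLab server; both the URL and the parameter value are illustrative:
$ xsltproc \
    --stringparam host gitlab \
    --stringparam url https://git.example.com/my/project/blob/master \
    -o coverage_report.html coverage_report.xml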
Automatically creating bug reports at issue trackers
To automatically create bug report issues for failed tests in GitHub or
GitLab servers, see the issue_creator
tool.
Minimizing test results output
To minimize the test results output, simply load the
minimal_output.lgt
file before running the tests. This file defines
an object, minimal_output
, that intercepts and summarizes the unit
test execution messages.
Help with warnings
Load the tutor
tool to get help with selected warnings printed by
the lgtunit
tool.
Known issues
Deterministic unit tests are currently not available when using Quintus Prolog, as it lacks the required built-in support, which cannot be sensibly defined in Prolog.
Parameter variables (_VariableName_
) cannot currently be used in the
definition of the condition/1
, setup/1
, and cleanup/1
test
options when using the test/3
dialect. For example, the following
condition will not work:
test(some_id, true, [condition(_ParVar_ == 42)]) :-
...
The workaround is to define an auxiliary predicate called from those options. For example:
test(check_xyz, true, [condition(xyz_condition)]) :-
...
xyz_condition :-
_ParVar_ == 42.