Bims (Bayesian inference over model structures) implements MCMC learning over statistical models defined in the Dlp (Distributional logic programming) probabilistic language.
Bims is released under either GPL2 or Artistic 2.0.
Currently two model spaces are supported: classification and regression trees (carts) and Bayesian networks (bns).
Additional model spaces can be easily implemented by defining new likelihood plug-ins and programming appropriate priors.
?- bims([]).
?- bims([data(carts),models(carts),likelihood(carts)]).
The above are two equivalent ways to run the Carts example provided.
This runs 3 chains, each of length 100, on the default Carts data using the default likelihood. The default dataset is the breast cancer Wisconsin (BCW) data from the UCI machine learning repository. There are 2 categories, 9 variables and 683 data points in this dataset. You can view the data with:
?- edit( pack(bims/data/carts) ).
The default likelihood is an implementation of the classification likelihood function presented in: H. Chipman, E. George, and R. McCulloch. Bayesian CART model search (with discussion). Journal of the American Statistical Association, 93:935–960, 1998.
?- bims([models(bns)]).
?- bims([data(bns),models(bns),likelihood(bns)]).
The above are two equivalent ways to run the Bns example provided.
This runs 3 chains, each of length 100, on the default bns data using the default likelihood. The dataset is sampled from the ASIA network and comprises 8 variables and 2295 datapoints. You can view the data with:
?- edit( pack(bims/data/bns) ).
The default BN likelihood is an instance of the BDeu metric for scoring BN structures; see:
W. L. Buntine. Theory refinement of Bayesian networks. In Bruce D'Ambrosio, Philippe Smets, and Piero Bonissone, editors, Proceedings of the Seventh Annual Conference on Uncertainty in Artificial Intelligence (UAI-1991), pages 52–60, 1991.
David Heckerman, Dan Geiger, and David M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197–243, 1995.
An easy way to run Bims on your own data is to create a new directory containing a sub-directory data/, copy your data file there, and pass the basename of the data file in the data/1 option.
For example,
?- bims(data(mydata)).
By defining a new likelihood function and new priors, the system can be used on new statistical models.
In addition to model structure learning, Bims implements two ways of performing resolution over Dlps: stochastic sampling resolution (SSD) and SLD-based probabilistic inference.
These predicates allow sampling from a loaded distributional logic program (Dlp). The resolution strategy is that of choosing between probabilistic choices according to their relative values. The main idea is that sampling many times from a top goal will, in the long run, sample each derivation path in proportion to the probability of the derivation. The probability of a derivation/refutation is simply the product of all the probabilities attached to resolution steps during the derivation.
See dlp_sample/1 and dlp_sample/3 below.
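As a quick empirical check of the long-run claim above, here is a sketch using the coin example that ships with the pack (see dlp_load/1 and dlp_sample/1 below) together with standard SWI-Prolog built-ins; the exact count will vary between runs:

?- dlp_load(coin),
   findall( F, ( between(1,1000,_), dlp_sample(coin(F)) ), Fs ),
   aggregate_all( count, member(head,Fs), Heads ),
   Freq is Heads / 1000.
% Freq should be close to 0.5, the probability attached to each coin clause.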
These predicates allow standard SLD exploration of a stochastic query against a Dlp. The predicates here allow exploring what is derivable, often attaching a probability and other information to each derivation.
Note that in probabilistic inference we are often more interested in failures than we are in standard LP. This is because probability mass is lost with each failed probabilistic branch. For instance, in the doubles example below, dlp_call_sum(doubles(_), Prb) gives Prb = 0.5, with the other half of the mass lost to failed branches.
Probabilistic inference predicates
If the argument (File) corresponds to an existing file, then it is taken to be a settings file. Each term in the file should be a fact corresponding to a known option. For example:
chains(3).
iterations(100).
seeds([1,2,3]).
If the argument (Opts) does not correspond to a file, it is taken to be a list of option terms.
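For instance, assuming the three facts above are saved in a file named settings.pl (a hypothetical name), the following two calls would be equivalent:

?- bims('settings.pl').
?- bims([chains(3),iterations(100),seeds([1,2,3])]).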
The simplest way to use the software is to make a new directory and run some MCMC chains. The default call,
?- bims. % equivalent to ?- bims([]).
runs an MCMC simulation of 3 chains (R=3, below), each of length 100 iterations (I=100). The models learnt are classification trees (carts) based on the default prior, and the data are the BCW dataset. The above call is equivalent to:
?- bims([models(carts)]).
To run a toy BN learning example run
?- bims([models(bns)]).
This runs 3 chains on synthetic data sampled from the 8-node Asia BN.
To get familiar with how to run bims on private data, make a new directory, create a subdirectory data/, and copy file bims(data/asia.pl) to data/test_local.pl. Then run:
?- bims([data(test_local)]).
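Since test_local.pl holds data sampled from a Bayesian network, you will typically want to pair it with the bns model space; a sketch (the likelihood then defaults accordingly, as in the Bns example above):

?- bims([data(test_local),models(bns)]).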
Opts

models(Models=carts)
    The model space to learn over; the alternative built-in space is bns.

debug(Dbg=true)
    Calls debug(bims) to get debugging messages. If Dbg==false, nodebug(bims) is called.

chains(R=3)
    Number of chains to run.

iterations(I=100)
    Length of each chain.

seeds(Seeds)
    Random seeds, one per chain. seeds(1) when chains(3) is given expands to seeds([1,2,3]).

top_goal(Top)
    The top goal, as the prior (a Dlp) expects. In general the dependency is with the likelihood, with the prior expected to be compatible with what the likelihood dictates in terms of data.

data(Data)
    In the likelihoods provided, Data is the stem of a filename that is loaded in memory. The file is looked for in Dir/Data[.pl] where Dir is looked for in [./data,bims(Model/data/)].

likelihood(Lk)
    The likelihood plug-in to use (see the Carts and Bns examples above).

prior(Prior)
    Dlp priors are looked for in bims(dlps).

prefix(Pfx)
    Prefix for the result files (to ease recognition).

report(Rep)
    all is expanded to reporting all known reportable terms.

All file name based options (Lk, Data, Prior or Rdir) are passed through absolute_file_name/2.
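For example, a single call combining several of the above options (a sketch; the option values are illustrative):

?- bims([models(bns),chains(5),iterations(500),seeds([1,2,3,4,5])]).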
The predicate generates one results directory (Rdir), in which files recording information about each run (R) are placed.
Gives the current version and release date, the latter as a date(Y,M,D) term:
?- bims_version(Vers, Date).
Vers = 3:0:0,
Date = date(2023, 5, 8).
A is an atom describing a publication, and G is the bibtex(Type,Key,Pairs) term of the same publication.
On backtracking it produces all publications in reverse chronological order.
?- bims_citation(A, G), write(A), nl.
Distributional Logic Programming for Bayesian Knowledge Representation. Nicos Angelopoulos and James Cussens. International Journal of Approximate Reasoning (IJAR). Volume 80, January 2017, pages 52-66.
In total:

?- findall( A, bims_citation(A,G), Pubs ), length( Pubs, Length ).
Pubs = [...],
Length = 5.
The predicate loads two versions of the Dlp file: one in module dlp_sld, suitable for SLD resolution (see dlp_call/1), and one in module dlp_ssd, which is suitable for stochastic resolution (see dlp_sample/1).
Dlp files are looked for in ./dlp and pack(bims/dlp/). So dlp_load(coin) will load file pack(bims/dlp/coin.dlp) from the local pack(bims) installation.
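For instance, after a single load both resolution styles are available on the same program (the bindings shown reuse the outputs of the coin examples below):

?- dlp_load(coin).
?- dlp_seed.
% SLD resolution (module dlp_sld): enumerate all derivations.
?- dlp_call(coin(Flip)).
Flip = head ;
Flip = tail ;
false.
% Stochastic resolution (module dlp_ssd): sample a single derivation.
?- dlp_sample(coin(Flip)).
Flip = head.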
Succeeds at most once. Instead of using linear (SLD) clausal selection, the predicate uses stochastic selection, where clauses are selected proportionally to the probability values attached to them. Thus a clause with probability label 1/2 will be selected twice as often as its sister clause that has probability label 1/4.
?- dlp_load(coin).
?- dlp_seed.
?- dlp_sample(coin(Flip)).
Flip = head.
?- dlp_sample(coin(Flip)).
Flip = tail.
?- dlp_seed.
?- dlp_sample(coin(Flip),Path,Prb).
Flip = head,
Path = [1/0.5],
Prb = 0.5.
?- dlp_sample(coin(Flip),Path,Prb).
Flip = tail,
Path = [2/0.5],
Prb = 0.5.
Uniform selection of a list member:
?- dlp_load(umember).
?- dlp_seed.
?- dlp_sample( umember([a,b,c,d],X) ).
X = d.
Assuming packs mlu, b_real and Real are installed, plots can be created from sampling outputs:

?- dlp_load(umember).
?- lib(mlu).
?- mlu_sample( dlp_sample(umember([a,b,c,d,e,f,g,h],X)), 1000, X, Freqs ),
   mlu_frequency_plot( Freqs, [interface(barplot),outputs(svg),las = 2] ).
Produces file: real_plot.svg
Succeeds for all possible derivations of Goal.
?- dlp_load(coin).
?- dlp_seed.
?- dlp_call(coin(Flip)).
Flip = head ;
Flip = tail ;
false.
?- dlp_call(coin(Flip), Path, Prb).
Flip = head,
Path = [1/0.5],
Prb = 0.5 ;
Flip = tail,
Path = [2/0.5],
Prb = 0.5 ;
false.
Uniform selection of a list member:
?- dlp_load(umember).
?- dlp_call( umember([a,b,c],X), _Path, Prb ).
X = a,
Prb = 0.3333333333333333 ;
X = b,
Prb = 0.33333333333333337 ;
X = c,
Prb = 0.33333333333333337 ;
Standard SLD resolution is used to derive all refutations of Goal; Prb is the sum of their probabilities.
?- dlp_load(doubles).
?- dlp_call_sum(coin(Flip), Prb).
Prb = 1.0.
?- dlp_call_sum(coin(head), Prb).
Prb = 0.5.
?- dlp_call_sum(doubles(head), Prb).
Prb = 0.25.
?- dlp_call_sum(doubles(_), Prb).
Prb = 0.5.
A more interesting example:

?- dlp_load(umember).
?- dlp_call_sum( umember([a,b,c,d],X), Prb ).
Prb = 1.0.
?- dlp_call_sum( umember([a,b,c,d],a), Prb ).
Prb = 0.25.
?- dlp_call_sum( umember([a,b,c,d],b), Prb ).
Prb = 0.25.
?- dlp_call_sum( umember([a,b,c,d],c), Prb ).
Prb = 0.25.
?- dlp_call_sum( umember([a,b,c,d],d), Prb ).
Prb = 0.25.
A convenience predicate for running the examples from a common starting point for the random seed.
Specifically it unfolds to
?- set_random(seed(101)).
?- dlp_load(coin).
?- dlp_seed.
?- dlp_sample(coin(Flip)).
Flip = head.
?- set_random(seed(101)).
?- dlp_sample(coin(Flip)).
Flip = head.
?- dlp_sample(coin(Flip)).
Flip = tail.
Computes the probability of a derivation path as the product of the probability labels in Path (see dlp_sample/3 above). Part can be a starting value for the product; typically Part is 1.
?- dlp_load(coin).
?- dlp_seed,
   dlp_sample(coin(Flip),Path,Prb),
   dlp_path_prob(Path,AgainPrb).