Please note: this is archived content and is no longer being updated.

Reasoner submissions

We invite reasoner developers to submit their reasoners for participation in the OWL reasoner competition. The outline for the competition is as follows: 

Ontology Selection

We will use a random sample of ontologies from the following sets:

  • NCBO BioPortal snapshot (~300 ontologies)
  • Web crawl of OWL ontologies (~3000 ontologies)
  • Oxford ontology repository (~800 ontologies)
  • User-submitted “hard” ontologies (see the “ontology submissions” page for information on how to submit an ontology)

Where applicable, all imports of each ontology will be merged into a single file. The ontologies will be serialized in OWL Functional Syntax, with OWL/XML as an alternative. If your system cannot handle either of these (directly or via the OWL API), we are happy to provide another syntax; please specify upon submission which syntax should be used.


Reasoning Tasks

The reasoning tasks under test will be: Classification, Consistency, Satisfiability (of atomic concepts), and Query Answering. When submitting a reasoner, please specify which task(s) should be evaluated. The query answering competition will only take place if we receive at least two system submissions. We also encourage the submission of query answering test data, i.e., ontologies and SPARQL queries.

System Evaluation

Reasoners will be evaluated based on:

  • Whether the problem was solved within the timeout
  • Soundness and completeness of the solution
  • Time elapsed

System Submission

*** A test-ready version of the benchmarking framework is available from our repository! ***

Developers should send an executable “wrapper” script that triggers the system, together with the system itself, to the competition organisers.

The wrapper script should be invocable from a single command line, taking the following input parameters:

  1. Name of reasoning task, one of: sat, query, classification, consistency (<Operation>)
  2. Input ontology path (<OntologyFile>)
  3. Result output path (<Output>)
  4. Where applicable: the concept name URI (for SAT checks) or SPARQL query file
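As a concrete illustration, the parameter handling could be sketched in Python as follows. This is only a sketch: the function name `parse_args` and the error messages are our own, but the four positional parameters are those listed above.

```python
import sys

OPERATIONS = ("sat", "query", "classification", "consistency")

def parse_args(argv):
    """Parse the wrapper's positional parameters:
    <Operation> <OntologyFile> <Output> [<ConceptURI>|<QueryFile>]."""
    if len(argv) < 3:
        raise SystemExit("usage: wrapper <Operation> <OntologyFile> "
                         "<Output> [<ConceptURI>|<QueryFile>]")
    operation, ontology_file, output = argv[0], argv[1], argv[2]
    if operation not in OPERATIONS:
        raise SystemExit("unknown operation: " + operation)
    # sat needs a concept URI, query needs a SPARQL query file
    extra = argv[3] if len(argv) > 3 else None
    if operation in ("sat", "query") and extra is None:
        raise SystemExit(operation + " requires a fourth parameter")
    return operation, ontology_file, output, extra

if __name__ == "__main__":
    print(parse_args(sys.argv[1:]))
```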

The output (to stdout) should conform to the following rules:

  1. Start message: “Started <Operation> on <OntologyFile>”
  2. Operation time: “Operation time: <Time>”
  3. (Optional) Operation CPU time: “Operation CPU time: <CPUTime>”
  4. Completion message: “Completed <Operation> on <OntologyFile>”

The operation time (<Time>) should be measured in wall clock time and reported as a plain value in milliseconds. Optionally, you may also include the CPU time elapsed (<CPUTime>). Both measurements should cover the span from the moment immediately preceding reasoner creation to the completion of the task at hand. That is, do not include ontology parsing time (unless some reasoner-specific pre-processing is done at this point), nor file serialization or socket communication time (where applicable).
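The stdout protocol and timing rules above can be sketched in Python as follows. The helper name `run_with_protocol` and the `reason` callable are our own stand-ins for the actual reasoner invocation; the printed messages follow the rules listed above.

```python
import time

def run_with_protocol(operation, ontology_file, reason):
    """Wrap a reasoning call in the competition's stdout protocol.

    `reason` stands in for the actual reasoner invocation (a
    hypothetical callable). Timing starts just before reasoner
    creation, so ontology parsing done earlier is excluded.
    """
    print(f"Started {operation} on {ontology_file}")
    wall_start = time.time()          # wall clock time
    cpu_start = time.process_time()   # optional CPU time
    result = reason()
    wall_ms = int((time.time() - wall_start) * 1000)
    cpu_ms = int((time.process_time() - cpu_start) * 1000)
    print(f"Operation time: {wall_ms}")
    print(f"Operation CPU time: {cpu_ms}")  # optional line
    print(f"Completed {operation} on {ontology_file}")
    return result
```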

Systems should enforce a 5-minute timeout per operation; that is, if <Time> exceeds 5 minutes, the system should stop the ongoing operation and terminate itself. Failure to do so will trigger a kill command, sent to the ongoing process one minute after the timeout.
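One way to enforce the timeout is for the wrapper to launch the reasoner as a subprocess and kill it when the limit is exceeded. The following is a minimal sketch under that assumption; the function name `run_with_timeout` is our own.

```python
import subprocess

TIMEOUT_SECONDS = 300  # the 5-minute per-operation timeout

def run_with_timeout(cmd, timeout=TIMEOUT_SECONDS):
    """Run `cmd` (the reasoner invocation) and kill it if it overruns.

    Returns True if the operation finished within the timeout, False
    if we had to terminate it ourselves (before the harness sends its
    own kill command one minute later).
    """
    proc = subprocess.Popen(cmd)
    try:
        proc.wait(timeout=timeout)
        return True
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.wait()
        return False
```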

The output (to the file specified as a parameter: <Output>) should be as follows:

  • For Consistency: “true” or “false”
  • For Satisfiability (append to the given file): “<Concept URI>,true” or “<Concept URI>,false” (be sure to use a comma to separate the concept name from the result)
  • For Classification: An OWL file (in OWL Functional Syntax) with the inferred subsumptions
  • For Query Answering: A CSV file conforming to the SPARQL 1.1 Query Results CSV Format
  • If the reasoner throws an error or warning, or times out, this should be piped into a file <Output>_err, i.e., the given output file suffixed with “_err”. In the case of a timeout, the expected content of the <Output>_err file is simply the string “timeout”.
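The satisfiability and error-file conventions above can be sketched with two small helpers (the function names are our own; the file formats follow the rules listed above):

```python
def write_sat_result(output_path, concept_uri, satisfiable):
    """Append one satisfiability line: '<Concept URI>,true|false'."""
    with open(output_path, "a") as f:
        f.write(f"{concept_uri},{'true' if satisfiable else 'false'}\n")

def write_error(output_path, message):
    """Write errors, warnings, or timeouts to <Output>_err.

    On a timeout, the expected file content is just the string
    'timeout'.
    """
    with open(output_path + "_err", "w") as f:
        f.write(message)
```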

The result of classifying an ontology O should be all subsumptions A => B, where:

  • A in sig(O), or A is Top
  • B in sig(O), or B is Bottom
  • A is not equal to B
  • There exists no C such that O entails A => C and C => B, where C is not equivalent to A, B, or Bottom
  • If A is unsatisfiable, the only axiom with A on the left-hand side should be A => Bottom
  • If the given ontology is inconsistent, the output should be the single axiom Top => Bottom
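The filtering above can be sketched as follows. This is a sketch under the assumption that the full set of entailed subsumptions (closed under transitivity) is already available as pairs of names; the function name and the Top/Bottom string constants are our own, and the inconsistent-ontology case is assumed to be handled separately.

```python
TOP, BOTTOM = "owl:Thing", "owl:Nothing"

def direct_subsumptions(entailed, signature):
    """Filter entailed subsumptions A => B down to the expected output.

    `entailed` is a set of (A, B) pairs, closed under transitivity,
    over the names in `signature` (sig(O)) plus TOP and BOTTOM.
    """
    entailed = set(entailed)
    names = set(signature) | {TOP, BOTTOM}

    def equivalent(x, y):
        return x == y or ((x, y) in entailed and (y, x) in entailed)

    # A is unsatisfiable iff O entails A => Bottom
    unsat = {a for (a, b) in entailed if b == BOTTOM}
    result = set()
    for (a, b) in entailed:
        if a not in set(signature) | {TOP}:
            continue
        if b not in set(signature) | {BOTTOM}:
            continue
        if a == b:
            continue
        # unsatisfiable A: keep only A => Bottom
        if a in unsat and b != BOTTOM:
            continue
        # drop A => B if some C strictly in between exists
        if any((a, c) in entailed and (c, b) in entailed
               and not equivalent(c, a) and not equivalent(c, b)
               and not equivalent(c, BOTTOM)
               for c in names):
            continue
        result.add((a, b))
    return result
```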
