Publication List of Boris Motik
In Journals
Michael Benedikt, Maxime Buron, Stefano Germano, Kevin Kappelmann, and Boris Motik. Rewriting the Infinite Chase for Guarded TGDs. ACM Transactions on Database Systems, 49(4):1–44, 2024.
@Article{bbgkm24rewriting-infinite-chase-guarded, author = "Michael Benedikt and Maxime Buron and Stefano Germano and Kevin Kappelmann and Boris Motik", title = "{Rewriting the Infinite Chase for Guarded TGDs}", journal = "ACM Transactions on Database Systems", year = "2024", volume = "49", number = "4", pages = "1--44", }
Guarded tuple-generating dependencies (GTGDs) are a natural extension of
description logics and referential constraints. It has long been known that
queries over GTGDs can be answered by a variant of the chase—a
quintessential technique for reasoning with dependencies. However, there has
been little work on concrete algorithms and even less on implementation. To
address this gap, we revisit Datalog rewriting approaches to query
answering, where a set of GTGDs is transformed to a Datalog program that
entails the same base facts on each base instance. We show that a rewriting
consists of "shortcut'' rules that circumvent certain chase steps, we present
several algorithms that compute a rewriting by deriving such "shortcuts''
efficiently, and we discuss important implementation issues. Finally, we show
empirically that our techniques can process complex GTGDs derived from
synthetic and real benchmarks and are thus suitable for practical use.
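To give a flavour of the rewriting idea, consider the following pair of GTGDs (an illustrative example, not taken from the article):

    A(x) → ∃y. R(x,y) ∧ B(y)
    R(x,y) ∧ B(y) → C(x)

On a base instance containing A(c), the chase first invents a fact R(c,v) together with B(v) for a fresh labelled null v, and only then derives the base fact C(c). A Datalog rewriting can instead contain the "shortcut" rule A(x) → C(x), which derives C(c) directly and thus circumvents the chase step that introduces the null.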
Pan Hu and Boris Motik. Accurate Sampling-Based Cardinality Estimation for Complex Graph Queries. ACM Transactions on Database Systems, 49(3):1–46, 2024.
@Article{hm24-accurate-sampling, author = "Pan Hu and Boris Motik", title = "{Accurate Sampling-Based Cardinality Estimation for Complex Graph Queries}", journal = "ACM Transactions on Database Systems", year = "2024", volume = "49", number = "3", pages = "1--46", }
Accurately estimating the cardinality (i.e., the number of answers) of complex
queries plays a central role in database systems. This problem is particularly
difficult in graph databases, where queries often involve a large number of
joins and self-joins. Recently, Park et al. [2020] surveyed
seven state-of-the-art cardinality estimation approaches for graph queries. The
results of their extensive empirical evaluation show that a sampling method
based on the WanderJoin online aggregation algorithm
[Li et al., 2019] consistently offers superior accuracy.
We extended the framework of Park et al. [2020] with three
additional datasets and repeated their experiments. Our results showed that
WanderJoin is indeed very accurate, but it can often take a large number of
samples and thus be very slow. Moreover, when queries are complex and data
distributions are skewed, it often fails to find valid samples and estimates
the cardinality as zero. Finally, complex graph queries often go beyond simple
graph matching and involve arbitrary nesting of relational operators such as
disjunction, difference, and duplicate elimination. None of the methods
considered by Park et al. [2020] is applicable to such
queries.
In this article we present a novel approach for estimating the cardinality of
complex graph queries. Our approach is inspired by WanderJoin, but, unlike all
approaches known to us, it can process complex queries with arbitrary operator
nesting. Our estimator is strongly consistent, meaning that the average of
repeated estimates converges with probability one to the actual cardinality. We
present optimisations of the basic algorithm that aim to reduce the chance of
producing zero estimates and improve accuracy. We show empirically that our
approach is both accurate and quick on complex queries and large datasets.
Finally, we discuss how to integrate our approach into a simple dynamic
programming query planner, and we confirm empirically that our planner produces
high-quality plans that can significantly reduce end-to-end query evaluation
times.
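The core sampling idea can be sketched in a few lines of Python. The following toy estimator (hypothetical code with made-up data, in the spirit of WanderJoin rather than the article's more general estimator) estimates the cardinality of the two-pattern query knows(x,y), likes(y,z) by random walks weighted by inverse sampling probabilities:

    import random
    from collections import defaultdict

    # Toy triple store; every constant and predicate name here is made up.
    triples = [
        ("a", "knows", "b"), ("a", "knows", "c"), ("b", "knows", "c"),
        ("b", "likes", "d"), ("c", "likes", "d"), ("c", "likes", "e"),
    ]

    # Index triples by (predicate, subject) so a walk can extend a partial match.
    by_pred_subj = defaultdict(list)
    for s, p, o in triples:
        by_pred_subj[(p, s)].append((s, p, o))

    def estimate(samples=100_000):
        """Estimate |{(x, y, z) : knows(x, y), likes(y, z)}| by random walks."""
        knows = [t for t in triples if t[1] == "knows"]
        total = 0.0
        for _ in range(samples):
            _, _, y = random.choice(knows)            # first triple: probability 1/len(knows)
            candidates = by_pred_subj[("likes", y)]   # 'likes' triples that join on y
            if not candidates:
                continue                              # failed walk contributes 0
            # A longer query would continue the walk from a random candidate;
            # for this two-pattern query the walk's weight is already determined.
            total += len(knows) * len(candidates)     # inverse of the walk's sampling probability
        return total / samples

    print(estimate())  # converges to the true cardinality, which is 5 here

Averaging the inverse sampling probabilities over many walks yields a consistent estimate, while walks that cannot be extended contribute zero, which is precisely the zero-estimate problem discussed above.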
Michael Benedikt, Maxime Buron, Stefano Germano, Kevin Kappelmann, and Boris Motik. Rewriting the Infinite Chase. Proc. of the VLDB Endowment, 15(11):3045–3057, 2022.
@Article{bbgkm22rewriting-infinite-chase, author = "Michael Benedikt and Maxime Buron and Stefano Germano and Kevin Kappelmann and Boris Motik", title = "{Rewriting the Infinite Chase}", journal = "Proc. of the VLDB Endowment", year = "2022", volume = "15", number = "11", pages = "3045--3057", }
Guarded tuple-generating dependencies (GTGDs) are a natural extension of
description logics and referential constraints. It has long been known that
queries over GTGDs can be answered by a variant of the chase—a
quintessential technique for reasoning with dependencies. However, there has
been little work on concrete algorithms and even less on implementation. To
address this gap, we revisit Datalog rewriting approaches to query
answering, where GTGDs are transformed to a Datalog program that entails the
same base facts on each base instance. We show that the rewriting can be seen
as containing "shortcut'' rules that circumvent certain chase steps, we
present several algorithms that compute the rewriting by simulating specific
types of chase steps, and we discuss important implementation issues. Finally,
we show empirically that our techniques can process complex GTGDs derived from
synthetic and real benchmarks and are thus suitable for practical use.
Pan Hu, Boris Motik, and Ian Horrocks. Modular Materialisation of Datalog Programs. Artificial Intelligence, 308:103726, 2022.
@Article{hmh22modular-materialisation, author = "Pan Hu and Boris Motik and Ian Horrocks", title = "{Modular Materialisation of Datalog Programs}", journal = "Artificial Intelligence", year = "2022", volume = "308", pages = "103726", }
Answering queries over large datasets extended with Datalog rules plays a key
role in numerous data management applications, and it has been implemented in
several highly optimised Datalog systems in both academic and commercial
contexts. Many systems implement reasoning via materialisation, which
involves precomputing all consequences of the rules and the dataset in a
preprocessing step. Some systems also use incremental reasoning
algorithms, which can update the materialisation efficiently when the input
dataset changes. Such techniques allow queries to be processed without any
reference to the rules, so they are often used in applications where the
performance of query answering is critical.
Existing materialisation and incremental reasoning techniques enumerate all
possible ways to apply rules to the data in order to derive all relevant
consequences. This, however, can be inefficient because derivations of rules
commonly used in practice are redundant; for example, rules axiomatising a
binary predicate as symmetric and transitive can have a cubic number of
applications, yet they can derive at most a quadratic number of facts. Such
redundancy can be a significant source of overhead in practice and can prevent
Datalog systems from successfully processing large datasets. To address this
issue, in this paper we present a novel framework for modular
materialisation and incremental reasoning. Our key idea is that, for certain
combinations of rules commonly used in practice, all consequences can be
derived using specialised procedures that do not necessarily enumerate all
possible rule applications. Thus, our framework supports materialisation and
incremental reasoning via a collection of modules. Each module is
responsible for deriving consequences of a subset of the program, by using
either standard rule application or proprietary algorithms. We prove that such
an approach is complete as long as each module satisfies certain properties.
Our formalisation of a module is very general, and in fact it allows modules to
keep arbitrary auxiliary information.
We also show how to realise custom procedures for four types of modules:
transitivity, symmetry–transitivity, chain rules, and sequencing elements of a
total order. Finally, we demonstrate empirically that using our custom
procedures can speed up materialisation and incremental reasoning by several
orders of magnitude on several well-known benchmarks. Thus, our technique has
the potential to significantly improve the scalability of Datalog reasoners.
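As a rough Python sketch of the intuition behind a specialised module (hypothetical code, not the paper's framework), a transitivity module can compute the closure of a binary predicate by graph search, producing each derived fact once instead of enumerating the cubically many applications of the rule R(x,y) ∧ R(y,z) → R(x,z):

    from collections import defaultdict

    def transitivity_module(r_facts):
        """Return the transitive closure of the binary relation r_facts."""
        succ = defaultdict(set)
        for x, y in r_facts:
            succ[x].add(y)
        closure = set()
        for start in list(succ):
            # Depth-first search collects everything reachable from 'start'.
            seen, stack = set(), [start]
            while stack:
                node = stack.pop()
                for nxt in succ.get(node, ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            closure.update((start, y) for y in seen)
        return closure

    print(sorted(transitivity_module({("a", "b"), ("b", "c"), ("c", "d")})))
    # [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]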
Temitope Ajileye and Boris Motik. Materialisation and Data Partitioning Algorithms for Distributed RDF Systems. Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 73:100711, 2022.
@Article{am22distributed-materialisation-partitioning, author = "Temitope Ajileye and Boris Motik", title = "{Materialisation and Data Partitioning Algorithms for Distributed RDF Systems}", journal = "Journal of Web Semantics: Science, Services and Agents on the World Wide Web", volume = "73", pages = "100711", year = "2022", }
Many RDF systems support reasoning with Datalog rules via materialisation, where all conclusions of RDF data and the rules are precomputed and explicitly stored in a preprocessing step. As the amount of RDF data used in applications keeps increasing, processing large datasets often requires distributing the data in a cluster of shared-nothing servers. While numerous distributed query answering techniques are known, distributed materialisation is less well understood. In this paper, we present several techniques that facilitate scalable materialisation in distributed RDF systems. First, we present a new distributed materialisation algorithm that aims to minimise communication and synchronisation in the cluster. Second, we present two new algorithms for partitioning RDF data, both of which aim to produce tightly connected partitions, but without loading complete datasets into memory. We evaluate our materialisation algorithm against two state-of-the-art distributed Datalog systems and show that our technique offers competitive performance, particularly when the rules are complex. Moreover, we analyse in depth the effects of data partitioning on reasoning performance and show that our techniques offer performance comparable or superior to state-of-the-art min-cut partitioning, but computing the partitions requires considerably less time and memory.
Mark Kaminski, Egor V. Kostylev, Bernardo Cuenca Grau, Boris Motik, and Ian Horrocks. The Complexity and Expressive Power of Limit Datalog. Journal of the ACM, 69(1):1–83, 2022.
@Article{kkgmh22complexity-of-limit-datalog, author = "Mark Kaminski and Egor V. Kostylev and Bernardo Cuenca Grau and Boris Motik and Ian Horrocks", title = "{The Complexity and Expressive Power of Limit Datalog}", journal = "Journal of the ACM", volume = "69", number = "1", pages = "1--83", year = "2022", }
Motivated by applications in declarative data analysis, in this paper we study DatalogZ—an extension of
Datalog with stratified negation and arithmetic functions over integers. This language
is known to be undecidable,
so we present the fragment of limit DatalogZ programs, which is powerful enough to naturally capture
many important data analysis tasks.
In limit DatalogZ, all intensional predicates with a numeric argument are
limit predicates that keep maximal
or minimal bounds on numeric values.
We show that reasoning in limit DatalogZ is decidable if
a linearity condition restricting the use of multiplication is satisfied.
In particular, limit-linear DatalogZ
is complete for Δ^EXP_2
and captures Δ^P_2 over ordered datasets in the sense of descriptive complexity.
We also provide a comprehensive study of several fragments of
limit-linear DatalogZ. We show that
semi-positive limit-linear programs (i.e., programs where negation is allowed only in front of extensional atoms)
capture coNP over ordered datasets; furthermore, reasoning becomes
coNExpTime-complete in combined and coNP-complete in data complexity, where the lower bounds hold already
for negation-free programs. In order to satisfy the requirements of data-intensive applications,
we also propose an additional stability requirement, which causes the complexity of reasoning to drop
to ExpTime in combined and to PTime in data complexity, thus obtaining the same bounds as for usual Datalog.
Finally, we compare our formalisms with the
languages underpinning existing Datalog-based approaches for data analysis and show that core fragments of these
languages can be encoded as limit programs; this allows us to transfer decidability and complexity upper bounds from
limit programs to other formalisms.
Therefore, our paper provides
a unified logical framework for declarative data analysis which can be used as a basis for understanding the
impact on expressive power and computational complexity of the key constructs available in existing languages.
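For a flavour of what limit programs can express, here is a standard shortest-distance example of the kind of data analysis task mentioned above (illustrative only, with simplified notation; dist is declared to be a min limit predicate):

    dist(x, 0) ← start(x)
    dist(y, m + n) ← dist(x, m), edge(x, y, n)

Since dist is a min limit predicate, a fact dist(a, m) intuitively says that a is reachable with cost at most m, and a model only needs to record the smallest such bound for each node, so the program captures single-source shortest distances without materialising every derivable numeric value.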
Akansha Bhardwaj, Albert Blarer, Philippe Cudré-Mauroux, Vincent Lenders, Boris Motik, Axel Tanner, and Alberto Tonon. Event Detection on Microposts: a Comparison of Four Different Approaches. IEEE Transactions on Knowledge and Data Engineering, 33(4):1467–1478, 2021.
@Article{bbcmlmtt20event-detection-microposts, author = "Akansha Bhardwaj and Albert Blarer and Philippe Cudr{\'e}-Mauroux and Vincent Lenders and Boris Motik and Axel Tanner and Alberto Tonon", title = "{Event Detection on Microposts: a Comparison of Four Different Approaches}", journal = "IEEE Transactions on Knowledge and Data Engineering", volume = "33", number = "4", pages = "1467--1478", year = "2021", }
Microblogging services such as Twitter are important, up-to-date, and live sources of information on a multitude of topics and events. An increasing number of systems use such services to detect and analyze events in real-time as they unfold. In this context, we recently proposed ArmaTweet—a system developed in collaboration among armasuisse and the Universities of Oxford and Fribourg to support semantic event detection on Twitter streams. Our experiments have shown that ArmaTweet is successful at detecting many complex events that cannot be detected by simple keyword-based search methods alone. Building on this work, we explore in this paper several approaches for event detection on microposts. In particular, we describe and compare four different approaches based on keyword search (Plain-Seed-Query), information retrieval (Temporal Query Expansion), Word2Vec word embeddings (Embedding), and semantic retrieval (ArmaTweet). We provide an extensive empirical evaluation of these techniques using a benchmark dataset of about 200 million tweets on six event categories that we collected. While the performance of individual systems varies depending on the event category, our results show that ArmaTweet outperforms the other approaches on five out of six categories, and that a combined approach offers the highest recall without adversely affecting the precision of event detection.
Bernardo Cuenca Grau, Ian Horrocks, Mark Kaminski, Egor V. Kostylev, and Boris Motik. Limit Datalog: A Declarative Query Language for Data Analysis. SIGMOD Record, 48(4):6–17, 2019.
@Article{ghkkm19limit-datalog, author = "Bernardo Cuenca Grau and Ian Horrocks and Mark Kaminski and Egor V. Kostylev and Boris Motik", title = "{Limit Datalog: A Declarative Query Language for Data Analysis}", journal = "SIGMOD Record", volume = "48", number = "4", pages = "6--17", year = "2019", }
Motivated by applications in declarative data analysis, we study DatalogZ—an
extension of Datalog with stratified negation and arithmetic over integers.
Reasoning in this language is undecidable, so we present a fragment, called
limit DatalogZ, that is powerful enough to naturally capture many important data
analysis tasks. In limit DatalogZ, all intensional predicates with a numeric
argument are limit predicates that keep only the maximal or minimal
bounds on numeric values. Reasoning in limit DatalogZ is decidable if
multiplication is used in a way that satisfies our linearity condition.
Moreover, fact entailment in limit-linear DatalogZ is Δ^EXP_2-complete in
combined and Δ^P_2-complete in data complexity, and it drops to coNExpTime
and coNP, respectively, if only (semi-)positive programs are considered. We
also propose an additional stability requirement, for which the complexity
drops to ExpTime and PTime, matching the bounds for usual Datalog. Limit DatalogZ
thus provides us with a unified logical framework for declarative data analysis
and can be used as a basis for understanding the expressive power of the key
data analysis constructs.
Boris Motik, Yavor Nenov, Robert Piro, and Ian Horrocks. Maintenance of Datalog Materialisations Revisited. Artificial Intelligence, 269:76–136, 2019.
@Article{mnph19maintenance-revisited, author = "Boris Motik and Yavor Nenov and Robert Piro and Ian Horrocks", title = "{Maintenance of Datalog Materialisations Revisited}", journal = "Artificial Intelligence", year = "2019", volume = "269", pages = "76--136", }
Datalog is a rule-based formalism that can axiomatise recursive properties such
as reachability and transitive closure. Datalog implementations often
materialise (i.e., precompute and store) all facts entailed by a datalog
program and a set of explicit facts. Queries can thus be answered directly in
the materialised facts, which is beneficial to the performance of query
answering, but the materialised facts must be updated whenever the explicit
facts change. Rematerialising all facts `from scratch' can be very inefficient,
so numerous materialisation maintenance algorithms have been developed
that aim to efficiently identify the facts that require updating and thus
reduce the overall work. Most such approaches are variants of the
counting or Delete/Rederive (DRed) algorithms. Algorithms in the
former group maintain additional data structures and are usually applicable
only if datalog rules are not recursive, which limits their applicability in
practice. Algorithms in the latter group do not require additional data
structures and can handle recursive rules, but they can be inefficient when
facts have multiple derivations. Finally, to the best of our knowledge, these
approaches have not been compared and their practical applicability has not
been investigated. Datalog is becoming increasingly important in practice, so a
more comprehensive understanding of the tradeoffs between different approaches
to materialisation maintenance is needed. In this paper we present three such
algorithms for datalog with stratified negation: a new counting algorithm that
can handle recursive rules, an optimised variant of the DRed algorithm that
does not repeat derivations, and a new Forward/Backward/Forward (FBF) algorithm
that extends DRed to better handle facts with multiple derivations.
Furthermore, we study the worst-case performance of these algorithms and
compare the algorithms' behaviour on several examples. Finally, we present the
results of an extensive, first-of-a-kind empirical evaluation in which we
investigate the robustness and the scaling behaviour of our algorithms. We thus
provide important theoretical and practical insights into all three algorithms
that will provide invaluable guidance to future implementors of datalog systems.
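The following toy Python sketch (hypothetical code, specialised to a single transitive-closure program rather than the article's general algorithms for stratified programs) illustrates the two phases of DRed-style maintenance, overdeletion followed by rederivation:

    # Program: path(x,y) <- edge(x,y);  path(x,z) <- edge(x,y), path(y,z).

    def derive(edges, paths):
        """Facts obtainable by one round of applying both rules."""
        out = set(edges)
        out |= {(x, z) for (x, y) in edges for (y2, z) in paths if y == y2}
        return out

    def materialise(edges):
        paths = set()
        while True:
            new = derive(edges, paths)
            if new <= paths:
                return paths
            paths |= new

    def dred_delete(edges, paths, removed):
        remaining = edges - removed
        # Phase 1 (overdeletion): delete every path fact with at least one
        # derivation that uses a removed edge or an overdeleted path fact.
        deleted = {t for t in removed if t in paths}
        deleted |= {(x, z) for (x, y) in removed for (y2, z) in paths if y == y2}
        while True:
            more = {(x, z) for (x, y) in remaining for (y2, z) in deleted if y == y2}
            more &= paths
            if more <= deleted:
                break
            deleted |= more
        paths = paths - deleted
        # Phase 2 (rederivation/insertion): reinstate facts that still have a
        # derivation from the remaining edges and close off under the rules.
        while True:
            new = derive(remaining, paths) - paths
            if not new:
                return paths
            paths |= new

    edges = {("a", "b"), ("b", "c"), ("a", "c")}
    paths = materialise(edges)
    print(sorted(dred_delete(edges, paths, {("b", "c")})))
    # [('a', 'b'), ('a', 'c')]: path(a,c) survives because edge(a,c) still derives it.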
Andrew Bate, Boris Motik, Bernardo Cuenca Grau, David Tena Cucala, František Simančík, and Ian Horrocks. Consequence-Based Reasoning for Description Logics with Disjunctions and Number Restrictions. Journal of Artificial Intelligence Research, 63:625–690, 2018.
@Article{bmgcsh18consequence-numbers-disjunctions, author = "Andrew Bate and Boris Motik and Bernardo {Cuenca Grau} and David {Tena Cucala} and Franti{\v{s}}ek Siman{\v{c}}{\'i}k and Ian Horrocks", title = "{Consequence-Based Reasoning for Description Logics with Disjunctions and Number Restrictions}", journal = "Journal of Artificial Intelligence Research", volume = "63", pages = "625--690", year = "2018", }
Classification of description logic (DL) ontologies is a key computational
problem in modern data management applications, so considerable effort has been
devoted to the development and optimisation of practical reasoning calculi.
Consequence-based calculi combine ideas from hypertableau and resolution in a
way that has proved very effective in practice. However, existing
consequence-based calculi can handle either Horn DLs (which do not support
disjunction) or DLs without number restrictions. In this paper, we overcome
this important limitation and present the first consequence-based calculus for
deciding concept subsumption in the DL ALCHIQ+. Our calculus runs in
exponential time assuming unary coding of numbers, and on ELH ontologies it
runs in polynomial time. The extension to disjunctions and number restrictions
is technically involved: we capture the relevant consequences using first-order
clauses, and our inference rules adapt paramodulation techniques from
first-order theorem proving. By using a well-known preprocessing step, the
calculus can also decide concept subsumptions in SRIQ—a rich DL that covers
all features of OWL 2 DL apart from nominals and datatypes. We have implemented
our calculus in a new reasoner called Sequoia. We present the architecture of
our reasoner and discuss several novel and important implementation techniques
such as clause indexing and redundancy elimination. Finally, we present the
results of an extensive performance evaluation, which revealed Sequoia to be
competitive with existing reasoners. Thus, the calculus and the techniques we
present in this paper provide an important addition to the repertoire of
practical implementation techniques for description logic reasoning.
Anthony Potter, Boris Motik, Yavor Nenov, and Ian Horrocks. Dynamic Data Exchange in Distributed RDF Stores. IEEE Transactions on Knowledge and Data Engineering, 30(12):2312–2325, 2018.
@Article{pmnh18dynamic-rdf-stores, author = "Anthony Potter and Boris Motik and Yavor Nenov and Ian Horrocks", title = "{Dynamic Data Exchange in Distributed RDF Stores}", journal = "IEEE Transactions on Knowledge and Data Engineering", volume = "30", number = "12", pages = "2312--2325", year = "2018", }
When RDF datasets become too large to be managed by centralised systems, they
are often distributed in a cluster of shared-nothing servers, and queries are
answered using a distributed join algorithm. Although such solutions have been
extensively studied in relational and RDF databases, we argue that existing
approaches exhibit two drawbacks. First, they usually decide statically
(i.e., at query compile time) how to shuffle the data, which can lead to missed
opportunities for local computation. Second, they often materialise large
intermediate relations whose size is determined by the entire dataset (and not
the data stored in each server), so these relations can easily exceed the
memory of individual servers. As a possible remedy, we present a novel
distributed join algorithm for RDF. Our approach decides when to shuffle data
dynamically, which ensures that query answers that can be wholly
produced within a server involve only local computation. It also uses a novel
flow control mechanism to ensure that every query can be answered even if each
server has a bounded amount of memory that is much smaller than the
intermediate relations. We complement our algorithm with a new query planning
approach that balances the cost of communication against the cost of local
processing at each server. Moreover, as in several existing approaches, we
distribute RDF data using graph partitioning so as to maximise local
computation, but we refine the partitioning algorithm to produce more balanced
partitions. We show empirically that our techniques can outperform the state of
the art by orders of magnitude in terms of query evaluation times, network
communication, and memory use. In particular, bounding the memory use in
individual servers can mean the difference between success and failure for
answering queries with large answer sets.
Giorgio Stefanoni, Boris Motik, Markus Krötzsch, and Sebastian Rudolph. The Complexity of Answering Conjunctive and Navigational Queries over OWL 2 EL Knowledge Bases. Journal of Artificial Intelligence Research, 51:645–705, 2014.
@Article{smkr14complexity-CQs-EL, author = "Giorgio Stefanoni and Boris Motik and Markus Kr{\"o}tzsch and Sebastian Rudolph", title = "{The Complexity of Answering Conjunctive and Navigational Queries over OWL 2 EL Knowledge Bases}", journal = "Journal of Artificial Intelligence Research", volume = "51", pages = "645--705", year = "2014", }
OWL 2 EL is a popular ontology language that supports role
inclusions—axioms of the form S1 ⋯ Sn ⊑ S that capture
compositional properties of roles. Role inclusions closely correspond to
context-free grammars, which was used to show that answering conjunctive
queries (CQs) over OWL 2 EL knowledge bases with unrestricted role inclusions
is undecidable. However, OWL 2 EL inherits from OWL 2 DL the syntactic
regularity restriction on role inclusions, which ensures that role
chains implying a particular role can be described using a finite automaton
(FA). This is sufficient to ensure decidability of CQ answering; however, the
FAs can be worst-case exponential in size so the known approaches do not
provide a tight upper complexity bound.
In this paper, we solve this open problem and show that answering CQs over OWL
2 EL knowledge bases is PSpace-complete in combined complexity (i.e., the
complexity measured in the total size of the input). To this end, we use a
novel encoding of regular role inclusions using bounded-stack pushdown
automata—that is, FAs extended with a stack of bounded size. Apart from
theoretical interest, our encoding can be used in practical tableau algorithms
to avoid the exponential blowup due to role inclusions. In addition, we sharpen
the lower complexity bound and show that the problem is PSpace-hard even if we
consider only role inclusions as part of the input (i.e., the query and all
other parts of the knowledge base are fixed). Finally, we turn our attention to
navigational queries over OWL 2 EL knowledge bases, and we show that answering
positive, converse-free conjunctive graph XPath queries is PSpace-complete as
well; this is interesting since allowing the converse operator in queries is
known to make the problem ExpTime-hard. Thus, in this paper we present several
important contributions to the landscape of the complexity of answering
expressive queries over description logic knowledge bases.
Birte Glimm, Ian Horrocks, Boris Motik, Giorgos Stoilos, and Zhe Wang. HermiT: An OWL 2 Reasoner. Journal of Automated Reasoning, 53(3):245–269, 2014.
@Article{ghmsw14HermiT, author = "Birte Glimm and Ian Horrocks and Boris Motik and Giorgos Stoilos and Zhe Wang", title = "{HermiT: An OWL 2 Reasoner}", journal = "Journal of Automated Reasoning", year = "2014", volume = "53", number = "3", pages = "245--269", }
This system description paper introduces the OWL 2 reasoner HermiT. The
reasoner is fully compliant with the OWL 2 Direct Semantics as standardised by
the World Wide Web Consortium (W3C). HermiT is based on the hypertableau
calculus, and it supports a wide range of standard and novel optimisations that
improve the performance of reasoning on real-world ontologies. Apart from the
standard OWL 2 reasoning task of entailment checking, HermiT supports several
specialised reasoning services such as class and property classification, as
well as a range of features outside the OWL 2 standard such as DL-safe rules,
SPARQL queries, and description graphs. We discuss the system's architecture,
and we present an overview of the techniques used to support the mentioned
reasoning tasks. We further compare the performance of reasoning in HermiT with
that of FaCT++ and Pellet—two other popular and widely used OWL 2 reasoners.
František Simančík, Boris Motik, and Ian Horrocks. Consequence-Based and Fixed-Parameter Tractable Reasoning in Description Logics. Artificial Intelligence, 209:29–77, 2014.
@Article{smh14consequence-FPT, author = "Franti{\v{s}}ek Siman{\v{c}}{\'i}k and Boris Motik and Ian Horrocks", title = "{Consequence-Based and Fixed-Parameter Tractable Reasoning in Description Logics}", journal = "Artificial Intelligence", year = "2014", volume = "209", pages = "29--77", }
In this paper we investigate the consequence-based algorithms that are nowadays
commonly used for subsumption reasoning with description logic ontologies,
presenting the following novel results. First, we present a very general
consequence-based reasoning algorithm that can be instantiated so as to capture
the essential features of the known algorithms. Second, we use this algorithm
to develop a novel framework for a quantitative and parametric analysis of the
complexity of subsumption reasoning in description logic ontologies. Our
approach is based on the notion of a decomposition—a graph-like
structure that, roughly speaking, summarizes the models relevant during
ontology reasoning. We identify width and length as decomposition
parameters that determine the "level" of combinatorial reasoning. We prove
that, when parameterized by a decomposition, our consequence-based algorithm
runs in time that is fixed-parameter tractable in width and length. Third, we
briefly discuss how length and width characterize the behavior of tableau
algorithms. Fourth, we show that the width and length of existing ontologies
are reasonably small, which, we believe, explains the good performance of
consequence-based algorithms in practice. We thus obtain a sound foundation for
a practical implementation of ontology reasoners, as well as a way to analyze
the reasoners' performance.
Bernardo Cuenca Grau, Ian Horrocks, Markus Krötzsch, Clemens Kupke, Despoina Magka, Boris Motik, and Zhe Wang. Acyclicity Notions for Existential Rules and Their Application to Query Answering in Ontologies. Journal of Artificial Intelligence Research, 47:741–808, 2013.
@Article{ghkkmmw13acyclicity-journal, author = "Bernardo Cuenca Grau and Ian Horrocks and Markus Kr{\"o}tzsch and Clemens Kupke and Despoina Magka and Boris Motik and Zhe Wang", title = "{Acyclicity Notions for Existential Rules and Their Application to Query Answering in Ontologies}", journal = "Journal of Artificial Intelligence Research", volume = "47", pages = "741--808", year = "2013", }
Answering conjunctive queries (CQs) over a set of facts extended with
existential rules is a prominent problem in knowledge representation and
databases. This problem can be solved using the chase algorithm, which
extends the given set of facts with fresh facts in order to satisfy the rules.
If the chase terminates, then CQs can be evaluated directly in the resulting
set of facts. The chase, however, does not terminate necessarily, and checking
whether the chase terminates on a given set of rules and facts is undecidable.
Numerous acyclicity notions were proposed as sufficient conditions for
chase termination. In this paper, we present two new acyclicity notions called
model-faithful acyclicity (MFA) and model-summarising
acyclicity (MSA). Furthermore, we investigate the landscape of the known
acyclicity notions and establish a complete taxonomy of all notions known to
us. Finally, we show that MFA and MSA generalise most of these notions.
Existential rules are closely related to the Horn fragments of the OWL 2
ontology language; furthermore, several prominent OWL 2 reasoners implement CQ
answering by using the chase to materialise all relevant facts. In
order to avoid termination problems, many of these systems handle only the OWL
2 RL profile of OWL 2; furthermore, some systems go beyond OWL 2 RL, but
without any termination guarantees. In this paper we also investigate whether
various acyclicity notions can provide a principled and practical solution to
these problems. On the theoretical side, we show that query answering for
acyclic ontologies is of lower complexity than for general ontologies. On the
practical side, we show that many of the commonly used OWL 2 ontologies are
MSA, and that the number of facts obtained by materialisation is not too
large. Our results thus suggest that principled development of
materialisation-based OWL 2 reasoners is practically feasible.
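A classic example of the termination problem that such notions address (not specific to MFA or MSA) is the single rule

    person(x) → ∃y. hasParent(x, y) ∧ person(y)

Starting from a single fact person(a), the chase introduces a fresh parent for a, then a fresh parent for that parent, and so on forever; since no sound acyclicity notion can classify such a rule set as acyclic, checks such as MFA and MSA aim to recognise, as broadly as possible, the rule sets on which infinite chase sequences cannot arise.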
Bernardo Cuenca Grau and Boris Motik. Reasoning over Ontologies with Hidden Content: The Import-by-Query Approach. Journal of Artificial Intelligence Research, 45:197–255, 2012.
@Article{gm12ibq-journal, author = "Bernardo Cuenca Grau and Boris Motik", title = "{Reasoning over Ontologies with Hidden Content: The Import-by-Query Approach}", journal = "Journal of Artificial Intelligence Research", volume = "45", pages = "197--255", year = "2012", }
There is currently a growing interest in techniques for hiding parts of the
signature of an ontology K_h that is being reused by another ontology
K_v. Towards this goal, in this paper we propose the
import-by-query framework, which makes the content of K_h
accessible through a limited query interface. If K_v reuses the
symbols from K_h in a certain restricted way, one can reason over
K_v ∪ K_h by accessing only K_v and the query
interface. We map out the landscape of the import-by-query problem. In
particular, we outline the limitations of our framework and prove that certain
restrictions on the expressivity of K_h and the way in which
K_v reuses symbols from K_h are strictly necessary to enable
reasoning in our setting. We also identify cases in which reasoning is
possible and we present suitable import-by-query reasoning algorithms.
Bernardo Cuenca Grau, Boris Motik, Giorgos Stoilos, and Ian Horrocks. Completeness Guarantees for Incomplete Ontology Reasoners: Theory and Practice. Journal of Artificial Intelligence Research, 43:419–476, 2012.
@Article{gmsh12completeness-guarantees, author = "Bernardo Cuenca Grau and Boris Motik and Giorgos Stoilos and Ian Horrocks", title = "{Completeness Guarantees for Incomplete Ontology Reasoners: Theory and Practice}", journal = "Journal of Artificial Intelligence Research", volume = "43", pages = "419--476", year = "2012", }
To achieve scalability of query answering, the developers of Semantic Web
applications are often forced to use incomplete OWL 2 reasoners, which
fail to derive all answers for at least one query, ontology, and data set. The
lack of completeness guarantees, however, may be unacceptable for applications
in areas such as health care and defence, where missing answers can adversely
affect the application's functionality. Furthermore, even if an application
can tolerate some level of incompleteness, it is often advantageous to
estimate how many and what kind of answers are being lost.
In this paper, we present a novel logic-based framework that allows one to
check whether a reasoner is complete for a given query Q and ontology
T—that is, whether the reasoner is guaranteed to compute all answers to
Q w.r.t. T and an arbitrary data set A. Since ontologies and typical
queries are often fixed at application design time, our approach allows
application developers to check whether a reasoner known to be incomplete in
general is actually complete for the kinds of input relevant for the
application.
We also present a technique that, given a query Q, an ontology T, and
reasoners R1 and R2 that satisfy certain assumptions, can be used to
determine whether, for each data set A, reasoner R1 computes more
answers to Q w.r.t. T and A than reasoner R2. This allows
application developers to select the reasoner that provides the highest degree
of completeness for Q and T that is compatible with the application's
scalability requirements.
Our results thus provide a theoretical and practical foundation for the design
of future ontology-based information systems that maximise scalability while
minimising or even eliminating incompleteness of query answers.
Birte Glimm, Ian Horrocks, Boris Motik, Rob Shearer, and Giorgos Stoilos. A Novel Approach to Ontology Classification. Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 14:84–101, 2012.
@Article{ghms12ontology-classification, author = "Birte Glimm and Ian Horrocks and Boris Motik and Rob Shearer and Giorgos Stoilos", title = "{A Novel Approach to Ontology Classification}", journal = "Journal of Web Semantics: Science, Services and Agents on the World Wide Web", volume = "14", pages = "84--101", year = "2012", }
Ontology classification—the computation of the subsumption hierarchies for
classes and properties—is a core reasoning service provided by all OWL
reasoners known to us. A popular algorithm for computing the class hierarchy
is the so-called Enhanced Traversal (ET) algorithm. In this paper we present a
new classification algorithm that attempts to address certain shortcomings of
ET and improve its performance. Apart from classification of classes, we also
consider object and data property classification. Using several simple
examples, we show that the algorithms commonly used to implement these tasks
are incomplete even for relatively weak ontology languages. Furthermore, we
show that property classification can be reduced to class classification,
which allows us to classify properties using our optimised algorithm. We
implemented all our algorithms in the OWL reasoner HermiT. The results of our
performance evaluation show significant performance improvements on several
well-known ontologies.
Boris Motik. Representing and Querying Validity Time in RDF and OWL: A Logic-Based Approach. Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 12–13:3–21, 2012.
@Article{m12validity-time, author = "Boris Motik", title = "{Representing and Querying Validity Time in RDF and OWL: A Logic-Based Approach}", journal = "Journal of Web Semantics: Science, Services and Agents on the World Wide Web", volume = "12--13", pages = "3--21", year = "2012", }
RDF(S) and OWL 2 can currently represent only static information. In practice,
however, the truth of statements often changes with time. Semantic Web
applications often need to represent such changes and reason about them. In
this paper we present a logic-based approach for representing validity time in
RDF(S) and OWL 2. Unlike the existing proposals, our approach is applicable to
nondeterministic entailment relations and/or entailment relations that involve
existential quantification, such as the OWL 2 Direct Entailment and the OWL 2
RDF-Based Entailment. We also present an extension of SPARQL that can be used
to query temporal RDF(S) and OWL 2. Moreover, we present a general query
evaluation algorithm that can be used with all entailment relations used in
the Semantic Web. Finally, we present two optimizations of the algorithm that
are applicable to entailment relations characterized by a set of deterministic
rules, such as RDF(S) and OWL 2 RL/RDF Entailment.
Boris Motik and Riccardo Rosati. Reconciling Description Logics and Rules. Journal of the ACM, 57(5):1–62, 2010.
@Article{mr10mknf-rules, author = "Boris Motik and Riccardo Rosati", title = "{Reconciling Description Logics and Rules}", journal = "Journal of the ACM", volume = "57", number = "5", pages = "1--62", year = "2010", }
Description logics (DLs) and rules are formalisms that emphasize
different aspects of knowledge representation: whereas DLs are
focused on specifying and reasoning about conceptual knowledge,
rules are focused on nonmonotonic inference. Many applications,
however, require features of both DLs and rules. Developing a
formalism that integrates DLs and rules would be a natural outcome
of a large body of research in knowledge representation and
reasoning of the last two decades; however, achieving this goal is
very challenging and the approaches proposed thus far have not fully
reached it. In this paper, we present a hybrid formalism of
MKNF+ knowledge bases, which integrates DLs and rules in
a coherent semantic framework. Achieving seamless integration is
nontrivial, since DLs use an open-world assumption, while the rules
are based on a closed-world assumption. We overcome this discrepancy
by basing the semantics of our formalism on the logic of minimal
knowledge and negation as failure (MKNF) by Lifschitz. We present
several algorithms for reasoning with MKNF+ knowledge bases,
each suitable to different kinds of rules, and establish tight
complexity bounds.
Boris Motik, Rob Shearer, and Ian Horrocks. Hypertableau Reasoning for Description Logics. Journal of Artificial Intelligence Research, 36:165–228, 2009.
@Article{msh09hypertableau, author = "Boris Motik and Rob Shearer and Ian Horrocks", title = "{Hypertableau Reasoning for Description Logics}", journal = "Journal of Artificial Intelligence Research", year = "2009", volume = "36", pages = "165--228", }
We present a novel reasoning calculus for the description logic
SHOIQ+—a knowledge representation formalism with applications
in areas such as the Semantic Web. Unnecessary nondeterminism and
the construction of large models are two primary sources of
inefficiency in the tableau-based reasoning calculi used in
state-of-the-art reasoners. In order to reduce nondeterminism, we
base our calculus on hypertableau and hyperresolution calculi, which
we extend with a blocking condition to ensure termination. In order
to reduce the size of the constructed models, we introduce
anywhere pairwise blocking. We also present an improved
nominal introduction rule that ensures termination in the presence
of nominals, inverse roles, and number restrictions—a combination
of DL constructs that has proven notoriously difficult to handle.
Our implementation shows significant performance improvements over
state-of-the-art reasoners on several well-known ontologies.
Héctor Pérez-Urbina, Boris Motik, and Ian Horrocks. Tractable Query Answering and Rewriting under Description Logic Constraints. Journal of Applied Logic, 8(2):151–232, 2009.
@Article{pumh09rewriting, author = "H{\'e}ctor P{\'e}rez-Urbina and Boris Motik and Ian Horrocks", title = "{Tractable Query Answering and Rewriting under Description Logic Constraints}", journal = "Journal of Applied Logic", year = "2009", volume = "8", number = "2", pages = "151--232", }
Answering queries over an incomplete database w.r.t. a set of
constraints is an important computational task with applications in
fields as diverse as information integration and metadata management
in the semantic Web. Description Logics (DLs) are constraint
languages that have been extensively studied with the
goal of providing useful modeling constructs while keeping the query
answering problem decidable. For many DLs, query answering under
constraints can be solved via query rewriting: given a conjunctive
query Q and a set of DL constraints T, the query Q can be
transformed into a datalog query Q_T that takes into account
the semantic consequences of T; then, to obtain answers to Q
w.r.t. T and some (arbitrary) database instance A, one
can simply evaluate Q_T over A using existing
(deductive) database technology, without taking T into
account. In this paper, we present a novel query rewriting algorithm
that handles constraints modeled in the DL ELHIO¬ and use it to
show that answering conjunctive queries in this setting is
PTime-complete w.r.t. data complexity. Our algorithm deals with
various description logics of the EL and DL-Lite families and is
worst-case optimal w.r.t. data complexity for all of them.
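As a minimal illustration of the rewriting idea (a stock DL-Lite-style example, not drawn from the article), consider the constraint and query

    Professor ⊑ ∃teaches
    Q(x) ← teaches(x, y)

The rewriting then also contains the datalog rule Q(x) ← Professor(x), so evaluating the rewritten query over an arbitrary database instance returns every individual asserted to be a Professor, even if no teaches fact for that individual is stored.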
Boris Motik, Bernardo Cuenca Grau, Ian Horrocks, and Ulrike Sattler. Representing Ontologies Using Description Logics, Description Graphs, and Rules. Artificial Intelligence, 173(14):1275–1309, 2009.
@Article{mghs09graphs-journal, author = "Boris Motik and Bernardo Cuenca Grau and Ian Horrocks and Ulrike Sattler", title = "{Representing Ontologies Using Description Logics, Description Graphs, and Rules}", journal = "Artificial Intelligence", year = "2009", volume = "173", number = "14", pages = "1275--1309", }
Description logics (DLs) are a family of state-of-the-art knowledge
representation languages, and their expressive power has been
carefully crafted to provide useful knowledge modeling primitives
while allowing for practically effective decision procedures for the
basic reasoning problems. Recent experience with DLs, however, has
shown that their expressivity is often insufficient to accurately
describe structured objects—objects whose parts are
interconnected in arbitrary, rather than tree-like ways. DL
knowledge bases describing structured objects are therefore usually
underconstrained, which precludes the entailment of certain
consequences and causes performance problems during reasoning.
To address this problem, we propose an extension of DL languages
with description graphs—a knowledge modeling construct that
can accurately describe objects with parts connected in arbitrary
ways. Furthermore, to enable modeling the conditional aspects of
structured objects, we also extend DLs with rules. We present an
in-depth study of the computational properties of such a formalism.
In particular, we first identify the sources of undecidability of
the general, unrestricted formalism. Based on that analysis, we then
investigate several restrictions of the general formalism that make
reasoning decidable. We present practical evidence that such a logic
can be used to model nontrivial structured objects. Finally, we
present a practical decision procedure for our formalism, as well as
tight complexity bounds.
Boris Motik, Ian Horrocks, and Ulrike Sattler. Bridging the Gap Between OWL and Relational Databases. Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 7(2):74–89, 2009.
@Article{mhs09owl-dbs, author = "Boris Motik and Ian Horrocks and Ulrike Sattler", title = "{Bridging the Gap Between OWL and Relational Databases}", journal = "Journal of Web Semantics: Science, Services and Agents on the World Wide Web", year = "2009", volume = "7", number = "2", pages = "74--89", }
Despite similarities between the Web Ontology Language (OWL) and
schema languages traditionally used in relational databases, systems
based on these languages exhibit quite different behavior in
practice. The schema statements in relational databases are usually
interpreted as integrity constraints and are used to check
whether the data is structured according to the schema. OWL allows
for axioms that resemble integrity constraints; however, these
axioms are interpreted under the standard first-order semantics and
not as checks. This often leads to confusion and is inappropriate in
certain data-centric applications. To explain the source of this
confusion, in this paper we compare OWL and relational databases
w.r.t. their schema languages and basic computational problems.
Based on this comparison, we extend OWL with integrity constraints
that capture the intuition behind similar statements in relational
databases. We show that, if the integrity constraints are satisfied,
they need not be considered while answering a broad range of
positive queries. Finally, we discuss several algorithms for
checking integrity constraint satisfaction, each of which is
suitable to different types of OWL knowledge bases.
Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, Bijan Parsia, Peter Patel-Schneider, and Ulrike Sattler. OWL 2: The next step for OWL. Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 6(4):309–322, 2008.
@Article{ghmppss08next-steps, author = "Bernardo Cuenca Grau and Ian Horrocks and Boris Motik and Bijan Parsia and Peter Patel-Schneider and Ulrike Sattler", title = "{OWL 2: The next step for OWL}", journal = "Journal of Web Semantics: Science, Services and Agents on the World Wide Web", year = "2008", volume = "6", number = "4", pages = "309--322", }
Since achieving W3C recommendation status in 2004, the Web Ontology
Language (OWL) has been successfully applied to many problems in
computer science. Practical experience with OWL has been quite
positive in general; however, it has also revealed room for
improvement in several areas. We systematically analyze the
identified shortcomings of OWL, such as expressivity issues,
problems with its syntaxes, and deficiencies in the definition of
OWL species. Furthermore, we present an overview of OWL 2—an
extension to and revision of OWL that is currently being developed
within the W3C OWL Working Group. Many aspects of OWL have been
thoroughly reengineered in OWL 2, thus producing a robust platform
for future development of the language.
Ullrich Hustadt, Boris Motik, and Ulrike Sattler. Deciding Expressive Description Logics in the Framework of Resolution. Information & Computation, 206(5):579–601, 2008.
@Article{hms08deciding, author = "Ullrich Hustadt and Boris Motik and Ulrike Sattler", title = "{Deciding Expressive Description Logics in the Framework of Resolution}", journal = "Information \& Computation", year = "2008", volume = "206", number = "5", pages = "579--601", }
We present a decision procedure for the description logic SHIQ
based on the basic superposition calculus, and show that it runs in
exponential time for unary coding of numbers. To derive our
algorithm, we extend basic superposition with a decomposition
inference rule, which transforms conclusions of certain inferences
into equivalent, but simpler clauses. This rule can be used for
general first-order theorem proving with any resolution-based
calculus compatible with the standard notion of redundancy.
Yevgeny Kazakov and Boris Motik. A Resolution-Based Decision Procedure for SHOIQ. Journal of Automated Reasoning, 40(2–3):89–116, 2008.
@Article{km07shoiq-journal, author = "Yevgeny Kazakov and Boris Motik", title = "{A Resolution-Based Decision Procedure for SHOIQ}", journal = "Journal of Automated Reasoning", year = "2008", volume = "40", number = "2--3", pages = "89--116", }
We present a resolution-based decision procedure for the description
logic SHOIQ—the logic underlying the Semantic Web ontology
language OWL-DL. Our procedure is goal-oriented, and it naturally
extends a similar procedure for SHIQ, which has proven itself in
practice. Extending this procedure to SHOIQ using existing
techniques is not straightforward because of nominals, number
restrictions, and inverse roles—a combination known to cause
termination problems. We overcome this difficulty by using the basic
superposition calculus extended with custom simplification rules.
Boris Motik. On the Properties of Metamodeling in OWL. Journal of Logic and Computation, 17(4):617–637, 2007.
@Article{motik07metamodeling-journal, author = "Boris Motik", title = "{On the Properties of Metamodeling in OWL}", journal = "Journal of Logic and Computation", year = "2007", volume = "17", number = "4", pages = "617--637", }
A common practice in conceptual modeling is to separate the
conceptual from the data model. Although very intuitive, this
approach is inadequate for many complex domains, in which the
borderline between the two models is not clear-cut. Therefore,
OWL-Full, the most expressive of the Semantic Web ontology
languages, allows us to combine the conceptual and the data model by
a feature we refer to as metamodeling. In this paper, we show
that the semantics of metamodeling adopted in OWL-Full leads to the
undecidability of basic inference problems due to the free usage of
the built-in vocabulary. Based on this result, we propose two
alternative semantics for metamodeling: the contextual and
the HiLog semantics. We present several examples showing how
to use the latter semantics to axiomatize the interaction between
concepts and metaconcepts. Finally, we show that SHOIQ(D)—the
description logic underlying OWL-DL—is still decidable when
extended with metamodeling under either semantics.
Ullrich Hustadt, Boris Motik, and Ulrike Sattler. Reasoning in Description Logics by a Reduction to Disjunctive Datalog. Journal of Automated Reasoning, 39(3):351–384, 2007.
@Article{hms07reasoning, author = "Ullrich Hustadt and Boris Motik and Ulrike Sattler", title = "{Reasoning in Description Logics by a Reduction to Disjunctive Datalog}", journal = "Journal of Automated Reasoning", year = "2007", volume = "39", number = "3", pages = "351--384", }
As applications of description logics proliferate, efficient
reasoning with knowledge bases containing many assertions becomes
ever more important. For such cases, we developed a novel reasoning
algorithm that reduces a SHIQ knowledge base to a disjunctive
datalog program while preserving the set of ground consequences.
Queries can then be answered in the resulting program while reusing
existing and practically proven optimization techniques of deductive
databases, such as join-order optimizations or magic sets. Moreover,
we use our algorithm to derive precise data complexity bounds: we
show that SHIQ is data complete for NP, and we identify an
expressive fragment of SHIQ with polynomial data complexity.
Boris Motik, Ulrike Sattler, and Rudi Studer. Query Answering for OWL-DL with Rules. Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 3(1):41–60, 2005.
@Article{mss05query-journal, author = "Boris Motik and Ulrike Sattler and Rudi Studer", title = "{Query Answering for OWL-DL with Rules}", journal = "Journal of Web Semantics: Science, Services and Agents on the World Wide Web", year = "2005", volume = "3", number = "1", pages = "41--60", }
Both OWL-DL and function-free Horn rules are decidable fragments of
first-order logic with interesting, yet orthogonal expressive power.
A combination of OWL-DL and rules is desirable for the Semantic Web;
however, it might easily lead to the undecidability of interesting
reasoning problems. Here, we present a decidable such
combination where rules are required to be DL-safe: each
variable in the rule is required to occur in a non-DL-atom in the
rule body. We discuss the expressive power of such a combination and
present an algorithm for query answering in the related logic SHIQ
extended with DL-safe rules, based on a reduction to disjunctive
programs.
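For illustration (an example in the spirit of the DL-safety condition, with simplified syntax), the rule

    Uncle(z, x) ← hasParent(x, y), hasBrother(y, z), O(x), O(y), O(z)

is DL-safe when O is a non-DL predicate asserted exactly for the explicitly named individuals: every variable occurs in a non-DL atom, so the rule applies only to known individuals rather than to anonymous individuals whose existence is merely implied by the ontology, which is what keeps reasoning decidable.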
Raphael Volz, Steffen Staab, and Boris Motik. Incrementally Maintaining Materializations of Ontologies Stored in Logic Databases. Journal of Data Semantics II, 3360:1–34, 2005. LNCS, Springer.
@Article{vsm05incrementally, author = "Raphael Volz and Steffen Staab and Boris Motik", title = "{Incrementally Maintaining Materializations of Ontologies Stored in Logic Databases}", journal = "Journal of Data Semantics II", year = "2005", volume = "3360", note = "LNCS, Springer", pages = "1--34", }
This article presents a technique to incrementally maintain materializations of ontological entailments. Materialization consists in precomputing and storing a set of implicit entailments, such that frequent and/or crucial queries to the ontology can be solved more efficiently. The central problem that arises with materialization is its maintenance when axioms change, viz. the process of propagating changes in explicit axioms to the stored implicit entailments.
When considering rule-enabled ontology languages that are operationalized in logic databases, we can distinguish two types of changes. Changes to the ontology will typically manifest themselves in changes to the rules of the logic program, whereas changes to facts will typically lead to changes in the extensions of logical predicates. The incremental maintenance of the latter type of changes has been studied extensively in the deductive database context, and we apply the technique proposed in [30] for our purpose. The former type of changes has, however, not been tackled before.
In this article we elaborate on our previous papers [32, 33], which extend the approach of [30] to deal with changes in the logic program. Our approach is not limited to a particular ontology language but can be generally applied to arbitrary ontology languages that can be translated to Datalog programs, such as O-Telos, F-Logic [16], RDF(S), or Description Logic Programs [34].
Alexander Maedche, Boris Motik, and Ljiljana Stojanovic. Managing multiple and distributed ontologies on the Semantic Web. VLDB Journal, 12(4):286–302, 2003.
@Article{mms03managing, author = "Alexander Maedche and Boris Motik and Ljiljana Stojanovic", title = "{Managing multiple and distributed ontologies on the Semantic Web}", journal = "VLDB Journal", year = "2003", volume = "12", number = "4", pages = "286--302", }
In traditional software systems, significant attention is devoted to keeping modules well separated and coherent with respect to functionality, thus ensuring that changes in the system are localized to a handful of modules. Reuse is seen as the key method for reaching that goal. Ontology-based systems on the Semantic Web are just a special class of software systems, so the same principles apply. In this article, we present an integrated framework for managing multiple and distributed ontologies on the Semantic Web. It is based on a representation model for ontologies that trades off expressivity and tractability. In our framework, we provide features for reusing existing ontologies and for evolving them while retaining consistency. The approach is implemented within KAON, the Karlsruhe Ontology and Semantic Web tool suite.
Alexander Maedche, Boris Motik, Ljiljana Stojanovic, Rudi Studer, and Raphael Volz. Ontologies for Enterprise Knowledge Management. IEEE Intelligent Systems, 18(2):26–33, 2003.
@Article{mmssv03ontologies, author = "Alexander Maedche and Boris Motik and Ljiljana Stojanovic and Rudi Studer and Raphael Volz", title = "{Ontologies for Enterprise Knowledge Management}", journal = "IEEE Intelligent Systems", volume = "18", number = "2", year = "2003", pages = "26--33", }
Alexander Maedche and Boris Motik. Repräsentations- und Anfragesprachen für Ontologien—eine Übersicht. Datenbank-Spektrum, 6:43–53, 2003. In German.
@Article{mm03db-spektrum, author = "Alexander Maedche and Boris Motik", title = "{Repr{\"a}sentations- und Anfragesprachen f{\"u}r Ontologien---eine {\"U}bersicht}", journal = "Datenbank-Spektrum", year = "2003", volume = "6", note = "In German", pages = "43--53", }
Semantics plays an important role in many application areas. One application
area of semantic models is information integration, in which heterogeneous
information sources are brought together. This article gives an introduction
to and an overview of the current state of the art in representation and
query languages for semantic models, and presents the Karlsruhe approach KAON
to ontology representation and querying.
In Conferences
Michael Benedikt, Chia-Hsuan Lu, Boris Motik, and Tony Tan. Decidability of Graph Neural Networks via Logical Characterizations. In Karl Bringmann, Martin Grohe, Gabriele Puppis, and Ola Svensson, editors, Proc. of the 51st Int. Colloquium on Automata, Languages, and Programming (ICALP 2024), volume 297 of LIPIcs, pages 127:1–127:20, Tallinn, Estonia, July 8–12 2024. Schloss Dagstuhl—Leibniz-Zentrum für Informatik.
@InProceedings{blmt24logical-characterizations-GNN, author = "Michael Benedikt and Chia-Hsuan Lu and Boris Motik and Tony Tan", editor = "Karl Bringmann and Martin Grohe and Gabriele Puppis and Ola Svensson", title = "{Decidability of Graph Neural Networks via Logical Characterizations}", booktitle = "Proc. of the 51st Int. Colloquium on Automata, Languages, and Programming (ICALP 2024)", month = "July 8--12", year = "2024", address = "Tallinn, Estonia", series = "LIPIcs", volume = "297", pages = "127:1--127:20", publisher = "Schloss Dagstuhl---Leibniz-Zentrum f{\"u}r Informatik", }
We present results concerning the expressiveness and decidability of a popular graph learning formalism, graph neural networks (GNNs), exploiting connections with logic. We use a family of recently-discovered decidable logics involving "Presburger quantifiers''. We show how to use these logics to
measure the expressiveness of classes of GNNs, in some cases getting exact correspondences between the expressiveness of logics and GNNs. We also employ the logics, and the techniques used to analyze them, to obtain decision procedures
for verification problems over GNNs. We complement this with undecidability results for static analysis problems involving the logics, as well as for GNN verification problems.
David Tena Cucala, Bernardo Cuenca Grau, Boris Motik, and Egor V. Kostylev. On the Correspondence Between Monotonic Max-Sum GNNs and Datalog. In Pierre Marquis, Tran Cao Son, and Gabriele Kern-Isberner, editors, Proc. of the 20th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2023), pages 658–667, Rhodes, Greece, September 2–8 2023.
@InProceedings{tccgmk23monotonic-max-sum, author = "David {Tena Cucala} and Bernardo {Cuenca Grau} and Boris Motik and Egor V. Kostylev", title = "{On the Correspondence Between Monotonic Max-Sum GNNs and Datalog}", booktitle = "Proc. of the 20th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2023)", year = "2023", editor = "Pierre Marquis and Tran Cao Son and Gabriele Kern-Isberner", pages = "658--667", address = "Rhodes, Greece", month = "September 2--8", }
Although there has been significant interest in applying machine learning
techniques to structured data, the expressivity (i.e., a description of
what can be learned) of such techniques is still poorly understood. In this
paper, we study data transformations based on graph neural networks
(GNNs). First, we note that the choice of how a dataset is encoded into a
numeric form processable by a GNN can obscure the characterisation of a model's
expressivity, and we argue that a canonical encoding provides an
appropriate basis. Second, we study the expressivity of monotonic
max-sum GNNs, which cover a subclass of GNNs with max and sum aggregation
functions. We show that, for each such GNN, one can compute a Datalog program
such that applying the GNN to any dataset produces the same facts as a single
round of application of the program's rules to the dataset. Monotonic max-sum
GNNs can sum an unbounded number of feature vectors which can result in
arbitrarily large feature values, whereas rule application requires only a
bounded number of constants. Hence, our result shows that the unbounded
summation of monotonic max-sum GNNs does not increase their expressive power.
Third, we sharpen our result to the subclass of monotonic max GNNs,
which use only the max aggregation function, and identify a corresponding class
of Datalog programs.
Ian Horrocks, Jordi Olivares, Valerio Cocchi, Boris Motik, and Dylan Roy. The Dow Jones Knowledge Graph. In Paul Groth, Maria-Esther Vidal, Fabian M. Suchanek, Pedro A. Szekely, Pavan Kapanipathi, Catia Pesquita, Hala Skaf-Molli, and Minna Tamper, editors, Proc. of the 19th Extended Semantic Web Conference (ESWC 2022), volume 13261 of LNCS, pages 427–443, Hersonissos, Crete, Greece, May 29–June 2 2022. Springer.
@InProceedings{hocmr22dow-jones, author = "Ian Horrocks and Jordi Olivares and Valerio Cocchi and Boris Motik and Dylan Roy", editor = "Paul Groth and Maria-Esther Vidal and Fabian M. Suchanek and Pedro A. Szekely and Pavan Kapanipathi and Catia Pesquita and Hala Skaf-Molli and Minna Tamper", title = "{The Dow Jones Knowledge Graph}", booktitle = "Proc. of the 19th Extended Semantic Web Conference (ESWC 2022)", month = "May 29--June 2", address = "Hersonissos, Crete, Greece", pages = "427--443", year = "2022", series = "LNCS", volume = "13261", publisher = "Springer", }
Dow Jones is a leading provider of market, industry and portfolio intelligence
serving a wide range of financial applications including asset management,
trading, analysis and bankruptcy/restructuring. The information needed to
provide such intelligence comes from a variety of heterogeneous data sources.
Integrating this information and answering complex queries over it presents
both conceptual and computational challenges. In order to address these
challenges, Dow Jones have used the \RDFox{} system to integrate the various
sources in a large RDF knowledge graph. The knowledge graph is being used to
power an expanding range of internal processes and market intelligence products.
David Tena Cucala, Bernardo Cuenca Grau, and Boris Motik. Faithful Approaches to Rule Learning. In Gabriele Kern-Isberner and Thomas Meyer, editors, Proc. of the 19th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2022), pages 484–493, Haifa, Israel, July 31–August 5 2022.
@InProceedings{tccgm22faithful-approaches, author = "David {Tena Cucala} and Bernardo {Cuenca Grau} and Boris Motik", title = "{Faithful Approaches to Rule Learning}", booktitle = "Proc. of the 19th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR 2022)", year = "2022", editor = "Gabriele Kern-Isberner and Thomas Meyer", pages = "484--493", address = "Haifa, Israel", month = "July 31--August 5", }
Rule learning involves developing machine learning models that can be
applied to a set of logical facts to predict additional facts, as well as
providing methods for extracting from the learned model a set of logical rules
that explain symbolically the model's predictions. Existing such approaches,
however, do not describe formally the relationship between the model's
predictions and the derivations of the extracted rules; rather, it is often
claimed without justification that the extracted rules `approximate' or
`explain' the model, and rule quality is evaluated by manual inspection. In
this paper, we study the formal properties of Neural-LP—a prominent rule
learning approach. We show that the rules extracted from Neural-LP models can
be both unsound and incomplete: on the same input dataset, the extracted rules
can derive facts not predicted by the model, and the model can make predictions
not derived by the extracted rules. We also propose a modification to the
Neural-LP model that ensures that the extracted rules are always sound and
complete. Finally, we show that, on several prominent benchmarks, the
classification performance of our modified model is comparable to that of the
standard Neural-LP model. Thus, faithful learning of rules is feasible
from both a theoretical and practical point of view.
David Tena Cucala, Bernardo Cuenca Grau, Egor V. Kostylev, and Boris Motik. Explainable GNN-Based Models over Knowledge Graphs. In Proc. of the 10th Int. Conf. on Learning Representations (ICLR 2022), Virtual Event, April 25–29 2022. OpenReview.net.
@InProceedings{tccgkm22explainable-gnn-models, author = "David {Tena Cucala} and Bernardo {Cuenca Grau} and Egor V. Kostylev and Boris Motik", title = "{Explainable GNN-Based Models over Knowledge Graphs}", booktitle = "Proc. of the 10th Int. Conf. on Learning Representations (ICLR 2022)", year = "2022", address = "Virtual Event", publisher = "OpenReview.net", month = "April 25--29", }
Graph Neural Networks (GNNs) are often used to learn transformations of graph
data. While effective in practice, such approaches make predictions via numeric
manipulations so their output cannot be easily explained symbolically. We
propose a new family of GNN-based transformations of graph data that can be
trained effectively, but where all predictions can be explained symbolically as
logical inferences in Datalog—a well-known rule-based formalism. In
particular, we show how to encode an input knowledge graph into a graph with
numeric feature vectors, process this graph using a GNN, and decode the result
into an output knowledge graph. We use a new class of monotonic GNNs
(MGNNs) to ensure that this process is equivalent to a round of application of
a set of Datalog rules. We also show that, given an arbitrary MGNN, we can
automatically extract rules that completely characterise the transformation. We
evaluate our approach by applying it to classification tasks in knowledge graph
completion.
Temitope Ajileye, Boris Motik, and Ian Horrocks. Streaming Partitioning of RDF Graphs for Datalog Reasoning. In Ruben Verborgh, Katja Hose, Heiko Paulheim, Pierre-Antoine Champin, Maria Maleshkova, Óscar Corcho, Petar Ristoski, and Mehwish Alam, editors, Proc. of the 18th European Semantic Web Conference (ESWC 2021), volume 12731 of LNCS, pages 3–22, Hersonissos, Greece, June 6–10 2021. Springer.
@InProceedings{amh21streaming-partitioning-reasoning, author = "Temitope Ajileye and Boris Motik and Ian Horrocks", title = "{Streaming Partitioning of RDF Graphs for Datalog Reasoning}", booktitle = "Proc. of the 18th European Semantic Web Conference (ESWC 2021)", year = "2021", editor = "Ruben Verborgh and Katja Hose and Heiko Paulheim and Pierre-Antoine Champin and Maria Maleshkova and {\'O}scar Corcho and Petar Ristoski and Mehwish Alam", pages = "3--22", address = "Hersonissos, Greece", month = "June 6--10", publisher = "Springer", series = "LNCS", volume = "12731", }
A cluster of servers is often used to reason over RDF graphs whose size exceeds
the capacity of a single server. While many distributed approaches to reasoning
have been proposed, the problem of data partitioning has received little
attention thus far. In practice, data is usually partitioned by a variant of
hashing, which is very simple, but it does not pay attention to data locality.
Locality-aware partitioning approaches have been considered, but they usually
process the entire dataset on a single server. In this paper, we present two
new RDF partitioning strategies. Both are inspired by recent streaming
graph partitioning algorithms, which partition a graph while keeping only a
small subset of the graph in memory. We have evaluated our approaches
empirically against hash and min-cut partitioning. Our results suggest that our
approaches can significantly improve reasoning performance, but without
unrealistic demands on the memory of the servers used for partitioning.
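As a rough illustration of the streaming setting described above, and not of the two strategies proposed in the paper, the following Python sketch contrasts plain hash partitioning with a greedy, locality-aware assignment; the scoring heuristic and all names are our own.
```python
# Illustrative sketch only: hash partitioning vs. a greedy, locality-aware
# streaming assignment of RDF triples to k servers. Real streaming partitioners
# bound the per-server state far more aggressively than the sets used here.

def hash_partition(triples, k):
    """Baseline: assign each triple to a server by hashing its subject."""
    parts = [[] for _ in range(k)]
    for s, p, o in triples:
        parts[hash(s) % k].append((s, p, o))
    return parts

def greedy_stream_partition(triples, k, capacity):
    """Assign each triple as it streams past, preferring servers that already
    hold its subject or object, penalised by current load."""
    parts = [[] for _ in range(k)]
    seen = [set() for _ in range(k)]          # per-server vertex summaries
    for s, p, o in triples:
        def score(i):
            locality = (s in seen[i]) + (o in seen[i])
            balance = len(parts[i]) / capacity
            return locality - balance
        best = max(range(k), key=score)
        parts[best].append((s, p, o))
        seen[best].update((s, o))
    return parts
```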
Temitope Ajileye, Boris Motik, and Ian Horrocks. Datalog Materialisation in Distributed RDF Stores with Dynamic Data Exchange. In Chiara Ghidini, Olaf Hartig, Maria Maleshkova, Vojtech Svátek, Isabel F. Cruz, Aidan Hogan, Jie Song, Maxime Lefrançois, and Fabien Gandon, editors, Proc. of the 18th Int. Semantic Web Conf. (ISWC 2019), volume 11778 of LNCS, pages 21–37, Auckland, New Zealand, October 26–30 2019. Springer.
@InProceedings{amh19materialisation-data-exchange, author = "Temitope Ajileye and Boris Motik and Ian Horrocks", title = "{Datalog Materialisation in Distributed RDF Stores with Dynamic Data Exchange}", booktitle = "Proc. of the 18th Int. Semantic Web Conf. (ISWC 2019)", year = "2019", editor = "Chiara Ghidini and Olaf Hartig and Maria Maleshkova and Vojtech Sv{\'{a}}tek and Isabel F. Cruz and Aidan Hogan and Jie Song and Maxime Lefran{\c{c}}ois and Fabien Gandon", pages = "21--37", address = "Auckland, New Zealand", month = "October 26--30", publisher = "Springer", series = "LNCS", volume = "11778", }
Several centralised RDF systems support datalog reasoning by precomputing and
storing all logically implied triples using the well-known semina\"{i}ve
algorithm. Large RDF datasets often exceed the capacity of centralised RDF
systems, and a common solution is to distribute the datasets in a cluster of
shared-nothing servers. While numerous distributed query answering techniques
are known, distributed semina\"{i}ve evaluation of arbitrary datalog rules is
less understood. In fact, most distributed RDF stores either support no
reasoning or can handle only limited datalog fragments. In this paper, we
extend the dynamic data exchange approach for distributed query
answering by \citet{pmnh18dynamic-rdf-stores} to a reasoning algorithm that can
handle arbitrary rules while preserving important properties such as
nonrepetition of inferences. We also show empirically that our algorithm scales
well to very large RDF datasets.
Pan Hu, Jacopo Urbani, Boris Motik, and Ian Horrocks. Datalog Reasoning over Compressed RDF Knowledge Bases. In Wenwu Zhu, Dacheng Tao, Xueqi Cheng, Peng Cui, Elke A. Rundensteiner, David Carmel, Qi He, and Jeffrey Xu Yu, editors, Proc. of the 28th ACM Int. Conf. on Information and Knowledge Management (CIKM 2019), pages 2065–2068, Beijing, China, November 3–7 2019. ACM.
@InProceedings{humh19compressed-materialisation, author = "Pan Hu and Jacopo Urbani and Boris Motik and Ian Horrocks", title = "{Datalog Reasoning over Compressed RDF Knowledge Bases}", booktitle = "Proc. of the 28th ACM Int. Conf. on Information and Knowledge Management (CIKM 2019)", year = "2019", editor = "Wenwu Zhu and Dacheng Tao and Xueqi Cheng and Peng Cui and Elke A. Rundensteiner and David Carmel and Qi He and Jeffrey Xu Yu", pages = "2065--2068", address = "Beijing, China", month = "November 3--7", publisher = "ACM", }
Materialisation is often used in RDF systems as a preprocessing step to
derive all facts implied by given RDF triples and rules. Although widely used,
materialisation considers all possible rule applications and can use a lot of
memory for storing the derived facts, which can hinder performance. We present
a novel materialisation technique that compresses the RDF triples so that the
rules can sometimes be applied to multiple facts at once, and the derived facts
can be represented using structure sharing. Our technique can thus require less
space, as well as skip certain rule applications. Our experiments show that our
technique can be very effective: when the rules are relatively simple, our
system is both faster and requires less memory than prominent state-of-the-art
RDF systems.
Pan Hu, Boris Motik, and Ian Horrocks. Modular Materialisation of Datalog Programs. In Pascal Van Hentenryck and Zhi-Hua Zhou, editors, Proc. of the 33rd AAAI Conf. on Artificial Intelligence (AAAI 2019), pages 2859–2866, Honolulu, HI, USA, January 27–February 1 2019. AAAI Press.
@InProceedings{hmh18modular-materialisation, author = "Pan Hu and Boris Motik and Ian Horrocks", title = "{Modular Materialisation of Datalog Programs}", booktitle = "Proc. of the 33rd AAAI Conf. on Artificial Intelligence (AAAI 2019)", year = "2019", editor = "Pascal Van Hentenryck and Zhi-Hua Zhou", pages = "2859--2866", address = "Honolulu, HI, USA", month = "January 27--February 1", publisher = "AAAI Press", }
The semina\"ive algorithm can be used to materialise all consequences of a
datalog program, and it also forms the basis for algorithms that incrementally
update a materialisation as the input facts change. Certain (combinations of)
rules, however, can be handled much more efficiently using custom algorithms.
To integrate such algorithms into a general reasoning approach that can handle
arbitrary rules, we propose a modular framework for computing and maintaining a
materialisation. We split a datalog program into modules that can be handled
using specialised algorithms, and we handle the remaining rules using the
semina\"ive algorithm. We also present two algorithms for computing the
transitive and the symmetric–transitive closure of a relation that can be used
within our framework. Finally, we show empirically that our framework can
handle arbitrary datalog programs while outperforming existing approaches,
often by orders of magnitude.
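For readers unfamiliar with the baseline mentioned above, here is a textbook semi-naive computation of the transitive closure of a binary relation, i.e., the kind of task that a specialised module could take over from the generic engine; the code is illustrative only and does not reflect any particular system.
```python
# Illustrative only: semi-naive evaluation of the least fixpoint of
#   T(x, y) <- E(x, y)        T(x, z) <- T(x, y), E(y, z)
# Names and the fact representation are ours.

def transitive_closure_seminaive(edges):
    closure = set(edges)
    delta = set(edges)                      # facts derived in the last round
    while delta:
        new = set()
        for (x, y) in delta:                # join only the delta with E
            for (y2, z) in edges:
                if y == y2 and (x, z) not in closure:
                    new.add((x, z))
        closure |= new
        delta = new
    return closure

print(sorted(transitive_closure_seminaive({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```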
Mark Kaminski, Bernardo Cuenca Grau, Egor V. Kostylev, Boris Motik, and Ian Horrocks. Stratified Negation in Limit Datalog Programs. In Jérôme Lang, editor, Proc. of the 27th Int. Joint Conf. on Artificial Intelligence (IJCAI 2018), pages 1875–1881, Stockholm, Sweden, July 13–19 2018.
@InProceedings{kcgkmh18stratified-negation-limit-datalog, author = "Mark Kaminski and Bernardo {Cuenca Grau} and Egor V. Kostylev and Boris Motik and Ian Horrocks", title = "{Stratified Negation in Limit Datalog Programs}", booktitle = "Proc. of the 27th Int. Joint Conf. on Artificial Intelligence (IJCAI 2018)", year = "2018", editor = "J{\'e}r{\^o}me Lang", pages = "1875--1881", address = "Stockholm, Sweden", month = "July 13--19", }
There has recently been an increasing interest in declarative data analysis,
where analytic tasks are specified using a logical language, and their
implementation and optimisation are delegated to a general-purpose query
engine. Existing declarative languages for data analysis can be formalised as
variants of logic programming equipped with arithmetic function symbols and/or
aggregation, and are typically undecidable. In prior work, the language of
limit programs was proposed, which is sufficiently powerful to capture
many analysis tasks and has decidable entailment problem. Rules in this
language, however, do not allow for negation. In this paper, we study an
extension of limit programs with stratified negation-as-failure. We show that
the additional expressive power makes reasoning computationally more demanding,
and provide tight data complexity bounds. We also identify a fragment with
tractable data complexity and sufficient expressivity to capture many relevant
tasks.
Giorgio Stefanoni, Boris Motik, and Egor V. Kostylev. Estimating the Cardinality of Conjunctive Queries over RDF Data Using Graph Summarisation. In Pierre-Antoine Champin, Fabien L. Gandon, Mounia Lalmas, and Panagiotis G. Ipeirotis, editors, Proc. of the 27th Int. World Wide Web Conference (WWW 2018), pages 1043–1052, Lyon, France, April 23–27 2018.
@InProceedings{smk18estimating-cardinality, author = "Giorgio Stefanoni and Boris Motik and Egor V. Kostylev", title = "{Estimating the Cardinality of Conjunctive Queries over RDF Data Using Graph Summarisation}", booktitle = "Proc. of the 27th Int. World Wide Web Conference (WWW 2018)", year = "2018", editor = "Pierre-Antoine Champin and Fabien L. Gandon and Mounia Lalmas and Panagiotis G. Ipeirotis", pages = "1043--1052", address = "Lyon, France", month = "April 23--27", }
Estimating the cardinality (i.e., the number of answers) of conjunctive queries
is particularly difficult in RDF systems: queries over RDF data are
navigational and thus tend to involve many joins. We present a new, principled
cardinality estimation technique based on graph summarisation. We interpret a
summary of an RDF graph using a possible world semantics and formalise the
estimation problem as computing the expected cardinality over all RDF graphs
represented by the summary, and we present a closed-form formula for computing
the expectation of arbitrary queries. We also discuss approaches to RDF graph
summarisation. Finally, we show empirically that our cardinality technique is
more accurate and more consistent, often by orders of magnitude, than the state
of the art.
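The sketch below is a generic illustration of estimating a cardinality from a graph summary under a uniformity assumption; it is not the possible-world semantics or the closed-form estimator developed in the paper, and all names are hypothetical.
```python
# Generic summary-based estimation sketch (uniformity assumption), for flavour only.

from collections import defaultdict

def summarise(triples, bucket):
    """bucket maps each node to its summary bucket."""
    counts = defaultdict(int)   # (subject bucket, predicate, object bucket) -> #triples
    members = defaultdict(set)  # bucket -> nodes placed in it
    for s, p, o in triples:
        counts[(bucket(s), p, bucket(o))] += 1
        members[bucket(s)].add(s)
        members[bucket(o)].add(o)
    return counts, {b: len(ns) for b, ns in members.items()}

def estimate(counts, sizes, bucket, s, p):
    """Expected number of triples matching (s, p, ?o), assuming the triples
    counted for s's bucket are spread uniformly over that bucket's subjects."""
    b = bucket(s)
    total = sum(n for (bs, bp, _), n in counts.items() if bs == b and bp == p)
    return total / sizes[b] if sizes.get(b) else 0.0
```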
Michael Benedikt, Boris Motik, and Efthymia Tsamoura. Goal-Driven Query Answering for Existential Rules with Equality. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, Proc. of the 32nd AAAI Conf. on Artificial Intelligence (AAAI 2018), pages 1761–1770, New Orleans, LA, USA, February 2–7 2018. AAAI Press.
@InProceedings{bmt18goal-driven-equality, author = "Michael Benedikt and Boris Motik and Efthymia Tsamoura", title = "{Goal-Driven Query Answering for Existential Rules with Equality}", booktitle = "Proc. of the 32nd AAAI Conf. on Artificial Intelligence (AAAI 2018)", year = "2018", editor = "Sheila A. McIlraith and Kilian Q. Weinberger", pages = "1761--1770", address = "New Orleans, LA, USA", month = "February 2--7", publisher = "AAAI Press", }
Inspired by the magic sets for Datalog, we present a novel goal-driven approach
for answering queries over terminating existential rules with equality (aka
TGDs and EGDs). Our technique improves the performance of query answering by
pruning the consequences that are not relevant for the query. This is
challenging in our setting because equalities can potentially affect all
predicates in a dataset. We address this problem by combining the existing
singularization technique with two new ingredients: an algorithm for
identifying the rules relevant to a query and a new magic sets algorithm. We
show empirically that our technique can significantly improve the performance
of query answering, and that it can mean the difference between answering a
query in a few seconds or not being able to process the query at all.
Pan Hu, Boris Motik, and Ian Horrocks. Optimised Maintenance of Datalog Materialisations. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, Proc. of the 32nd AAAI Conf. on Artificial Intelligence (AAAI 2018), pages 1871–1879, New Orleans, LA, USA, February 2–7 2018. AAAI Press.
@InProceedings{hmh18optimised-maintenance, author = "Pan Hu and Boris Motik and Ian Horrocks", title = "{Optimised Maintenance of Datalog Materialisations}", booktitle = "Proc. of the 32nd AAAI Conf. on Artificial Intelligence (AAAI 2018)", year = "2018", editor = "Sheila A. McIlraith and Kilian Q. Weinberger", pages = "1871--1879", address = "New Orleans, LA, USA", month = "February 2--7", publisher = "AAAI Press", }
To efficiently answer queries, datalog systems often materialise all
consequences of a datalog program, so the materialisation must be updated
whenever the input facts change. Several solutions to the materialisation
update problem have been proposed. The Delete/Rederive (DRed) and the
Backward/Forward (B/F) algorithms solve this problem for general
datalog, but both contain steps that evaluate rules `backwards' by matching
their heads to a fact and evaluating the partially instantiated rule bodies as
queries. We show that this can be a considerable source of overhead even on
very small updates. In contrast, the Counting algorithm does not
evaluate the rules `backwards', but it can handle only nonrecursive rules. We
present two hybrid approaches that combine DRed and B/F with Counting so as to
reduce or even eliminate `backward' rule evaluation while still handling
arbitrary datalog programs. We show empirically that our hybrid algorithms are
usually significantly faster than existing approaches, sometimes by orders of
magnitude.
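To make the over-deletion and rederivation phases discussed above concrete, here is a didactic DRed sketch for a single recursive program (transitive closure); it is not the hybrid algorithms from the paper, and the rule and fact representation is our own.
```python
# Didactic DRed sketch for the program  T(x, y) <- E(x, y),  T(x, z) <- T(x, y), E(y, z).

def closure(seed, edges):
    """Fixpoint of T(x, z) <- T(x, y), E(y, z), starting from T = seed."""
    t = set(seed)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(t):
            for (y2, z) in edges:
                if y == y2 and (x, z) not in t:
                    t.add((x, z))
                    changed = True
    return t

def dred_update(edges, removed, old_mat):
    new_edges = edges - removed
    # Over-delete: any fact with a one-step derivation using a deleted fact.
    deleted = set(removed)
    changed = True
    while changed:
        changed = False
        for (x, y) in old_mat:
            for (y2, z) in edges:
                if y == y2 and ((x, y) in deleted or (y, z) in deleted):
                    if (x, z) in old_mat and (x, z) not in deleted:
                        deleted.add((x, z))
                        changed = True
    # Rederive: facts with surviving alternative derivations reappear in the fixpoint.
    return closure((old_mat - deleted) | new_edges, new_edges)

edges = {(1, 2), (2, 3), (3, 4)}
old = closure(edges, edges)
print(dred_update(edges, {(2, 3)}, old) == closure(edges - {(2, 3)}, edges - {(2, 3)}))  # True
```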
Alessandro Ronca, Mark Kaminski, Bernardo Cuenca Grau, Boris Motik, and Ian Horrocks. Stream Reasoning in Temporal Datalog. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, Proc. of the 32nd AAAI Conf. on Artificial Intelligence (AAAI 2018), pages 1941–1948, New Orleans, LA, USA, February 2–7 2018. AAAI Press.
@InProceedings{rkcgmh18stream-datalog, author = "Alessandro Ronca and Mark Kaminski and Bernardo {Cuenca Grau} and Boris Motik and Ian Horrocks", title = "{Stream Reasoning in Temporal Datalog}", booktitle = "Proc. of the 32nd AAAI Conf. on Artificial Intelligence (AAAI 2018)", year = "2018", editor = "Sheila A. McIlraith and Kilian Q. Weinberger", pages = "1941--1948", address = "New Orleans, LA, USA", month = "February 2--7", publisher = "AAAI Press", }
In recent years, there has been an increasing interest in extending traditional
stream processing engines with logical, rule-based, reasoning capabilities.
This poses significant theoretical and practical challenges since rules can
derive new information and propagate it both towards past and future time
points; as a result, streamed query answers can depend on data that has not yet
been received, as well as on data that arrived far in the past. Stream
reasoning algorithms, however, must be able to stream out query answers as soon
as possible, and can only keep a limited number of previous input facts in
memory. In this paper, we propose novel reasoning problems to deal with these
challenges, and study their computational properties on Datalog extended with a
temporal sort and the successor function—a core rule-based language for
stream reasoning applications.
Mark Kaminski, Bernardo Cuenca Grau, Egor V. Kostylev, Boris Motik, and Ian Horrocks. Foundations of Declarative Data Analysis Using Limit Datalog Programs. In Carles Sierra, editor, Proc. of the 26th Int. Joint Conf. on Artificial Intelligence (IJCAI 2017), pages 1123–1130, Melbourne, Vic, Australia, August 19–25 2017.
@InProceedings{kcgkmh17limit-datalog, author = "Mark Kaminski and Bernardo {Cuenca Grau} and Egor V. Kostylev and Boris Motik and Ian Horrocks", title = "{Foundations of Declarative Data Analysis Using Limit Datalog Programs}", booktitle = "Proc. of the 26th Int. Joint Conf. on Artificial Intelligence (IJCAI 2017)", year = "2017", editor = "Carles Sierra", address = "Melbourne, Vic, Australia", month = "August 19--25", pages = "1123--1130", }
Motivated by applications in declarative data analysis, we study $\DLog$—an
extension of positive Datalog with arithmetic functions over integers. This
language is known to be undecidable, so we propose two fragments. In
limit $\DLog$ predicates are axiomatised to keep minimal/maximal numeric
values, allowing us to show that fact entailment is
$\textsc{coNExpTime}$-complete in combined, and $\textsc{coNP}$-complete in
data complexity. Moreover, an additional stability requirement causes
the complexity to drop to $\textsc{ExpTime}$ and $\textsc{PTime}$,
respectively. Finally, we show that stable $\DLog$ can express many useful data
analysis tasks, and so our results provide a sound foundation for the
development of advanced information systems.
Alberto Tonon, Philippe Cudré-Mauroux, Albert Blarer, Vincent Lenders, and Boris Motik. ArmaTweet: Detecting Events by Semantic Tweet Analysis. In Eva Blomqvist, Diana Maynard, Aldo Gangemi, Rinke Hoekstra, Pascal Hitzler, and Olaf Hartig, editors, Proc. of the 14th Extended Semantic Web Conference (ESWC 2017), volume 10250 of LNCS, pages 138–153, Portorož, Slovenia, May 28–June 1 2017. Springer.
@InProceedings{tcmblm17armatweet, author = "Alberto Tonon and Philippe Cudr{\'e}-Mauroux and Albert Blarer and Vincent Lenders and Boris Motik", title = "{ArmaTweet: Detecting Events by Semantic Tweet Analysis}", booktitle = "Proc. of the 14th Extended Semantic Web Conference (ESWC 2017)", year = "2017", editor = "Eva Blomqvist and Diana Maynard and Aldo Gangemi and Rinke Hoekstra and Pascal Hitzler and Olaf Hartig", volume = "10250", series = "LNCS", pages = "138--153", address = "Portoro{\v{z}}, Slovenia", month = "May 28--June 1", publisher = "Springer", }
armasuisse Science and Technology, the R&D agency for the Swiss Armed Forces,
is developing a Social Media Analysis (SMA) system to help detect events such
as natural disasters and terrorist activity by analysing Twitter posts. The
system currently supports only keyword search, which cannot identify complex
events such as `politician dying' or `militia terror act' since the keywords
that correctly identify such events are typically unknown. In this paper we
present \system, an extension of SMA developed in a collaboration between
armasuisse and the Universities of Fribourg and Oxford that supports
semantic event detection. Our system extracts a structured
representation from the tweets' text using NLP technology, which it then
integrates with DBpedia and WordNet in an RDF knowledge graph. Security
analysts can thus describe the events of interest precisely and declaratively
using SPARQL queries over the graph. Our experiments show that \system can
detect many complex events that cannot be detected by keywords alone.
Michael Benedikt, George Konstantinidis, Giansalvatore Mecca, Boris Motik, Paolo Papotti, Donatello Santoro, and Efthymia Tsamoura. Benchmarking the Chase. In Emanuel Sallinger, Jan Van den Bussche, and Floris Geerts, editors, Proc. of 36th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems (PODS 2017), pages 37–52, Chicago, IL, USA, May 14–19 2017. ACM.
@InProceedings{bkmmpst17becnhmarking-chase, author = "Michael Benedikt and George Konstantinidis and Giansalvatore Mecca and Boris Motik and Paolo Papotti and Donatello Santoro and Efthymia Tsamoura", title = "{Benchmarking the Chase}", booktitle = "Proc. of 36th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems (PODS 2017)", year = "2017", editor = "Emanuel Sallinger and Jan Van den Bussche and Floris Geerts", pages = "37--52", address = "Chicago, IL, USA", month = "May 14--19", publisher = "ACM", }
The chase is a family of algorithms used in a number of data management tasks,
such as data exchange, answering queries under dependencies, query
reformulation with constraints, and data cleaning. It is well established as a
theoretical tool for understanding these tasks, and in addition a number of
prototype systems have been developed. While individual chase-based systems and
particular optimizations of the chase have been experimentally evaluated in the
past, we provide the first comprehensive and publicly available
benchmark—test infrastructure and a set of test scenarios—for evaluating
chase implementations across a wide range of assumptions about the dependencies
and the data. We used our benchmark to compare chase-based systems on data
exchange and query answering tasks with one another, as well as with systems
that can solve similar tasks developed in closely related communities. Our
evaluation provided us with a number of new insights concerning the factors
that impact the performance of chase implementations.
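As background for readers unfamiliar with the chase, the following toy sketch fires a single tuple-generating dependency with a simplified applicability check; it merely illustrates the kind of computation the benchmarked systems perform and is not part of the benchmark, with all names and facts invented for the example.
```python
# Toy chase for one TGD:  Employee(x) -> exists y. WorksFor(x, y), Dept(y).
# Fire the rule only if x has no WorksFor successor yet (a simplified check).

import itertools

fresh = (f"_:n{i}" for i in itertools.count())   # labelled nulls

def chase(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (pred, *args) in list(facts):
            if pred == "Employee":
                x = args[0]
                if not any(p == "WorksFor" and a[0] == x for (p, *a) in facts):
                    y = next(fresh)
                    facts |= {("WorksFor", x, y), ("Dept", y)}
                    changed = True
    return facts

print(sorted(chase({("Employee", "alice")})))
# [('Dept', '_:n0'), ('Employee', 'alice'), ('WorksFor', 'alice', '_:n0')]
```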
Anthony Potter, Boris Motik, Yavor Nenov, and Ian Horrocks. Distributed RDF Query Answering with Dynamic Data Exchange. In Paul T. Groth, Elena Simperl, Alasdair J. G. Gray, Marta Sabou, Markus Krötzsch, Freddy Lécué, Fabian Flöck, and Yolanda Gil, editors, Proc. of the 15th Int. Semantic Web Conf., Part I (ISWC 2016), volume 9981 of LNCS, pages 480–497, Kobe, Japan, October 17–21 2016. Springer.
@InProceedings{pmnh16distributed-RDF-query, author = "Anthony Potter and Boris Motik and Yavor Nenov and Ian Horrocks", title = "{Distributed RDF Query Answering with Dynamic Data Exchange}", booktitle = "Proc. of the 15th Int. Semantic Web Conf., Part I (ISWC 2016)", year = "2016", editor = "Paul T. Groth and Elena Simperl and Alasdair J. G. Gray and Marta Sabou and Markus Kr{\"o}tzsch and Freddy L{\'e}cu{\'e} and Fabian Fl{\"o}ck and Yolanda Gil", volume = "9981", series = "LNCS", pages = "480--497", address = "Kobe, Japan", month = "October 17--21", publisher = "Springer", }
Evaluating joins over RDF data stored in a shared-nothing server cluster is key
to processing truly large RDF datasets. To the best of our knowledge, the
existing approaches use a variant of the data exchange operator that is
inserted into the query plan statically (i.e., at query compile time) to
shuffle data between servers. We argue that such approaches often miss
opportunities for local computation, and we present a novel solution to
distributed query answering that consists of two main components. First, we
present a query answering algorithm based on dynamic data exchange,
which exploits data locality to maximise the amount of computation on a single
server. Second, we present a partitioning algorithm for RDF data based on graph
partitioning whose aim is to increase data locality. We have implemented our
approach in the RDFox system, and our performance evaluation suggests that our
techniques outperform the state of the art by up to an order of magnitude in
terms of query evaluation times, network communication, and memory use.
Robert Piro, Yavor Nenov, Boris Motik, Ian Horrocks, Peter Hendler, Scott Kimberly, and Michael Rossman. Semantic Technologies for Data Analysis in Health Care. In Paul T. Groth, Elena Simperl, Alasdair J. G. Gray, Marta Sabou, Markus Krötzsch, Freddy Lécué, Fabian Flöck, and Yolanda Gil, editors, Proc. of the 15th Int. Semantic Web Conf., Part II (ISWC 2016), volume 9982 of LNCS, pages 400–417, Kobe, Japan, October 17–21 2016. Springer.
@InProceedings{pnmhhkr16semantic-health-care, author = "Robert Piro and Yavor Nenov and Boris Motik and Ian Horrocks and Peter Hendler and Scott Kimberly and Michael Rossman", title = "{Semantic Technologies for Data Analysis in Health Care}", booktitle = "Proc. of the 15th Int. Semantic Web Conf., Part II (ISWC 2016)", year = "2016", editor = "Paul T. Groth and Elena Simperl and Alasdair J. G. Gray and Marta Sabou and Markus Kr{\"o}tzsch and Freddy L{\'e}cu{\'e} and Fabian Fl{\"o}ck and Yolanda Gil", volume = "9982", series = "LNCS", pages = "400--417", address = "Kobe, Japan", month = "October 17--21", publisher = "Springer", }
A fruitful application of Semantic Technologies to healthcare data analysis
has emerged from the collaboration between Oxford and Kaiser Permanente, a US
healthcare provider (HMO). US HMOs must annually deliver measurement results
on their quality of care to US authorities. One of these sets of measurements
is defined in a specification called HEDIS, which is infamous amongst data
analysts for its complexity. Traditional solutions based on SAS programs or
SQL queries lead to involved implementations whose maintenance and validation
are difficult and bind a considerable amount of resources. In this paper we
present the project in which we have applied Semantic Technologies to compute
the most difficult part of the HEDIS measures. We show that we arrive at a
clean, structured, and legible encoding of HEDIS in the rule language of the
RDF triple store RDFox. We use RDFox's reasoning capabilities and SPARQL
queries to compute and extract the results. The results for a whole Kaiser
Permanente regional branch could be computed in competitive time by RDFox on
readily available commodity hardware. Further development and deployment of
the project results are envisaged at Kaiser Permanente.
Andrew Bate, Boris Motik, Bernardo Cuenca Grau, František Simančík, and Ian Horrocks. Extending Consequence-Based Reasoning to SRIQ. In Chitta Baral, James P. Delgrande, and Frank Wolter, editors, Proc. of the 15th Int. Conf. on the Principles of Knowledge Representation and Reasoning (KR 2016), pages 187–196, Cape Town, South Africa, April 25–29 2016.
@InProceedings{bmcgsh16consequence-based-SHIQ, author = "Andrew Bate and Boris Motik and Bernardo {Cuenca Grau} and Franti{\v{s}}ek Siman{\v{c}}{\'i}k and Ian Horrocks", title = "{Extending Consequence-Based Reasoning to $\mathcal{SRIQ}$}", booktitle = "Proc. of the 15th Int. Conf. on the Principles of Knowledge Representation and Reasoning (KR 2016)", year = "2016", editor = "Chitta Baral and James P. Delgrande and Frank Wolter", pages = "187--196", address = "Cape Town, South Africa", month = "April 25--29", }
Consequence-based calculi are a family of reasoning algorithms for
description logics (DLs), and they combine hypertableau and resolution
in a way that often achieves excellent performance in practice. Up to now,
however, they have been proposed either for Horn DLs (which do not support
disjunction) or for DLs without counting quantifiers. In this paper we present
a novel consequence-based calculus for \SRIQ—a rich DL that supports both
features. This extension is non-trivial since the intermediate consequences
that need to be derived during reasoning cannot be captured using DLs
themselves. The results of our preliminary performance evaluation suggest the
feasibility of our approach in practice.
Yavor Nenov, Robert Piro, Boris Motik, Ian Horrocks, Zhe Wu, and Jay Banerjee. RDFox: A Highly-Scalable RDF Store. In Marcelo Arenas, Óscar Corcho, Elena Simperl, Markus Strohmaier, Mathieu d'Aquin, Kavitha Srinivas, Paul T. Groth, Michel Dumontier, Jeff Heflin, Krishnaprasad Thirunarayan, and Steffen Staab, editors, Proc. of the 14th Int. Semantic Web Conf. (ISWC 2015), volume 9367 of LNCS, pages 3–20, Bethlehem, PA, USA, October 11–15 2015. Springer.
@InProceedings{npmhwb15RDFox-scalable, author = "Yavor Nenov and Robert Piro and Boris Motik and Ian Horrocks and Zhe Wu and Jay Banerjee", title = "{RDFox: A Highly-Scalable RDF Store}", booktitle = "Proc. of the 14th Int. Semantic Web Conf. (ISWC 2015)", year = "2015", editor = "Marcelo Arenas and {\'O}scar Corcho and Elena Simperl and Markus Strohmaier and Mathieu d'Aquin and Kavitha Srinivas and Paul T. Groth and Michel Dumontier and Jeff Heflin and Krishnaprasad Thirunarayan and Steffen Staab", volume = "9367", series = "LNCS", pages = "3--20", address = "Bethlehem, PA, USA", month = "October 11--15", publisher = "Springer", }
We present RDFox—a main-memory, scalable, centralised RDF store that supports
materialisation-based parallel datalog reasoning and SPARQL query answering.
RDFox uses novel and highly-efficient parallel reasoning algorithms for the
computation and incremental update of datalog materialisations with efficient
handling of \sameAs. In this system description paper, we present an overview
of the system architecture and highlight the main ideas behind our indexing
data structures and our novel reasoning algorithms. In addition, we evaluate
RDFox on a high-end SPARC T5-8 server with 128 physical cores and 4TB of RAM.
Our results show that RDFox can effectively exploit such a machine, achieving
speedups of up to 87 times, storage of up to 9.2 billion triples, memory usage
as low as 36.9 bytes per triple, importation rates of up to 1 million triples
per second, and reasoning rates of up to 6.1 million triples per second.
Boris Motik, Yavor Nenov, Robert Piro, and Ian Horrocks. Combining Rewriting and Incremental Materialisation Maintenance for Datalog Programs with Equality. In Qiang Yang and Michael Wooldridge, editors, Proc. of the 24th Int. Joint Conf. on Artificial Intelligence (IJCAI 2015), pages 3127–3133, Buenos Aires, Argentina, July 25–31 2015. AAAI Press.
@InProceedings{mnph15incremental-BF-sameAs, author = "Boris Motik and Yavor Nenov and Robert Piro and Ian Horrocks", title = "{Combining Rewriting and Incremental Materialisation Maintenance for Datalog Programs with Equality}", booktitle = "Proc. of the 24th Int. Joint Conf. on Artificial Intelligence (IJCAI 2015)", year = "2015", editor = "Qiang Yang and Michael Wooldridge", pages = "3127--3133", address = "Buenos Aires, Argentina", month = "July 25--31", publisher = "AAAI Press", }
Materialisation precomputes all consequences of a set of facts and a
datalog program so that queries can be evaluated directly (i.e., independently
from the program). Rewriting optimises materialisation for datalog
programs with equality by replacing all equal constants with a single
representative; and incremental maintenance algorithms can efficiently
update a materialisation for small changes in the input facts. Both techniques
are critical to practical applicability of datalog systems; however, we are
unaware of an approach that combines rewriting and incremental maintenance. In
this paper we present the first such combination, and we show empirically that
it can speed up updates by several orders of magnitude compared to using either
rewriting or incremental maintenance in isolation.
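The rewriting side of this combination can be illustrated with a small union-find sketch that stores facts only on class representatives and re-rewrites stored facts when classes are merged; the incremental-maintenance machinery from the paper is not modelled, and the class, method, and IRI names are ours.
```python
# Sketch of equality rewriting: a union-find over constants, with facts kept
# only on class representatives and rewritten when classes are merged.

class RewritingStore:
    def __init__(self):
        self.parent = {}
        self.facts = set()

    def find(self, c):
        self.parent.setdefault(c, c)
        while self.parent[c] != c:
            self.parent[c] = self.parent[self.parent[c]]   # path halving
            c = self.parent[c]
        return c

    def add(self, s, p, o):
        self.facts.add((self.find(s), self.find(p), self.find(o)))

    def merge(self, a, b):                                  # a owl:sameAs b
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
            self.facts = {tuple(self.find(t) for t in f) for f in self.facts}

kb = RewritingStore()
kb.add("ex:BarackObama", "ex:bornIn", "ex:Honolulu")
kb.merge("ex:BarackObama", "ex:PresidentObama")
kb.add("ex:PresidentObama", "ex:party", "ex:Democrats")
print(kb.facts)   # both facts now live on the single representative ex:BarackObama
```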
Boris Motik, Yavor Nenov, Robert Piro, and Ian Horrocks. Incremental Update of Datalog Materialisation: the Backward/Forward Algorithm. In Blai Bonet and Sven Koenig, editors, Proc. of the 29th AAAI Conf. on Artificial Intelligence (AAAI 2015), pages 1560–1568, Austin, TX, USA, January 25–30 2015. AAAI Press.
@InProceedings{mnph15incremental-BF, author = "Boris Motik and Yavor Nenov and Robert Piro and Ian Horrocks", title = "{Incremental Update of Datalog Materialisation: the Backward/Forward Algorithm}", booktitle = "Proc. of the 29th AAAI Conf. on Artificial Intelligence (AAAI 2015)", year = "2015", editor = "Blai Bonet and Sven Koenig", pages = "1560--1568", address = "Austin, TX, USA", month = "January 25--30", publisher = "AAAI Press", }
Datalog-based systems often materialise all consequences of a datalog
program and the data, allowing users' queries to be evaluated directly in the
materialisation. This process, however, can be computationally intensive, so
most systems update the materialisation incrementally when input data changes.
We argue that existing solutions, such as the well-known Delete/Rederive
(DRed) algorithm, can be inefficient in cases when facts have many alternate
derivations. As a possible remedy, we propose a novel Backward/Forward
(B/F) algorithm that tries to reduce the amount of work by a combination of
backward and forward chaining. In our evaluation, the B/F algorithm was several
orders of magnitude more efficient than the DRed algorithm on some inputs, and
it was never significantly less efficient.
Boris Motik, Yavor Nenov, Robert Piro, and Ian Horrocks. Handling owl:sameAs via Rewriting. In Blai Bonet and Sven Koenig, editors, Proc. of the 29th AAAI Conf. on Artificial Intelligence (AAAI 2015), pages 231–237, Austin, TX, USA, January 25–30 2015. AAAI Press.
@InProceedings{mnph15owl-sameAs-rewriting, author = "Boris Motik and Yavor Nenov and Robert Piro and Ian Horrocks", title = "{Handling owl:sameAs via Rewriting}", booktitle = "Proc. of the 29th AAAI Conf. on Artificial Intelligence (AAAI 2015)", year = "2015", editor = "Blai Bonet and Sven Koenig", pages = "231--237", address = "Austin, TX, USA", month = "January 25--30", publisher = "AAAI Press", }
Rewriting is widely used to optimise \sameAs reasoning in materialisation-based
OWL 2 RL systems. We investigate issues related to both the correctness and
efficiency of rewriting, and present an algorithm that guarantees correctness,
improves efficiency, and can be effectively parallelised. Our evaluation shows
that our approach can reduce reasoning times on practical data sets by orders
of magnitude.
Giorgio Stefanoni and Boris Motik. Answering Conjunctive Queries over EL Knowledge Bases with Transitive and Reflexive Roles. In Blai Bonet and Sven Koenig, editors, Proc. of the 29th AAAI Conf. on Artificial Intelligence (AAAI 2015), pages 1611–1617, Austin, TX, USA, January 25–30 2015. AAAI Press.
@InProceedings{sm15transitive-reflexive-EL-CQ, author = "Giorgio Stefanoni and Boris Motik", title = "{Answering Conjunctive Queries over EL Knowledge Bases with Transitive and Reflexive Roles}", booktitle = "Proc. of the 29th AAAI Conf. on Artificial Intelligence (AAAI 2015)", year = "2015", editor = "Blai Bonet and Sven Koenig", pages = "1611--1617", address = "Austin, TX, USA", month = "January 25--30", publisher = "AAAI Press", }
Answering conjunctive queries (CQs) over \el knowledge bases (KBs) with complex
role inclusions is \pspace-hard and in \pspace in certain cases; however, if
complex role inclusions are restricted to role transitivity, a tight upper
complexity bound has so far been unknown. Furthermore, the existing algorithms
cannot handle reflexive roles, and they are not practicable. Finally, the
problem is tractable for acyclic CQs and $\mathcal{ELH}$, and \np-complete for
unrestricted CQs and \elho KBs. In this paper we complete the complexity
landscape of CQ answering for several important cases. In particular, we
present a practicable \np algorithm for answering CQs over \elhso KBs—a logic
containing all of OWL 2 EL, but with complex role inclusions restricted to role
transitivity. Our preliminary evaluation suggests that the algorithm can be
suitable for practical use. Moreover, we show that, even for a restricted class
of so-called arborescent acyclic queries, CQ answering over \el KBs
becomes \np-hard in the presence of either transitive or reflexive roles.
Finally, we show that answering arborescent CQs over \elho KBs is tractable,
whereas answering acyclic CQs is \np-hard.
Boris Motik, Yavor Nenov, Robert Piro, Ian Horrocks, and Dan Olteanu. Parallel Materialisation of Datalog Programs in Centralised, Main-Memory RDF Systems. In Carla E. Brodley and Peter Stone, editors, Proc. of the 28th AAAI Conf. on Artificial Intelligence (AAAI 2014), pages 129–137, Québec City, Québec, Canada, July 27–31 2014. AAAI Press.
@InProceedings{mnpho14parallel-materialisation-RDFox, author = "Boris Motik and Yavor Nenov and Robert Piro and Ian Horrocks and Dan Olteanu", title = "{Parallel Materialisation of Datalog Programs in Centralised, Main-Memory RDF Systems}", booktitle = "Proc. of the 28th AAAI Conf. on Artificial Intelligence (AAAI 2014)", year = "2014", editor = "Carla E. Brodley and Peter Stone", pages = "129--137", address = "Qu{\'e}bec City, Qu{\'e}bec, Canada", month = "July 27--31", publisher = "AAAI Press", }
We present a novel approach to parallel materialisation (i.e., fixpoint
computation) of datalog programs in centralised, main-memory, multi-core RDF
systems. Our approach comprises an algorithm that evenly distributes the
workload to cores, and an RDF indexing data structure that supports efficient,
`mostly' lock-free parallel updates. Our empirical evaluation shows that our
approach parallelises computation very well: with 16 physical cores,
materialisation can be up to 13.9 times faster than with just one core.
Pierre Chaussecourte, Birte Glimm, Ian Horrocks, Boris Motik, and Laurent Pierre. The Energy Management Adviser at EDF. In Harith Alani, Lalana Kagal, Achille Fokoue, Paul Groth, Chris Biemann, Josiane Xavier Parreira, Lora Aroyo, Natasha Noy, Chris Welty, and Krzysztof Janowicz, editors, Proc. of the 12th Int. Semantic Web Conf. (ISWC 2013), In-Use Track, volume 8219 of LNCS, pages 49–64, Sydney, NSW, Australia, October 21–25 2013. Springer.
@InProceedings{cghmp13energy-EDF, author = "Pierre Chaussecourte and Birte Glimm and Ian Horrocks and Boris Motik and Laurent Pierre", title = "{The Energy Management Adviser at EDF}", booktitle = "Proc. of the 12th Int. Semantic Web Conf. (ISWC 2013), In-Use Track", year = "2013", editor = "Harith Alani and Lalana Kagal and Achille Fokoue and Paul Groth and Chris Biemann and Josiane Xavier Parreira and Lora Aroyo and Natasha Noy and Chris Welty and Krzysztof Janowicz", volume = "8219", series = "LNCS", pages = "49--64", address = "Sydney, NSW, Australia", month = "October 21--25", publisher = "Springer", }
The EMA (Energy Management Adviser) aims to produce personalised energy saving
advice for EDF's customers. The advice takes the form of one or more `tips',
and personalisation is achieved using semantic technologies: customers are
described using RDF, an OWL ontology provides a conceptual model of the
relevant domain (housing, environment, and so on) and the different kinds of
tips, and SPARQL query answering is used to identify relevant tips. The
current prototype provides tips to more than 300,000 EDF customers in France
at least twice a year. The main challenges for our future work include
providing a timely service for all of the 35 million EDF customers in France,
simplifying the system's maintenance, and providing new ways for interacting
with customers such as via a Web site.
Bernardo Cuenca Grau, Boris Motik, Giorgios Stoilos, and Ian Horrocks. Computing Datalog Rewritings Beyond Horn Ontologies. In Francesca Rossi, editor, Proc. of the 23rd Int. Joint Conf. on Artificial Intelligence (IJCAI 2013), pages 832–838, Beijing, China, August 3–9 2013.
@InProceedings{gmsh13beyond-horn, author = "Bernardo {Cuenca Grau} and Boris Motik and Giorgios Stoilos and Ian Horrocks", title = "{Computing Datalog Rewritings Beyond Horn Ontologies}", booktitle = "Proc. of the 23rd Int. Joint Conf. on Artificial Intelligence (IJCAI 2013)", year = "2013", editor = "Francesca Rossi", pages = "832--838", address = "Beijing, China", month = "August 3--9", }
Rewriting-based approaches for answering queries over an OWL 2 DL ontology
have so far been developed mainly for Horn fragments of OWL 2 DL. In this
paper, we study the possibilities of answering queries over non-Horn
ontologies using datalog rewritings. We prove that this is impossible in
general even for very simple ontology languages, and even if $\textsc{PTime} = \textsc{NP}$.
Furthermore, we present a resolution-based procedure for $\SHI$ ontologies
that, in case it terminates, produces a datalog rewriting of the ontology. We
also show that our procedure necessarily terminates on $\Bool$ ontologies—an
extension of OWL 2 QL with transitive roles and Boolean connectives.
Giorgio Stefanoni, Boris Motik, and Ian Horrocks. Introducing Nominals to the Combined Query Answering Approaches for EL. In Marie desJardins and Michael L. Littman, editors, Proc. of the 27th AAAI Conf. on Artificial Intelligence (AAAI 2013), pages 1177–1183, Bellevue, WA, USA, July 14–18 2013.
@InProceedings{smh13nominals-combined, author = "Giorgio Stefanoni and Boris Motik and Ian Horrocks", title = "{Introducing Nominals to the Combined Query Answering Approaches for $\mathcal{EL}$}", booktitle = "Proc. of the 27th AAAI Conf. on Artificial Intelligence (AAAI 2013)", year = "2013", editor = "Marie desJardins and Michael L. Littman", pages = "1177--1183", address = "Bellevue, WA, USA", month = "July 14--18", }
So-called combined approaches answer a conjunctive query over a
description logic ontology in three steps: first, they materialise certain
consequences of the ontology and the data; second, they evaluate the query
over the data; and third, they filter the result of the second phase to
eliminate unsound answers. Such approaches were developed for various members
of the DL-Lite and the EL families of languages, but none of them
can handle ontologies containing nominals. In our work, we bridge this gap and
present a combined query answering approach for \elho—a logic that contains
all features of the OWL 2 EL standard apart from transitive roles and complex
role inclusions. This extension is nontrivial because nominals require
equality reasoning, which introduces complexity into the first and the third
step. Our empirical evaluation suggests that our technique is suitable for
practical application, and so it provides a practical basis for conjunctive
query answering in a large fragment of OWL 2 EL.
Despoina Magka, Boris Motik, and Ian Horrocks. Modelling Structured Domains Using Description Graphs and Logic Programming. In Elena Simperl, Philipp Cimiano, Axel Polleres, Óscar Corcho, and Valentina Presutti, editors, Proc. of the 9th Extended Semantic Web Conference (ESWC 2012), volume 7295 of LNCS, pages 330–344, Heraklion, Greece, May 27–31 2012. Springer.
@InProceedings{mmh12dgs-and-lp, author = "Despoina Magka and Boris Motik and Ian Horrocks", title = "{Modelling Structured Domains Using Description Graphs and Logic Programming}", booktitle = "Proc. of the 9th Extended Semantic Web Conference (ESWC 2012)", year = "2012", editor = "Elena Simperl and Philipp Cimiano and Axel Polleres and {\'O}scar Corcho and Valentina Presutti", volume = "7295", series = "LNCS", pages = "330--344", address = "Heraklion, Greece", month = "May 27--31", publisher = "Springer", }
Although OWL 2 is widely used to describe complex objects such as chemical
molecules, it cannot represent `structural' features of chemical entities
(e.g., having a ring). A combination of rules and description graphs
(DGs) has been proposed as a possible solution, but it still exhibits several
drawbacks. In this paper we present a radically different approach that we
call Description Graph Logic Programs. Syntactically, our approach combines
DGs, rules, and OWL 2 RL axioms, but its semantics is defined via a
translation into logic programs under stable model semantics. The result is an
expressive OWL 2 RL-compatible formalism that is well suited for modelling
objects with complex structure.
Bernardo Cuenca Grau, Ian Horrocks, Markus Krötzsch, Clemens Kupke, Despoina Magka, Boris Motik, and Zhe Wang. Acyclicity Conditions and their Application to Query Answering in Description Logics. In Gerhard Brewka, Thomas Eiter, and Sheila A. McIlraith, editors, Proc. of the 13th Int. Conf. on the Principles of Knowledge Representation and Reasoning (KR 2012), pages 243–253, Rome, Italy, June 10–14 2012.
@InProceedings{ghkkmmw12acyclicity, author = "Bernardo {Cuenca Grau} and Ian Horrocks and Markus Kr{\"o}tzsch and Clemens Kupke and Despoina Magka and Boris Motik and Zhe Wang", title = "{Acyclicity Conditions and their Application to Query Answering in Description Logics}", booktitle = "Proc. of the 13th Int. Conf. on the Principles of Knowledge Representation and Reasoning (KR 2012)", year = "2012", editor = "Gerhard Brewka and Thomas Eiter and Sheila A. McIlraith", pages = "243--253", address = "Rome, Italy", month = "June 10--14", }
Answering conjunctive queries (CQs) over a set of facts extended with
existential rules is a key problem in knowledge representation and databases.
This problem can be solved using the chase (aka materialisation)
algorithm; however, CQ answering is undecidable for general existential rules,
so the chase is not guaranteed to terminate. Several acyclicity
conditions provide sufficient conditions for chase termination. In this
paper, we present two novel such conditions—model-faithful acyclicity
(MFA) and model-summarising acyclicity (MSA)—that generalise many of
the acyclicity conditions known so far in the literature.
Materialisation provides the basis for several widely-used OWL 2 DL reasoners.
In order to avoid termination problems, many of these systems handle only the
OWL 2 RL profile of OWL 2 DL; furthermore, some systems go beyond OWL 2 RL,
but they provide no termination guarantees. In this paper we investigate
whether various acyclicity conditions can provide a principled and practical
solution to these problems. On the theoretical side, we show that query
answering for acyclic ontologies is of lower complexity than for general
ontologies. On the practical side, we show that many of the commonly used OWL
2 DL ontologies are MSA, and that the facts obtained via materialisation are
not too large. Thus, our results suggest that principled extensions to
materialisation-based OWL 2 DL reasoners may be practically feasible.
Boris Motik, Ian Horrocks, and Su Myeon Kim. Delta-Reasoner: A Semantic Web Reasoner for an Intelligent Mobile Platform. In Alain Mille, Fabien L. Gandon, Jacques Misselis, Michael Rabinovich, and Steffen Staab, editors, Proc. of the 21st Int. World Wide Web Conference (WWW 2012), Industry Track, pages 63–72, Lyon, France, April 16–20 2012.
@InProceedings{mhk12delta-reasoner, author = "Boris Motik and Ian Horrocks and Su Myeon Kim", title = "{Delta-Reasoner: A Semantic Web Reasoner for an Intelligent Mobile Platform}", booktitle = "Proc. of the 21st Int. World Wide Web Conference (WWW 2012), Industry Track", year = "2012", editor = "Alain Mille and Fabien L. Gandon and Jacques Misselis and Michael Rabinovich and Steffen Staab", pages = "63--72", address = "Lyon, France", month = "April 16--20", }
To make mobile device applications more intelligent, one can combine the
information obtained via device sensors with background knowledge in order to
deduce the user's current context, and then use this context to adapt the
application's behaviour to the user's needs. In this paper we describe
\Dreasoner, a key component of the Intelligent Mobile Platform (IMP), which
was designed to support context-aware applications running on mobile devices.
Context-aware applications and the mobile platform impose unusual requirements
on the reasoner, which we have met by incorporating advanced features such as
incremental reasoning and continuous query evaluation into our reasoner.
Although we have so far been able to conduct only a very preliminary
performance evaluation, our results are very encouraging: our reasoner
exhibits sub-second response time on ontologies whose size significantly
exceeds the size of the ontologies used in the IMP.
Giorgos Stoilos, Bernardo Cuenca Grau, Boris Motik, and Ian Horrocks. Repairing Ontologies for Incomplete Reasoners. In Lora Aroyo, Chris Welty, Harith Alani, Jamie Taylor, Abraham Bernstein, Lalana Kagal, Natasha Fridman Noy, and Eva Blomqvist, editors, Proc. of the 10th Int. Semantic Web Conf. (ISWC 2011), volume 7031 of LNCS, pages 681–696, Bonn, Germany, October 23–27 2011. Springer.
@InProceedings{sgmh11repairing, author = "Giorgos Stoilos and Bernardo Cuenca Grau and Boris Motik and Ian Horrocks", title = "{Repairing Ontologies for Incomplete Reasoners}", booktitle = "Proc. of the 10th Int. Semantic Web Conf. (ISWC 2011)", year = "2011", editor = "Lora Aroyo and Chris Welty and Harith Alani and Jamie Taylor and Abraham Bernstein and Lalana Kagal and Natasha Fridman Noy and Eva Blomqvist", volume = "7031", series = "LNCS", pages = "681--696", address = "Bonn, Germany", month = "October 23--27", publisher = "Springer", }
The need for scalable query answering often forces Semantic Web applications
to use incomplete OWL 2 reasoners, which in some cases fail to derive
all answers to a query. This is clearly undesirable, and in some applications
may even be unacceptable. To address this, we investigate the problem
of `repairing' an ontology $\T$—that is, computing an ontology $\R$ such
that a reasoner that is incomplete for $\T$ becomes complete when used with
$\T \cup \R$. We identify conditions on $\T$ and the reasoner that make this
possible, present a practical algorithm for computing $\R$, and present a
preliminary evaluation which shows that, in some realistic cases, repairs are
feasible to compute, reasonable in size, and do not significantly affect
reasoner performance.
Boris Motik. Representing and Querying Validity Time in RDF and OWL: A Logic-Based Approach. In Peter F. Patel-Schneider, Yue Pan, Pascal Hitzler, Peter Mika, Lei Zhang, Jeff Z. Pan, Ian Horrocks, and Birte Glimm, editors, Proc. of the 9th Int. Semantic Web Conf. (ISWC 2010), volume 6496 of LNCS, pages 550–565, Shanghai, China, November 7–11 2010. Springer.
@InProceedings{m10validity-time, author = "Boris Motik", title = "{Representing and Querying Validity Time in RDF and OWL: A Logic-Based Approach}", booktitle = "Proc. of the 9th Int. Semantic Web Conf. (ISWC 2010)", year = "2010", editor = "Peter F. Patel-Schneider and Yue Pan and Pascal Hitzler and Peter Mika and Lei Zhang and Jeff Z. Pan and Ian Horrocks and Birte Glimm", volume = "6496", series = "LNCS", pages = "550--565", address = "Shanghai, China", month = "November 7--11", publisher = "Springer", }
RDF(S) and OWL 2 currently support only static ontologies. In practice,
however, the truth of statements often changes with time, and Semantic Web
applications often need to represent such changes and reason about them. In
this paper we present a logic-based approach for representing validity time in
RDF and OWL. Unlike the existing proposals, our approach is applicable to
entailment relations that are not deterministic, such as the Direct Semantics
or the RDF-Based Semantics of OWL 2. We also extend SPARQL to temporal RDF
graphs and present a query evaluation algorithm. Finally, we present an
optimization of our algorithm that is applicable to entailment relations
characterized by a set of deterministic rules, such as RDF(S) and OWL 2 RL/RDF
entailment.
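As a schematic illustration of the idea (the notation is hypothetical and not
the concrete syntax of the paper), a temporally annotated fact can be thought
of as an RDF triple paired with the interval during which it holds,
\[
\langle \mathit{peter}, \mathit{worksFor}, \mathit{acme} \rangle[2005, 2010],
\]
and the SPARQL extension sketched in the abstract then allows queries to refer
to such validity intervals, for example to retrieve all triples that hold at a
given time point.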
Birte Glimm, Ian Horrocks, Boris Motik, and Giorgos Stoilos. Optimising Ontology Classification. In Peter F. Patel-Schneider, Yue Pan, Pascal Hitzler, Peter Mika, Lei Zhang, Jeff Z. Pan, Ian Horrocks, and Birte Glimm, editors, Proc. of the 9th Int. Semantic Web Conf. (ISWC 2010), volume 6496 of LNCS, pages 225–240, Shanghai, China, November 7–11 2010. Springer.
@InProceedings{ghms10classification, author = "Birte Glimm and Ian Horrocks and Boris Motik and Giorgos Stoilos", title = "{Optimising Ontology Classification}", booktitle = "Proc. of the 9th Int. Semantic Web Conf. (ISWC 2010)", year = "2010", editor = "Peter F. Patel-Schneider and Yue Pan and Pascal Hitzler and Peter Mika and Lei Zhang and Jeff Z. Pan and Ian Horrocks and Birte Glimm", volume = "6496", series = "LNCS", pages = "225--240", address = "Shanghai, China", month = "November 7--11", publisher = "Springer", }
Ontology classification—the computation of subsumption hierarchies for
classes and properties—is one of the most important tasks for OWL reasoners.
Based on the algorithm by Shearer and Horrocks \cite{Rob09a}, we present a new
classification procedure that addresses several open issues of the original
algorithm, and that uses several novel optimisations in order to achieve
superior performance. We also consider the classification of (object and data)
properties. We show that algorithms commonly used to implement that task are
incomplete even for relatively weak ontology languages. Furthermore, we show
how to reduce the property classification problem into a standard (class)
classification problem, which allows reasoners to classify properties using
our optimised procedure. We have implemented our algorithms in the OWL HermiT
reasoner, and we present the results of a performance evaluation.
Birte Glimm, Ian Horrocks, and Boris Motik. Optimized Description Logic Reasoning via Core Blocking. In Jürgen Giesl and Reiner Hähnle, editors, Proc. of the 5th Int. Joint Conf. on Automated Reasoning (IJCAR 2010), volume 6173 of LNCS, pages 457–471, Edinburgh, UK, July 16–19 2010. Springer.
@InProceedings{ghm10core-blocking, author = "Birte Glimm and Ian Horrocks and Boris Motik", title = "{Optimized Description Logic Reasoning via Core Blocking}", booktitle = "Proc. of the 5th Int. Joint Conf. on Automated Reasoning (IJCAR 2010)", year = "2010", editor = "J{\"u}rgen Giesl and Reiner H{\"a}hnle", volume = "6173", series = "LNCS", pages = "457--471", address = "Edinburgh, UK", month = "July 16--19", publisher = "Springer", }
State of the art reasoners for expressive description logics, such as those
that underpin the OWL ontology language, are typically based on highly
optimized implementations of (hyper)tableau algorithms. Despite numerous
optimizations, certain ontologies encountered in practice still pose
significant challenges to such reasoners, mainly because of the size of the
model abstractions that they construct. To address this problem, we propose a
new blocking technique that tries to identify and halt redundant construction
at a much earlier stage than standard blocking techniques. An evaluation of a
prototypical implementation in the HermiT reasoner shows that our technique
can dramatically reduce the size of constructed model abstractions and reduce
reasoning time.
Bernardo Cuenca Grau and Boris Motik. Pushing the Limits of Reasoning over Ontologies with Hidden Content. In Fangzhen Lin, Ulrike Sattler, and Miroslaw Truszczynski, editors, Proc. of the 12th Int. Conference on Principles of Knowledge Representation and Reasoning (KR 2010), pages 200–210, Toronto, ON, Canada, May 9–13 2010. AAAI Press.
@InProceedings{gm10pushing-ibq, author = "Bernardo Cuenca Grau and Boris Motik", title = "{Pushing the Limits of Reasoning over Ontologies with Hidden Content}", booktitle = "Proc. of the 12th Int. Conference on Principles of Knowledge Representation and Reasoning (KR 2010)", year = "2010", editor = "Fangzhen Lin and Ulrike Sattler and Miroslaw Truszczynski", pages = "200--210", address = "Toronto, ON, Canada", month = "May 9--13", publisher = "AAAI Press", }
There is currently a growing interest in techniques for hiding parts of the
signature of an ontology $\hid{\KB}$ that is being reused by another ontology
$\vis{\KB}$. Towards this goal, \citeA{ijcai09-paper} recently proposed the
import-by-query framework, which makes the content of $\hid{\KB}$
accessible through a limited query interface. If $\vis{\KB}$ reuses the
symbols from $\hid{\KB}$ in a certain restricted way, one can reason over
${\vis{\KB} \cup \hid{\KB}}$ by accessing only $\vis{\KB}$ and the query
interface. In this paper, we map out the landscape of the import-by-query
problem. We show that certain restrictions of our original framework are
strictly necessary to make reasoning possible, we propose extensions that
overcome some of the expressivity limitations, we present several novel
reasoning algorithms, and we outline the limitations of the new framework.
Héctor Pérez-Urbina, Ian Horrocks, and Boris Motik. Efficient Query Answering for OWL 2. In Abraham Bernstein, David R. Karger, Tom Heath, Lee Feigenbaum, Diana Maynard, Enrico Motta, and Krishnaprasad Thirunarayan, editors, Proc. of the 8th Int. Semantic Web Conference (ISWC 2009), volume 5823 of LNCS, pages 489–504, Chantilly, VA, USA, October 25–29 2009. Springer.
@InProceedings{puhm09query-OWL2, author = "H{\'e}ctor P{\'e}rez-Urbina and Ian Horrocks and Boris Motik", title = "{Efficient Query Answering for OWL 2}", booktitle = "Proc. of the 8th Int. Semantic Web Conference (ISWC 2009)", year = "2009", editor = "Abraham Bernstein and David R. Karger and Tom Heath and Lee Feigenbaum and Diana Maynard and Enrico Motta and Krishnaprasad Thirunarayan", volume = "5823", series = "LNCS", pages = "489--504", address = "Chantilly, VA, USA", month = "October 25--29", publisher = "Springer", }
The QL profile of OWL 2 has been designed so that it is possible to
use database technology for query answering via query rewriting.
We present a comparison of our resolution-based rewriting algorithm
with the standard algorithm proposed by Calvanese et al.,
implementing both and conducting an empirical evaluation using
ontologies and queries derived from realistic applications. The
results indicate that our algorithm produces significantly smaller
rewritings in most cases, which could be important for practicality
in realistic applications.
Bernardo Cuenca Grau, Boris Motik, and Yevgeny Kazakov. Import-by-Query: Ontology Reasoning under Access Limitations. In Craig Boutilier, editor, Proc. of the 21st Int. Joint Conf. on Artificial Intelligence (IJCAI 2009), pages 727–733, Pasadena, CA, USA, July 11–17 2009.
@InProceedings{cgmk09ibq, author = "Bernardo Cuenca Grau and Boris Motik and Yevgeny Kazakov", title = "{Import-by-Query: Ontology Reasoning under Access Limitations}", booktitle = "Proc. of the 21st Int. Joint Conf. on Artificial Intelligence (IJCAI 2009)", year = "2009", editor = "Craig Boutilier", pages = "727--733", address = "Pasadena, CA, USA", month = "July 11--17", }
To enable ontology reuse, the Web Ontology Language (OWL) allows an
ontology $\vis{\KB}$ to import an ontology $\hid{\KB}$. To
reason with such a $\vis{\KB}$, a reasoner needs physical access to
the axioms of $\hid{\KB}$. For copyright and/or privacy reasons,
however, the authors of $\hid{\KB}$ might not want to publish the
axioms of $\hid{\KB}$; instead, they might prefer to provide an
oracle that can answer a (limited) set of queries over
$\hid{\KB}$, thus allowing $\vis{\KB}$ to import $\hid{\KB}$ "by
query.'' In this paper, we study import-by-query algorithms,
which can answer questions about ${\vis{\KB} \cup \hid{\KB}}$ by
accessing only $\vis{\KB}$ and the oracle. We show that no such
algorithm exists in general, and present restrictions under which
importing by query becomes feasible.
Boris Motik and Ian Horrocks. OWL Datatypes: Design and Implementation. In Amit P. Sheth, Steffen Staab, Mike Dean, Massimo Paolucci, Diana Maynard, Timothy Finin, and Krishnaprasad Thirunarayan, editors, Proc. of the 7th Int. Semantic Web Conference (ISWC 2008), volume 5318 of LNCS, pages 307–322, Karlsruhe, Germany, October 26–30 2008. Springer.
@InProceedings{mh08datatypes, author = "Boris Motik and Ian Horrocks", title = "{OWL Datatypes: Design and Implementation}", booktitle = "Proc. of the 7th Int. Semantic Web Conference (ISWC 2008)", year = "2008", editor = "Amit P. Sheth and Steffen Staab and Mike Dean and Massimo Paolucci and Diana Maynard and Timothy Finin and Krishnaprasad Thirunarayan", volume = "5318", series = "LNCS", pages = "307--322", address = "Karlsruhe, Germany", month = "October 26--30", publisher = "Springer", }
We analyze the datatype system of OWL and OWL 2, and discuss certain
nontrivial consequences of its definition, such as the extensibility
of the set of supported datatypes and complexity of reasoning. We
also argue that certain datatypes from the list of normative
datatypes in the current OWL 2 Working Draft are inappropriate and
should be replaced with different ones. Finally, we present an
algorithm for datatype reasoning. Our algorithm is modular in the
sense that it can handle any datatype that supports certain basic
operations. We show how to implement these operations for number and
string datatypes.
Boris Motik, Bernardo Cuenca Grau, Ian Horrocks, and Ulrike Sattler. Representing Structured Objects using Description Graphs. In Gerhard Brewka and Jérôme Lang, editors, Proc. of the 11th Int. Joint Conf. on Principles of Knowledge Representation and Reasoning (KR 2008), pages 296–306, Sydney, NSW, Australia, August 16–19 2008. AAAI Press.
@InProceedings{mghs08description-graphs, author = "Boris Motik and Bernardo Cuenca Grau and Ian Horrocks and Ulrike Sattler", title = "{Representing Structured Objects using Description Graphs}", booktitle = "Proc. of the 11th Int. Joint Conf. on Principles of Knowledge Representation and Reasoning (KR 2008)", year = "2008", editor = "Gerhard Brewka and J{\'e}r{\^o}me Lang", pages = "296--306", address = "Sydney, NSW, Australia", month = "August 16--19", publisher = "AAAI Press", }
State-of-the-art ontology languages are often not sufficiently
expressive to accurately represent domains consisting of objects
connected in a complex way. As a possible remedy, in our previous
work we have proposed an extension of ontology languages with
description graphs. In this paper, we extend this formalism
by allowing for multiple graphs that can be combined in complex
ways, thus obtaining a powerful language for modeling structured
objects. By imposing a particular acyclicity restriction on
the relationships between the graphs, we ensure that checking
satisfiability of knowledge bases expressed in our language is
decidable. We also present a practical reasoning algorithm.
Boris Motik and Ian Horrocks. Individual Reuse in Description Logic Reasoning. In Alessandro Armando, Peter Baumgartner, and Gilles Dowek, editors, Proc. of the 4th Int. Joint Conf. on Automated Reasoning (IJCAR 2008), volume 5195 of LNAI, pages 242–258, Sydney, NSW, Australia, August 12–15 2008. Springer.
@InProceedings{mh08reuse, author = "Boris Motik and Ian Horrocks", title = "{Individual Reuse in Description Logic Reasoning}", booktitle = "Proc. of the 4th Int. Joint Conf. on Automated Reasoning (IJCAR 2008)", year = "2008", editor = "Alessandro Armando and Peter Baumgartner and Gilles Dowek", volume = "5195", series = "LNAI", pages = "242--258", address = "Sydney, NSW, Australia", month = "August 12--15", publisher = "Springer", }
Tableau calculi are the state-of-the-art for reasoning in
description logics (DL). Despite recent improvements, tableau-based
reasoners still cannot process certain knowledge bases (KBs), mainly
because they end up building very large models. To address this, we
propose a tableau calculus with individual reuse: to satisfy
an existential assertion, our calculus nondeterministically tries to
reuse individuals from the model generated thus far. We present two
expansion strategies: one is applicable to the DL \ELOH and
gives us a worst-case optimal algorithm, and the other is applicable
to the DL SHOIQ. Using this technique, our reasoner can process
several KBs that no other reasoner can.
Thanh Tran Duc, Peter Haase, Boris Motik, Bernardo Cuenca Grau, and Ian Horrocks. Metalevel Information in Ontology-Based Applications. In Dieter Fox and Carla P. Gomes, editors, Proc. of the 23rd AAAI Conf. on Artificial Intelligence (AAAI 2008), pages 1237–1242, Chicago, IL, USA, July 13–17 2008. AAAI Press.
@InProceedings{dhmgh08-metalevel-information, author = "Thanh Tran Duc and Peter Haase and Boris Motik and Bernardo Cuenca Grau and Ian Horrocks", title = "{Metalevel Information in Ontology-Based Applications}", booktitle = "Proc. of the 23rd AAAI Conf. on Artificial Intelligence (AAAI 2008)", year = "2008", editor = "Dieter Fox and Carla P. Gomes", pages = "1237--1242", address = "Chicago, IL, USA", month = "July 13--17", publisher = "AAAI Press", }
Applications of Semantic Web technologies often require the
management of metalevel information—that is, information
that provides additional detail about domain-level information, such
as provenance or access rights policies. Existing OWL-based tools
provide little or no support for the representation and management
of metalevel information. To fill this gap, we propose a framework
based on metaviews—ontologies that describe facts in the
application domain. We have implemented our framework in the KAON2
reasoner, and have successfully applied it in a nontrivial scenario.
Boris Motik, Bernardo Cuenca Grau, and Ulrike Sattler. Structured Objects in OWL: Representation and Reasoning. In Jinpeng Huai, Robin Chen, Hsiao-Wuen Hon, Yunhao Liu, Wei-Ying Ma, Andrew Tomkins, and Xiaodong Zhang, editors, Proc. of the 17th Int. World Wide Web Conference (WWW 2008), pages 555–564, Beijing, China, April 21–25 2008. ACM Press.
@InProceedings{mgs08structured-objects, author = "Boris Motik and Bernardo Cuenca Grau and Ulrike Sattler", title = "{Structured Objects in OWL: Representation and Reasoning}", booktitle = "Proc. of the 17th Int. World Wide Web Conference (WWW 2008)", year = "2008", editor = "Jinpeng Huai and Robin Chen and Hsiao-Wuen Hon and Yunhao Liu and Wei-Ying Ma and Andrew Tomkins and Xiaodong Zhang", pages = "555--564", address = "Beijing, China", month = "April 21--25", publisher = "ACM Press", }
Applications of semantic technologies often require the
representation of and reasoning with structured
objects—that is, objects composed of parts connected in complex
ways. Although OWL is a general and powerful language, its class
descriptions and axioms cannot be used to describe arbitrarily
connected structures. An OWL representation of structured objects
can thus be underconstrained, which reduces the inferences that can
be drawn and causes performance problems in reasoning. To address
these problems, we extend OWL with description graphs, which
allow for the description of structured objects in a simple and
precise way. To represent conditional aspects of the domain, we also
allow for SWRL-like rules over description graphs. Based on an
observation about the nature of structured objects, we ensure
decidability of our formalism. We also present a hypertableau-based
decision procedure, which we implemented in the HermiT reasoner. To
evaluate its performance, we have extracted description graphs from
the GALEN and FMA ontologies, classified them successfully, and even
detected a modeling error in GALEN.
Christine Golbreich, Matthew Horridge, Ian Horrocks, Boris Motik, and Rob Shearer. OBO and OWL: Leveraging Semantic Web Technologies for the Life Sciences. In Karl Aberer, Key-Sun Choi, Natasha Fridman Noy, Dean Allemang, Kyung-Il Lee, Lyndon J. B. Nixon, Jennifer Golbeck, Peter Mika, Diana Maynard, Riichiro Mizoguchi, Guus Schreiber, and Philippe Cudré-Mauroux, editors, Proc. of the 6th Int. Semantic Web Conference (ISWC 2007), volume 4825 of LNCS, pages 169–182, Busan, Korea, November 11-15 2007. Springer.
@InProceedings{ghhms07obo-and-owl, author = "Christine Golbreich and Matthew Horridge and Ian Horrocks and Boris Motik and Rob Shearer", title = "{OBO and OWL: Leveraging Semantic Web Technologies for the Life Sciences}", booktitle = "Proc. of the 6th Int. Semantic Web Conference (ISWC 2007)", year = "2007", editor = "Karl Aberer and Key-Sun Choi and Natasha Fridman Noy and Dean Allemang and Kyung-Il Lee and Lyndon J. B. Nixon and Jennifer Golbeck and Peter Mika and Diana Maynard and Riichiro Mizoguchi and Guus Schreiber and Philippe Cudr{\'e}-Mauroux", volume = "4825", series = "LNCS", pages = "169--182", address = "Busan, Korea", month = "November 11-15", publisher = "Springer", }
OBO is an ontology language that has often been used for modeling
ontologies in the life sciences. Its definition is relatively
informal, so, in this paper, we provide a clear specification for
OBO syntax and semantics via a mapping to OWL. This mapping also
allows us to apply existing Semantic Web tools and techniques to
OBO. We show that Semantic Web reasoners can be used to efficiently
reason with OBO ontologies. Furthermore, we show that grounding the
OBO language in formal semantics is useful for the ontology
development process: using an OWL reasoner, we detected a likely
modeling error in one OBO ontology.
Boris Motik, Rob Shearer, and Ian Horrocks. Optimized Reasoning in Description Logics using Hypertableaux. In Frank Pfenning, editor, Proc. of the 21st Conference on Automated Deduction (CADE-21), volume 4603 of LNAI, pages 67–83, Bremen, Germany, July 17–20 2007. Springer.
@InProceedings{msh07optimized, author = "Boris Motik and Rob Shearer and Ian Horrocks", title = "{Optimized Reasoning in Description Logics using Hypertableaux}", booktitle = "Proc. of the 21st Conference on Automated Deduction (CADE-21)", year = "2007", editor = "Frank Pfenning", volume = "4603", series = "LNAI", pages = "67--83", address = "Bremen, Germany", month = "July 17--20", publisher = "Springer", }
We present a novel reasoning calculus for Description Logics
(DLs)—knowledge representation formalisms with applications in
areas such as the Semantic Web. In order to reduce the
nondeterminism due to general inclusion axioms, we base our calculus
on hypertableau and hyperresolution calculi, which we extend with a
blocking condition to ensure termination. To prevent the calculus
from generating large models, we introduce "anywhere"
pairwise blocking. Our preliminary implementation shows significant
performance improvements on several well-known ontologies. To the
best of our knowledge, our reasoner is currently the only one that
can classify the original version of the GALEN terminology.
Boris Motik, Ian Horrocks, and Ulrike Sattler. Bridging the Gap Between OWL and Relational Databases. In Carey L. Williamson, Mary Ellen Zurko, Peter F. Patel-Schneider, and Prashant J. Shenoy, editors, Proc. of the 16th International World Wide Web Conference (WWW 2007), pages 807–816, Banff, AB, Canada, May 8–12 2007. ACM Press.
@InProceedings{mhs07bridging, author = "Boris Motik and Ian Horrocks and Ulrike Sattler", title = "{Bridging the Gap Between OWL and Relational Databases}", booktitle = "Proc. of the 16th International World Wide Web Conference (WWW 2007)", year = "2007", editor = "Carey L. Williamson and Mary Ellen Zurko and Peter F. Patel-Schneider and Prashant J. Shenoy", pages = "807--816", address = "Banff, AB, Canada", month = "May 8--12", publisher = "ACM Press", }
Schema statements in OWL are interpreted quite differently from
analogous statements in relational databases. If these statements
are meant to be interpreted as integrity constraints (ICs), OWL's
interpretation may seem confusing and/or inappropriate. Therefore,
we propose an extension of OWL with ICs that captures the intuition
behind ICs in relational databases. We discuss the algorithms for
checking IC satisfaction for different types of knowledge bases, and
show that, if the constraints are satisfied, we can disregard them
while answering a broad range of positive queries.
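As an illustration of the difference between the two readings (the example is
ours and not necessarily the one used in the paper), consider the axiom
\[
\mathit{Person} \sqsubseteq \exists \mathit{hasSSN}.\mathit{SSN}.
\]
Under the standard OWL semantics, asserting $\mathit{Person}(\mathit{peter})$
merely entails that peter has some, possibly unknown, SSN. Read as an
integrity constraint in the database spirit, the same axiom requires an
explicitly known SSN for every asserted person, so a knowledge base containing
$\mathit{Person}(\mathit{peter})$ but no corresponding $\mathit{hasSSN}$
assertion violates the constraint.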
Boris Motik and Riccardo Rosati. A Faithful Integration of Description Logics with Logic Programming. In Manuela M. Veloso, editor, Proc. of the 20th Int. Joint Conference on Artificial Intelligence (IJCAI 2007), pages 477–482, Hyderabad, India, January 6–12 2007. Morgan Kaufmann Publishers.
@InProceedings{mr07dl-and-lp, author = "Boris Motik and Riccardo Rosati", title = "{A Faithful Integration of Description Logics with Logic Programming}", booktitle = "Proc. of the 20th Int. Joint Conference on Artificial Intelligence (IJCAI 2007)", year = "2007", editor = "Manuela M. Veloso", pages = "477--482", address = "Hyderabad, India", month = "January 6--12", publisher = "Morgan Kaufmann Publishers", }
Integrating description logics (DL) and logic programming (LP) would
produce a very powerful and useful formalism. However, DLs and LP
are based on quite different principles, so achieving a seamless
integration is not trivial. In this paper, we introduce hybrid
MKNF knowledge bases that faithfully integrate DLs with LP using
the logic of Minimal Knowledge and Negation as Failure (MKNF)
\cite{DBLP:conf/ijcai/Lifschitz91}. We also give reasoning
algorithms and tight data complexity bounds for several interesting
fragments of our logic.
Boris Motik, Ian Horrocks, Riccardo Rosati, and Ulrike Sattler. Can OWL and Logic Programming Live Together Happily Ever After? In Isabel F. Cruz, Stefan Decker, Dean Allemang, Chris Preist, Daniel Schwabe, Peter Mika, Michael Uschold, and Lora Aroyo, editors, Proc. of the 5th Int. Semantic Web Conference (ISWC 2006), volume 4273 of LNCS, pages 501–514, Athens, GA, USA, November 5–9 2006. Springer.
@InProceedings{mhs06happily, author = "Boris Motik and Ian Horrocks and Riccardo Rosati and Ulrike Sattler", title = "{Can OWL and Logic Programming Live Together Happily Ever After?}", booktitle = "Proc. of the 5th Int. Semantic Web Conference (ISWC 2006)", year = "2006", editor = "Isabel F. Cruz and Stefan Decker and Dean Allemang and Chris Preist and Daniel Schwabe and Peter Mika and Michael Uschold and Lora Aroyo", volume = "4273", series = "LNCS", pages = "501--514", address = "Athens, GA, USA", month = "November 5--9", publisher = "Springer", }
Logic programming (LP) is often seen as a way to overcome several
shortcomings of the Web Ontology Language (OWL), such as the
inability to model integrity constraints or perform closed-world
querying. However, the open-world semantics of OWL seems to be
fundamentally incompatible with the closed-world semantics of LP.
This has sparked a heated debate in the Semantic Web community,
resulting in proposals for alternative ontology languages based
entirely on logic programming. To help resolve this debate, we
investigate the practical use cases which seem to be addressed by
logic programming. In fact, many of these requirements have already
been addressed outside the Semantic Web. By drawing inspiration from
these existing formalisms, we present a novel logic of hybrid
MKNF knowledge bases, which seamlessly integrates OWL with LP. We
are thus capable of addressing the identified use cases without a
radical change in the architecture of the Semantic Web.
Boris Motik and Ulrike Sattler. A Comparison of Reasoning Techniques for Querying Large Description Logic ABoxes. In Miki Hermann and Andrei Voronkov, editors, Proc. of the 13th Int. Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR 2006), volume 4246 of LNCS, pages 227–241, Phnom Penh, Cambodia, November 13–17 2006. Springer.
@InProceedings{ms06kaon2, author = "Boris Motik and Ulrike Sattler", title = "{A Comparison of Reasoning Techniques for Querying Large Description Logic ABoxes}", booktitle = "Proc. of the 13th Int. Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR 2006)", year = "2006", editor = "Miki Hermann and Andrei Voronkov", volume = "4246", series = "LNCS", pages = "227--241", address = "Phnom Penh, Cambodia", month = "November 13--17", publisher = "Springer", }
Many modern applications of description logics (DLs) require
answering queries over large data quantities, structured according
to relatively simple ontologies. For such applications, we
conjectured that reusing ideas of deductive databases might improve
scalability of DL systems. Hence, in our previous work, we developed
an algorithm for reducing a DL knowledge base to a disjunctive
datalog program. To test our conjecture, we implemented our
algorithm in a new DL reasoner KAON2, which we describe in this
paper. Furthermore, we created a comprehensive test suite and used
it to conduct a performance evaluation. Our results show that, on
knowledge bases with large ABoxes but with simple TBoxes, our
technique indeed shows good performance; in contrast, on knowledge
bases with large and complex TBoxes, existing techniques still
perform better. This allowed us to gain important insights into
strengths and weaknesses of both approaches.
Yevgeny Kazakov and Boris Motik. A Resolution-Based Decision Procedure for SHOIQ. In Ulrich Furbach, John Harrison, and Natarajan Shankar, editors, Proc. of the 3rd Int. Joint Conference on Automated Reasoning (IJCAR 2006), volume 4130 of LNAI, pages 662–667, Seattle, WA, USA, August 17–20 2006. Springer.
@InProceedings{km06shoiq, author = "Yevgeny Kazakov and Boris Motik", title = "{A Resolution-Based Decision Procedure for SHOIQ}", booktitle = "Proc. of the 3rd Int. Joint Conference on Automated Reasoning (IJCAR 2006)", year = "2006", editor = "Ulrich Furbach and John Harrison and Natarajan Shankar", volume = "4130", series = "LNAI", pages = "662--667", address = "Seattle, WA, USA", month = "August 17--20", publisher = "Springer", }
We present a resolution-based decision procedure for the description
logic SHOIQ—the logic underlying the Semantic Web ontology
language OWL-DL. Our procedure is goal-oriented, and it naturally
extends a similar procedure for SHIQ, which has proven itself in
practice. Applying existing techniques for deriving saturation-based
decision procedures to SHOIQ is not straightforward due to
nominals, number restrictions, and inverse roles—a combination
known to cause termination problems. We overcome this difficulty by
using the basic superposition calculus, extended with custom
simplification rules.
Stephan Grimm, Boris Motik, and Chris Preist. Matching Semantic Service Descriptions with Local Closed-World Reasoning. In York Sure and John Domingue, editors, Proc. of the 3rd European Semantic Web Conference (ESWC 2006), volume 4011 of LNCS, pages 575–589, Budva, Montenegro, June 11–14 2006. Springer.
@InProceedings{gmp06matching, author = "Stephan Grimm and Boris Motik and Chris Preist", title = "{Matching Semantic Service Descriptions with Local Closed-World Reasoning}", booktitle = "Proc. of the 3rd European Semantic Web Conference (ESWC 2006)", year = "2006", editor = "York Sure and John Domingue", volume = "4011", series = "LNCS", pages = "575--589", address = "Budva, Montenegro", month = "June 11--14", publisher = "Springer", }
Semantic Web Services were developed with the goal of automating
the integration of business processes on the Web. The main idea is
to express the functionality of the services explicitly, using
semantic annotations. Such annotations can, for example, be used
for service discovery—the task of locating a service capable of
fulfilling a business request. In this paper, we present a
framework for annotating Web Services using description logics
(DLs), a family of knowledge representation formalisms widely used
in the Semantic Web. We show how to realise service discovery by
matching semantic service descriptions, applying DL inferencing.
Building on our previous work, we identify problems that occur in
the matchmaking process due to the open-world assumption when
handling incomplete service descriptions. We propose to use
autoepistemic extensions to DLs (ADLs) to overcome these problems.
ADLs allow for non-monotonic reasoning and for querying DL
knowledge bases under the local closed-world assumption. We
investigate the use of epistemic operators of ADLs in service
descriptions, and show how they affect DL inferences in the
context of semantic matchmaking.
Boris Motik. On the Properties of Metamodeling in OWL. In Yolanda Gil, Enrico Motta, Richard V. Benjamins, and Mark Musen, editors, Proc. of the 4th Int. Semantic Web Conference (ISWC 2005), volume 3729 of LNCS, pages 548–562, Galway, Ireland, November 6–10 2005. Springer.
@InProceedings{m05metamodeling, author = "Boris Motik", title = "{On the Properties of Metamodeling in OWL}", booktitle = "Proc. of the 4th Int. Semantic Web Conference (ISWC 2005)", year = "2005", editor = "Yolanda Gil and Enrico Motta and Richard V. Benjamins and Mark Musen", volume = "3729", series = "LNCS", pages = "548--562", address = "Galway, Ireland", month = "November 6--10", publisher = "Springer", }
A common practice in conceptual modeling is to separate the
intensional from the extensional model. Although very intuitive,
this approach is inadequate for many complex domains, where the
borderline between the two models is not clear-cut. Therefore,
OWL-Full, the most expressive of the Semantic Web ontology
languages, allows combining the intensional and the extensional
model by a feature we refer to as metamodeling. In this
paper, we show that the semantics of metamodeling adopted in
OWL-Full leads to undecidability of basic inference problems, due to
free mixing of logical and metalogical symbols. Based on this
result, we propose two alternative semantics for metamodeling: the
contextual and the HiLog semantics. We show that
SHOIQ — a description logic underlying OWL-DL — extended with
metamodeling under either semantics is decidable. Finally, we show
how the latter semantics can be used in practice to axiomatize the
logical interaction between concepts and metaconcepts.
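The kind of metamodeling at issue can be illustrated with a small example in
the spirit of the paper (the concrete names are ours):
\[
\mathit{Eagle}(\mathit{harry}), \qquad \mathit{EndangeredSpecies}(\mathit{Eagle}).
\]
Here Eagle is used both as a concept, of which harry is an instance, and as an
individual, an instance of the metaconcept EndangeredSpecies; the contextual
and the HiLog semantics differ precisely in how these two uses of the same
symbol are allowed to interact.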
Ullrich Hustadt, Boris Motik, and Ulrike Sattler. Data Complexity of Reasoning in Very Expressive Description Logics. In Leslie Pack Kaelbling and Alessandro Saffiotti, editors, Proc. of the 19th Int. Joint Conference on Artificial Intelligence (IJCAI 2005), pages 466–471, Edinburgh, UK, July 30–August 5 2005. Morgan Kaufmann Publishers.
@InProceedings{hms05data, author = "Ullrich Hustadt and Boris Motik and Ulrike Sattler", title = "{Data Complexity of Reasoning in Very Expressive Description Logics}", booktitle = "Proc. of the 19th Int. Joint Conference on Artificial Intelligence (IJCAI 2005)", year = "2005", editor = "Leslie Pack Kaelbling and Alessandro Saffiotti", pages = "466--471", address = "Edinburgh, UK", month = "July 30--August 5", publisher = "Morgan Kaufmann Publishers", }
Data complexity of reasoning in description logics (DLs)
estimates the performance of reasoning algorithms measured in the
size of the ABox only. We show that, even for the very expressive DL
SHIQ, satisfiability checking is data complete for NP. For
applications with large ABoxes, this can be a more accurate estimate
than the usually considered combined complexity, which is
ExpTime-complete. Furthermore, we identify an expressive fragment,
Horn-SHIQ, which is data complete for P, thus being very appealing
for practical usage.
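As an assumed illustration of the fragment's defining feature: roughly, an
axiom belongs to Horn-SHIQ when its clausal form needs no disjunction in rule
heads, so
\[
A \sqcap \exists R.B \sqsubseteq C \quad \text{is Horn, whereas} \quad A \sqsubseteq B \sqcup C \quad \text{is not,}
\]
since the first axiom corresponds to the clause
$C(x) \leftarrow A(x) \wedge R(x,y) \wedge B(y)$, while the second requires the
disjunctive head $B(x) \vee C(x)$.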
Ullrich Hustadt, Boris Motik, and Ulrike Sattler. A Decomposition Rule for Decision Procedures by Resolution-based Calculi. In Franz Baader and Andrei Voronkov, editors, Proc. of the 11th Int. Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR 2004), volume 3452 of LNAI, pages 21–35, Montevideo, Uruguay, March 14–18 2005. Springer.
@InProceedings{hms04decomposition, author = "Ullrich Hustadt and Boris Motik and Ulrike Sattler", title = "{A Decomposition Rule for Decision Procedures by Resolution-based Calculi}", booktitle = "Proc. of the 11th Int. Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR 2004)", year = "2005", editor = "Franz Baader and Andrei Voronkov", volume = "3452", series = "LNAI", pages = "21--35", address = "Montevideo, Uruguay", month = "March 14--18", publisher = "Springer", }
Resolution-based calculi are among the most widely used calculi for
theorem proving in first-order logic. Numerous refinements of
resolution are nowadays available, such as basic
superposition, a calculus highly optimized for theorem proving with
equality. However, even such an advanced calculus does not restrict
inferences enough to obtain decision procedures for complex logics,
such as SHIQ. In this paper, we present a new decomposition
inference rule, which can be combined with any resolution-based
calculus compatible with the standard notion of redundancy. We
combine decomposition with basic superposition to obtain three new
decision procedures: (i) for the description logic SHIQ,
(ii) for the description logic ALCHIQb, and (iii) for
answering conjunctive queries over SHIQ knowledge bases. The first
two procedures are worst-case optimal and, based on the vast
experience in building efficient theorem provers, we expect them to
be suitable for practical usage.
Ullrich Hustadt, Boris Motik, and Ulrike Sattler. Reducing SHIQ- Description Logic to Disjunctive Datalog Programs. In Didier Dubois, Christopher A. Welty, and Mary-Anne Williams, editors, Proc. of the 9th Int. Conference on Principles of Knowledge Representation and Reasoning (KR 2004), pages 152–162, Whistler, Canada, June 2–5 2004. AAAI Press.
@InProceedings{hms04reducing, author = "Ullrich Hustadt and Boris Motik and Ulrike Sattler", title = "{Reducing $\mathcal{SHIQ}^-$ Description Logic to Disjunctive Datalog Programs}", booktitle = "Proc. of the 9th Int. Conference on Principles of Knowledge Representation and Reasoning (KR 2004)", year = "2004", editor = "Didier Dubois and Christopher A. Welty and Mary-Anne Williams", pages = "152--162", address = "Whistler, Canada", month = "June 2--5", publisher = "AAAI Press", }
As applications of description logics proliferate, efficient
reasoning with large ABoxes (sets of individuals with
descriptions) becomes ever more important. Motivated by the
prospects of reusing optimization techniques from deductive
databases, in this paper, we present a novel approach to checking
consistency of ABoxes, instance checking and query answering,
w.r.t. ontologies formulated using a slight restriction of the
description logic SHIQ. Our approach proceeds in three steps: (i)
the ontology is translated into first-order clauses, (ii) TBox and
RBox clauses are saturated using a resolution-based decision
procedure, and (iii) the saturated set of clauses is translated
into a disjunctive datalog program. Thus, query answering can be
performed using the resulting program, while applying all existing
optimization techniques, such as join-order optimizations or magic
sets. Equally important, the resolution-based decision procedure
we present is worst-case optimal for unary coding of numbers, i.e.
it runs in ExpTime.
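As a schematic example of the target formalism (ours, and deliberately
ignoring existential quantifiers, whose treatment requires the saturation step
described above), a disjunctive TBox axiom corresponds directly to a
disjunctive datalog rule:
\[
A \sqsubseteq B \sqcup C \qquad\text{becomes}\qquad B(x) \vee C(x) \leftarrow A(x).
\]
ABox assertions become datalog facts, so query answering over the knowledge
base reduces to query answering over the resulting program, where standard
database optimizations apply.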
Ullrich Hustadt, Boris Motik, and Ulrike Sattler. Reasoning in Description Logics with a Concrete Domain in the Framework of Resolution. In Ramon López de Mántaras and Lorenza Saitta, editors, Proc. of the 16th European Conference on Artificial Intelligence (ECAI 2004), pages 353–357, Valencia, Spain, August 22–27 2004. IOS Press.
@InProceedings{hms04concrete, author = "Ullrich Hustadt and Boris Motik and Ulrike Sattler", title = "{Reasoning in Description Logics with a Concrete Domain in the Framework of Resolution}", booktitle = "Proc. of the 16th European Conference on Artificial Intelligence (ECAI 2004)", year = "2004", editor = "Ramon L{\'o}pez de M{\'a}ntaras and Lorenza Saitta", pages = "353--357", address = "Valencia, Spain", month = "August 22--27", publisher = "IOS Press", }
In description logics, concrete domains are used to model concrete
properties such as weight, name, or age, having concrete values
such as integers or strings, with built-in predicates, such
as <= or =. Until now, reasoning with concrete domains has
been studied predominantly in the context of tableaux and automata
calculi. In this paper, we present a general approach for concrete
domain reasoning in the resolution framework. We apply this
approach to devise an optimal decision procedure for SHIQ(D), the
extension of SHIQ with a restricted form of concrete domains,
serving as the logical underpinning of the web ontology language
OWL-DL.
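A typical concrete-domain axiom, given here only as an assumed illustration,
defines an abstract concept in terms of a built-in predicate over data values:
\[
\mathit{Adult} \equiv \mathit{Person} \sqcap \exists \mathit{age}.{\geq_{18}},
\]
where $\geq_{18}$ is a unary concrete-domain predicate over the integers.
Reasoning with such axioms generally requires checking the satisfiability of
conjunctions of concrete-domain predicates alongside the purely logical part,
which is the kind of interaction the resolution-based approach has to
accommodate.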
Boris Motik, Ulrike Sattler, and Rudi Studer. Query Answering for OWL-DL with Rules. In Sheila A. McIlraith, Dimitris Plexousakis, and Frank van Harmelen, editors, Proc. of the 3rd Int. Semantic Web Conference (ISWC 2004), volume 3298 of LNCS, pages 549–563, Hiroshima, Japan, November 7–11 2004. Springer.
@InProceedings{mss04dl-safe, author = "Boris Motik and Ulrike Sattler and Rudi Studer", title = "{Query Answering for OWL-DL with Rules}", booktitle = "Proc. of the 3rd Int. Semantic Web Conference (ISWC 2004)", year = "2004", editor = "Sheila A. McIlraith and Dimitris Plexousakis and Frank van Harmelen", volume = "3298", series = "LNCS", pages = "549--563", address = "Hiroshima, Japan", month = "November 7--11", publisher = "Springer", }
Both OWL-DL and function-free Horn rules\footnote{Throughout this
paper, we use "rules" and "clauses" synonymously, following
\cite{HoPa04a}.} are decidable logics with interesting, yet
orthogonal expressive power: from the rules perspective, OWL-DL is
restricted to tree-like rules, but provides both existentially and
universally quantified variables and full, monotonic negation. From
the description logic perspective, rules are restricted to universal
quantification, but allow for the interaction of variables in
arbitrary ways. Clearly, a combination of OWL-DL and rules is
desirable for building Semantic Web ontologies, and several such
combinations have already been discussed. However, such a
combination might easily lead to the undecidability of interesting
reasoning problems. Here, we present a decidable such
combination which is, to the best of our knowledge, more general
than similar decidable combinations proposed so far. Decidability is
obtained by restricting rules to so-called DL-safe ones,
requiring each variable in a rule to occur in a non-DL-atom in the
rule body. We show that query answering in such a combined logic is
decidable, and we discuss its expressive power by means of a
non-trivial example. Finally, we present an algorithm for query
answering in SHIQ(D) extended with DL-safe rules based on the
reduction to disjunctive datalog.
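A minimal sketch of the DL-safety restriction (the predicate $O$ is an
illustrative non-DL predicate assumed to hold exactly the explicitly named
individuals):
\[
C(x) \leftarrow A(x) \wedge R(x,y) \wedge B(y) \wedge O(x) \wedge O(y).
\]
Because every variable occurs in a non-DL-atom in the rule body, the rule can
apply only to named individuals and not to the anonymous individuals
introduced by existential quantifiers, which is what keeps query answering
decidable.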
Gábor Nagypál and Boris Motik. A Fuzzy Model for Representing Uncertain, Subjective, and Vague Temporal Knowledge in Ontologies. In Robert Meersman, Zahir Tari, and Douglas C. Schmidt, editors, Proc. of the 2003 Int. Conference on Ontologies, Databases and Applications of SEmantics (ODBASE 2003), volume 2888 of LNCS, pages 906–923, Catania, Italy, November 3–7 2003. Springer.
@InProceedings{nm03fuzzy, author = "G{\'a}bor Nagyp{\'a}l and Boris Motik", title = "{A Fuzzy Model for Representing Uncertain, Subjective, and Vague Temporal Knowledge in Ontologies}", booktitle = "Proc. of the 2003 Int. Conference on Ontologies, Databases and Applications of SEmantics (ODBASE 2003)", year = "2003", editor = "Robert Meersman and Zahir Tari and Douglas C. Schmidt", volume = "2888", series = "LNCS", pages = "906--923", address = "Catania, Italy", month = "November 3--7", publisher = "Springer", }
Time modeling is a crucial feature in many application domains. However, temporal information is often not crisp, but rather uncertain, subjective, and vague. This is particularly true when representing historical information, as historical accounts are inherently imprecise. Similarly, we conjecture that representing uncertain temporal information will be a common requirement in the Semantic Web. Hence, existing approaches for temporal modeling based on a crisp representation of time cannot be applied to these advanced modeling tasks. To overcome these difficulties, in this paper we present a fuzzy interval-based temporal model capable of representing imprecise temporal knowledge. Our approach naturally subsumes existing crisp temporal models, i.e. crisp temporal relationships are intuitively represented in our system. Apart from presenting the fuzzy temporal model, we discuss how this model is integrated with the ontology model to allow annotating ontology definitions with time specifications.
Raphael Volz, Steffen Staab, and Boris Motik. Incremental Maintenance of Materialized Ontologies. In Robert Meersman, Zahir Tari, and Douglas C. Schmidt, editors, Proc. of the 2003 Int. Conference on Ontologies, Databases and Applications of SEmantics (ODBASE 2003), volume 2888 of LNCS, pages 707–724, Catania, Italy, November 3–7 2003. Springer.
@InProceedings{vsm03incremental, author = "Raphael Volz and Steffen Staab and Boris Motik", title = "{Incremental Maintenance of Materialized Ontologies}", booktitle = "Proc. of the 2003 Int. Conference on Ontologies, Databases and Applications of SEmantics (ODBASE 2003)", year = "2003", editor = "Robert Meersman and Zahir Tari and Douglas C. Schmidt", volume = "2888", series = "LNCS", pages = "707--724", address = "Catania, Italy", month = "November 3--7", publisher = "Springer", }
This paper discusses the incremental maintenance of materialized ontologies in a rule-enabled Semantic Web. Materialization speeds up query processing by explicating the implicit entailments that are sanctioned by the semantics of an ontology. The complexity of reasoning with the ontology is thereby shifted from query time to update time. We assume that materialization techniques will frequently be important to achieve a scalable Semantic Web, since read access to ontologies is predominant. Central to materialization are maintenance techniques that allow a materialization to be incrementally updated when changes occur. We present a novel solution that copes with changes in both rules and facts. To achieve this, we extend a known approach for the incremental maintenance of views in deductive databases. We show how our technique can be employed for a broad range of existing Web ontology languages, such as RDF/S and subsets of OWL, and we present a first evaluation.
Alexander Maedche, Boris Motik, Ljiljana Stojanovic, Rudi Studer, and Raphael Volz. An infrastructure for searching, reusing and evolving distributed ontologies. In Proc. of the 12th International World Wide Web Conference (WWW 2003), pages 439–448, Budapest, Hungary, May 20–24 2003. ACM Press.
@InProceedings{mmssv03infrastructure, author = "Alexander Maedche and Boris Motik and Ljiljana Stojanovic and Rudi Studer and Raphael Volz", title = "{An infrastructure for searching, reusing and evolving distributed ontologies}", booktitle = "Proc. of the 12th International World Wide Web Conference (WWW 2003)", year = "2003", pages = "439--448", address = "Budapest, Hungary", month = "May 20--24", publisher = "ACM Press", }
The vision of the Semantic Web can only be realized through
proliferation of well-known ontologies describing different
domains. To enable interoperability in the Semantic Web, it will
be necessary to break these ontologies down into smaller,
well-focused units that may be reused. Currently, three problems
arise in that scenario. Firstly, it is difficult to locate
ontologies to be reused, thus leading to many ontologies modeling
the same thing. Secondly, current tools do not provide means for
reusing existing ontologies while building new ontologies.
Finally, ontologies are rarely static, but are being adapted to
changing requirements. Hence, an infrastructure for management of
ontology changes, taking into account dependencies between
ontologies, is needed. In this paper we present such an
infrastructure addressing the aforementioned problems.
Raphael Volz, Daniel Oberle, Steffen Staab, and Boris Motik. KAON SERVER — A Semantic Web Management System. In Alternate Track Proc. of the 12th International World Wide Web Conference (WWW 2003), Budapest, Hungary, May 20–24 2003. ACM Press.
@InProceedings{vosm03kaonserver, author = "Raphael Volz and Daniel Oberle and Steffen Staab and Boris Motik", title = "{KAON SERVER --- A Semantic Web Management System}", booktitle = "Alternate Track Proc. of the 12th International World Wide Web Conference (WWW 2003)", year = "2003", address = "Budapest, Hungary", month = "May 20--24", publisher = "ACM Press", }
The growing use of ontologies in applications creates the need for an infrastructure that allows developers to more easily combine different software modules, such as ontology stores, editors, or inference engines, into comprehensive ontology-based solutions. We call such an infrastructure an Ontology Software Environment. The article discusses the requirements and design issues of such an Ontology Software Environment. In particular, we present this discussion in light of the ontology and (meta)data standards that exist in the Semantic Web and present our corresponding implementation, the KAON SERVER.
Boris Motik, Alexander Maedche, and Raphael Volz. A Conceptual Modeling Approach for Semantics-Driven Enterprise Applications. In Robert Meersman and Zahir Tari, editors, Proc. of the 2002 Int. Conference on Ontologies, Databases and Applications of SEmantics (ODBASE 2002), volume 2519 of LNCS, pages 1082–1099, Irvine, CA, USA, 2002. Springer.
@InProceedings{mmv02conceptual, author = "Boris Motik and Alexander Maedche and Raphael Volz", title = "{A Conceptual Modeling Approach for Semantics-Driven Enterprise Applications}", booktitle = "Proc. of the 2002 Int. Conference on Ontologies, Databases and Applications of SEmantics (ODBASE 2002)", year = "2002", editor = "Robert Meersman and Zahir Tari", volume = "2519", series = "LNCS", pages = "1082--1099", address = "Irvine, CA, USA", publisher = "Springer", }
In recent years ontologies – shared conceptualizations of some
domain – are increasingly seen as the key to further automation
of information processing. Although many approaches for
representing and applying ontologies have already been devised,
they have not found their way into enterprise applications. In
this paper we argue that ontology-based systems lack critical
technical features, such as scalability, reliability, concurrency,
and integration with existing data sources, as well as support for
modularization and meta-concept modeling from the conceptual
modeling perspective. We present a conceptual modeling approach
that balances some of these trade-offs so that it integrates more
easily into the existing enterprise information infrastructure.
Our approach is implemented within KAON, the Karlsruhe Ontology
and Semantic Web tool suite.
Erol Bozsak, Marc Ehrig, Siegfried Handschuh, Andreas Hotho, Alexander Maedche, Boris Motik, Daniel Oberle, Christoph Schmitz, Steffen Staab, Ljiljana Stojanovic, Nenad Stojanovic, Rudi Studer, Gerd Stumme, York Sure, Julien Tane, Raphael Volz, and Valentin Zacharias. KAON — Towards a Large Scale Semantic Web. In Kurt Bauknecht, A. Min Tjoa, and Gerald Quirchmayr, editors, Proc. of the 3rd Int. Conference on E-Commerce and Web Technologies (EC-Web 2002), volume 2455 of LNCS, pages 304–313, Aix-en-Provence, France, September 2–6 2002. Springer.
@InProceedings{behhmmossssssstvz02kaon2, author = "Erol Bozsak and Marc Ehrig and Siegfried Handschuh and Andreas Hotho and Alexander Maedche and Boris Motik and Daniel Oberle and Christoph Schmitz and Steffen Staab and Ljiljana Stojanovic and Nenad Stojanovic and Rudi Studer and Gerd Stumme and York Sure and Julien Tane and Raphael Volz and Valentin Zacharias", title = "{KAON --- Towards a Large Scale Semantic Web}", booktitle = "Proc. of the 3rd Int. Conference on E-Commerce and Web Technologies (EC-Web 2002)", year = "2002", editor = "Kurt Bauknecht and A. Min Tjoa and Gerald Quirchmayr", volume = "2455", series = "LNCS", pages = "304--313", address = "Aix-en-Provence, France", month = "September 2--6", publisher = "Springer", }
The Semantic Web will bring structure to the content of Web pages, being an extension of the current Web in which information is given a well-defined meaning. Especially within e-commerce applications, Semantic Web technologies in the form of ontologies and metadata are becoming increasingly prevalent and important. This paper introduces KAON, the Karlsruhe Ontology and Semantic Web Tool Suite. KAON is developed jointly within several EU-funded projects and is specifically designed to provide the ontology and metadata infrastructure needed for building, using, and accessing semantics-driven applications on the Web and on the desktop.
Alexander Maedche, Boris Motik, Nuno Silva, and Raphael Volz. MAFRA — A MApping FRAmework for Distributed Ontologies. In Asunción Gómez-Pérez and Richard V. Benjamins, editors, Proc. of the 13th Int. Conference on Knowledge Engineering and Knowledge Management (EKAW 2002), volume 2473 of LNAI, pages 235–250, Siguenza, Spain, October 1–4 2002. Springer.
@InProceedings{mmsv02mafra, author = "Alexander Maedche and Boris Motik and Nuno Silva and Raphael Volz", title = "{MAFRA --- A MApping FRAmework for Distributed Ontologies}", booktitle = "Proc. of the 13th Int. Conference on Knowledge Engineering and Knowledge Management (EKAW 2002)", year = "2002", editor = "Asunci{\'o}n G{\'o}mez-P{\'e}rez and Richard V. Benjamins", volume = "2473", series = "LNAI", pages = "235--250", address = "Siguenza, Spain", month = "October 1--4", publisher = "Springer", }
Ontologies as means for conceptualizing and structuring domain
knowledge within a community of interest are seen as a key to
realize the Semantic Web vision. However, the decentralized nature
of the Web makes achieving this consensus across communities
difficult, thus, hampering efficient knowledge sharing between
them. In order to balance the autonomy of each community with the
need for interoperability, mapping mechanisms between distributed
ontologies in the Semantic Web are required. In this paper we
present MAFRA, an interactive, incremental and dynamic framework
for mapping distributed ontologies.
Ljiljana Stojanovic, Alexander Maedche, Boris Motik, and Nenad Stojanovic. User-Driven Ontology Evolution Management. In Asunción Gómez-Pérez and Richard V. Benjamins, editors, Proc. of the 13th Int. Conference on Knowledge Engineering and Knowledge Management (EKAW 2002), volume 2473 of LNAI, pages 285–300, Siguenza, Spain, October 1–4 2002. Springer.
@InProceedings{smms02userdriven, author = "Ljiljana Stojanovic and Alexander Maedche and Boris Motik and Nenad Stojanovic", title = "{User-Driven Ontology Evolution Management}", booktitle = "Proc. of the 13th Int. Conference on Knowledge Engineering and Knowledge Management (EKAW 2002)", year = "2002", editor = "Asunci{\'o}n G{\'o}mez-P{\'e}rez and Richard V. Benjamins", volume = "2473", series = "LNAI", pages = "285--300", address = "Siguenza, Spain", month = "October 1--4", publisher = "Springer", }
With the rising importance of knowledge interchange, many industrial and academic applications have adopted ontologies as their conceptual backbone. However, industrial and academic environments are very dynamic, thus inducing changes to application requirements. To accommodate these changes, the underlying ontology often must be evolved as well. As ontologies grow in size, the complexity of change management increases, thus requiring a well-structured ontology evolution process. In this paper we identify a possible six-phase evolution process and focus on providing the user with capabilities to control and customize it. We introduce the concept of an evolution strategy that encapsulates a policy for evolution with respect to the user's requirements.
Alexander Maedche, Boris Motik, Ljiljana Stojanovic, Rudi Studer, and Raphael Volz. Managing Multiple Ontologies and Ontology Evolution in Ontologging. In Mark A. Musen, Bernd Neumann, and Rudi Studer, editors, Proc. of the 17th Int. Federation for Information Processing Congress (IFIP 2002), volume 221 of IFIP Conference Proceedings, pages 51–63, Montréal, Québec, Canada, August 25–30 2002. Kluwer.
@InProceedings{mmssv02managing, author = "Alexander Maedche and Boris Motik and Ljiljana Stojanovic and Rudi Studer and Raphael Volz", title = "{Managing Multiple Ontologies and Ontology Evolution in Ontologging}", booktitle = "Proc. of the 17th Int. Federation for Information Processing Congress (IFIP 2002)", year = "2002", editor = "Mark A. Musen and Bernd Neumann and Rudi Studer", volume = "221", series = "IFIP Conference Proceedings", pages = "51--63", address = "Montr{\'e}al, Qu{\'e}bec, Canada", month = "August 25--30", publisher = "Kluwer", }
Ontologging is an ontology-driven environment for enabling next-generation knowledge management applications built on Semantic Web technology. In this paper we first present the conceptual architecture underlying Ontologging. Second, we focus on two important challenges for ontology-based knowledge management, namely supporting multiple ontologies and managing ontology evolution. We provide a general approach for handling these two essential issues within the Ontologging architecture.
Boris Motik and Vlado Glavinić. Enabling Agent Architecture through an RDF Query and Inference Engine. In Proc. of the 10th Mediterranean Electrotechnical Conference (MELECON 2000), volume II, pages 762–765, Lemesos, Cyprus, May 29–31 2000.
@InProceedings{mg00enabling, author = "Boris Motik and Vlado Glavini{\'c}", title = "{Enabling Agent Architecture through an RDF Query and Inference Engine}", booktitle = "Proc. of the 10th Mediterranean Electrotechnical Conference (MELECON 2000)", year = "2000", volume = "II", pages = "762--765", address = "Lemesos, Cyprus", month = "May 29--31", }
Vlado Glavinić and Boris Motik. Agent-Oriented Information Retrieval and Processing. In Damir Kalpić and Vesna Hljuz-Dobrić, editors, Proc. 20th Int. Conference on Information Technology Interfaces (ITI '98), pages 163–168, Pula, Croatia, June 16–19 1998.
@InProceedings{gm98agentoriented, author = "Vlado Glavini{\'c} and Boris Motik", title = "{Agent-Oriented Information Retrieval and Processing}", booktitle = "Proc. 20th Int. Conference on Information Technology Interfaces (ITI '98)", year = "1998", editor = "Damir Kalpi{\'c} and Vesna Hljuz-Dobri{\'c}", pages = "163--168", address = "Pula, Croatia", month = "June 16--19", }
Vlado Glavinić and Boris Motik. Object-Oriented Interface to MMS Services. In Proc. of the 9th Mediterranean Electrotechnical Conference (MELECON '98), volume II, pages 1375–1379, Tel-Aviv, Israel, May 18–20 1998.
@InProceedings{gm98objectoriented, author = "Vlado Glavini{\'c} and Boris Motik", title = "{Object-Oriented Interface to MMS Services}", booktitle = "Proc. of the 9th Mediterranean Electrotechnical Conference (MELECON '98)", year = "1998", volume = "II", pages = "1375--1379", address = "Tel-Aviv, Israel", month = "May 18--20", }
Vlado Glavinić and Boris Motik. On PC-Based MMS Implementations. In Proc. of the 20th Int. Convention Managing for Efficiency and Innovation (MIPRO '97), pages 193–196, Opatija, Croatia, May 19–23 1997.
@InProceedings{gm97pcbased, author = "Vlado Glavini{\'c} and Boris Motik", title = "{On PC-Based MMS Implementations}", booktitle = "Proc. of the 20th Int. Convention Managing for Efficiency and Innovation (MIPRO '97)", year = "1997", pages = "193--196", address = "Opatija, Croatia", month = "May 19--23", }
Vlado Glavinić and Boris Motik. G-LOTOS Visual Development Environment. In Proc. of the 8th Mediterranean Electrotechnical Conference (MELECON '96), volume I, pages 136–139, Bari, Italy, May 13–16 1996.
@InProceedings{gm96g-lotos, author = "Vlado Glavini{\'c} and Boris Motik", title = "{G-LOTOS Visual Development Environment}", booktitle = "Proc. of the 8th Mediterranean Electrotechnical Conference (MELECON '96)", year = "1996", volume = "I", pages = "136--139", address = "Bari, Italy", month = "May 13--16", }
Boris Motik and Vlado Glavinić. On Semantic Representation of LOTOS Programs. In Damir Kalpić and Vesna Hljuz-Dobrić, editors, Proc. 17th Int. Conference on Information Technology Interfaces (ITI '95), pages 217–224, Pula, Croatia, June 13–16 1995.
@InProceedings{mg95semantic, author = "Boris Motik and Vlado Glavini{\'c}", title = "{On Semantic Representation of LOTOS Programs}", booktitle = "Proc. 17th Int. Conference on Information Technology Interfaces (ITI '95)", year = "1995", editor = "Damir Kalpi{\'c} and Vesna Hljuz-Dobri{\'c}", pages = "217--224", address = "Pula, Croatia", month = "June 13--16", }
In Workshops
Michael Benedikt, Maxime Buron, Stefano Germano, Kevin Kappelmann, and Boris Motik. Datalog Rewriting for Guarded TGDs. In Mario Alviano and Andreas Pieris, editors, Proc. of the 4th Int. Workshop on the Resurgence of Datalog in Academia and Industry (Datalog-2.0 2022), volume 3203 of CEUR Workshop Proceedings, pages 104–113, Genova-Nervi, Italy, September 5 2022.
@InProceedings{bbgkm22datalog-rewriting, author = "Michael Benedikt and Maxime Buron and Stefano Germano and Kevin Kappelmann and Boris Motik", editor = "Mario Alviano and Andreas Pieris", title = "{Datalog Rewriting for Guarded TGDs}", booktitle = "Proc. of the 4th Int. Workshop on the Resurgence of Datalog in Academia and Industry (Datalog-2.0 2022)", address = "Genova-Nervi, Italy", month = "September 5", series = "CEUR Workshop Proceedings", volume = "3203", pages = "104--113", year = "2022", }
Jaehun Lee, Taeho Hwang, Jungho Park, Yunsu Lee, Boris Motik, and Ian Horrocks. A context-aware recommendation system for mobile devices. In Kerry L. Taylor, Rafael Gonçalves, Freddy Lécué, and Jun Yan, editors, Proc. of the ISWC 2020 Demos and Industry Tracks (ISWC 2020), volume 2721 of CEUR Workshop Proceedings, pages 380–382, Globally online, November 1–6 2020.
@InProceedings{lhplmh20context-aware-recommendation, author = "Jaehun Lee and Taeho Hwang and Jungho Park and Yunsu Lee and Boris Motik and Ian Horrocks", editor = "Kerry L. Taylor and Rafael Gon{\c{c}}alves and Freddy L{\'e}cu{\'e} and Jun Yan", booktitle = "Proc. of the ISWC 2020 Demos and Industry Tracks (ISWC 2020)", title = "{A context-aware recommendation system for mobile devices}", address = "Globally online", month = "November 1--6", year = "2020", series = "CEUR Workshop Proceedings", volume = "2721", pages = "380--382", }
Fredah Banda and Boris Motik. Community-Based RDF Graph Partitioning. In Thorsten Liebig, Achille Fokoue, and Zhe Wu, editors, Proc. of the 13th Int. Workshop on Scalable Semantic Web Knowledge Base Systems (SSWS 2020), volume 2757 of CEUR Workshop Proceedings, pages 33–48, Athens, Greece, November 2 2020.
@InProceedings{bm20community-partitioning, author = "Fredah Banda and Boris Motik", title = "{Community-Based RDF Graph Partitioning}", booktitle = "Proc. of the 13th Int. Workshop on Scalable Semantic Web Knowledge Base Systems (SSWS 2020)", editor = "Thorsten Liebig and Achille Fokoue and Zhe Wu", address = "Athens, Greece", month = "November 2", year = "2020", series = "CEUR Workshop Proceedings", volume = "2757", pages = "33--48", }
A common approach to processing large RDF datasets is to partition the data in
a cluster of shared-nothing servers and then use a distributed query evaluation
algorithm. It is commonly assumed in the literature that the performance of
query processing in such systems is limited mainly by network communication. In
this paper, we show that this assumption does not always hold: we present a new
RDF partitioning method based on Louvain community detection, which drastically
reduces communication, but without a corresponding decrease in query running
times. We show that strongly connected partitions can incur workload imbalance
among the servers during query processing. We thus further refine our
technique to strike a balance between reducing communication and spreading
processing more evenly, and we show that this technique can reduce both
communication and query times.
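For illustration only, a minimal sketch of the general idea of community-based triple partitioning, assuming the networkx library (louvain_communities, networkx 2.8 or later) and a toy triple list; this is not the partitioning method or the refined variant described above.
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Toy RDF data; real inputs would be large triple files.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("carol", "worksFor", "acme"),
    ("dave", "worksFor", "acme"),
    ("dave", "knows", "erin"),
]

# Build an undirected resource graph; predicates become edge labels.
g = nx.Graph()
for s, p, o in triples:
    g.add_edge(s, o, predicate=p)

# Louvain communities form the cores of the partition elements.
communities = louvain_communities(g, seed=42)
partition_of = {r: i for i, c in enumerate(communities) for r in c}

# Co-locate each triple with its subject's community; triples whose subject and
# object fall into different communities are the residual communication.
partitions = {i: [] for i in range(len(communities))}
for s, p, o in triples:
    partitions[partition_of[s]].append((s, p, o))

for i, part in sorted(partitions.items()):
    print(f"partition {i}: {part}")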
Nicholas P. Roth, Vasileios Trigonakis, Sungpack Hong, Hassan Chafi, Anthony Potter, Boris Motik, and Ian Horrocks. PGX.D/Async: A Scalable Distributed Graph Pattern Matching Engine. In Peter Boncz and Josep Lluis Larriba Pey, editors, Proc. of the 5th Int. Workshop on Graph Data-management Experiences & Systems (GRADES 2017), pages 7:1–7:6, Chicago, IL, USA, May 19 2017.
@InProceedings{rthcpmh17PGX-D-Async, author = "Nicholas P. Roth and Vasileios Trigonakis and Sungpack Hong and Hassan Chafi and Anthony Potter and Boris Motik and Ian Horrocks", title = "{PGX.D/Async: A Scalable Distributed Graph Pattern Matching Engine}", booktitle = "Proc. of the 5th Int. Workshop on Graph Data-management Experiences \& Systems (GRADES 2017)", editor = "Peter Boncz and Josep Lluis Larriba Pey", address = "Chicago, IL, USA", month = "May 19", year = "2017", publisher = "ACM", pages = "7:1--7:6", }
Andrew Bate, Boris Motik, Bernardo Cuenca Grau, František Simančík, and Ian Horrocks. Extending Consequence-Based Reasoning to SHIQ. In Diego Calvanese and Boris Konev, editors, Proc. of the 28th Int. Workshop on Description Logics (DL 2015), volume 1350 of CEUR Workshop Proceedings, pages 34–46, Athens, Greece, June 7–10 2015.
@InProceedings{bmgsh15consequence-SHIQ, author = "Andrew Bate and Boris Motik and Bernardo Cuenca Grau and Franti{\v{s}}ek Siman{\v{c}}{\'i}k and Ian Horrocks", title = "{Extending Consequence-Based Reasoning to $\mathcal{SHIQ}$}", booktitle = "Proc. of the 28th Int. Workshop on Description Logics (DL 2015)", editor = "Diego Calvanese and Boris Konev", address = "Athens, Greece", month = "June 7--10", year = "2015", series = "CEUR Workshop Proceedings", volume = "1350", pages = "34--46", }
Consequence-based calculi are a family of reasoning techniques for
description logics (DLs)—knowledge representation formalisms with
numerous applications. Such calculi are very effective in practice because they
combine hypertableau and resolution reasoning in a way that considerably
reduces the number of inferences. Up to now, however, they were proposed for
either Horn DLs (which do not support disjunctive reasoning), or for DLs
without counting quantifiers. In this paper we present a novel
consequence-based algorithm for SHIQ—a DL that supports both disjunctions
and counting quantifiers. Combining the two features is nontrivial since
counting quantifiers require reasoning over equality, which we handle using the
framework of ordered paramodulation—a state-of-the-art method for
equational theorem proving. We thus obtain a calculus that can handle an
expressive DL, while still enjoying the benefits of existing calculi.
Anthony Potter, Boris Motik, and Ian Horrocks. Querying Distributed RDF Graphs: The Effects of Partitioning. In Thorsten Liebig and Achille Fokoue, editors, Proc. of the 10th Int. Workshop on Scalable Semantic Web Knowledge Base Systems (SSWS 2014), volume 1261 of CEUR Workshop Proceedings, pages 29–44, Riva del Garda, Trentino, Italy, October 20 2014.
@InProceedings{pmh14distributed-RDF-graphs, author = "Anthony Potter and Boris Motik and Ian Horrocks", title = "{Querying Distributed RDF Graphs: The Effects of Partitioning}", booktitle = "Proc. of the 10th Int. Workshop on Scalable Semantic Web Knowledge Base Systems (SSWS 2014)", editor = "Thorsten Liebig and Achille Fokoue", address = "Riva del Garda, Trentino, Italy", month = "October 20", year = "2014", series = "CEUR Workshop Proceedings", volume = "1261", pages = "29--44", }
Web-scale RDF datasets are increasingly processed using distributed RDF data
stores built on top of a cluster of shared-nothing servers. Such systems
critically rely on their data partitioning scheme and query answering scheme,
the goal of which is to facilitate correct and efficient query processing.
Existing data partitioning schemes are commonly based on hashing or graph
partitioning techniques. The latter techniques split a dataset in a way that
minimises the number of connections between the resulting subsets, thus
reducing the need for communication between servers; however, to facilitate
efficient query answering, considerable duplication of data at the intersection
between subsets is often needed. Building upon the known graph partitioning
approaches, in this paper we present a novel data partitioning scheme that
employs minimal duplication and keeps track of the connections between
partition elements; moreover, we propose a query answering scheme that uses
this additional information to correctly answer all queries. We show
experimentally that, on certain well-known RDF benchmarks, our data
partitioning scheme often allows more answers to be retrieved without
distributed computation than the known schemes, and we show that our query
answering scheme can efficiently answer many queries.
Boris Motik, Yavor Nenov, Robert Piro, Ian Horrocks, and Dan Olteanu. Parallel OWL 2 RL Materialisation in Centralised, Main-Memory RDF Systems. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, Proc. of the 27th Int. Workshop on Description Logics (DL 2014), volume 1193 of CEUR Workshop Proceedings, pages 311–323, Vienna, Austria, July 17–20 2014.
@InProceedings{mnpho14parallel-materialisation-RDFox-DL, author = "Boris Motik and Yavor Nenov and Robert Piro and Ian Horrocks and Dan Olteanu", title = "{Parallel OWL 2 RL Materialisation in Centralised, Main-Memory RDF Systems}", booktitle = "Proc. of the 27th Int. Workshop on Description Logics (DL 2014)", editor = "Meghyn Bienvenu and Magdalena Ortiz and Riccardo Rosati and Mantas Simkus", address = "Vienna, Austria", month = "July 17--20", year = "2014", series = "CEUR Workshop Proceedings", volume = "1193", pages = "311--323", }
We present a novel approach to parallel materialisation (i.e., fixpoint
computation) of OWL 2 RL knowledge bases in centralised, main-memory, multi-core
RDF systems. Our approach comprises a datalog reasoning algorithm that evenly
distributes the workload to cores, and an RDF indexing data structure that
supports efficient, `mostly' lock-free parallel updates. Our empirical
evaluation shows that our approach parallelises computation very well so, with
16 physical cores, materialisation can be up to 13.9 times faster than with
just one core.
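For illustration only, a toy semi-naive fixpoint for a single transitive rule in which each round's newly derived facts are split across worker threads; the rule, the data, and the use of Python threads (which, under the GIL, illustrate the structure rather than real speed-up) are assumptions, and this is not the algorithm or the lock-free RDF index described above.
from concurrent.futures import ThreadPoolExecutor

# Rule: knows(x, z) :- knows(x, y), knows(y, z); toy base facts.
base_facts = {("a", "b"), ("b", "c"), ("c", "d")}

def derive(delta_chunk, all_facts):
    # Join a chunk of recently derived facts against the full fact set.
    out = set()
    for (x, y) in delta_chunk:
        for (y2, z) in all_facts:
            if y == y2:
                out.add((x, z))
    return out

def materialise(facts, workers=4):
    delta = set(facts)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while delta:
            items = list(delta)
            chunks = [items[i::workers] for i in range(workers)]
            results = pool.map(derive, chunks, [facts] * workers)
            new = set().union(*results) - facts   # keep only genuinely new facts
            facts |= new
            delta = new
    return facts

print(sorted(materialise(set(base_facts))))   # full transitive closure of "knows"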
Giorgio Stefanoni, Boris Motik, and Ian Horrocks. Introducing Nominals to the Combined Query Answering Approaches for EL. In Thomas Eiter, Birte Glimm, Yevgeny Kazakov, and Markus Krötzsch, editors, Proc. of the 26th Int. Workshop on Description Logics (DL 2013), volume 1014 of CEUR Workshop Proceedings, Ulm, Germany, July 23–26 2013.
@InProceedings{smh13nominals-combined-DL, author = "Giorgio Stefanoni and Boris Motik and Ian Horrocks", title = "{Introducing Nominals to the Combined Query Answering Approaches for $\mathcal{EL}$}", booktitle = "Proc. of the 26th Int. Workshop on Description Logics (DL 2013)", editor = "Thomas Eiter and Birte Glimm and Yevgeny Kazakov and Markus Kr{\"o}tzsch", address = "Ulm, Germany", month = "July 23--26", year = "2013", series = "CEUR Workshop Proceedings", volume = "1014", }
Ian Horrocks, Boris Motik, and Zhe Wang. The HermiT OWL Reasoner. In Ian Horrocks, Mikalai Yatskevich, and Ernesto Jimenez-Ruiz, editors, Proc. of the OWL Reasoner Evaluation Workshop (ORE 2012), volume 858 of CEUR Workshop Proceedings, Manchester, UK, July 1 2012.
@InProceedings{hmw12HermiT, author = "Ian Horrocks and Boris Motik and Zhe Wang", title = "{The HermiT OWL Reasoner}", booktitle = "Proc. of the OWL Reasoner Evaluation Workshop (ORE 2012)", editor = "Ian Horrocks and Mikalai Yatskevich and Ernesto Jimenez-Ruiz", address = "Manchester, UK", month = "July 1", year = "2012", series = "CEUR Workshop Proceedings", volume = "858", }
HermiT is the only reasoner we know of that fully supports the OWL 2
standard, and that correctly reasons about properties as well as
classes. It is based on a novel "hypertableau'' calculus that
addresses performance problems due to nondeterminism and model
size—the primary sources of complexity in state-of-the-art OWL
reasoners. HermiT also incorporates a number of novel optimizations,
including an optimized ontology classification procedure. Our tests
show that HermiT performs well compared to existing tableau reasoners
and is often much faster when classifying complex ontologies.
Giorgio Stefanoni, Boris Motik, and Ian Horrocks. Small Datalog Query Rewritings for EL. In Yevgeny Kazakov, Domenico Lembo, and Frank Wolter, editors, Proc. of the 25th Int. Workshop on Description Logics (DL 2012), volume 846 of CEUR Workshop Proceedings, Rome, Italy, June 7–10 2012.
@InProceedings{smh12small-rewritings-EL, author = "Giorgio Stefanoni and Boris Motik and Ian Horrocks", title = "{Small Datalog Query Rewritings for EL}", booktitle = "Proc. of the 25th Int. Workshop on Description Logics (DL 2012)", editor = "Yevgeny Kazakov and Domenico Lembo and Frank Wolter", address = "Rome, Italy", month = "June 7--10", year = "2012", series = "CEUR Workshop Proceedings", volume = "846", }
Despoina Magka, Boris Motik, and Ian Horrocks. Modelling Structured Domains Using Description Graphs and Logic Programming. In Yevgeny Kazakov, Domenico Lembo, and Frank Wolter, editors, Proc. of the 25th Int. Workshop on Description Logics (DL 2012), volume 846 of CEUR Workshop Proceedings, Rome, Italy, June 7–10 2012.
@InProceedings{mmh12dgs-and-lp-dl-version, author = "Despoina Magka and Boris Motik and Ian Horrocks", title = "{Modelling Structured Domains Using Description Graphs and Logic Programming}", booktitle = "Proc. of the 25th Int. Workshop on Description Logics (DL 2012)", editor = "Yevgeny Kazakov and Domenico Lembo and Frank Wolter", address = "Rome, Italy", month = "June 7--10", year = "2012", series = "CEUR Workshop Proceedings", volume = "846", }
Despoina Magka, Boris Motik, and Ian Horrocks. Classifying Chemicals Using Description Graphs and Logic Programming. In Matthew Horridge and Pavel Klinov, editors, Proc. of the 9th Int. Workshop on OWL: Experiences and Directions (OWLED 2012), Heraclion, Crete, May 27–28 2012.
@InProceedings{mmh12dgs-and-lp-owled-version, author = "Despoina Magka and Boris Motik and Ian Horrocks", title = "{Classifying Chemicals Using Description Graphs and Logic Programming}", booktitle = "Proc. of the 9th Int. Workshop on OWL: Experiences and Directions (OWLED 2012)", editor = "Matthew Horridge and Pavel Klinov", address = "Heraclion, Crete", month = "May 27--28", year = "2012", }
Although OWL 2 is widely used to describe complex objects such as chemical
molecules, it cannot represent `structural' features of chemical entities
(e.g., having a ring). A combination of rules and description graphs
(DGs) has been proposed as a possible solution, but it still exhibits several
drawbacks. In this paper we present a radically different approach that we
call Description Graph Logic Programs. Syntactically, our approach combines
DGs, rules, and OWL 2 RL axioms, but its semantics is defined via a
translation into logic programs under stable model semantics. The result is an
expressive OWL 2 RL-compatible formalism that is well suited for modelling
objects with complex structure.
František Simančík, Boris Motik, and Markus Krötzsch. Fixed Parameter Tractable Reasoning in DLs via Decomposition. In Riccardo Rosati, Sebastian Rudolph, and Michael Zakharyaschev, editors, Proc. of the 24th Int. Workshop on Description Logics (DL 2011), volume 745 of CEUR Workshop Proceedings, Barcelona, Spain, July 13–16 2011.
@InProceedings{smk11dl-decomposition, author = "Franti{\v{s}}ek Siman{\v{c}}{\'i}k and Boris Motik and Markus Kr{\"o}tzsch", title = "{Fixed Parameter Tractable Reasoning in DLs via Decomposition}", booktitle = "Proc. of the 24th Int. Workshop on Description Logics (DL 2011)", editor = "Riccardo Rosati and Sebastian Rudolph and Michael Zakharyaschev", address = "Barcelona, Spain", month = "July 13--16", year = "2011", series = "CEUR Workshop Proceedings", volume = "745", }
Birte Glimm, Ian Horrocks, and Boris Motik. Optimized DL Reasoning via Core Blocking. In Volker Haarslev, David Toman, and Grant Weddell, editors, Proc. of the 23rd Int. Workshop on Description Logics (DL 2010), volume 573 of CEUR Workshop Proceedings, Waterloo, ON, Canada, May 4–7 2010.
@InProceedings{ghm10core-blocking-DL, author = "Birte Glimm and Ian Horrocks and Boris Motik", title = "{Optimized DL Reasoning via Core Blocking}", booktitle = "Proc. of the 23rd Int. Workshop on Description Logics (DL 2010)", editor = "Volker Haarslev and David Toman and Grant Weddell", address = "Waterloo, ON, Canada", month = "May 4--7", year = "2010", series = "CEUR Workshop Proceedings", volume = "573", }
Héctor Pérez-Urbina, Ian Horrocks, and Boris Motik. Practical Aspects of Query Rewriting for OWL 2. In Rinke Hoekstra and Peter Patel-Schneider, editors, Proc. of the 5th Int. Workshop on OWL: Experiences and Directions (OWLED 2009), Chantilly, VA, USA, October 23–24 2009.
@InProceedings{puhm09practical-aspects, author = "H{\'e}ctor P{\'e}rez-Urbina and Ian Horrocks and Boris Motik", title = "{Practical Aspects of Query Rewriting for OWL 2}", booktitle = "Proc. of the 5th Int. Workshop on OWL: Experiences and Directions (OWLED 2009)", editor = "Rinke Hoekstra and Peter Patel-Schneider", address = "Chantilly, VA, USA", month = "October 23--24", year = "2009", }
Query answering for the QL profile of OWL 2 and a substantial fragment of the EL profile can be implemented via query rewriting: a query posed over an ontology is first rewritten using the conceptual part of the ontology and then the evaluation of the rewritten query is delegated to a (deductive) database where the instance data resides. In our previous work we presented a rewriting algorithm for OWL 2 QL that can also deal with most of the EL profile. In order to test the likely practicality of our rewriting algorithm, we have implemented it in a query rewriting system that we call REQUIEM. A recent empirical evaluation of REQUIEM, in which we considered OWL 2 QL ontologies, indicates that it produces significantly smaller rewritings than existing approaches in most cases. However, our results suggest that typical queries over realistic ontologies can still lead to very large rewritings (e.g., containing many thousands of queries). In this paper, we describe query rewriting, briefly present the results of our evaluation, and discuss various optimization techniques aimed at reducing the size of the rewritings. Moreover, we present some preliminary results from an ongoing empirical evaluation of REQUIEM in which we consider OWL 2 EL ontologies.
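For illustration only, a toy sketch of the rewriting idea, where an atomic query is expanded into the union of queries over its (indirect) subclasses and then evaluated over the data alone; the classes and data are assumptions, and this is not the REQUIEM algorithm.
# Assumed toy TBox (child -> parent) and instance data (class -> individuals).
subclass_of = {
    "Professor": "FacultyMember",
    "Lecturer": "FacultyMember",
    "FacultyMember": "Person",
}
abox = {
    "Professor": {"ada"},
    "Lecturer": {"boris"},
    "Person": {"carla"},
}

def rewrite(query_class):
    # Saturate: the rewriting of Q(x) is the union of queries C(x) for every
    # direct or indirect subclass C of the queried class.
    rewriting = {query_class}
    changed = True
    while changed:
        changed = False
        for child, parent in subclass_of.items():
            if parent in rewriting and child not in rewriting:
                rewriting.add(child)
                changed = True
    return rewriting

def answer(query_class):
    # Evaluate the rewritten union of queries over the instance data alone.
    return set().union(*(abox.get(c, set()) for c in rewrite(query_class)))

print(rewrite("Person"))   # Person, FacultyMember, Professor, Lecturer
print(answer("Person"))    # ada, boris, carla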
Bernardo Cuenca Grau and Boris Motik. Importing Ontologies with Hidden Content. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, Proc. of the 22nd Int. Workshop on Description Logics (DL 2009), volume 477 of CEUR Workshop Proceedings, Oxford, UK, July 27–30 2009.
@InProceedings{gm09ibq-EL, author = "Bernardo Cuenca Grau and Boris Motik", title = "{Importing Ontologies with Hidden Content}", booktitle = "Proc. of the 22nd Int. Workshop on Description Logics (DL 2009)", editor = "Bernardo Cuenca Grau and Ian Horrocks and Boris Motik and Ulrike Sattler", address = "Oxford, UK", month = "July 27--30", year = "2009", series = "CEUR Workshop Proceedings", volume = "477", }
Rob Shearer, Ian Horrocks, and Boris Motik. Exploiting Partial Information in Taxonomy Construction. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, Proc. of the 22nd Int. Workshop on Description Logics (DL 2009), volume 477 of CEUR Workshop Proceedings, Oxford, UK, July 27–30 2009.
@InProceedings{shm09partial-information, author = "Rob Shearer and Ian Horrocks and Boris Motik", title = "{Exploiting Partial Information in Taxonomy Construction}", booktitle = "Proc. of the 22nd Int. Workshop on Description Logics (DL 2009)", editor = "Bernardo Cuenca Grau and Ian Horrocks and Boris Motik and Ulrike Sattler", address = "Oxford, UK", month = "July 27--30", year = "2009", series = "CEUR Workshop Proceedings", volume = "477", }
Héctor Pérez-Urbina, Boris Motik, and Ian Horrocks. A Comparison of Query Rewriting Techniques for DL-lite. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, Proc. of the 22nd Int. Workshop on Description Logics (DL 2009), volume 477 of CEUR Workshop Proceedings, Oxford, UK, July 27–30 2009.
@InProceedings{pumh09comparison, author = "H{\'e}ctor P{\'e}rez-Urbina and Boris Motik and Ian Horrocks", title = "{A Comparison of Query Rewriting Techniques for DL-lite}", booktitle = "Proc. of the 22nd Int. Workshop on Description Logics (DL 2009)", editor = "Bernardo Cuenca Grau and Ian Horrocks and Boris Motik and Ulrike Sattler", address = "Oxford, UK", month = "July 27--30", year = "2009", series = "CEUR Workshop Proceedings", volume = "477", }
Boris Motik, Bernardo Cuenca Grau, Ian Horrocks, and Ulrike Sattler. Modeling Ontologies Using OWL, Description Graphs, and Rules. In Alan Ruttenberg, Ulrike Sattler, and Cathy Dolbear, editors, Proc. of the 5th Int. Workshop on OWL: Experiences and Directions (OWLED 2008 EU), Karlsruhe, Germany, October 26–27 2008.
@InProceedings{mcghs2008graphs, author = "Boris Motik and Bernardo Cuenca Grau and Ian Horrocks and Ulrike Sattler", title = "{Modeling Ontologies Using OWL, Description Graphs, and Rules}", booktitle = "Proc. of the 5th Int. Workshop on OWL: Experiences and Directions (OWLED 2008 EU)", editor = "Alan Ruttenberg and Ulrike Sattler and Cathy Dolbear", address = "Karlsruhe, Germany", month = "October 26--27", year = "2008", }
Rob Shearer, Boris Motik, and Ian Horrocks. HermiT: A Highly-Efficient OWL Reasoner. In Alan Ruttenberg, Ulrike Sattler, and Cathy Dolbear, editors, Proc. of the 5th Int. Workshop on OWL: Experiences and Directions (OWLED 2008 EU), Karlsruhe, Germany, October 26–27 2008.
@InProceedings{smh08HermiT, author = "Rob Shearer and Boris Motik and Ian Horrocks", title = "{HermiT: A Highly-Efficient OWL Reasoner}", booktitle = "Proc. of the 5th Int. Workshop on OWL: Experiences and Directions (OWLED 2008 EU)", editor = "Alan Ruttenberg and Ulrike Sattler and Cathy Dolbear", address = "Karlsruhe, Germany", month = "October 26--27", year = "2008", }
HermiT is a new OWL reasoner based on a novel "hypertableau'' calculus. The
new calculus addresses performance problems due to nondeterminism and model
size—the primary sources of complexity in state-of-the-art OWL reasoners.
The latter is particularly important in practice, and it is achieved in HermiT
with an improved blocking strategy and an optimization that tries to reuse
existing individuals rather than generating new ones. HermiT also incorporates
a number of other novel optimizations, such as a more efficient approach to
handling nominals, and various techniques for optimizing ontology
classification. Our tests show that HermiT is usually much faster than other
reasoners when classifying complex ontologies, and it is already able to
classify a number of ontologies which no other reasoner has been able to
handle.
Boris Motik, Rob Shearer, and Ian Horrocks. Optimizing the Nominal Introduction Rule in (Hyper)Tableau Calculi. In Franz Baader, Carsten Lutz, and Boris Motik, editors, Proc. of the 21st Int. Workshop on Description Logics (DL 2008), volume 353 of CEUR Workshop Proceedings, Dresden, Germany, May 13–16 2008.
@InProceedings{msh08rewriting, author = "Boris Motik and Rob Shearer and Ian Horrocks", title = "{Optimizing the Nominal Introduction Rule in (Hyper)Tableau Calculi}", booktitle = "Proc. of the 21st Int. Workshop on Description Logics (DL 2008)", editor = "Franz Baader and Carsten Lutz and Boris Motik", address = "Dresden, Germany", month = "May 13--16", year = "2008", series = "CEUR Workshop Proceedings", volume = "353", }
Interactions between nominals, inverse roles, and number restrictions provide
challenges to description logic reasoners, as they can make models of a
knowledge base non-forest-like. The solution to this problem used by the
standard tableau calculus can incur a high degree of nondeterminism and can
lead to the generation of unnecessarily large models. We present a more
efficient approach to handling this combination of constructs. We use this
approach to extend our hypertableau calculus to SHOIQ. This approach,
however, is equally applicable to traditional tableau calculi.
Boris Motik, Bernardo Cuenca Grau, and Ulrike Sattler. The Representation of Structured Objects in DLs using Description Graphs. In Franz Baader, Carsten Lutz, and Boris Motik, editors, Proc. of the 21st Int. Workshop on Description Logics (DL 2008), volume 353 of CEUR Workshop Proceedings, Dresden, Germany, May 13–16 2008.
@InProceedings{mgs08description-graphs, author = "Boris Motik and Bernardo Cuenca Grau and Ulrike Sattler", title = "{The Representation of Structured Objects in DLs using Description Graphs}", booktitle = "Proc. of the 21st Int. Workshop on Description Logics (DL 2008)", editor = "Franz Baader and Carsten Lutz and Boris Motik", address = "Dresden, Germany", month = "May 13--16", year = "2008", series = "CEUR Workshop Proceedings", volume = "353", }
Héctor Pérez-Urbina, Boris Motik, and Ian Horrocks. Rewriting Conjunctive Queries under Description Logic Constraints. In Andrea Calì, Georg Gottlob, Laks V.S. Lakshmanan, and Davide Martinenghi, editors, Proc. of the Int. Workshop on Logic in Databases (LID 2008), Rome, Italy, May 19–20 2008.
@InProceedings{pumh08rewriting-ELHI, author = "H{\'e}ctor P{\'e}rez-Urbina and Boris Motik and Ian Horrocks", title = "{Rewriting Conjunctive Queries under Description Logic Constraints}", booktitle = "Proc. of the Int. Workshop on Logic in Databases (LID 2008)", editor = "Andrea Cal{\`i} and Georg Gottlob and Laks V.S. Lakshmanan and Davide Martinenghi", address = "Rome, Italy", month = "May 19--20", year = "2008", }
We consider the problems of conjunctive query answering and rewriting under Description Logic constraints. We present a query rewriting algorithm for $\ELHI$ knowledge bases, and use it to show that query answering in this setting is PTime-complete w.r.t. data complexity. We show that our algorithm is worst-case optimal for languages with data complexity of query answering ranging from \LogSpace to PTime-complete.
Héctor Pérez-Urbina, Boris Motik, and Ian Horrocks. Rewriting Conjunctive Queries over Description Logic Knowledge Bases. In Klaus-Dieter Schewe and Bernhard Thalheim, editors, Proc. of the Int. Workshop on Semantics in Data and Knowledge Bases (SDKB 2008), volume 4925 of LNCS, pages 199–214, Nantes, France, March 29 2008. Springer.
@InProceedings{pumh08rewriting-DLlite-plus, author = "H{\'e}ctor P{\'e}rez-Urbina and Boris Motik and Ian Horrocks", title = "{Rewriting Conjunctive Queries over Description Logic Knowledge Bases}", booktitle = "Proc. of the Int. Workshop on Semantics in Data and Knowledge Bases (SDKB 2008)", editor = "Klaus-Dieter Schewe and Bernhard Thalheim", address = "Nantes, France", month = "March 29", year = "2008", publisher = "Springer", series = "LNCS", volume = "4925", pages = "199--214", }
We consider the problems of conjunctive query answering and
rewriting for information integration systems in which a Description
Logic ontology is used to provide a global view of the data. We
present a resolution-based query rewriting algorithm for
$\DLlitePlus$ ontologies, and use it to show that query answering in
this setting is \NLogSpace-complete with respect to data complexity.
We also show that our algorithm produces an optimal rewriting when
the input ontology is expressed in the language $\DLlite$. Finally,
we sketch an extended version of the algorithm that would, we are
confident, be optimal for several DL languages with data complexity
of query answering ranging from \LogSpace to PTime-complete.
Boris Motik, Rob Shearer, and Ian Horrocks. A Hypertableau Calculus for SHIQ. In Diego Calvanese, Enrico Franconi, Volker Haarslev, Domenico Lembo, Boris Motik, Sergio Tessaris, and Anni-Yasmin Turhan, editors, Proc. of the 20th Int. Workshop on Description Logics (DL 2007), pages 419–426, Brixen/Bressanone, Italy, June 8–10 2007. Bozen/Bolzano University Press.
@InProceedings{msh07hypertableau, author = "Boris Motik and Rob Shearer and Ian Horrocks", title = "{A Hypertableau Calculus for SHIQ}", booktitle = "Proc. of the 20th Int. Workshop on Description Logics (DL 2007)", editor = "Diego Calvanese and Enrico Franconi and Volker Haarslev and Domenico Lembo and Boris Motik and Sergio Tessaris and Anni-Yasmin Turhan", address = "Brixen/Bressanone, Italy", month = "June 8--10", year = "2007", publisher = "Bozen/Bolzano University Press", pages = "419--426", }
We present a novel reasoning calculus for the Description Logic
SHIQ. In order to reduce the nondeterminism due to general
inclusion axioms, we base our calculus on hypertableau and
hyperresolution calculi, which we extend with a blocking condition
to ensure termination. To prevent the calculus from generating large
models, we introduce "anywhere'' pairwise blocking. Our
preliminary implementation shows significant performance
improvements on several well-known ontologies. To the best of our
knowledge, our reasoner is currently the only one that can classify
the original version of the GALEN terminology.
Boris Motik, Ian Horrocks, and Ulrike Sattler. Adding Integrity Constraints to OWL. In Christine Golbreich, Aditya Kalyanpur, and Bijan Parsia, editors, OWL: Experiences and Directions 2007 (OWLED 2007), Innsbruck, Austria, June 6–7 2007.
@InProceedings{mhs07adding-constraints, author = "Boris Motik and Ian Horrocks and Ulrike Sattler", title = "{Adding Integrity Constraints to OWL}", booktitle = "OWL: Experiences and Directions 2007 (OWLED 2007)", editor = "Christine Golbreich and Aditya Kalyanpur and Bijan Parsia", address = "Innsbruck, Austria", month = "June 6--7", year = "2007", }
Schema statements in OWL are interpreted quite differently from
analogous statements in relational databases. If these statements
are meant to be interpreted as integrity constraints (ICs), OWL's
interpretation may seem confusing and/or inappropriate. Therefore,
we propose an extension of OWL with ICs that captures the intuition
behind ICs in relational databases. We show that, if the constraints
are satisfied, we can disregard them while answering a broad range
of positive queries.
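For illustration only, a toy contrast between the two readings of an axiom such as Person subClassOf exists hasSSN, using assumed data: under the standard OWL semantics a missing SSN is simply unknown, whereas under an integrity-constraint reading the individual below would be reported as a violation. This is not the formal semantics proposed in the paper.
# Assumed toy data.
people = {"alice", "bob"}
has_ssn = {"alice": "123-45-6789"}

# Constraint-style (closed-world) check: every known person must have an
# explicitly asserted SSN. Under OWL semantics, bob would instead be inferred
# to have some unknown SSN, and no inconsistency would arise.
violations = sorted(p for p in people if p not in has_ssn)
print(violations)   # ['bob']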
Boris Motik and Ian Horrocks. Problems with OWL Syntax. In Bernardo Cuenca Grau, Pascal Hitzler, Connor Shankey, and Evan Wallace, editors, OWL: Experiences and Directions 2006 (OWLED 2006), Athens, GA, USA, November 10–11 2006.
@InProceedings{mh06problems-owl, author = "Boris Motik and Ian Horrocks", title = "{Problems with OWL Syntax}", booktitle = "OWL: Experiences and Directions 2006 (OWLED 2006)", editor = "Bernardo Cuenca Grau and Pascal Hitzler and Connor Shankey and Evan Wallace", address = "Athens, GA, USA", month = "November 10--11", year = "2006", }
In this paper we discuss three problems with OWL syntax that
repeatedly surface in practice. The first problem is that OWL does
not allow for explicit declarations—assertions that a
certain class, property, or an individual exists in an ontology.
This aspect of the OWL standard was often misinterpreted, which
caused design errors in OWL APIs; moreover, the lack of declarations
makes devising an intuitive structural consistency check for OWL
ontologies difficult. The second problem is that OWL Abstract Syntax
and OWL RDF syntax rely on the separation between object and data
property names for disambiguation. We show that this prevents an
unambiguous interpretation of certain syntactically well-formed OWL
ontologies; furthermore, it makes implementing OWL parsers
unnecessarily difficult. The third problem is that OWL Abstract
Syntax cannot be translated into OWL RDF syntax without loss of
information. We present possible solutions to these three problems,
which, if adopted in OWL 1.1, would lead to a cleaner standard and
would significantly simplify the implementation of OWL APIs.
Boris Motik. Description Logics and Disjunctive Datalog—More Than just a Fleeting Resemblance? In Holger Schlingloff, editor, Proc. of the 4th Workshop on Methods for Modalities (M4M-4), volume 194 of Informatik-Berichte der Humboldt-Universität zu Berlin, pages 246–265, Berlin, Germany, December 1–2 2005.
@InProceedings{m05dl-and-dd, author = "Boris Motik", title = "{Description Logics and Disjunctive Datalog---More Than just a Fleeting Resemblance?}", booktitle = "Proc. of the 4th Workshop on Methods for Modalities (M4M-4)", editor = "Holger Schlingloff", address = "Berlin, Germany", month = "December 1--2", year = "2005", series = "Informatik-Berichte der Humboldt-Universit{\"a}t zu Berlin", volume = "194", pages = "246--265", }
As applications of description logics (DLs) proliferate, efficient
reasoning with large ABoxes (sets of individuals with descriptions)
becomes ever more important. Motivated by the prospects of reusing
optimization techniques of deductive databases, we developed a novel
algorithm for reasoning in description logic, which reduces a DL
knowledge base to a disjunctive datalog program without changing the
set of relevant consequences. This allows queries to be answered by
applying optimization techniques, such as join-order optimizations
or magic sets. The algorithm supports the very expressive logic
SHIQ(D), so the reduction is technically quite involved. In this
paper we present a simplified algorithm for the basic logic \ALC.
While this algorithm is much easier to understand, it is based on
the same principles as the general one.
Stephan Grimm and Boris Motik. Closed World Reasoning in the Semantic Web through Epistemic Operators. In Bernardo Cuenca Grau, Ian Horrocks, Bijan Parsia, and Peter Patel-Schneider, editors, Proc. of the Workshop on OWL: Experiences and Directions (OWLED 2005), Galway, Ireland, November 11–12 2005.
@InProceedings{gm05closed, author = "Stephan Grimm and Boris Motik", title = "{Closed World Reasoning in the Semantic Web through Epistemic Operators}", booktitle = "Proc. of the Workshop on OWL: Experiences and Directions (OWLED 2005)", editor = "Bernardo Cuenca Grau and Ian Horrocks and Bijan Parsia and Peter Patel-Schneider", address = "Galway, Ireland", month = "November 11--12", year = "2005", }
Peter Haase and Boris Motik. A Mapping System for the Integration of OWL-DL Ontologies. In Axel Hahn, Sven Abels, and Liane Haak, editors, Proc. of the 1st Int. ACM Workshop on Interoperability of Heterogeneous Information Systems (IHIS'05), pages 9–16, Bremen, Germany, November 4 2005. ACM Press.
@InProceedings{hm05mapping, author = "Peter Haase and Boris Motik", title = "{A Mapping System for the Integration of OWL-DL Ontologies}", booktitle = "Proc. of the 1st Int. ACM Workshop on Interoperability of Heterogeneous Information Systems (IHIS'05)", editor = "Axel Hahn and Sven Abels and Liane Haak", address = "Bremen, Germany", month = "November 4", publisher = "ACM Press", year = "2005", pages = "9--16", }
To enable interoperability between applications in
distributed information systems based
on heterogeneous ontologies, it is necessary to formally define the
notion of a mapping between ontologies.
In this paper, we define a mapping system for OWL-DL ontologies, where mappings are expressed as correspondences between conjunctive queries over ontologies.
As query answering within
such a general mapping system is undecidable, we identify a decidable fragment of the mapping system, which corresponds to OWL-DL extended with DL-safe rules.
We further show how the mapping system can be applied for the task of ontology integration and present a query answering algorithm.
Ullrich Hustadt and Boris Motik. Description Logics and Disjunctive Datalog—The Story so Far. In Ian Horrocks, Ulrike Sattler, and Frank Wolter, editors, Proc. of the 2005 Int. Workshop on Description Logics (DL 2005), volume 147 of CEUR Workshop Proceedings, Edinburgh, UK, July 26–28 2005.
@InProceedings{hm05dl-dd-so-far, author = "Ullrich Hustadt and Boris Motik", title = "{Description Logics and Disjunctive Datalog---The Story so Far}", booktitle = "Proc. of the 2005 Int. Workshop on Description Logics (DL 2005)", editor = "Ian Horrocks and Ulrike Sattler and Frank Wolter", address = "Edinburgh, UK", month = "July 26--28", year = "2005", series = "CEUR Workshop Proceedings", volume = "147", }
In this paper we present an overview of our recent work on the
relationship between description logics and disjunctive datalog. In
particular, we reduce satisfiability and instance checking in SHIQ
to corresponding problems in disjunctive datalog. This allows us to
apply practically successful deductive database optimization
techniques, such as magic sets. Interestingly, the reduction also
allows us to obtain novel theoretical results on description logics.
In particular, we show that the data complexity of reasoning
in SHIQ is in NP, and we define a fragment called Horn-SHIQ
for which the data complexity is in P. Finally, the reduction provides
a basis for query answering in an extension of SHIQ with so-called
DL-safe rules.
Pascal Hitzler, Jürgen Angele, Boris Motik, and Rudi Studer. Bridging the Paradigm Gap with Rules for OWL. In Proc. of the W3C Workshop on Rule Languages for Interoperability, Washington, DC, USA, April 27–28 2005.
@InProceedings{hams05bridging, author = "Pascal Hitzler and J{\"u}rgen Angele and Boris Motik and Rudi Studer", title = "{Bridging the Paradigm Gap with Rules for OWL}", booktitle = "Proc. of the W3C Workshop on Rule Languages for Interoperability", address = "Washington, DC, USA", month = "April 27--28", year = "2005", }
Stephan Grimm, Boris Motik, and Chris Preist. Variance in e-Business Service Discovery. In David Martin, Rubén Lara, and Takahira Yamaguchi, editors, Proc. of the ISWC 2004 Workshop on Semantic Web Services: Preparing to Meet the World of Business Applications, volume 119 of CEUR Workshop Proceedings, Hiroshima, Japan, November 8 2004.
@InProceedings{gmp04variance, author = "Stephan Grimm and Boris Motik and Chris Preist", title = "{Variance in e-Business Service Discovery}", booktitle = "Proc. of the ISWC 2004 Workshop on Semantic Web Services: Preparing to Meet the World of Business Applications", editor = "David Martin and Rub{\'e}n Lara and Takahira Yamaguchi", address = "Hiroshima, Japan", month = "November 8", year = "2004", series = "CEUR Workshop Proceedings", volume = "119", }
Automating the process of B2B partner discovery and contract
negotiation is expected to significantly optimise company processes.
Numerous existing proposals for discovery follow the approach where
service descriptions are expressed by concept expressions in
description logics (DL), and description matching is performed by
well-known DL inferences. However, these approaches do not always
produce results one might intuitively expect, due to a gap between
the formal semantics of service descriptions and human intuition. In
this paper, we address this problem by analysing the connection
between the modeler's intuition and formal logic used to
operationalise discovery. Furthermore, we show how to correctly map
the intuition into description logic constructs. Finally, we
investigate different inferences used to realise service discovery.
Boris Motik, Alexander Maedche, and Raphael Volz. Optimizing Query Answering in Description Logics using Disjunctive Deductive Databases. In François Bry, Carsten Lutz, Ulrike Sattler, and Mareike Schoop, editors, Proc. of the 10th Int. Workshop on Knowledge Representation meets Databases (KRDB 2003), volume 79 of CEUR Workshop Proceedings, Hamburg, Germany, September 15–16 2003.
@InProceedings{mmv03optimizing, author = "Boris Motik and Alexander Maedche and Raphael Volz", title = "{Optimizing Query Answering in Description Logics using Disjunctive Deductive Databases}", booktitle = "Proc. of the 10th Int. Workshop on Knowledge Representation meets Databases (KRDB 2003)", editor = "Fran\c{c}ois Bry and Carsten Lutz and Ulrike Sattler and Mareike Schoop", address = "Hamburg, Germany", month = "September 15--16", year = "2003", series = "CEUR Workshop Proceedings", volume = "79", }
Motivated by the possibilities of applying deductive database
technology for efficient query answering in description logics, we
present a translation operator $\mu$ that transforms non-recursive
ALC ontologies into a disjunctive deductive database. In contrast
to our previous work, in this paper we focus on handling negation,
disjunction and existential quantifiers, which cannot be handled
by deductive databases in a straightforward manner. We present a
performance evaluation of our approach, confirming the intuition
that techniques for optimizing query answering in disjunctive
deductive databases may improve query answering in description
logics.
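For illustration only, a toy flavour of compiling simple axioms into (disjunctive) datalog rules, covering only atomic subsumptions and a disjunctive right-hand side; it is not the translation operator $\mu$ presented above.
def translate(axiom):
    # Map ('subClassOf', A, B) or ('subClassOf', A, ('or', B, C, ...)) to a rule string.
    _, sub, sup = axiom
    if isinstance(sup, tuple) and sup[0] == "or":
        head = " v ".join(f"{c}(X)" for c in sup[1:])   # disjunctive head
    else:
        head = f"{sup}(X)"
    return f"{head} :- {sub}(X)."

axioms = [
    ("subClassOf", "Student", "Person"),
    ("subClassOf", "Person", ("or", "Child", "Adult")),
]
for ax in axioms:
    print(translate(ax))
# Person(X) :- Student(X).
# Child(X) v Adult(X) :- Person(X).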
Raphael Volz, Steffen Staab, and Boris Motik. Incremental Maintenance of Dynamic Datalog Programs. In Raphael Volz, Stefan Decker, and Isabel F. Cruz, editors, Proc. of the 1st Int. Workshop on Practical and Scalable Semantic Systems (PSSS1), volume 89 of CEUR Workshop Proceedings, Sanibel Island, FL, USA, October 20 2003.
@InProceedings{vsm03incremental-ws, author = "Raphael Volz and Steffen Staab and Boris Motik", title = "{Incremental Maintenance of Dynamic Datalog Programs}", editor = "Raphael Volz and Stefan Decker and Isabel F. Cruz", booktitle = "Proc. of the 1st Int. Workshop on Practical and Scalable Semantic Systems (PSSS1)", address = "Sanibel Island, FL, USA", month = "October 20", year = "2003", series = "CEUR Workshop Proceedings", volume = "89", }
Alexander Maedche, Boris Motik, Nuno Silva, and Raphael Volz. MAFRA — An Ontology MApping FRAmework in the context of the Semantic Web. In Proc. of the ECAI 2002 Workshop on Knowledge Transformation for the Semantic Web (KTSW), Lyon, France, July 23 2002.
@InProceedings{mmsv02mafra-ws, author = "Alexander Maedche and Boris Motik and Nuno Silva and Raphael Volz", title = "{MAFRA --- An Ontology MApping FRAmework in the context of the Semantic Web}", booktitle = "Proc. of the ECAI 2002 Workshop on Knowledge Transformation for the Semantic Web (KTSW)", address = "Lyon, France", month = "July 23", year = "2002", }
Ontologies as means for conceptualizing and structuring domain
knowledge within a community of interest are seen as a key to
realize the Semantic Web vision. However, the decentralized nature
of the Web makes achieving this consensus across communities
difficult, thus hampering efficient knowledge sharing between
them. In order to balance the autonomy of each community with the
need for interoperability, mapping mechanisms between distributed
ontologies in the Semantic Web are required. In this paper we
present MAFRA, an interactive, incremental and dynamic framework
for mapping distributed ontologies in the Semantic Web.
Book Contributions
Boris Motik. Resolution-Based Reasoning for Ontologies, 2nd edition. In Steffen Staab and Rudi Studer, editors, Handbook on Ontologies, International Handbooks on Information Systems, pages 529–550, 2009. Springer.
@InCollection{m09resolution-based-reasoning, author = "Boris Motik", title = "{Resolution-Based Reasoning for Ontologies, 2nd edition}", booktitle = "Handbook on Ontologies", editor = "Steffen Staab and Rudi Studer", publisher = "Springer", series = "International Handbooks on Information Systems", year = "2009", pages = "529--550", }
We overview the algorithms for reasoning with description logic (DL)
ontologies based on resolution. These algorithms often have
worst-case optimal complexity, and, by relying on vast experience in
building resolution theorem provers, they can be implemented
efficiently. Furthermore, we present a resolution-based algorithm
that reduces a DL knowledge base into a disjunctive datalog program,
while preserving the set of entailed facts. This reduction enables
the application of optimization techniques from deductive databases,
such as magic sets, to reasoning in DLs. This approach has proven
itself in practice on ontologies with relatively small and simple
TBoxes, but large ABoxes.
Boris Motik, Alexander Maedche, and Raphael Volz. Ontology Representation and Querying for Realizing Semantics-Driven Applications. In Giorgos Stamou and Stefanos Kollias, editors, Multimedia Content and the Semantic Web: Methods, Standards and Tools, pages 45–73, 2005. John Wiley & Sons.
@InCollection{mmv05ontology-representation, author = "Boris Motik and Alexander Maedche and Raphael Volz", title = "{Ontology Representation and Querying for Realizing Semantics-Driven Applications}", booktitle = "Multimedia Content and the Semantic Web: Methods, Standards and Tools", editor = "Giorgos Stamou and Stefanos Kollias", publisher = "John Wiley \& Sons", year = "2005", pages = "45--73", }
In recent years, ontologies – shared conceptualizations of some
domain – have increasingly been seen as the key to further advances in
the automation of information processing. Although many approaches for
representing and querying ontologies have already been devised,
they have not yet found their way into enterprise applications.
Many of them offer a high degree of expressivity, but require
complicated reasoning and query answering procedures, and often
lack features needed in practical applications, such as
constraints and meta-concept modeling. This is compounded by the
lack of critical technical features in ontology management tools,
such as scalability, reliability, concurrency and the support for
ontology modularization. We present an approach for ontology
representation and querying that balances some of these trade-offs
so that it integrates more easily into existing enterprise information
infrastructure. In particular, we focus on efficient evaluation of
queries using existing information management infrastructure, such
as relational databases. Finally, we describe a prototype
implementation within KAON, the Karlsruhe Ontology and Semantic
Web tool suite.
Daniel Oberle, Raphael Volz, Steffen Staab, and Boris Motik. An Extensible Ontology Software Environment. In Steffen Staab and Rudi Studer, editors, Handbook on Ontologies, International Handbooks on Information Systems, pages 299–320, 2004. Springer.
@InCollection{ovsm04extensible, author = "Daniel Oberle and Raphael Volz and Steffen Staab and Boris Motik", title = "{An Extensible Ontology Software Environment}", booktitle = "Handbook on Ontologies", editor = "Steffen Staab and Rudi Studer", publisher = "Springer", series = "International Handbooks on Information Systems", year = "2004", pages = "299--320", }
The growing use of ontologies in applications creates the need for an infrastructure that allows developers to more easily combine different software modules, such as ontology stores, editors, or inference engines, into comprehensive ontology-based solutions. We call such an infrastructure an Ontology Software Environment. The article discusses requirements and design issues of such an Ontology Software Environment. In particular, we present this discussion in light of the ontology and (meta)data standards that exist in the Semantic Web and present our corresponding implementation, the KAON SERVER.
Technical Reports
Boris Motik, Bernardo Cuenca Grau, and Ulrike Sattler. Structured Objects in OWL: Representation and Reasoning. Technical Report, University of Oxford, UK, 2007.
@TechReport{mgs07structured-objects-report, author = "Boris Motik and Bernardo Cuenca Grau and Ulrike Sattler", title = "{Structured Objects in OWL: Representation and Reasoning}", institution = "University of Oxford", address = "UK", year = "2007", }
Applications of semantic technologies often require the
representation of and reasoning with structured
objects—that is, objects composed of parts connected in complex
ways. Although OWL is a general and powerful language, its class
descriptions and axioms cannot be used to describe arbitrarily
connected structures. An OWL representation of structured objects
can thus be underconstrained, which reduces the inferences that can
be drawn and causes performance problems in reasoning. To address
these problems, we extend OWL with description graphs, which
allow for the description of structured objects in a simple and
precise way. To represent conditional aspects of the domain, we also
allow for SWRL-like rules over description graphs. Based on an
observation about the nature of structured objects, we ensure
decidability of our formalism. We also present a hypertableau-based
decision procedure, which we implemented in the HermiT reasoner. To
evaluate its performance, we have extracted description graphs from
the GALEN and FMA ontologies, classified them successfully, and even
detected a modeling error in GALEN.
Boris Motik, Ian Horrocks, and Ulrike Sattler. Integrating Description Logics and Relational Databases. Technical Report, University of Manchester, UK, 2006.
@TechReport{mhs06constraints-report, author = "Boris Motik and Ian Horrocks and Ulrike Sattler", title = "{Integrating Description Logics and Relational Databases}", institution = "University of Manchester", address = "UK", year = "2006", }
In this paper, we compare description logics with relational
databases with respect to their treatment of schema constraints, the
languages used to express these constraints, and the approaches to
query answering and constraint checking. Our analysis reveals a
significant overlap between the two formalisms. Inspired by the
integrity constraints of relational databases, we define a notion of
integrity constraints for description logics. We analyze different
possibilities for defining the semantics of the constraints.
Finally, we present several algorithms for checking constraint
satisfaction for description logics with varying degrees of
expressivity.
Boris Motik and Riccardo Rosati. Closing Semantic Web Ontologies. Technical Report, University of Manchester, UK, 2006.
@TechReport{mr06closing-report, author = "Boris Motik and Riccardo Rosati", title = "{Closing Semantic Web Ontologies}", institution = "University of Manchester", address = "UK", year = "2006", }
In this paper, we present a novel formalism of hybrid MKNF
knowledge bases, which allows us to seamlessly integrate an arbitrary
decidable description logic with logic programming rules. We thus
obtain a powerful hybrid formalism that combines the best features
of both description logics, such as the ability to model taxonomic
knowledge, and logic programming, such as the ability to perform
nonmonotonic reasoning. Extending DLs with unrestricted rules makes
reasoning undecidable. To obtain decidability, we apply the
well-known DL-safety restriction that makes the rules
applicable only to explicitly named individuals, and thus trade some
expressivity for decidability. We present several reasoning
algorithms for different fragments of our logic, as well as the
corresponding complexity results. Our results show that, in many
cases, the data complexity of reasoning with hybrid MKNF knowledge
bases is not higher than the data complexity of reasoning in the
corresponding fragment of logic programming.
Ullrich Hustadt, Boris Motik, and Ulrike Sattler. Reasoning for Description Logics around SHIQ in a Resolution Framework. Technical Report 3-8-04/04, FZI, Germany, 2004.
@TechReport{hms04reasoning-report, author = "Ullrich Hustadt and Boris Motik and Ulrike Sattler", title = "{Reasoning for Description Logics around SHIQ in a Resolution Framework}", institution = "FZI", address = "Germany", number = "3-8-04/04", year = "2004", }
Ullrich Hustadt, Boris Motik, and Ulrike Sattler. Reducing SHIQ- Description Logics to Disjunctive Datalog Programs. Technical Report 1-8-11/03, FZI, Germany, 2004.
@TechReport{hms04reducing-report, author = "Ullrich Hustadt and Boris Motik and Ulrike Sattler", title = "{Reducing $\mathcal{SHIQ}^-$ Description Logics to Disjunctive Datalog Programs}", institution = "FZI", address = "Germany", number = "1-8-11/03", year = "2004", }
Proceedings
Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors. Proceedings of the 22nd International Workshop on Description Logics (DL 2009). Oxford, UK, July 27–30 2009. CEUR Workshop Proceedings. Vol. 477.
@Proceedings{dl09, title = "{Proceedings of the 22nd International Workshop on Description Logics (DL 2009)}", editor = "Bernardo Cuenca Grau and Ian Horrocks and Boris Motik and Ulrike Sattler", address = "Oxford, UK", month = "July 27--30", year = "2009", publisher = "CEUR Workshop Proceedings", note = "Vol. 477", }
Franz Baader, Carsten Lutz, and Boris Motik, editors. Proceedings of the 21st International Workshop on Description Logics (DL 2008). Dresden, Germany, May 13–16 2008. CEUR Workshop Proceedings. Vol. 353.
@Proceedings{dl08, title = "{Proceedings of the 21st International Workshop on Description Logics (DL 2008)}", editor = "Franz Baader and Carsten Lutz and Boris Motik", address = "Dresden, Germany", month = "May 13--16", year = "2008", publisher = "CEUR Workshop Proceedings", note = "Vol. 353", }
Diego Calvanese, Enrico Franconi, Volker Haarslev, Domenico Lembo, Boris Motik, Sergio Tessaris, and Anni-Yasmin Turhan, editors. Proceedings of the 20th International Workshop on Description Logics (DL 2007). Brixen/Bressanone, Italy, June 8–10 2007. CEUR Workshop Proceedings. Vol. 250.
@Proceedings{dl07, title = "{Proceedings of the 20th International Workshop on Description Logics (DL 2007)}", editor = "Diego Calvanese and Enrico Franconi and Volker Haarslev and Domenico Lembo and Boris Motik and Sergio Tessaris and Anni-Yasmin Turhan", address = "Brixen/Bressanone, Italy", month = "June 8--10", year = "2007", publisher = "CEUR Workshop Proceedings", note = "Vol. 250", }
Books
Boris Motik and Julijan Šribar. C++ Demystified. Element, Zagreb, Croatia, 2nd edition, 2001. In Croatian.
@Book{demistificirani1, author = "Boris Motik and Julijan {\v{S}}ribar", title = "{C++ Demystified}", address = "Zagreb, Croatia", edition = "2nd", publisher = "Element", year = "2001", note = "In Croatian", }
Boris Motik and Julijan Šribar. C++ Demystified. Element, Zagreb, Croatia, 1997. In Croatian.
@Book{demistificirani2, author = "Boris Motik and Julijan {\v{S}}ribar", title = "{C++ Demystified}", address = "Zagreb, Croatia", publisher = "Element", year = "1997", note = "In Croatian", }
Theses
Boris Motik. Reasoning in Description Logics using Resolution and Deductive Databases. PhD Thesis, Universität Karlsruhe (TH), Karlsruhe, Germany, January 2006.
@PhDThesis{motik06PhD, author = "Boris Motik", title = "{Reasoning in Description Logics using Resolution and Deductive Databases}", school = "Universit{\"a}t Karlsruhe (TH), Karlsruhe, Germany", month = "January", year = "2006", }
Boris Motik. Improving Interaction with the Information Infrastructure Using Intelligent Agents. Master's Thesis, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia, December 1999. In Croatian.
@MastersThesis{motik99masters, author = "Boris Motik", title = "{Improving Interaction with the Information Infrastructure Using Intelligent Agents}", school = "Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia", month = "December", year = "1999", note = "In Croatian", }
Boris Motik. Integrated Development Environment for Specifying Distributed Systems. Diploma Thesis, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia, February 1996. In Croatian.
@MastersThesis{motik96diploma, author = "Boris Motik", title = "{Integrated Development Environment for Specifying Distributed Systems}", school = "Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia", month = "February", year = "1996", type = "Diploma Thesis", note = "In Croatian", }