Monolith User Manual

This is the User Manual for Monolith, our Semantic Enterprise Knowledge Graph Platform. Monolith is a combination of an Ontology-based Data Management (OBDM) Platform and an Enterprise Knowledge Graph IDE. Monolith provides these features through the Mastro Web Server, which connects Monolith to the Mastro OBDM reasoner. To learn more about OBDM and Mastro, visit our website.

Throughout this tutorial we’ll take you through Monolith’s features, and to do so, we’ll use the Books ontology as a running example. This is what you are going to need:

  • The Books database: you can download this file, or use the default H2 database that is preinstalled in Monolith
  • The Books ontology. Monolith supports both OWL 2 ontologies and Graphol ontologies (if you aren’t familiar with Graphol, you should take a look at Eddy!). Download one of these two files:
  • The mappings between the ontology and the database: download this file.
  • Some SPARQL queries to get you started: download this file.

Before we begin, you’ll want to create the Books database. The SQL script you downloaded will create the Books MySQL database.
If you are more comfortable working with PostgreSQL, you can use our Superheroes ontology for the tutorial.
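For example, if you go the MySQL route and the downloaded script is called books.sql, you could load it from a terminal with something like this (the file name and user are just placeholders):

$ mysql -u root -p < books.sql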


Before you begin the installation of Monolith, please check that your Java (JDK) installation is updated to version 11.

Monolith comes packaged as a .zip file. Extract this file to a new directory on your system.

Aside from setting up your data connectors through their JDBC drivers and loading your Monolith license, which we will get to in a second, the only other thing to do before running Monolith is setting up the MASTRO_HOME environment variable, and even that is optional.

This variable tells the system which directory in your file system it should use to store all its files (configuration files, ontology and mapping files, etc.). So, following the procedure for your operating system, set the MASTRO_HOME environment variable to any directory you like (see the example below if you are not sure how), and you are ready to go. Or, you can stick with the default settings, and MASTRO_HOME will be set up in your user home directory.
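For instance, on Linux or macOS you could add a line along these lines to your shell profile (the directory is just a placeholder):

$ export MASTRO_HOME=/opt/monolith/mastro-home

On Windows, you can do the same from the Environment Variables dialog, or from a command prompt with setx MASTRO_HOME "C:\monolith\mastro-home".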

To use Monolith, you need to load the license files that we have provided you into the Mastro Home. Therefore, copy the license.info and license.key files into the MASTRO_HOME/license/ folder, and you’re all set.

Monolith uses JDBC connections to interact with DBMSs. To install the JDBC Driver for your DBMS, simply copy the driver jar file into the monolith/jdbc/ folder, and, after logging into the application with administrator rights, access the JDBC Drivers tab of the Settings page and add the driver (class name and, optionally, URL template).

For example, to add the MySQL JDBC Driver, you can insert the following parameters:

  • Class name: com.mysql.cj.jdbc.Driver
  • URL template: jdbc:mysql://localhost/

Monolith is distributed with an embedded H2 database for the Books specification. To add the H2 JDBC Driver, type in these parameters:

  • Class name: org.h2.Driver
  • URL template: jdbc:h2:mem:

Currently Monolith has specific support (SQL dialects, connectors, etc.) for all main commercial and open source DBMSs, including:

  • Oracle
  • SQL Server
  • MySQL
  • MariaDB
  • Cloudera Impala
  • PostgreSQL
  • Denodo
  • DB2
  • Derby
  • SQLite

For DBMSs not on the list, Monolith provides standard SQL support. Since we are always looking to add more specific SQL dialects, let us know if you have a specific need.

To start the Mastro Web Server on Windows, double click the run.bat file in the main directory of the application; on Linux/OSX, double click the run.command file, or open a command line console, move inside the main directory, and run the following command:

$ ./run.sh

The Monolith web application will now be accessible at http://localhost:8989/monolith/#/.


First things first, you need to log into Monolith. You can use the default user, by typing in

Username: admin
Password: admin


and the default address of the Mastro Web Services: localhost (or, if you prefer, the full address: http://localhost:8989/mws/rest/mwsx)

Now that you’re logged into the Home Page, you have access to Monolith’s main modules from the Navigation Menu, as well as your most recent ontologies and knowledge graphs.
Before you do anything else, you should create your first ontology.


In Monolith, ontologies are like projects: you can create a new one, add new versions of an ontology, create mappings from the ontology to a database, and query the ontology.

From the Navigation Menu, choose Ontology. This will bring you to the Ontology Catalog, from which you can create your first ontology. Press the Add Ontology button, and choose a name and (optionally) a description for your ontology.
Let’s call the ontology “Books”.
The Books ontology will now appear in the Ontology Catalog.

Choose the Books ontology: now you can add a new Version (either a .owl or a .graphol file) to the ontology. Let’s try the .graphol file, to see what Monolith is capable of.
Once the new version has been loaded successfully, it will appear in the Ontology Version catalog for the Books ontology as version 1.0.
Now select the card in the catalog to open the Ontology module of Monolith.

The Ontology Menu lets you navigate the sections of Monolith’s Ontology module:

  • Info: here you can consult all the meta-data of the ontology version.
    • The Ontology IRI and Version IRI
    • The description
    • The prefixes and imports defined in the ontology
    • The number of axioms, classes, object properties, and data properties in the ontology
  • Browse: the Browse page lets you inspect all the entities in the ontology by showing their usage in the ontology’s axioms. Entities are accessible from the Ontology Entity tree, where classes, object properties, data properties and individuals of the ontology are listed hierarchically.
    From each entity page you can access the entity in the ontology Graphol diagrams through the Graphol button in the upper right-hand corner, and you will be redirected to the Graphol viewer (more on this below).

    Entities in OWL can be shown in different ways: through their full IRI, through their prefixed IRI, their label, etc. You can choose how you want to render OWL entities through the Rendering tab of the Settings module (follow the Settings link in the Navigation Menu and go to the Rendering tab). From here on out, we’ll be using the entityPrefixIRI rendering mode.
  • Explore: Monolith offers two different ways of exploring the ontology: the first is geared towards getting an overall view of the ontology model, or of specific parts of it, while the second is designed to let you navigate the model and the underlying data incrementally.
    • Ontology: Monolith features the Grapholscape viewer for Graphol ontology diagrams, which you can use if you aren’t all that familiar with OWL 2, or if you just want to see a nice diagram of the ontology. We love ontology diagrams, so we highly recommend it! Or, you can use the viewer to see a graph representation of an OWL 2 ontology.
    • VKG: VKG, which stands for Virtual Knowledge Graph, is your gateway to exploring the ontology starting from a single class, and navigating the relationships between the classes and entities in the domain. To be able to fully exploit the capabilities of the VKG explorer, you have to have a Mastro Endpoint set up already. This will allow you to incrementally and seamlessly move from the classes and object properties in the ontology model to the underlying data, meaning to the instances of the classes.
  • Mappings: here you can load a new mapping for the ontology. We’ll get to that in a little bit.
  • Query: here you will find your three options for building and running queries through Mastro. More on that later…

Before you can link your ontology to some data, you have to tell Monolith where that data is going to be coming from. This means creating a Datasource.
Follow the Datasources link in the Navigation Menu, and you’ll be right in the Datasources page, from where you can create a new datasource by pressing on the “Create a datasource” button.
To create the Books datasource, simply:

  • type in Books as the name of the datasource
  • choose the MySQL jdbc driver
  • type in the URL of the Books database, so something like:
jdbc:mysql://127.0.0.1/books
  • and then type in the username and password of your MySQL server

After creating the datasource, you can test the connection, modify it, or delete it by clicking on the buttons in the lower right-hand corner of the datasource card.
Now that you have your first datasource, you are ready to map data to your ontology!

Keep in mind that for some versions of MySQL, to get the JDBC driver to work with UTC time zone, you have to specify the serverTimezone explicitly in the connection string. So this would be the URL:

jdbc:mysql://localhost/books?serverTimezone=UTC

You can also use the preinstalled H2 database for the Books specification, by selecting the H2 JDBC driver, and inserting the following URL: jdbc:h2:mem:books. No username or password is necessary.


Before you try making a mapping of your own from scratch, let’s load the Books mapping which we have prepared for you, so you can see what an ontology mapping looks like.


Go back to the Books ontology, choose version 1.0, and then from the Ontology Menu, choose Mappings.
Click on the Add Mapping card in the Mappings catalog (the big one that says “Add Mapping“), and from the Mapping Import tab, select the Books mapping file. You can now see the mapping in the catalog.

Similarly to the ontology versions, the first thing you see in the Mappings page after choosing a mapping is the Mapping Info tab, where you can check the description of the mapping and the templates that are defined in the mapping.
Intuitively, a template is an IRI string which is used to build a range of IRIs with the data in the database. It’s made of a constant part and a variable part, the latter between braces {}.

A mapping is made up of four fundamental components: SQL Views, Ontology Mappings, SQL View Constraints, and Templates. Each of these components has a dedicated tab in the Mappings page.

An SQL View is an SQL query over the database, to which you can assign a name. From the SQL Views tab, choose the view called books_view, and you’ll see that it has this SQL query:

SELECT bk_code as code, 
       bk_title as title,
       bk_type as type
FROM tb_books

Indeed, the table in the Books database that contains the IDs, titles, and types of the books is called tb_books, and looks like this:

BK_CODE | BK_TITLE                  | BK_TYPE
1       | Resonance                 | P
2       | As We Grieve              | P
3       | Runaway Storm             | P
4       | Neverland                 | P
19      | A Dark Circus             | A
20      | City of Stars             | A
21      | Not My Daughter           | E
22      | The Last Train From Paris | E
23      | Our Boomer Years          | E
24      | Path of Thunder           | E

As you can see, the bk_type column contains the type of the book. So “P” is for printed books, “A” is for audio-books, and “E” is for e-books. Let’s keep this in mind, we’re going to need it in a little bit.
So with the books_view, we are extracting all the information that’s in the tb_books table. We’re going to use this information to create the instances of the :Book class, and also of its subclasses, like :E-Book, by using this SQL View in the mappings of these classes.

From this tab, you can also see which Ontology Mappings are using the chosen SQL View, from the Mappings section, and also which SQL View Constraints this view is involved in.

Keys

Similarly to what happens in relational databases, it is possible to define Keys for SQL Views. Think of Keys as primary keys in a relational table:  they uniquely identify each row in that view. You’ll see that in the books_view, code is the key. Keep in mind that it’s possible to define more than one key, each having more than one parameter.
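For example, in a hypothetical review_view(book_code, reviewer_id, stars), you could define a composite Key made of both book_code and reviewer_id, meaning that each pair of book and reviewer appears in the view at most once.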

Keys are shown both in the detail of the SQL View for which they are defined, and in the dedicated Keys tab in the SQL View Constraints tab of the Mappings module.

Let’s see how Keys can help Mastro improve its query answering process through an example.

Example. Assume that you have defined the following SQL View:
territory_view(city, province, region), with a Key on the column city

which you use twice to map the object property :partOf with the following templates:
Mapping 1:

  • Domain: http://testexample.com/city
  • Range: http://testexample.com/province

Mapping 2:

  • Domain: http://testexample.com/province
  • Range: http://testexample.com/region

Now, you ask the following SPARQL query:

SELECT ?x, ?y, ?z
WHERE {?x :partOf ?y.
       ?y :partOf ?z.}

which we can rewrite, using a more compact logic-based notation, as {x,y,z | :partOf(x,y), :partOf(x,z)}

The Mapping Rewriting step of the query answering process will produce the following rewriting: {x,y,z | territory_view(x,y,z'), territory_view(x,y',z)}

However, this rewriting, thanks to the Key on column city (the x), can be simplified into {x,y,z | territory_view(x,y,z)}, thus avoiding a useless self-join.

An Ontology Mapping is a link between an entity in the ontology and a conjunction of one or more SQL Views, possibly with some filters.
So an Ontology Mapping has basically three components:

  1. one of the entities in the ontology (i.e., the head of the mapping)

  2. a select-project-join SQL query over the SQL Views (i.e., the body of the mapping).
In the SQL query, when defining joins between different tables, Monolith requires you to use explicit joins instead of implicit joins. So, for instance, use

SELECT b.bk_title as book_title, 
       e.ed_code as edition_code 
FROM tb_books b JOIN tb_edition e ON b.bk_code = e.bk_id

instead of

SELECT b.bk_title as book_title, 
       e.ed_code as edition_code 
FROM tb_books b, tb_edition e 
WHERE b.bk_code = e.bk_id

  3. an IRI template, which is formed by a fixed part and one or more template variables, between braces.

In Monolith, Ontology Mappings are organized either by the entity which they map (the By entity submenu of the Ontology Mappings tab), or by their ID (the All Mappings submenu).
Let’s see an example. From the Ontology Mappings tab, go to the By entity submenu, and select the :E-Book class. You’ll see that it has one mapping, in which the SQL query over the views is

SELECT book_view.code AS code 
FROM book_view 
WHERE book_view.type = 'E' 

This is because, as we saw earlier, in the tb_books table, the value of the book type (column bk_type) which indicates that a book is an e-book is “E”. You can see a couple of examples in the table above, e.g., Not My Daughter, The Last Train From Paris, and so on…

Finally, the IRI template is

http://www.obdasystems.com/books/book-{code}

So http://www.obdasystems.com/books/book- is the fixed part of the IRI template, and {code} is the template variable.
This means that instances of the :E-Book class are built using the IRI template http://www.obdasystems.com/books/book-{code}, extracting the codes from the book_view view, but, again, only for those books for which the field “type” is ‘E’.

SQL View Constraints are relationships that you can define between SQL Views in the mappings, which, along with the Keys in the views, will be used by Mastro at run-time to optimize its query answering process. Monolith allows you to create two different kinds of view constraints: Inclusions and Denials.
Once created, SQL View Constraints are shown both in the SQL View Constraints tab of the Mappings module, and in the SQL Views tab, under the SQL Views which are involved in them.

Inclusion Constraints

Inclusion Constraints determine inclusion relationships between pairs of (columns of) SQL Views. So for each Inclusion Constraint, you will have an Included SQL View, and an Including SQL View. The number of columns that are involved in the inclusion, for each of the two views, must be the same.

Let’s see an example. Go to the SQL Views tab, and pick the unedited_book_view from the SQL View tree. In the Constraints section of the view, you will find the following inclusion (with the Included view on the left of the arrow and the Including one on the right):

unedited_book_view(code) → book_view(code)

This inclusion means that each value in the code column of the unedited_book_view will also be a value in the code column of the book_view.
It is also possible to define inclusions which involve more than one column of the included and including view.
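For example, using the same hypothetical review_view from before, a two-column inclusion such as review_view(book_code, reviewer_id) → rating_view(code, user) would state that every (book_code, reviewer_id) pair in the first view also appears as a (code, user) pair in the second.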

Like we did with Keys, let’s see the role that Inclusions play in Mastro’s query answering process through a couple of examples.

Example 1. Consider the following SPARQL query, which asks for every man that has a name, but without returning the name:

SELECT ?x
WHERE {?x a :Man.
       ?x :name ?y}

Now, assuming that the ontology doesn’t contain axioms that involve :Man or :name, the Mapping Rewriting of the query, assuming the SQL Views man_view(x) and name_view(x,y), would be the following: {x | man_view(x), name_view(x,y)}.

However, if we define the inclusion man_view(x) → name_view(x), then the above rewriting will be simplified like this: {x | man_view(x)}.
In the SQL Rewriting step, this rewriting will then become the following SQL query (assuming this is the SQL code of the view):

SELECT MV.ID
FROM (SELECT ID
      FROM TABLE_P
      WHERE SEX = 'M') as MV 

which can be further simplified into

SELECT ID
FROM TABLE_P
WHERE SEX = 'M'

Example 2. Consider the following SPARQL query, which asks for every person:

SELECT ?x
WHERE {?x a :Person.}

and let’s assume that the ontology contains the following axioms:

SubClassOf(:Man :Person)
SubClassOf(:Woman :Person)
SubClassOf(:Person ObjectSomeValuesFrom(:name owl:Thing))

So, every man is a person, every woman is a person, and every person has a name.
According to the above ontology, the SPARQL query is rewritten (in the Ontology Rewriting step) into this query (we’ll use the compact logic notation for brevity):

{x | :Person(x)} ⋃ {x | :name(x,y)} ⋃ {x | :Man(x)} ⋃ {x | :Woman(x)}

So, a union of four queries. The Mapping Rewriting step will produce something like

{x | name_view(x,y)} ⋃ {x | man_view(x)} ⋃ {x | woman_view(x)}

However, if we define the inclusions

man_view(x) → name_view(x)
woman_view(x) → name_view(x)

then the above query is simplified into

{x | name_view(x,y)}

which will then be transformed into an SQL query in the SQL Rewriting step, according to the definition of the view name_view.

Denial Constraints

Denial Constraints are a general form of disjointness constraint: they state that joining two (or more) SQL Views will produce an empty result set. In logic, we can write this as

man_view(x),woman_view(x) -> FALSE
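As we will see later on, Monolith lets you express a Denial Constraint as an SQL query over the SQL Views that must return no rows. Assuming, for illustration, that both views expose an id column, the constraint above could be written as:

SELECT *
FROM man_view m
  JOIN woman_view w
    ON m.id = w.id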

Intuitively, knowing these Denial Constraints lets Mastro discard Mapping Rewritings which will surely produce no answers in the query evaluation step. Let’s try an example.

Example. Consider the following SPARQL query, which asks for anything that is part of a university.

SELECT ?x
WHERE {?x :partOf ?y.
       ?y a :University.}

and assume that you have defined the following mappings (in compact notation for brevity):

View1(Department_ID, UniversityID) -> :partOf(Department_ID, UniversityID)
View2(Branch_ID, Bank_ID) -> :partOf(Branch_ID, Bank_ID)
View1(Department_ID, UniversityID) -> :University(UniversityID)

and also this Denial Constraint:

View1(x,y),View2(x,y') -> FALSE

Assuming no new rewritings are produced in the Ontology Rewriting step, the Mapping Rewriting step will produce the following rewriting:

{x | View1(x,y), View1(x',y)} ⋃ {x | View1(x,y), View2(x,y')}

However, the Denial Constraint tells us that the second query in the above union will produce an empty result set, and so it can be safely eliminated prior to evaluation.

As explained earlier, a Template is an IRI string which is used to build a range of IRIs from the data in the database. It’s made of a constant part and a variable part, the latter between braces {}.

When mapping an entity of the ontology, you’ll have to use templates to define how an object of the chosen entity is built (or, in the case of an object property, you’ll have to use two templates, one for the objects in the domain and one for the objects in the range).

From the Templates tab of the Mappings page, you will have access to all the templates that you have so far defined in your mapping, and for each one, you will see all the mappings you have used it in.

Now let’s try to create a new mapping from scratch. Go back to the Mappings catalog of the Books ontology, click on the Add Mapping card, and this time move to the Mapping Creator tab. You’ll be asked to provide a Version for the mapping, and a description. Just make sure you give it a version that is different from the one in the Mappings file you uploaded previously.

Once that is done, you should see two mappings in the Mapping catalog. For each mapping in the catalog, the Duplicate button is available, to create a new copy of the mapping, for example to have a backup before you begin editing.
Click on the card of the new one, and you can begin adding SQL Views, Ontology Mappings, and SQL View Constraints.

Creating an SQL View

Let’s start with creating a new SQL View. Go to the SQL Views tab, press the “Add SQL Views” button near the search bar, and you will see the SQL View editing drawer pop out. Let’s try creating a view that extracts information regarding book editions from the database.
The table in the Books database which contains this information is called tb_edition, and has the following structure, with some sample rows:

ED_CODE | ED_TYPE | PUB_DATE   | N_EDT | EDITOR | BK_ID
10      | X       | 2000-09-23 | 1     | 34     | 24
12      | E       | 2010-02-18 | 1     | 76     | 1
39      | X       | 2007-02-03 | 2     | 32     | 20
56      | S       | 2005-02-07 | 1     | 12     | 9

We need to understand which information in the table is relevant for our ontology (meaning that we will use it in the Ontology Mappings).
So, take a look at the ontology. The information it shows regarding Editions is the following:

  • edition number (data property :editionNumber)
  • date of publication (data property :dateOfPublication)
  • two different types of editions, special editions and economic editions (classes :SpecialEdition and :EconomicEdition)
  • the fact that each edition is edited by an editor (the object property :editedBy)
  • the fact that a book can have an edition (the object property :hasEdition)

So, this means that you are going to need pretty much all the information in the tb_edition table to create the ontology mappings for the above entities. So, the SQL View, which you can simply call “edition_view” (or anything you like), will be:

SELECT
  ed_code as code,
  ed_type as type,
  pub_date as date,
  n_edt as edt,
  editor as id,
  bk_id
FROM
  tb_edition

If you want you can also add a description to the view.
Before you finish, you can check if your SQL code is correct, by selecting the Books datasource, and clicking the Test Query button. This will give you a preview of the results of the query.
Also, remember to define the Key for the view. In this case, the primary key of the tb_edition table is column ed_code, so you can choose “code” from the Key editor.

Creating an Ontology Mapping

Now let’s try defining an Ontology Mapping using the SQL View you just created. Go to the Ontology Mappings tab, then the By entity submenu, and click on the object property :hasEdition from the object property tree.
There obviously aren’t any mappings yet, so you can create the first one. Click on the Add Mapping button, and the Ontology Mapping Editor drawer will pop out.
You’ll see that the Entity has been filled out for you (but you can pick a new one if you change your mind). So you have to define the SQL code of the mapping, and the two templates (:hasEdition is an object property, so you have to build the instances of both the domain classes and the range classes, which are, respectively, :Book and :Edition).
Before you start typing in the SQL code, try pressing the “Help” button. You will be shown the SQL Views, templates, and prefixes that are already defined in the Mapping, which will help you define a correct Ontology Mapping.

As we discussed earlier, the SQL code in an Ontology Mapping is a select-project-join SQL query over the SQL Views in the Mapping.
So you will need to define the SELECT, FROM, and WHERE components of the SQL query.
In the SELECT component, you’ll need to include all fields in edition_view which you will use in the templates for the domain and range of the object property. In this case, you’ll want to use code to build the instances of :Edition, and bk_id to build the instances of :Book. It’s also always a good idea to use aliases, to make the templates a little shorter (if not, Monolith will do it for you). So:

SELECT edition_view.code as code,
       edition_view.bk_id as book_id

In the FROM component, you can include one or more SQL Views, and join them using equi-joins. In this case, you will only need edition_view, but in the more general case, assuming you have two views such as V1(x1,x2,x3) and V2(y1,y2,y3) and you want to join them on x1 = y1 and x2 = y2, you can write something like this:

FROM V1 JOIN V2 ON V1.x1 = V2.y1 AND V1.x2 = V2.y2

Lastly, you can define the WHERE component, in which you can use the following predicates to impose conditions on the results that will be extracted from the SQL Views: AND, >=, <=, <>, >, <, =, IS NULL, IS NOT NULL, IN, NOT IN, NOT LIKE, LIKE. For this mapping, you don’t have to impose any condition on the edition_view.code and edition_view.bk_id fields, because both ed_code and bk_id in the tb_edition table are not nullable, and you aren’t looking for any particular conditions on the ID codes to create the instances of the classes :Edition and :Book. So your final SQL code for the Ontology Mapping of :hasEdition will be:

SELECT edition_view.code as code,
       edition_view.bk_id as book_id
FROM edition_view

The only thing missing now are the templates.
When defining templates in a Mapping, the most important thing to remember is to be consistent. Pick a template for a class, and stick to it whenever possible.
In this case, you can use (for example) http://www.obdasystems.com/books/book-{bk_id} for the domain template, and http://www.obdasystems.com/books/edition-{code} for the range template. Remember to use the “+” button to automatically add the template variables (between braces {}) to the template.

Creating SQL View Constraints

To create a new Inclusion Constraint between SQL Views, move to the Inclusions Page (SQL View Constraints -> Inclusions), and press the “Add Inclusion Constraint” button near the search bar. The Editing Drawer will slide out, and you can select the Included View on the left hand side column, and the Including View on the right hand side column.
For example, select edition_view as the Included View, and book_view as the Including View.

Now, from the drop-down menus, pick the parameters for each view that will be considered in the Inclusion Constraint. Select bk_id from edition_view, and code from book_view. Then, press the Save button.
The new Inclusion Constraint will have been added to the list.
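In the arrow notation used earlier, the constraint you just created reads edition_view(bk_id) → book_view(code): every book referenced by an edition must also appear among the book codes in book_view.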

Denial Constraints in Monolith can be simply expressed as SQL Queries over the SQL Views of the mapping. These queries are interpreted by Mastro as being extensionally empty. Therefore, the SELECT statement of the query can always be simply defined as *.
To create a new Denial Constraint, move to the Denials Page (SQL View Constraints -> Denials), and press the “Add denial” button from the Denials tree on the left hand side. Then, simply provide a name for the Denial, and its SQL Code. Remember that, just like for Ontology Mappings, in the SQL Code of the Denial, when defining joins between different tables, Monolith requires you to use explicit joins instead of implicit joins.
For example:

SELECT * 
FROM edition_view e 
  JOIN unedited_book_view u 
    ON e.bk_id = u.code
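This Denial states that the join between edition_view and unedited_book_view must always be empty: a book that has an edition can never appear among the unedited books.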

Now you have a Books ontology, a Books database, and some mappings. You’re almost ready to start querying the ontology!
Querying the ontology means running SPARQL queries through Mastro’s query answering module, and so before you can start querying, you have to launch a Mastro Endpoint.

An Endpoint is basically an instance of the Mastro reasoner, which has been created by specifying an ontology version, a mapping, and a datasource. As usual, you can optionally provide a description.
From the Navigation Menu, click on the Mastro icon, and you will land on the Mastro Endpoints page. On the left hand side of the page, you can see the Endpoints tree, which will list all the endpoints you have created.
From the Endpoints tree, press the Add Mastro Endpoint button, and the Create Mastro Endpoint drawer will pop open.
From here, to create the Endpoint, choose a (unique) name, (optionally) a description, an ontology, an ontology version, a mapping, and finally a datasource. Then, press Create, and the new endpoint will be added to the Endpoints tree.

By clicking on any Endpoint in the tree, you can manage it: press Run Endpoint to boot up the Mastro Endpoint, Stop Endpoint to shut it down, and Delete Endpoint to delete it.
Mastro Endpoints that are running are shown in the Endpoint tree highlighted in green, with a “Play” icon next to their name.
Any running Endpoint for the selected ontology will be available for query answering from the Ontology SPARQL page.

Try creating and then running an Endpoint for the Books ontology, mappings, and datasource.


So, you are finally ready now to run some queries over the Books ontology.
From the Ontology Menu, click on the SPARQL link, and you’ll land on the Ontology SPARQL page, from where you’ll be able to manage and run your SPARQL queries.

The Mastro reasoner currently supports the SELECT and CONSTRUCT query forms.

Running a query in Monolith is fairly straightforward:

  1. Pick an endpoint (it has to be running on the ontology you are working with).
  2. Type in the SPARQL code of the query (to help you out, Monolith fills in the PREFIX section of the query for you), such as

    SELECT ?ebook ?title
    WHERE {?ebook a :E-Book.
    ?ebook :title ?title.}

    to get all the E-Books with their titles
  3. Press the Run button.

That’s it!
Your query is running, and you’ll start seeing the results in the table below the query.

Starting from version 2.0 of Monolith, the SPARQL endpoint allows you to choose between three query execution modes:

  • Standard execution mode, which outputs the query results to Monolith’s interface, and which is coupled with an Answer Buffer to limit the number of produced results;
  • File streaming mode, which streams the results directly to your chosen output file;
  • Result count mode, which runs the query in the background and produces the result count.

These execution modes cover both situations: inspecting a portion of the query results directly in your browser window, and querying large volumes of data that you want streamed directly to a physical file.

Once the query is finished and you have the results, you can download them. You can choose between different export options for both the standard and the file streaming execution modes: the results can be downloaded in CSV, JSON, XML, or PowerBI (.pbids) format for SELECT queries, and in RDF (Turtle syntax) format for CONSTRUCT queries.
If your query is a CONSTRUCT query, then the results of the query will be a set of RDF triples, which can be exported to a Knowledge Graph (either to an existing one or to a new one).
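For example, a minimal CONSTRUCT query over the Books ontology (reusing the :E-Book and :title entities from the SELECT query above) could look like this:

CONSTRUCT {?ebook :title ?title}
WHERE {?ebook a :E-Book.
       ?ebook :title ?title.}

Its result is a set of RDF triples associating each e-book with its title, which you can then export to a Knowledge Graph.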

You can save your most important queries for future re-execution in the Query Catalog by pressing the Store in Catalog button and providing an ID for each query.
Try importing the SPARQL queries in the file you downloaded into the catalog by clicking on the Upload query catalog button. You will see three queries:

  • all_books
  • special_editions
  • economic_editions

Clicking on any query in the catalog will open up a new query tab, from where you will be able to run the query.
Also, you can turn the Query Catalog on or off through the Toggle catalog button.
Finally, you can export your catalog by clicking on the Download query catalog button.

When you save a query to the Query Catalog, you can give it a description, and also assign one or more Query Tags to it. The query tagging system will help you easily classify and search the queries in your catalog. From the Settings Tab of the Settings section, you can add as many tags as you like, assigning to each a name, a color, and (optionally) a description. You will notice that the dataquality tag is pre-defined in the system. This tag should be used to identify queries that represent user-defined business data integrity rules: queries that, in theory, should not produce any answers (more on that in the Data Quality section).
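For example, a hypothetical data quality rule for the Books ontology could require every edition to have a publication date. Written as a dataquality query, it asks for the violations (editions with no date), and should therefore return no answers:

SELECT ?edition
WHERE {?edition a :Edition.
       MINUS {?edition :dateOfPublication ?date.}}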

Mastro supports (almost) all of SPARQL’s syntax. Specifically, in the table below, you can see which operators, functions, and query forms you can use to query the ontology through Mastro.

Graph Patterns                   | BGP, FILTER, OPTIONAL, UNION
Negation                         | MINUS
Property Paths                   | INVERSEPATH, SEQUENCEPATH
Aggregates                       | COUNT, SUM, MIN, MAX, AVG, GROUP BY, HAVING
Subqueries                       | SUBQUERIES
Solution Sequences and Modifiers | ORDER BY, SELECT, *, DISTINCT, OFFSET, LIMIT
Query Forms                      | SELECT, CONSTRUCT
Functional Forms                 | ||, &&, =, !=, <, >, <=, >=, IN, NOT IN
Functions on Strings             | SUBSTR, UCASE, LCASE, CONTAINS, CONCAT, REGEX, STRLEN, STRSTARTS, STRENDS, STRBEFORE, STRAFTER
Functions on Numerics            | ROUND, CEIL, FLOOR
Functions on Dates and Times     | NOW, YEAR, MONTH, DAY, HOURS, MINUTES, SECONDS

Now that you know which SPARQL terms you can use, you need to know how to combine them. Here’s Mastro’s SPARQL Grammar (As in SPARQL’s official documentation, the EBNF notation used in the grammar is defined in Extensible Markup Language (XML) 1.1 [XML11] section 6 Notation):

ConstructQuery ::= ConstructClause 'WHERE' ConstructBody
ConstructBody ::= (SelectQuery | UCQPattern)+
SelectQuery ::= SimpleSelect | SubSelect
SubSelect ::= SelectClause '{' SimpleSelect '}'
SimpleSelect ::= SelectClause WhereClause SolutionModifier
SelectClause ::= 'SELECT' ('DISTINCT')? ( ( Var | ( '(' Expression 'AS' Var ')' ) )+ | '*' )
Expression ::= 'COUNT' '(' 'DISTINCT'? ( '*' | Var ) ')' | 'SUM' '(' 'DISTINCT'? Var ')' | 'MIN' '(' 'DISTINCT'? Var ')' | 'MAX' '(' 'DISTINCT'? Var ')' | 'AVG' '(' 'DISTINCT'? Var ')'
WhereClause ::= 'WHERE' (UCQPattern | CQPattern)+
UCQPattern ::= CQPattern ('UNION' CQPattern)*
CQPattern ::= TriplesBlock OptionalGraphPattern* MinusGraphPattern? Filter*
TriplesBlock ::= Triple ( '.' TriplesBlock? )?
Triple ::= Term IRI Term
Term ::= Var | IRI
OptionalGraphPattern ::= 'OPTIONAL' TriplesBlock Filter*
MinusGraphPattern ::= 'MINUS' TriplesBlock Filter*
Filter ::= 'FILTER' Constraint ( ('||' | '&&') Constraint )*
Constraint ::= RelationalExpression
RelationalExpression ::= NumericExpression ( '=' NumericExpression | '!=' NumericExpression | '<' NumericExpression | '>' NumericExpression | '<=' NumericExpression | '>=' NumericExpression )?
NumericExpression ::= INTEGER | DECIMAL | DOUBLE | VariableExpression
SolutionModifier ::= GroupClause? OrderClause? LimitOffsetClauses?
GroupClause ::= 'GROUP' 'BY' VariableExpression ('HAVING' VariableExpression)?
OrderClause ::= 'ORDER' 'BY' OrderCondition+
OrderCondition ::= ( ('ASC' | 'DESC') VariableExpression )
LimitOffsetClauses ::= LimitClause OffsetClause? | OffsetClause LimitClause?
LimitClause ::= 'LIMIT' INTEGER
OffsetClause ::= 'OFFSET' INTEGER
VariableExpression ::= any expression built with a combination of variables, constants, IRIs, functions, and aggregates

Activating (or deactivating) the Ontology Rewriting step of Mastro’s query answering process means that the axioms in the ontology will (or won’t) be considered when computing the results of the query.

Let’s see an example of how the Ontology Rewriting process can impact the results of the query.
Try running the all_books query from your Query Catalog.
You’ll see that the query will produce a total of 31 results, and also 9 Ontology Rewritings (you can see each of them from the Ontology Rewritings tab in the Query Report section). Each Ontology Rewriting is a new SPARQL query in which one axiom in the ontology has been used to reformulate the original query by replacing one ontology entity.
For instance, since :E-Book is a subclass of :Book, one of the rewritings of the query will be:

SELECT ?x0 
WHERE { ?x0 <rdf:type> <http://www.obdasystems.com/books/E-Book>}

Now, try pressing the Reasoning toggle button to disable Mastro’s Ontology Reasoning step, and run the query again. You’ll see that the query now produces fewer results (only 27), and has just one rewriting, i.e., the original query.

Now, let’s try something different.
Go to the Mappings page from the Ontology Menu, and from the Ontology Mappings tab, select the :Book class, and delete its one and only mapping.
Then, go back to the SPARQL page, and try running the query again, with reasoning turned on. You’ll see that you will get your 31 results back again, even if the :Book class doesn’t have any mappings now. These results have been produced by the SPARQL queries computed during the Ontology Reasoning step of Mastro’s process.
Finally, try running the same query again, but with Reasoning turned off. At this point, you shouldn’t be surprised to see that the query produces no results at all!


One of the Mastro Reasoner’s main capabilities is to automatically extract data quality rules (or data integrity constraints) from the OWL 2 ontology, and transform them into SPARQL queries. By running these queries through Mastro’s query answering process, the system allows you to extract data from your sources which violates these rules. So, in essence, Mastro checks your business data quality rules for you, reformulates them in terms of SPARQL queries in such a way as to produce query answers which violate these constraints, and shows you these results.

To leverage these capabilities, Monolith features the dedicated Data Quality section of the Ontology Menu.

Access this section, and you will find two different tabs, the Check Sets Tab and the History Tab.

The Check Sets Tab is where you will build your Data Quality checks: click on the Add Check Sets button, and the Create Ontology Constraint Set drawer will slide out. Here you can give your data quality check set a name, and select the integrity rules you want to add to the set. For each rule, you can set a priority, from 1 to 3, with 3 being the maximum priority. When the system runs the checks, it will follow the order set by the priorities you have chosen.

There are several different kinds of rules you can select from (and more will be added in upcoming releases):

  • Empty Queries: these are the queries in your Query Catalog which you have tagged with the special dataquality tag, meaning that they should not produce any answers. So these are basically user-defined business rules which Mastro will check for you, and provide violations of, if there are any.
  • Disjoint Constraints: these are the constraints which Mastro extracts from the DisjointClasses axioms of the ontology. So here you are choosing one or more pairs of classes which are disjoint from one another in the ontology.
  • Functionality Constraints: these are the constraints which Mastro extracts from the Functionality axioms of the ontology, both on object properties and on data properties. So you can simply choose one or more object or data properties which are functional (or inversely functional for some object properties) in the ontology.
  • Key Constraints: these are the constraints which Mastro extracts from the HasKey axioms of the ontology, which are used to indicate which combination of object and data properties are the identifiers for the class which is involved in the axiom. In this case, you can choose between the listed classes, where for each one you will see the list of identifying properties.
  • Participation Constraints: these constraints are typically extracted from specific kinds of SubClassOf axioms in the ontology. They are basically used to impose that each instance of a certain class must necessarily be involved in a given object property, or must have a given data property (see the example after this list).
  • Universal Constraints: like Participation Constraints, these constraints are also extracted from specific kinds of SubClassOf axioms in the ontology. However, they are used to specify that an instance of a certain class can be linked, through an object property, only to instances of a given class.
  • Cardinality Constraints: cardinality constraints determine restrictions on the minimum and/or maximum number of occurrences for which an instance of a class can be involved in a certain object or data property. Like Participation and Universal constraints, these are extracted from specific kinds of SubClassOf axioms in the ontology.
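For example, the axiom SubClassOf(:Person ObjectSomeValuesFrom(:name owl:Thing)) that we used earlier is exactly the kind of SubClassOf axiom from which a Participation Constraint is extracted: it requires every person to have a name, so the corresponding check looks for instances of :Person with no :name.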

Once you have built your set, you can run it, to see if there are any violations in the data of the constraints you have chosen.

Before running the set, select an Endpoint from the dropdown menu, choose whether you want to activate the Answer Buffer, which will limit the number of produced violations to the buffer you specify, and turn the Autosave toggle button on or off, if you want to save the execution to the History Log for later reference (which we will get to in the next section, for now just leave it on).

When you’re ready, press the Run button, and Monolith will start sending Mastro the constraints to check. During the execution, you will see the lights come on in the Status and Outcome columns. The Status column tells you whether the constraint has been checked correctly (green light if everything was ok, red light if there was an error, in which case you might want to check your mappings); the Outcome column tells you whether Mastro has found any violations to the constraint (again, green light if no violations were found in the data, red light if there were some).

After a constraint has been run, you can click on the little magnifying glass button at the end of its row, and Monolith will show you the detailed results of its execution:

  • the SPARQL query that was run to check the constraint
  • the witnesses, which in Monolith dialect means the violations of the constraint
  • the Query Execution details, so you can see the actual SQL query that was sent to the DBMS

The History Tab is where you can go back and see all the Data Quality Checks you have run (assuming you saved them). For each check, the log will provide you the ID, the date of execution, the timestamp of when it finished, the endpoint, and our handy little magnifying glass button, which opens up the full report for each check set.

Try clicking on it: you will see that the aggregate results are provided through charts and graphs based on priority and/or constraint type, plus you will see the number of violations for each constraint, and, by selecting a constraint, you will see the detailed results of its execution in the table at the bottom of the page.


Mastro’s Authorization View Profiles allow you to define data access policies, by telling the system which ontology entities can be queried by which user groups. Basically, you are creating a view of the ontology (or a subset of its entities): each chosen entity will return data if queried, and each excluded one will not (so a SPARQL query which involves an excluded entity will not return any result).

This is particularly helpful in large enterprise settings, where different users, possibly from different departments or business units, may have limited access to the data underlying the ontology, according to their sector or privileges. Rather than creating an ontology and mappings for each of these different user groups, you can simply create different views over the same ontology and mappings, and Mastro will take care of the rest.

Click on the AVP menu item in the Ontology Menu, and you will land on the AVP page.

From here, click on the Create an Authorization View Profile button, and the AVP drawer will slide out. Here, simply provide a Name for the AVP, the usual optional description, and choose, from the hierarchy list in the Permissions tree, the entities you want to include in (or exclude from) the view by turning the toggle buttons on or off.

Remember: if the toggle button is showing yellow, the entity is in the view, and you can use it in your SPARQL queries to get results; if it’s showing gray, the entity is not in the view, so your SPARQL queries with that entity won’t return results (in other words, you can’t query the entity).

Try turning some of the entities on or off through the toggle buttons. You’ll see that turning something on/off may cause other entities, typically their children in the entity tree, to be turned on/off as well.

For example, turn :Edition off. :EconomicEdition, :SpecialEdition, and so on will also be turned off. This is because we want to be sure that your AVPs are safe, meaning that we don’t want to accidentally provide data for entities outside the AVP when running the SPARQL queries. So if you don’t want to show instances of the class :Edition to some user groups, you also don’t want them to query the :EconomicEdition or :SpecialEdition entities, since instances of these classes are also instances of the :Edition class.

Basically, the selections in the Permissions tree are mimicking Mastro’s ontology reasoning!

Once you are satisfied with your selections, click on the Save button. The AVP will be added to the AVP Catalog on the left, and you will be shown a recap of the choices you made (entities turned off are shown in red).

From the AVP Catalog you can edit or delete an AVP at any time.

Now that you have created an AVP, you want to use it to define your data access policies.

This means creating a Mastro Endpoint on the AVP you have just created. So, go the Mastro Endpoint page, and follow the usual steps to create an Endpoint, but also choose an Authorization View Profile from the drop-down list.

That’s it, you’re done! Everything else is managed in the User Administration page under the Roles and Permissions tab, so we’ll get to that later.


Monolith allows you to create and manage Knowledge Graphs (or KGs).

From the Navigation Menu, choose Knowledge Graph. This will bring you to the Knowledge Graph Catalog, from which you can create your KG. Press the Add Knowledge Graph button, and choose an IRI and Title and (optionally) a description for your KG.
Let’s use the IRI http://www.obdasystems.com/myFirstKG/ and call it “MyFirstKG”.

Additionally, you can specify information regarding both the Publisher and Rights Holder of the KG. This information can be useful if you intend to publish the KG.

When you press Submit, MyFirstKG will have been added to the Knowledge Graph Catalog. Click on its card to access it.

The Knowledge Graph Menu lets you navigate the sections of Monolith’s Knowledge Graph module, just like we showed you for Ontologies: Info, Import, Explore, and SPARQL.

From the Info page you can consult all the meta-data of the KG.

  • The IRI
  • The description
  • When it was created and by which user
  • The metadata regarding publisher and rights holder

You can also download the KG in either RDF/XML or N-Triples syntax.

From the Import page you can add RDF data files (in RDF/XML, N-Triples, N3, or Turtle syntax) to your KG. Let’s give it a try to see how this works.

Put this address into your browser: http://dbpedia.org/data/Rome.ntriples. You will download an RDF file (in N-Triples syntax) which contains all the information relative to the city of Rome from DBpedia.

Now, click on the “Click or drag file” card, and select the file you just downloaded. You will see a new card, with the name of the file, pop up in the page. Click on it, and three buttons will appear: Import, Reset Status, and Delete.

Press the Import button. This will import the RDF data into your KG. You can also choose if you want to import all the data in the file, just the data from the default graph in the file, or from a specific named graph. If you’re not sure what this means, you can ignore it, and just press Ok.

Once the data is imported, you will see the card highlighted, and a yellow check sign next to it. This way you can easily see which files have been imported.

You can perform import and delete operations on multiple files at once. The Reset Status button lets you reset your selection.

You can also import CSV files into your KG. Let’s try a simple example.

Download the above CSV file, click on the “Click or drag file” card again, and choose it.

The first thing you will see is a pop-up window in which you can tell Monolith how the CSV is set up (which separator it uses, etc.) to help it parse the file. Use the semicolon character as the CSV Separator, and leave everything else as it is.

Press Ok and a new window, the Import Settings one, will pop up. Since CSV files contain tabular data, you will need to tell Monolith how to convert the data in the CSV tuples into triples. With Monolith, you can do so using a custom SPARQL query: intuitively, you use the BGP in the CONSTRUCT clause to create the triples, and the BIND operators in the WHERE clause to create the IRIs of the objects, from the CSV columns.

Let’s see how this applies to our example. Here’s the contents of the CSV file:

Username  | Identifier | First name | Last name
booker12  | 9012       | Rachel     | Booker
grey07    | 2070       | Laura      | Grey
johnson81 | 4081       | Craig      | Johnson
jenkins46 | 9346       | Mary       | Jenkins
smith79   | 5079       | Jamie      | Smith
Preview of the CSV

Let’s assume we want to create the RDF triples using the Friend of a Friend (http://xmlns.com/foaf/spec/) vocabulary. So we build the IRIs from the Identifier, and then create the triples to model each user’s username (http://xmlns.com/foaf/spec/#term_nick), first name (http://xmlns.com/foaf/spec/#term_givenName), and last name (http://xmlns.com/foaf/spec/#term_lastName). The SPARQL query would look like this:

CONSTRUCT
{
?v1 <http://xmlns.com/foaf/spec/#term_nick> "{Username}"^^http://www.w3.org/2001/XMLSchema#string .
?v1 <http://xmlns.com/foaf/spec/#term_givenName> "{First name}"^^http://www.w3.org/2001/XMLSchema#string .
?v1 <http://xmlns.com/foaf/spec/#term_lastName> "{Last name}"^^http://www.w3.org/2001/XMLSchema#string .
}
WHERE
{
BIND("http://my-first-kg.com/{Identifier}" AS ?v1)
}

That’s it! Now your CSV data will have been transformed into RDF triples, and once again as soon as the import is complete, you will see the highlighted card, with the yellow checkmark.

Now that you have imported data into your KG, go to the Explore Page from the Knowledge Graph Menu.

You can choose whether to explore the KG starting from its classes, or see a list of the RDF triples it contains.

To do the former, stay on the Class Index tab you just landed on. You have three options: Class Index, Class Bubbles, and Class Word Cloud.

In the Class Index tab, you will see a list of all the Classes in the KG. By clicking on any one of them, you will be shown the resources in the KG that are instances of the selected Class. Each resource will be shown by its label (specified through RDF statements in which rdfs:label is the predicate), if it has one, otherwise by its IRI.

Now click on any of the resources that you see for the chosen Class. This will bring you to the Resource Page, where you will see all the relevant information for this resource:

  • the IRI and its Label
  • the Class it belongs to
  • its descriptions (specified through RDF statements in which rdfs:comment is the predicate)
  • all the RDF statements in which the resource is the subject (Direct Relations), and all the RDF statements in which the resource is the object (Inverse Relations). These RDF statements are grouped together by their predicate, so in the first column you will see the predicate resource, and in the second column you will see the Class to which the resources belong. Click on any one of the Class IRIs in the second column, and Monolith will show you all the resources (IRIs and labels) that appear in the RDF statements.

Every IRI shown in the Resource Page is clickable: this is how you move from one IRI to the next, in order to explore the KG.

You can also download all the information in the Resource Page into an RDF file, by clicking the Download button and choosing the preferred syntax.

In the Class Bubbles and Class Word Cloud tabs, you’ll see a bubble graph or a word cloud of all the classes, sized in decreasing order according to their number of instances.

Finally, go to the SPARQL Page. Here you can run any SPARQL query over your KG. Just like for ontology queries, you can store your most significant queries in the Query Catalog, download the results of your queries, and provide a description of each query.


The User Administration page is accessible from the Main Menu for all users with the administrator role (such as the admin profile), and allows you to manage Users, Roles, and Permissions, which dictate who can do what inside Monolith.

From the Roles and Permissions Tab, the administrator can define a new Role (through the Add Role button), give it a name, and assign permissions to this role.

Click the Add Role button, and the usual drawer will pop out. You will see three sections:

  • Ontology
  • Datasource
  • Knowledge Graph

Each section allows you to choose, for any item in the list, whether to grant Read/Write access (All), Read Only access, or no access at all. You’ll notice that for each Ontology, there are three sub-sections:

  • Versions
  • Mappings
  • Endpoints

You can assign specific read/write rights to any of the items in these subsections, provided that the rights granted to these items are equal to or more restrictive than the ones granted to their ontology (you wouldn’t want to give read and write access to an endpoint of an ontology to which you have granted no access at all, for instance).

Once a Role is created, it can be then modified or deleted.

For example, try creating a Role called OntologyReader, and click on the Read Only button near the Books item in the Ontology section. Any user that is assigned to this Role will only be able to see the ontology, its versions, mappings and endpoints, but won’t be able to edit it in any way. All defined Roles will be shown in the Role Catalog on the left-hand side of the page.

The system administrator can create new Users from the Users Tab: each User must have a username, an email address, one or more roles, and optionally, a name and surname.

Once the User has been created, an email will be automatically sent to the provided email address, containing a system-generated password.

The information provided when creating the User will be shown in the User Tab of the Settings Page.

From here, the user can change the provided password, as well as their name, surname, and email address.
