Monolith User Manual

This is the User Manual for Monolith, our Semantic Enterprise Knowledge Graph Platform. Monolith is a combination of an Ontology-based Data Management (OBDM) Platform and an Enterprise Knowledge Graph IDE. Monolith provides these features through the Mastro Web Server, which connects Monolith to the Mastro OBDM reasoner. To learn more about OBDM and Mastro, visit our website.

Throughout this tutorial we’ll take you through Monolith’s features, and to do so, we’ll use the Books ontology as a running example. This is what you are going to need:

  • The Books database: download this file.
  • The Books ontology. Monolith supports both OWL 2 ontologies and Graphol ontologies (if you aren’t familiar with Graphol, you should take a look at Eddy!). Download one of these two files:
  • The mappings between the ontology and the database: download this file.
  • Some SPARQL queries to get you started: download this file.

Before we begin, you’ll want to create the Books database. The SQL script you downloaded will create the Books MySQL database.
If you are more comfortable working with PostgreSQL, you can use our Superheroes ontology for the tutorial.

Monolith comes packaged as a .zip file. Extract this file to a new directory on your system.

Aside from setting up your data connectors through their JDBC drivers (which we will get to in a moment), the only thing you need to do before running Monolith — and even this is optional — is setting up the MASTRO_HOME environment variable.

This variable tells the system which directory on your file system to use for storing all of its files (configuration files, ontology and mapping files, etc.). So, following the procedure for your operating system, point the MASTRO_HOME environment variable at any directory you like, and you are ready to go (if you are not sure how to do so, a quick Google search will help you out). Or, you can stick with the default settings, and MASTRO_HOME will be set to your user home directory.

Monolith uses JDBC connections to interact with DBMSs. To install the JDBC Driver for your DBMS, simply add the driver class name to the drivers file in the monolith/jdbc/ folder, and copy the driver jar into the same folder.

To start the Mastro Web Server on Windows, double-click the run.bat file in the main directory of the extracted folder. On Linux/macOS, open a command line console, move into that directory, and run the following command:

$ ./

The Monolith web application will now be accessible at http://localhost:8989/monolith/#/.

First things first, you need to log into Monolith. You can use the default user, by typing in

Username: admin
Password: admin

and the default address of the Mastro Web Services: localhost (or, if you are less lazy, http://localhost:8989/mws/rest/mwsx)

Now that you’re logged into the Home Page, you have access to Monolith’s main modules from the Navigation Menu, as well as your most recent ontologies and knowledge graphs.
Before you do anything else, you should create your first ontology.

In Monolith, ontologies are like projects: you can create a new one, add new versions of an ontology, create mappings from the ontology to a database, and query the ontology from the SPARQL query panel.

From the Navigation Menu, choose Ontology. This will bring you to the Ontology Catalogue, from which you can create your first ontology. Press the Add Ontology button, and choose a name and (optionally) a description for your ontology.
Let’s call the ontology “Books”.
The Books ontology will now appear in the Ontology Catalogue.

Choose the Books ontology: now you can add a new Version (either a .owl or a .graphol file) to the ontology. Let’s try the .graphol file, to see what Monolith is capable of.
Once the new version has been loaded successfully, it will appear in the Ontology Version catalogue for the Books ontology as version 1.0.
Now select the card in the catalogue to open the Ontology module of Monolith.

The Ontology Menu lets you navigate the sections of Monolith’s Ontology module:

  • Info: here you can consult all the meta-data of the ontology version.
    • The Ontology IRI and Version IRI
    • The description
    • The prefixes and imports defined in the ontology
    • The number of axioms, classes, object properties, and data properties in the ontology
  • Navigation: the Navigation page lets you inspect all the entities in the ontology by showing their usage in the ontology’s axioms. Entities are accessible from the Ontology Entity tree, where classes, object properties, data properties and individuals of the ontology are listed hierarchically.
    Because you loaded the version of the ontology through a Graphol file, from each entity page you can access the entity in the ontology Graphol diagrams through the Graphol button in the upper right-hand corner, and you will be redirected to the Graphol viewer (more on this below).

    Entities in OWL can be shown in different ways: through their full IRI, through their prefixed IRI, their label, etc. You can choose how to render OWL entities through the Rendering tab of the Settings module (follow the Settings link in the Navigation Menu and go to the Rendering tab). From here on out, we’ll be using the entityPrefixIRI rendering mode.
  • Graphol: Monolith features the Grapholscape viewer for Graphol ontology diagrams, which you can use if you aren’t all that familiar with OWL 2, or if you just want to see a nice diagram of the ontology. We love ontology diagrams, so we highly recommend it!
  • OWL: the OWL page shows the rendering of the ontology in OWL 2. You can choose whether to see the ontology in OWL’s Functional, Turtle, or RDF/XML syntaxes. Also, you can toggle between the full ontology and its OWL 2 QL approximation. The latter is the version of the ontology which is actually used by Mastro for SPARQL query answering.
  • Mappings: here you can load a new mapping for the ontology. We’ll get to that in a little bit.
  • SPARQL: Monolith’s SPARQL query endpoint, from which you can run ontology queries through Mastro. More on that later…

Before you can link your ontology to some data, you have to tell Monolith where that data is going to be coming from. This means creating a Datasource.
Follow the Datasources link in the Navigation Menu, and you’ll be right in the Datasources page, from where you can create a new datasource by pressing on the “Create a datasource” button.
To create the Books datasource, simply:

  • type in Books as the name of the datasource
  • choose the MySQL jdbc driver
  • type in the URL of the Books database, so something like:
  • and then type in the username and password of your MySQL server

After creating the datasource, you can test the connection, modify it, or delete it by clicking on the buttons in the lower right-hand corner of the datasource card.
Now that you have your first datasource, you are ready to map data to your ontology!

Keep in mind that for some versions of MySQL, to get the JDBC driver to work with UTC time zone, you have to specify the serverTimezone explicitly in the connection string. So this would be the URL:


Before you try making a mapping of your own from scratch, let’s load the Books mapping which we have prepared for you, so you can see what an ontology mapping looks like.

Go back to the Books ontology, choose version 1.0, and then from the Ontology Menu, choose Mappings.
Click on the Add Mapping card in the Mappings catalogue (the big one that says “Add Mapping“), and from the Mapping Import tab, select the Books mapping file. You can now see the mapping in the catalogue.

Similarly to the ontology versions, the first thing you see in the Mappings page after choosing a mapping is the Mapping Info tab, where you can check the description of the mapping and the templates that are defined in the mapping.
Intuitively, a template is an IRI string which is used to build a range of IRIs from the data in the database. It’s made of a constant part and a variable part, the latter between braces {}.
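For illustration only (the template base used here is hypothetical, not one from the Books mapping), expanding a template amounts to plain string substitution:

```python
# Hypothetical template; real templates come from your mapping.
def expand(template: str, row: dict) -> str:
    """Replace each {variable} in the template with the row's value."""
    return template.format(**row)

# One database row produces one IRI:
iri = expand("http://www.example.org/books#edition-{code}", {"code": 23})
assert iri == "http://www.example.org/books#edition-23"
```

Every distinct value of the variable yields a distinct IRI: the constant part identifies the kind of object, while the variable part identifies the individual.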

A mapping is made up of four fundamental components: SQL Views, Ontology Mappings, SQL View Constraints, and Templates. Each of these components has a dedicated tab in the Mappings page.

An SQL View is an SQL query over the database, to which you can assign a name. From the SQL Views tab, choose the view called book_view, and you’ll see that it has this SQL query:

SELECT bk_code as code, 
       bk_title as title,
       bk_type as type
FROM tb_books

Indeed, the table in the Books database that contains the IDs, titles, and types of the books is called tb_books, and looks like this:

bk_code  bk_title                   bk_type
2        As We Grieve               P
3        Runaway Storm              P
19       A Dark Circus              A
20       City of Stars              A
21       Not My Daughter            E
22       The Last Train From Paris  E
23       Our Boomer Years           E
24       Path of Thunder            E

As you can see, the bk_type column contains the type of the book. So “P” is for printed books, “A” is for audio-books, and “E” is for e-books. Let’s keep this in mind, we’re going to need it in a little bit.
So with the book_view, we are extracting all the information that’s in the tb_books table. We’re going to use this information to create the instances of the :Book class, and also of its subclasses, like :E-Book, by using this SQL View in the mappings of these classes.
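If you want to experiment with the view outside Monolith, here is a small sketch using SQLite and a handful of the sample rows above (the in-memory database is illustrative; the real Books database runs on MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tb_books (bk_code INTEGER, bk_title TEXT, bk_type TEXT)")
conn.executemany(
    "INSERT INTO tb_books VALUES (?, ?, ?)",
    [(2, "As We Grieve", "P"), (20, "City of Stars", "A"),
     (21, "Not My Daughter", "E"), (22, "The Last Train From Paris", "E")],
)

# The view's SQL query, as defined in the mapping:
view = conn.execute(
    "SELECT bk_code as code, bk_title as title, bk_type as type FROM tb_books"
).fetchall()

# Adding a filter on the type column keeps only the e-books:
ebooks = conn.execute(
    "SELECT bk_code as code FROM tb_books WHERE bk_type = 'E'"
).fetchall()
assert sorted(ebooks) == [(21,), (22,)]
```

The filtered variant is exactly the shape of query we will meet again below, in the mapping of the :E-Book class.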

From this tab, you can also see which Ontology Mappings are using the chosen SQL View, from the Mappings section, and also which SQL View Constraints this view is involved in.


Similarly to what happens in relational databases, it is possible to define Keys for SQL Views. Think of Keys as primary keys in a relational table:  they uniquely identify each row in that view. You’ll see that in the books_view, code is the key. Keep in mind that it’s possible to define more than one key, each having more than one parameter.

Keys are shown both in the detail of the SQL View for which they are defined, and in the dedicated Keys tab in the SQL View Constraints tab of the Mappings module.

Let’s see how Keys can help Mastro improve its query answering process through an example.

Example. Assume that you have defined the following SQL View:
territory_view(city, province, region), with a Key on the column city

which you use twice to map the object property :partOf with the following templates:
Mapping 1:

  • Domain:
  • Range:

Mapping 2:

  • Domain:
  • Range:

Now, you ask the following SPARQL query:

SELECT ?x ?y ?z
WHERE {?x :partOf ?y.
       ?y :partOf ?z.}

which we can rewrite, using a more compact logic-based notation, as {x,y,z | :partOf(x,y), :partOf(y,z)}

The Mapping Rewriting step of the query answering process will produce the following rewriting: {x,y,z | territory_view(x,y,z'), territory_view(x,y',z)}

However, this rewriting, thanks to the Key on column city (the x), can be simplified into {x,y,z | territory_view(x,y,z)}, thus avoiding a useless self-join.
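You can check this equivalence yourself. The sketch below (SQLite, with made-up rows) declares city as the primary key, so the self-join on city returns exactly the same tuples as a single scan of the view:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE territory_view (city TEXT PRIMARY KEY, province TEXT, region TEXT)"
)
conn.executemany(
    "INSERT INTO territory_view VALUES (?, ?, ?)",
    [("Rome", "RM", "Lazio"), ("Milan", "MI", "Lombardy")],
)

# The rewriting with the redundant self-join on the Key column city:
joined = conn.execute("""
    SELECT t1.city, t1.province, t2.region
    FROM territory_view t1 JOIN territory_view t2 ON t1.city = t2.city
""").fetchall()

# The simplified rewriting: one scan, no self-join.
simple = conn.execute("SELECT city, province, region FROM territory_view").fetchall()
assert sorted(joined) == sorted(simple)
```

Because city uniquely identifies each row, the two atoms sharing x must bind the same row, which is why the join can be dropped.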

An Ontology Mapping is a link between an entity in the ontology and a conjunction of one or more SQL Views, possibly with some filters.
So an Ontology Mapping has basically three components:

  1. one of the entities in the ontology (i.e., the head of the mapping)

  2. a select-project-join SQL query over the SQL Views (i.e., the body of the mapping).
     In the SQL query, when defining joins between different tables, Monolith requires you to use explicit joins instead of implicit joins. So, for instance, use

SELECT b.bk_title as book_title,
       e.ed_code as edition_code
FROM tb_books b JOIN tb_edition e ON b.bk_code = e.bk_id

instead of

SELECT b.bk_title as book_title,
       e.ed_code as edition_code
FROM tb_books b, tb_edition e
WHERE b.bk_code = e.bk_id

  3. an IRI template, which is formed by a fixed part and one or more template variables, between braces

In Monolith, Ontology Mappings are organized either by the entity which they map (the By entity submenu of the Ontology Mappings tab), or by their ID (the All Mappings submenu).
Let’s see an example. From the Ontology Mappings tab, go to the By entity submenu, and select the :E-Book class. You’ll see that it has one mapping, in which the SQL query over the views is

SELECT book_view.code AS code 
FROM book_view 
WHERE book_view.type = 'E' 

This is because, as we saw earlier, in the tb_books table, the value of the book type column (bk_type) which indicates that a book is an e-book is “E”. You can see a couple of examples in the table above, e.g., Not My Daughter, The Last Train From Paris, and so on…

Finally, the IRI template is{code}

So is the fixed part of the IRI template, and {code} is the template variable.
This means that instances of the :E-Book class are built using the IRI template{code}, extracting the codes from the book_view view, but, again, only for those books whose “type” field is ‘E‘.

SQL View Constraints are relationships that you can define between SQL Views in the mappings, which, along with the Keys in the views, will be used by Mastro at run-time to optimize its query answering process. Monolith allows you to create two different kinds of view constraints: Inclusions and Denials.
Once created, SQL View Constraints are shown both in the SQL View Constraints tab of the Mappings module, and in the SQL Views tab, under the SQL Views which are involved in them.

Inclusion Constraints

Inclusion Constraints determine inclusion relationships between pairs of (columns of) SQL Views. So for each Inclusion Constraint, you will have an Included SQL View, and an Including SQL View. The number of columns that are involved in the inclusion, for each of the two views, must be the same.

Let’s see an example. Go to the SQL Views tab, and pick the unedited_book_view from the SQL View tree. In the Constraints section of the view, you will find the following inclusion (with the Included view on the left of the arrow and the Including one on the right):

unedited_book_view(code) → book_view(code)

This inclusion means that each value in the code column of the unedited_book_view will also be a value in the code column of the book_view.
It is also possible to define inclusions which involve more than one column of the included and including view.
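A quick way to see what an Inclusion Constraint asserts is to check it on data. In this SQLite sketch (the codes are made up), the inclusion holds exactly when the EXCEPT query below returns no rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book_view (code INTEGER)")
conn.execute("CREATE TABLE unedited_book_view (code INTEGER)")
conn.executemany("INSERT INTO book_view VALUES (?)", [(2,), (3,), (19,), (20,)])
conn.executemany("INSERT INTO unedited_book_view VALUES (?)", [(19,), (20,)])

# unedited_book_view(code) -> book_view(code) holds iff no code is left over:
violations = conn.execute(
    "SELECT code FROM unedited_book_view EXCEPT SELECT code FROM book_view"
).fetchall()
assert violations == []
```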

Like we did with Keys, let’s see the role that Inclusions play in Mastro’s query answering process through a couple of examples.

Example 1. Consider the following SPARQL query, which asks for every man that has a name, but without returning the name:

SELECT ?x
WHERE {?x a :Man.
       ?x :name ?y}

Now, assuming that the ontology doesn’t contain axioms that involve :Man or :name, the Mapping Rewriting of the query, assuming the SQL Views man_view(x) and name_view(x,y), would be the following: {x | man_view(x), name_view(x,y)}.

However, if we define the inclusion man_view(x) → name_view(x), then the above rewriting will be simplified like this: {x | man_view(x)}.
In the SQL Rewriting step, this rewriting then becomes the following SQL query (assuming this is the SQL code of the view):

      WHERE SEX = 'M') as MV 

which can be further simplified into


Example 2. Consider the following SPARQL query, which asks for every person:

SELECT ?x
WHERE {?x a :Person.}

and let’s assume that the ontology contains the following axioms:

SubClassOf(:Man :Person)
SubClassOf(:Woman :Person)
SubClassOf(:Person ObjectSomeValuesFrom(:name owl:Thing))

So, every man is a person, every woman is a person, and every person has a name.
According to the above ontology, the SPARQL query is rewritten (in the Ontology Rewriting step) into this query (we’ll use the compact logic notation for brevity):

{x | :Person(x)} ⋃ {x | :name(x,y)} ⋃ {x | :Man(x)} ⋃ {x | :Woman(x)}

So, a union of four queries. The Mapping Rewriting step will produce something like

{x | name_view(x,y)} ⋃ {x | man_view(x)} ⋃ {x | woman_view(x)}

However, if we define the inclusions

man_view(x) → name_view(x)
woman_view(x) → name_view(x)

then the above query is simplified into

{x | name_view(x,y)}

which the SQL Rewriting step will then transform into an SQL query according to the definition of the view name_view.
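To convince yourself that the simplification in Example 2 is sound, you can replay it on toy data. In this SQLite sketch (hypothetical rows), the two inclusions hold by construction, and the union of the three mapping rewritings returns the same individuals as name_view alone:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE name_view  (x TEXT, y TEXT);
    CREATE TABLE man_view   (x TEXT);
    CREATE TABLE woman_view (x TEXT);
    INSERT INTO name_view VALUES ('p1', 'Ann'), ('p2', 'Bob');
    INSERT INTO man_view VALUES ('p2');    -- man_view(x) -> name_view(x) holds
    INSERT INTO woman_view VALUES ('p1');  -- woman_view(x) -> name_view(x) holds
""")

# The union of the three mapping rewritings (UNION already removes duplicates):
union = conn.execute("""
    SELECT x FROM name_view
    UNION SELECT x FROM man_view
    UNION SELECT x FROM woman_view
""").fetchall()

# The simplified rewriting over name_view alone:
simplified = conn.execute("SELECT DISTINCT x FROM name_view").fetchall()
assert sorted(union) == sorted(simplified)
```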

Denial Constraints

Denial Constraints are a general form of negative constraint: they state that joining two (or more) SQL Views will produce an empty result set. In logic, we can write this as

man_view(x),woman_view(x) -> FALSE

Intuitively, knowing these Denial Constraints lets Mastro discard Mapping Rewritings which will surely produce no answers in the query evaluation step. Let’s try an example.

Example. Consider the following SPARQL query, which asks for anything that is part of a university.

SELECT ?x
WHERE {?x :partOf ?y.
       ?y a :University.}

and assume that you have defined the following mappings (in compact notation for brevity):

View1(Department_ID, UniversityID) -> :partOf(Department_ID, UniversityID)
View2(Branch_ID, Bank_ID) -> :partOf(Branch_ID, Bank_ID)
View1(Department_ID, UniversityID) -> :University(UniversityID)

and also this Denial Constraint:

View1(x,y),View2(x,y') -> FALSE

Assuming no new rewritings are produced in the Ontology Rewriting step, the Mapping Rewriting step will produce the following rewriting:

{x | View1(x,y), View1(x',y)} ⋃ {x | View1(x,y), View2(x,y')}

However, the Denial Constraint tells us that the second query in the above union will produce an empty result set, and so it can be safely eliminated prior to evaluation.

As explained earlier, a Template is an IRI string which is used to build a range of IRIs from the data in the database. It’s made of a constant part and a variable part, the latter between braces {}.

When mapping an entity of the ontology, you’ll have to use templates to define how an object of the chosen entity is built (or, in the case of an object property, you’ll have to use two templates, one for the objects in the domain and one for the objects in the range).

From the Templates tab of the Mappings page, you will have access to all the templates that you have so far defined in your mapping, and for each one, you will see all the mappings you have used it in.

Now let’s try to create a new mapping from scratch. Go back to the Mappings catalogue of the Books ontology, click on the Add Mapping card, and this time move to the Mapping Creator tab. You’ll be asked to provide a Version for the mapping, and a description. Just make sure you give it a version that is different from the one in the Mappings file you uploaded previously.

Once that is done, you should see two mappings in the Mapping catalogue. For each mapping in the catalogue, the Duplicate button is available, to create a new copy of the mapping, for example to have a backup before you begin editing.
Click on the card of the new one, and you can begin adding SQL Views, Ontology Mappings, and SQL View Constraints.

Creating an SQL View

Let’s start with creating a new SQL View. Go to the SQL Views tab, press the “Add SQL Views” button near the search bar, and you will see the SQL View editing drawer pop out. Let’s try creating a view that extracts information regarding book editions from the database.
The table in the Books database which contains this information is called tb_edition, and has the following structure, with some sample rows:


We need to understand which information in the table is relevant for our ontology (meaning that we will use it in the Ontology Mappings).
So, take a look at the ontology. The information it shows regarding Editions is the following:

  • edition number (data property :editionNumber)
  • date of publication (data property :dateOfPublication)
  • two different types of editions, special editions and economic editions (classes :SpecialEdition and :EconomicEdition)
  • the fact that each edition is edited by an editor (the object property :editedBy)
  • the fact that a book can have an edition (the object property :hasEdition)

So, this means that you are going to need pretty much all the information in the tb_edition table to create the ontology mappings for the above entities. So, the SQL View, which you can simply call “edition_view” (or anything you like), will be:

SELECT ed_code as code,
       ed_type as type,
       pub_date as date,
       n_edt as edt,
       editor as id,
       bk_id
FROM tb_edition

If you want you can also add a description to the view.
Before you finish, you can check if your SQL code is correct, by selecting the Books datasource, and clicking the Test Query button. This will give you a preview of the results of the query.
Also, remember to define the Key for the view. In this case, the primary key of the tb_edition table is column ed_code, so you can choose “code” from the Key editor.

Creating an Ontology Mapping

Now let’s try defining an Ontology Mapping using the SQL View you just created. Go to the Ontology Mappings tab, then the By entity submenu, and click on the object property :hasEdition from the object property tree.
There obviously aren’t any mappings yet, so you can create the first one. Click on the Add Mapping button, and the Ontology Mapping Editor drawer will pop out.
You’ll see that the Entity has been filled out for you (but you can pick a new one if you change your mind). So you have to define the SQL code of the mapping, and the two templates (:hasEdition is an object property, so you have to build the instances of both the domain classes and the range classes, which are, respectively, :Book and :Edition).
Before you start typing in the SQL code, try pressing the “Help” button. You will be shown the SQL Views, templates, and prefixes that are already defined in the Mapping, which will help you define a correct Ontology Mapping.

As we discussed earlier, the SQL code in an Ontology Mapping is a select-project-join SQL query over the SQL Views in the Mapping.
So you will need to define the SELECT, FROM, and WHERE components of the SQL query.
In the SELECT component, you’ll need to include all fields in edition_view which you will use in the templates for the domain and range of the object property. In this case, you’ll want to use code to build the instances of :Edition, and bk_id to build the instances of :Book. It’s also always a good idea to use aliases, to make the templates a little shorter (if not, Monolith will do it for you). So:

SELECT edition_view.code as code,
       edition_view.bk_id as book_id

In the FROM component, you can include one or more SQL Views, and join them using equi-joins. In this case, you will only need edition_view, but in the more general case, assuming you have two views such as V1(x1,x2,x3) and V2(y1,y2,y3) and you want to join them on x1 = y1 and x2 = y2, you can write something like this:

FROM V1 JOIN V2 ON V1.x1 = V2.y1 AND V1.x2 = V2.y2

Lastly, you can define the WHERE component, in which you can use the following predicates to impose conditions on the results that will be extracted from the SQL Views: AND, >=, <=, <>, >, <, =, IS NULL, IS NOT NULL, IN, NOT IN, NOT LIKE, LIKE. For this mapping, you don’t have to impose any condition on the edition_view.code and edition_view.bk_id fields, because both ed_code and bk_id in the tb_edition table are not nullable, and you aren’t looking for any particular conditions on the ID codes to create the instances of the classes :Edition and :Book. So your final SQL code for the Ontology Mapping of :hasEdition will be:

SELECT edition_view.code as code,
       edition_view.bk_id as book_id
FROM edition_view

The only thing missing now are the templates.
When defining templates in a Mapping, the most important thing to remember is to be consistent. Pick a template for a class, and stick to it whenever possible.
In this case, you can use (for example){bk_id} for the domain template, and{code}. Remember to use the “+” button to automatically add the template variables (between braces {}) in the template.
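Putting the mapping together: each row returned by the SQL code yields one :hasEdition assertion, with the subject and object built from the two templates. Here is a sketch (the template bases are placeholders, not the ones defined in your mapping; note that the template variables must match the aliases in the SELECT clause):

```python
# Placeholder template bases; substitute the ones defined in your mapping.
DOMAIN_TEMPLATE = "http://www.example.org/books#book-{book_id}"
RANGE_TEMPLATE = "http://www.example.org/books#edition-{code}"

# One row of the mapping's SQL query (SELECT ... as code, ... as book_id):
row = {"code": 7, "book_id": 21}

subject = DOMAIN_TEMPLATE.format(**row)  # an instance of :Book
obj = RANGE_TEMPLATE.format(**row)       # an instance of :Edition
assert (subject, obj) == (
    "http://www.example.org/books#book-21",
    "http://www.example.org/books#edition-7",
)
```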

Creating SQL View Constraints

To create a new Inclusion Constraint between SQL Views, move to the Inclusions Page (SQL View Constraints -> Inclusions), and press the “Add Inclusion Constraint” button near the search bar. The Editing Drawer will slide out, and you can select the Included View on the left hand side column, and the Including View on the right hand side column.
For example, select edition_view as the Included View, and book_view as the Including View.

Now, from the drop-down menus, pick the parameters for each view that will be considered in the Inclusion Constraint. Select bk_id from edition_view, and code from book_view. Then, press the Save button.
The new Inclusion Constraint will have been added to the list.

Denial Constraints in Monolith can be simply expressed as SQL Queries over the SQL Views of the mapping. These queries are interpreted by Mastro as being extensionally empty. Therefore, the SELECT statement of the query can always be simply defined as *.
To create a new Denial Constraint, move to the Denials Page (SQL View Constraints -> Denials), and press the “Add denial” button from the Denials tree on the left hand side. Then, simply provide a name for the Denial, and its SQL code. Remember that, just like for Ontology Mappings, in the SQL code of the Denial, when defining joins between different tables, Monolith requires you to use explicit joins instead of implicit joins.
For example:

SELECT *
FROM edition_view e
     JOIN unedited_book_view u
       ON e.bk_id = u.code
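Since Mastro interprets this query as having no answers, it is worth sanity-checking a Denial on actual data before adding it. In this SQLite sketch (toy rows), the denial holds because the join is empty:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edition_view (code INTEGER, bk_id INTEGER)")
conn.execute("CREATE TABLE unedited_book_view (code INTEGER)")
conn.executemany("INSERT INTO edition_view VALUES (?, ?)", [(1, 2), (2, 3)])
conn.executemany("INSERT INTO unedited_book_view VALUES (?)", [(19,), (20,)])

# The Denial's SQL code; Mastro assumes its result set is empty.
rows = conn.execute("""
    SELECT *
    FROM edition_view e
         JOIN unedited_book_view u ON e.bk_id = u.code
""").fetchall()
assert rows == []  # the denial holds on this data
```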

Now you have a Books ontology, a Books database, and some mappings. You’re almost ready to start querying the ontology!
Querying the ontology means running SPARQL queries through Mastro’s query answering module, and so before you can start querying, you have to launch a Mastro Endpoint.

An Endpoint is basically an instance of the Mastro reasoner, which has been created by specifying an ontology version, a mapping, and a datasource. As usual, you can optionally provide a description.
From the Navigation Menu, click on the Mastro icon, and you will land on the Mastro Endpoints page. On the left hand side of the page, you can see the Endpoints tree, which lists all the endpoints you have created.
From the Endpoints tree, press the Add Mastro Endpoint button, and the Create Mastro Endpoint drawer will pop open.
From here, to create the Endpoint, choose a (unique) name, (optionally) a description, an ontology, an ontology version, a mapping, and finally a datasource. Then, press Create, and the new endpoint will be added to the Endpoints tree.

By clicking on any Endpoint in the tree, you can manage it: press Run Endpoint to boot up the Mastro Endpoint, Stop Endpoint to shut it down, and Delete Endpoint to delete it.
Mastro Endpoints that are running are shown in the Endpoints tree highlighted in green, with a “Play” icon next to their name.
Any running Endpoint for the selected ontology will be available for query answering from the Ontology SPARQL page.

Try creating and then running an Endpoint for the Books ontology, mappings, and datasource.

So, you are finally ready now to run some queries over the Books ontology.
From the Ontology Menu, click on the SPARQL link, and you’ll land on the Ontology SPARQL page, from where you’ll be able to manage and run your SPARQL queries.

The Mastro reasoner currently supports the SELECT and CONSTRUCT query forms.

Running a query in Monolith is fairly straightforward:

  1. Pick an endpoint (it has to be running on the ontology you are working with).
  2. Type in the SPARQL code of the query (to help you out, Monolith fills in the PREFIX section of the query for you), such as

    SELECT ?ebook ?title
    WHERE {?ebook a :E-Book.
           ?ebook :title ?title.}

    to get all the E-Books with their titles
  3. Press the Run button.

That’s it!
Your query is running, and you’ll start seeing the results in the table below the query.

Once the query is finished and you have the results, you can download them. The results will be downloaded in a CSV file for SELECT queries, and in a TURTLE (ttl) file for CONSTRUCT queries.
If your query is a CONSTRUCT query, then the results of the query will be a set of RDF triples, which can be exported to a Knowledge Graph (either to an existing one or to a new one).

You can save your most important queries for future re-execution in the Query Catalog by pressing the Store in Catalog button and providing an ID for each query.
Try importing the SPARQL queries in the file you downloaded into the catalog by clicking on the Upload query catalog button. You will see three queries:

  • all_books
  • special_editions
  • economic_editions

Clicking on any query in the catalog will open up a new query tab, from where you will be able to run the query.
Also, you can turn the Query Catalog on or off through the Toggle catalog button.
Finally, you can export your catalog by clicking on the Download query catalog button.

Mastro supports (almost) all of SPARQL’s syntax. Specifically, in the table below, you can see which operators, functions, and query forms you can use to query the ontology through Mastro.

Solution Sequences and Modifiers   ORDER BY, SELECT, *, DISTINCT, OFFSET, LIMIT
Functional Forms                   ||, &&, =, !=, <, >, <=, >=, IN, NOT IN
Functions on Numerics              ROUND, CEIL, FLOOR
Functions on Dates and Times       NOW, YEAR, MONTH, DAY, HOURS, MINUTES, SECONDS

Now that you know which SPARQL terms you can use, you need to know how to combine them. Here’s Mastro’s SPARQL Grammar (As in SPARQL’s official documentation, the EBNF notation used in the grammar is defined in Extensible Markup Language (XML) 1.1 [XML11] section 6 Notation):

ConstructQuery ::= ConstructClause 'WHERE' ConstructBody
ConstructBody ::= (SelectQuery | UCQPattern)+
SelectQuery ::= SelectClause WhereClause SolutionModifier
SelectClause ::= 'SELECT' ('DISTINCT')? ( ( Var | ( '(' Expression 'AS' Var ')' ) )+ | '*' )
Expression ::= 'COUNT' '(' 'DISTINCT'? ( '*' | Var ) ')' | 'SUM' '(' 'DISTINCT'? Var ')' | 'MIN' '(' 'DISTINCT'? Var ')' | 'MAX' '(' 'DISTINCT'? Var ')' | 'AVG' '(' 'DISTINCT'? Var ')'
WhereClause ::= 'WHERE' (UCQPattern | CQPattern)+
UCQPattern ::= CQPattern ('UNION' CQPattern)*
CQPattern ::= TriplesBlock OptionalGraphPattern* MinusGraphPattern? Filter*
TriplesBlock ::= Triple ( '.' TriplesBlock? )?
Triple ::= Term IRI Term
Term ::= Var | IRI
OptionalGraphPattern ::= 'OPTIONAL' TriplesBlock Filter*
MinusGraphPattern ::= 'MINUS' TriplesBlock Filter*
Filter ::= 'FILTER' Constraint ( ('||' | '&&') Constraint )*
RelationalExpression ::= NumericExpression ( '=' NumericExpression | '!=' NumericExpression | '<' NumericExpression | '>' NumericExpression | '<=' NumericExpression | '>=' NumericExpression )?
NumericExpression ::= INTEGER | DECIMAL | DOUBLE | VariableExpression
SolutionModifier ::= GroupClause? OrderClause? LimitOffsetClauses?
GroupClause ::= 'GROUP' 'BY' VariableExpression ('HAVING' VariableExpression)?
OrderClause ::= 'ORDER' 'BY' OrderCondition+
OrderCondition ::= ( ('ASC' | 'DESC') VariableExpression )
LimitOffsetClauses ::= LimitClause OffsetClause? | OffsetClause LimitClause?
VariableExpression ::= Any expression built with a combination of variables, constants, IRIs, functions, and aggregates

Activating (or deactivating) the Ontology Rewriting step of Mastro’s query answering process means that the axioms in the ontology will (or won’t) be considered when computing the results of the query.

Let’s see an example of how the Ontology Rewriting process can impact the results of the query.
Try running the all_books query from your Query Catalog.
You’ll see that the query will produce a total of 31 results, and also 9 Ontology Rewritings (you can see each of them from the Ontology Rewritings tab in the Query Report section). Each Ontology Rewriting is a new SPARQL query in which one axiom in the ontology has been used to reformulate the original query by replacing one ontology entity.
For instance, since :E-Book is a subclass of :Book, one of the rewritings of the query will be:

WHERE { ?x0 <rdf:type> <>}

Now, try pressing the Reasoning toggle button to disable Mastro’s Ontology Reasoning step, and run the query again. You’ll see that the query now produces fewer results (only 27), and has just one rewriting, i.e., the original query.

Now, let’s try something different.
Go to the Mappings page from the Ontology Menu and, from the Ontology Mappings tab, select the :Book class and delete its only mapping.
Then, go back to the SPARQL page and run the query again, with Reasoning turned on. You’ll see that you get your 31 results back, even though the :Book class no longer has any mappings: these results are produced by the SPARQL queries computed during the Ontology Rewriting step of Mastro’s process.
Finally, try running the same query again, but with Reasoning turned off. At this point, you shouldn’t be surprised to see that the query produces no results at all!

Monolith allows you to create and manage Knowledge Graphs (or KGs).

From the Navigation Menu, choose Knowledge Graph. This will bring you to the Knowledge Graph Catalogue, from which you can create your KG. Press the Add Knowledge Graph button, and choose an IRI and Title and (optionally) a description for your KG.
Pick an IRI for it, and call it “MyFirstKG”.

Additionally, you can specify information regarding both the Publisher and Rights Holder of the KG. This information can be useful if you intend to publish the KG.

When you press Submit, MyFirstKG will be added to the Knowledge Graph Catalogue. Click on its card to access it.

The Knowledge Graph Menu lets you navigate the sections of Monolith’s Knowledge Graph module, just like we showed you for Ontologies: Info, Import, Explore, and SPARQL.

From the Info page you can consult all the metadata of the KG:

  • The IRI
  • The description
  • When it was created and by which user
  • The metadata regarding publisher and rights holder

You can also download the KG in either RDF/XML or N-Triples syntax.

From the Import page you can add RDF data files to your KG. Let’s give it a try to see how this works.

Download the sample RDF file from the provided address: it is an RDF file (in N-Triples syntax) containing all the information about the city of Rome from DBpedia.

Now, click on the “Click or drag RDF file” card, and select the file you just downloaded. You will see a new card, with the name of the file, pop up in the page. Click on it, and three buttons will appear: Import, Reset Status, and Delete.

Press the Import button to import the RDF data into your KG. You can choose whether to import all the data in the file, just the data from its default graph, or the data from a specific named graph. If you’re not sure what this means, you can ignore it and just press Ok.
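For the curious, the distinction comes from RDF datasets: a dataset has one default graph and any number of named graphs. In an N-Quads file, for instance, a triple with a fourth term belongs to the named graph that term identifies, while a plain triple belongs to the default graph. The data below is made up for illustration:

```
<http://example.org/Rome> <http://www.w3.org/2000/01/rdf-schema#label> "Rome" .
<http://example.org/Rome> <http://www.w3.org/2000/01/rdf-schema#label> "Roma"@it <http://example.org/graphs/italian> .
```

Here the first statement lives in the default graph, and the second in the named graph <http://example.org/graphs/italian>.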

Once the data is imported, you will see the card highlighted, and a yellow check sign next to it. This way you can easily see which files have been imported.

You can perform import and delete operations on multiple files at once. The Reset Status button lets you reset your selection.

Now that you have imported data into your KG, go to the Explore Page from the Knowledge Graph Menu.

Here you will be shown a list of all the Classes in the KG. By clicking on any one of them, you will be shown the resources in the KG that are instances of the selected Class. Each resource will be shown by its label (specified through RDF statements in which rdfs:label is the predicate), if it has one, otherwise by its IRI.
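This view roughly corresponds to a query along these lines, where <http://example.org/SomeClass> is a hypothetical stand-in for the class you clicked (Monolith’s actual internal query may differ):

```sparql
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?resource ?label
WHERE {
  ?resource rdf:type <http://example.org/SomeClass> .
  OPTIONAL { ?resource rdfs:label ?label }   # label shown if present, IRI otherwise
}
```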

Now click on any of the resources that you see for the chosen Class. This will bring you to the Resource Page, where you will see all the relevant information for this resource:

  • the IRI and its Label
  • the Class it belongs to
  • its descriptions (specified through RDF statements in which rdfs:comment is the predicate)
  • all the RDF statements in which the resource is the subject (Direct Relations), and all the RDF statements in which the resource is the object (Inverse Relations). These RDF statements are grouped together by their predicate, so in the first column you will see the predicate resource, and in the second column you will see the Class to which the resources belong. Click on any one of the Class IRIs in the second column, and Monolith will show you all the resources (IRIs and labels) that appear in the RDF statements.
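The Direct and Inverse Relations correspond to two simple query patterns, sketched below with <http://example.org/resource> as a hypothetical placeholder for the resource being browsed:

```sparql
# Direct Relations: statements where the resource is the subject
SELECT ?predicate ?object
WHERE { <http://example.org/resource> ?predicate ?object }

# Inverse Relations: statements where the resource is the object
SELECT ?subject ?predicate
WHERE { ?subject ?predicate <http://example.org/resource> }
```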

Every IRI shown in the Resource Page is clickable: this is how you move from one IRI to the next, in order to explore the KG.

You can also download all the information in the Resource Page into an RDF file, by clicking the Download button and choosing the preferred syntax.

Finally, go to the SPARQL Page. Here you can run any SPARQL query over your KG. Just like for ontology queries, you can store your most significant queries in the Query Catalogue, download the results of your queries, and provide a description of each query.
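For instance, a generic query like the following, which works on any RDF dataset, counts how many instances each class has and is a good first query to get an overview of a freshly imported KG:

```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT ?class (COUNT(?instance) AS ?n)
WHERE { ?instance rdf:type ?class }
GROUP BY ?class
ORDER BY DESC(?n)
```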

The User Administration page is accessible from the Ontology Menu for all users with an administrator role (such as the admin profile), and lets you manage Users, Roles, and Permissions, which dictate who can do what inside Monolith.

From the Roles and Permissions Tab, the administrator can define a new Role (the Add Role button), give it a name, and assign one or more Permissions to this role. Each permission is a triple <Domain, Action, Resource>, where

  • the Domain indicates the section of the system to which the permission applies (e.g., Ontology, Mapping, Knowledge Graph, etc.);
  • the Action indicates what the permission allows the user to do, expressed in terms of HTTP verbs, or methods: GET, PUT, POST, DELETE;
  • the Resource indicates the specific resource to which the permission applies.

So, for instance, if the permission is <Ontology, GET, *> it means that whichever role has this permission will be able to see all the ontologies.
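To make the semantics concrete, here is a minimal Python sketch of how matching a request against such <Domain, Action, Resource> permissions could work. All names are hypothetical; this is an illustration of the idea, not Monolith’s actual implementation:

```python
# Hypothetical sketch of <Domain, Action, Resource> permission matching.
# Not Monolith's real code: just an illustration of wildcard semantics.

def matches(permission, request):
    """Both arguments are (domain, action, resource) triples.
    A '*' in a permission field matches any value in the request."""
    return all(p == "*" or p == r for p, r in zip(permission, request))

def allowed(role_permissions, request):
    """A request is allowed if at least one of the role's permissions matches."""
    return any(matches(p, request) for p in role_permissions)

# The permission from the example: can see (GET) all ontologies.
ontology_reader = [("Ontology", "GET", "*")]

print(allowed(ontology_reader, ("Ontology", "GET", "books")))     # True
print(allowed(ontology_reader, ("Ontology", "DELETE", "books")))  # False
```

The wildcard in the Resource position is what makes <Ontology, GET, *> apply to every ontology at once.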

Once a Role is created, it can be then modified or deleted.

For example, try creating a Role called OntologyReader, and assigning it the permission above. All defined Roles are shown in the Role catalogue on the left-hand side of the page.

The system administrator can create new Users from the Users Tab: each User must have a username, an email address, one or more roles, and optionally, a name and surname.

Once the User has been created, an email containing an automatically generated password will be sent to the provided email address.

The information provided when creating the User will be shown in the User Tab of the Settings Page.

From here, the user can change the provided password, as well as their name, surname, and email address.