Sessions

Session information is continuously updated…

Web of Data Stack - Identifying standards and technologies | Christopher Pollin, Gerlinde Schneider |
Tue 15:30 - 17:30

This session offers a brief and concise introduction to the basic principles and technologies of the Web of Data and Linked Open Data (LOD). No prior knowledge is assumed. Participants will gain a clear understanding of the concepts behind Linked Open Data, how such data is created, and how it is used.

The objective is to give participants an understanding of how the individual components interact, creating the foundation on which the following sessions can build.

Querying the Web of Data (REST and SPARQL) | Matthias Schlögl |
Wed 9:00-10:30

The web offers many machine-readable, open-access resources that can be used to enrich data or even to compile a whole dataset. This session is an introduction to REST APIs and SPARQL endpoints. We will explore how these interfaces can help us take advantage of LOD and enrich datasets. Using Python Jupyter notebooks, we will start by querying local RDF data, then use REST requests to enrich it, and finally use SPARQL to query reference resources and enrich our local data further.
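As a dependency-free sketch of the first step (querying local RDF data), the following parses a couple of N-Triples statements and filters them by predicate, which is the core of what a SPARQL triple pattern does. In the session itself a library such as rdflib would do this work; the sample data and URIs below are invented for illustration:

```python
# Minimal N-Triples parsing and pattern matching -- an illustrative sketch.
# In practice a library such as rdflib handles parsing and SPARQL queries;
# the sample triples below are invented for demonstration.

def parse_ntriples(text):
    """Parse simple N-Triples lines into (subject, predicate, object) tuples."""
    triples = []
    for line in text.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Drop the terminating " ." and split into subject, predicate, object
        s, p, o = line.rstrip(" .").split(None, 2)
        triples.append((s.strip("<>"), p.strip("<>"), o.strip('<>"')))
    return triples

def match(triples, predicate):
    """Return all (subject, object) pairs for a predicate -- a tiny analogue
    of the SPARQL pattern: SELECT ?s ?o WHERE { ?s <predicate> ?o }"""
    return [(s, o) for s, p, o in triples if p == predicate]

DATA = """
<http://example.org/graz> <http://www.w3.org/2000/01/rdf-schema#label> "Graz" .
<http://example.org/graz> <http://example.org/population> "291007" .
"""

triples = parse_ntriples(DATA)
labels = match(triples, "http://www.w3.org/2000/01/rdf-schema#label")
print(labels)  # [('http://example.org/graz', 'Graz')]
```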

 

Controlled Vocabularies and SKOS | Ksenia Zaytseva |
Wed 11:00 - 12:30

Thesauri, taxonomies and other forms of controlled vocabularies form a conceptual backbone of (humanities) research and play an ever-increasing role in many aspects of the data management process. These resources are indispensable for establishing a common understanding: they allow research data to be categorized and enriched systematically and consistently, and they foster data interoperability and integration across projects and web applications.

The session will introduce participants to the SKOS standard, a W3C recommendation for publishing and using vocabularies as linked data. Participants will learn the main principles of creating a vocabulary in SKOS, how vocabulary quality affects search functionality, and which open-source tools exist for managing vocabularies. A hands-on part will include a practical exercise in creating and visualizing a SKOS vocabulary.
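To give a flavour of the main SKOS principles (concepts with language-tagged preferred labels, linked by broader/narrower relations), here is a minimal sketch expressed as plain Python triples. The example vocabulary and URIs are invented; in practice such data would be written in RDF (e.g. Turtle) and managed with dedicated tools:

```python
# A tiny SKOS-style vocabulary as (subject, predicate, object) triples.
# The concepts and labels are invented for illustration.
SKOS = "http://www.w3.org/2004/02/skos/core#"
EX = "http://example.org/vocab/"

triples = [
    (EX + "instrument", SKOS + "prefLabel", ("Musical instrument", "en")),
    (EX + "instrument", SKOS + "prefLabel", ("Musikinstrument", "de")),
    (EX + "violin",     SKOS + "prefLabel", ("Violin", "en")),
    (EX + "violin",     SKOS + "broader",   EX + "instrument"),
]

def pref_label(concept, lang):
    """Look up the preferred label of a concept in a given language."""
    for s, p, o in triples:
        if s == concept and p == SKOS + "prefLabel" and o[1] == lang:
            return o[0]
    return None

def broader_chain(concept):
    """Follow skos:broader links upward from a concept."""
    chain, current = [], concept
    while True:
        parents = [o for s, p, o in triples
                   if s == current and p == SKOS + "broader"]
        if not parents:
            break
        current = parents[0]
        chain.append(current)
    return chain

print(pref_label(EX + "violin", "en"))  # Violin
print(broader_chain(EX + "violin"))     # ['http://example.org/vocab/instrument']
```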

 

Linked Data Curation using Open Refine | Christian Steiner |
Wed 14:00 - 15:30

OpenRefine, originally developed by Google (and then called Google Refine), is an open-source desktop application used primarily for data wrangling (data cleaning, preparation and enrichment). OpenRefine operates on tables, much like a relational database. It enables users to explore data sets, normalize them and automatically enrich them with services available via APIs. Transformation expressions can be written in the General Refine Expression Language (GREL), Jython (i.e. Python) and Clojure.
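As a sketch of what such a transformation expression looks like, the following Python function mirrors a Jython cell transform in OpenRefine, where the cell content arrives in a variable named `value` and the expression returns the replacement value. It is wrapped as a regular function here so it runs outside OpenRefine, and the whitespace normalization itself is an invented example:

```python
# An OpenRefine Jython cell transform is essentially a function body that
# receives the cell content as `value` and returns the new cell value.
# Wrapped as a regular function so it can run outside OpenRefine; the
# normalization shown is an illustrative example.
def transform(value):
    if value is None:
        return None
    # Collapse internal whitespace runs and trim the ends, as one might
    # do before reconciling cell values against an external service.
    return " ".join(value.split())

print(transform("  Graz,   Austria "))  # Graz, Austria
```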

In this session, we will learn how to use OpenRefine's methods to prepare and normalize various data sets. We will talk about the available data types and how to profit from their transformations. Our focus, however, will be the potential of automatic data enrichment with OpenRefine, as well as the options for easily turning our data into Linked Open Data (LOD). Finally, we will discuss how to contribute to the LOD community by exposing the concepts we add with OpenRefine.

 

Working with Recogito | Rebecca Kahn, Rainer Simon |
Wed 16:00 - 17:30

The availability of digitised collections and born-digital data is changing the way we study and understand the humanities. This amount of information has even greater potential for research when semantic links can be established and relationships between entities highlighted. The practice of connecting historical data sources through their common references to places (expressed via URIs stored in gazetteers) allows researchers to connect and work across materials, from archaeological collections to literary texts, historic biographies to ancient maps, and opens up powerful new ways of visualising this data. This workshop is designed to show how digital tools can enhance and assist the traditional humanities scholarly practices of interpretation, association and occasional serendipity, in order to discover distributions, anomalies and patterns.

In Part I, participants will be introduced to Recogito, an award-winning tool developed by Pelagios that enables the annotation of place references in text, images and data through a user-friendly online platform. Recogito's principal function is to allow non-experts to produce semantic annotations, while at the same time allowing users to export the data produced in valid RDF, XML, GeoJSON and TEI formats. Participants will explore Recogito's newer features, such as collaborative annotation, Named Entity Recognition and relationship tagging, as they upload and annotate text and images and then download their annotations in the available data formats. By examining sources and discussing the challenges of working with a wide range of materials, participants will consider how Recogito might support their own research.

This leads into the second part of the workshop, which will show how cultural heritage data (including annotations created in Recogito) can be used with a variety of other tools and platforms as part of research practice.

 

GLAM and LOD | Rebecca Kahn, Rainer Simon |
Wed 18:00 - 19:00

This lecture will introduce participants to a range of ways in which linked open GLAM data has been, and can be, used for humanities research. It will introduce two tools for sharing and using linked data: Wikidata and Peripleo. Peripleo is a map-based visualisation for exploring Linked Open Data relationships. Working with over 8 million objects spanning almost 3000 years, the lecture will show how it is possible to search a linked data ecosystem for places, documents, objects and keywords, as well as historical concepts. In this way, the humanistic concepts of uncertainty and exploration can be modelled in linked data.

The lecture will also demonstrate how Wikidata has become an important source of linked open data, as well as a mechanism for connecting other collections of data (in particular GLAM sources) to each other. Using examples from a range of library, museum and gallery sources, the lecture will demonstrate how Wikidata can be used as an interconnection format to create hybrid data sources. Participants will also see how new knowledge connections can be developed using a pipeline of tools and resources, including Recogito, the Pleiades gazetteer of ancient places, and Wikidata.


Modelling cultural heritage data with CIDOC CRM | George Bruseker |
Thu 11:00 - 15:00

Analytic documentation of historical facts, especially in digital form (e.g. databases and spreadsheets), serves as an important repository of primary material for the study of the past and its relation to the present. The documentation digitized, created and maintained by researchers in the humanities and social sciences, as well as by employees of GLAM institutions, generates valuable data sets that can serve as an empirical ground for studying society and culture. In approaching the use and reuse of cultural heritage data, researchers face the challenge of bringing together pre-existing resources, or organizing new ones, in a coherent and reusable manner.

Conceptual modelling is the task of analyzing and organizing information sources according to a coherent method and for particular research ends. A formal ontology provides a standardized conceptual model that aims to stay true to the ontological commitment of a community, and it supports the conceptual modelling process by allowing the common (re)expression, at a generic level, of information falling within the purview of that community's discourse.

The CIDOC CRM is the ISO standard ontology (ISO 21127:2014) for cultural heritage information. In continuous development since 1996, the standard is in its 6th version and has 8 official harmonized extensions for modelling and representing more specific areas of information. In these sessions, participants will be introduced to the CIDOC CRM model and its basic logic and structures. Using a hands-on example, participants will be challenged to apply the model to a given scenario. Participants will work in teams and are encouraged to raise questions in a common, interactive learning environment. They can expect to finish the session with a basic knowledge of how to apply CIDOC CRM to a concrete cultural heritage data modelling scenario.
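As a taste of the CRM's event-centric modelling style, the following sketch records a production event linking an object to the actor who made it, using CIDOC CRM class and property identifiers as plain strings. The object and actor are invented, and real mappings would be expressed in RDF rather than Python lists:

```python
# Event-centric CIDOC CRM sketch: instead of saying "object has creator X"
# directly, we model a Production event (E12) that connects the object to
# its maker. The URIs and entities below are invented for illustration.
EX = "http://example.org/"

triples = [
    (EX + "chalice",     "rdf:type",                  "crm:E22_Human-Made_Object"),
    (EX + "chalice",     "crm:P108i_was_produced_by", EX + "production1"),
    (EX + "production1", "rdf:type",                  "crm:E12_Production"),
    (EX + "production1", "crm:P14_carried_out_by",    EX + "goldsmith1"),
    (EX + "goldsmith1",  "rdf:type",                  "crm:E21_Person"),
]

def objects_of(subject, predicate):
    """Return all objects of triples with the given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Who produced the chalice? Follow object -> production event -> actor.
event = objects_of(EX + "chalice", "crm:P108i_was_produced_by")[0]
actors = objects_of(event, "crm:P14_carried_out_by")
print(actors)  # ['http://example.org/goldsmith1']
```

The indirection through the event node is the design choice that makes CRM data integrable: time-spans, places and techniques can later attach to the same event without restructuring the object record.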


Formal Foundations of Ontologies and Reasoning | Ivan Varzinczak |
Fri 9:00 - 12:30

This course provides an introduction to Description Logics (DLs), a family of logic-based knowledge representation formalisms with interesting computational properties and a variety of applications. DLs are well suited to representing and reasoning about terminological knowledge and constitute the formal foundations of Semantic Web ontologies. There are different flavours of description logic with specific expressive power and applications; one example is ALC, on which we shall focus strongly in this course. We start by motivating the need for formal foundations in the specification of, and reasoning with, ontologies. We then present the description logic ALC: its syntax, semantics, logical properties and proof methods, especially the tableau-based method. Finally, we illustrate the usefulness of DLs with the popular Protégé ontology editor, a tool that supports both the design of DL-based ontologies and the performance of reasoning tasks with them.
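To give a flavour of ALC notation, here is a small (invented) terminological axiom together with the standard set-theoretic semantics of its existential restriction:

```latex
% An invented ALC concept definition: a parent is a person
% with at least one child who is a person.
\mathit{Parent} \equiv \mathit{Person} \sqcap \exists \mathit{hasChild}.\mathit{Person}

% Semantics under an interpretation
% \mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}}):
(\exists \mathit{hasChild}.\mathit{Person})^{\mathcal{I}} =
  \{\, x \in \Delta^{\mathcal{I}} \mid
     \exists y.\ (x, y) \in \mathit{hasChild}^{\mathcal{I}}
     \wedge y \in \mathit{Person}^{\mathcal{I}} \,\}
```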

 

Introduction to Linked Open Data in Linguistics | John McCrae, Thierry Declerck |
Fri 14:00 - 17:30

This session has the main goal of giving participants practical skills in the fields of linked data and semantic technologies as applied to linguistics and lexical data. We will introduce a variety of state-of-the-art multilingual representation formats and application scenarios in which to leverage and exploit multilingual semantic data.

After developing a short initial ontology-lexicon, participants will learn step by step how to represent multilingual data with their ontology-lexicon and how to ground it linguistically. At the end of the session, participants will be able to use Linguistic Linked Open Data (LLOD) for the semantic representation of linguistic data.
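A minimal sketch of the ontology-lexicon idea: an ontology class is linked to lexical entries that carry language-tagged written forms, which is the pattern the OntoLex-Lemon model standardizes in RDF. The class, entries and helper below are all invented for illustration:

```python
# Ontology-lexicon sketch: an ontology class paired with lexical entries in
# several languages. OntoLex-Lemon models this in RDF with lexical entries
# linked to ontology concepts; this plain-Python version is illustrative only.
lexicon = {
    "http://example.org/onto/City": [
        {"writtenForm": "city",  "lang": "en", "partOfSpeech": "noun"},
        {"writtenForm": "Stadt", "lang": "de", "partOfSpeech": "noun"},
    ],
}

def lexicalizations(cls, lang):
    """Return the written forms that lexicalize an ontology class in a language."""
    return [e["writtenForm"] for e in lexicon.get(cls, []) if e["lang"] == lang]

print(lexicalizations("http://example.org/onto/City", "de"))  # ['Stadt']
```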

Head of Institute

Univ.-Prof. Dr.phil. M.A. Georg Vogeler

Elisabethstraße 59/III, 8010 Graz



Institute

Elisabethstraße 59/III, 8010 Graz

Phone: +43 (0)316 380 - 5790

