Overview on Technology, Methodology and Application Scenarios

There were three main research activities:

  • Firstly, the definition of Knowledge Content Objects (KCOs) and their attendant technical infrastructure called the "Knowledge Content Carrier Architecture" (KCCA). The KCCA is a distributed systems middleware which allows the inclusion of heterogeneous content systems into a METOKIS federation of content management systems.
  • Secondly, the project investigated how the DOLCE foundational ontology could be specialised to cater for KCOs on the one hand, and how it could be enhanced to also include tasks and plans which were deemed particularly important for the modelling of content based workflows.
  • Thirdly, a methodology was developed alongside the technology to teach organisations how to develop METOKIS-like information systems in the future. The research activities were complemented by three application cases, each conducted by a partnership of one technology partner and one partner with a commercial interest in using the application. The cases were deliberately very different in character, because the project aimed to demonstrate that the KCO and the KCCA could offer a "canonical model" for a whole class of knowledge-based content management systems.

The project results were validated through each of the application cases. The validation indicates that the project vision is still far ahead of the current state of the art in industry, and that adopters of the technology need to pick and choose what is suitable in their respective system environments. In other words, building systems based on METOKIS technology will remain a challenge as long as Semantic Web technologies remain unsupported by mainstream methodologies and tool suites. However, METOKIS provides significant new knowledge for anybody undertaking such a venture. The tangible outcomes of the project are the open specification of Knowledge Content Objects, based on the DOLCE foundational ontology and represented in the Web Ontology Language OWL, as well as an implementation of the KCCA which manages KCOs as RDF models in the Jena RDF toolkit.

One of the highly successful "side-effects" of the project was the improved design of a "semantic WIKI" which was used for the modelling and presentation of the clinical trials application. For the project partners, METOKIS provided an important stepping stone towards truly semantic knowledge content systems. We hope that the public results of METOKIS can have similarly beneficial impact on Europe's technological progress.

Technology - a new knowledge content object model

The main idea of Knowledge Content Objects (KCOs) is to create machine-readable semantic enrichments that can be associated with given content. Semantically enriched information objects (that is, KCOs) can be transferred between different systems, allowing knowledge to flow between them. In this way, a METOKIS system provides a higher level of utility to its users, because each KCO carries all the information required for transferring, using and further developing it.

KCOs can be used for communication because they make lossless cross-media information transfer possible. The analogy is writing a special sort of letter to someone else: the inner structure of this letter helps computers separate what is meant for them from what is meant for humans to interpret. This special sort of "letter" can be exchanged between humans like normal information, but also between humans and machines, or even solely between machines.

To support such universal use, the following generic layers of a KCO have been developed:

  • Content Description (CD): how to access, classify and understand the content.
    Multimedia characterization: how to access the content, including information about the content format, encoding, storage location, etc.
    Content classification: keywords and concepts assigned to the content object based on a classification schema, such as Dublin Core or LOM. The IPTC thematic thesaurus and the Iconclass classification system are well-known controlled vocabularies in their domains.
    Propositional description: what the content means. This semantic description concerns the subject of the content object, not the content object itself.
  • Presentation Description (PR): how to present the content (and the knowledge) of the KCO to users, covering rendering, rendition and interaction models.
    Spatio-temporal rendition.
    Interaction-based rendition.
  • Community Description (CO): how the content was used and changed.
    User task and user community.
    Usage history: the list of actions performed with the KCO during its lifecycle.
  • Business Description (BS): how to trade the content. Business processes define special plans (negotiation protocols) and roles (auctioneer, seller, buyer, ...) related to some business activity. This facet can be viewed as a specialisation of the community facet.
    Negotiation protocol.
    Pricing scheme.
    Contract information.
  • Trust & Security (no elements): how to protect the content (the vendor's interest) and how to induce trust (the consumer's interest).
  • Self-description (no elements): how to understand the KCO, i.e. a specification of the inner structure of the KCO (active facets, ontologies used, ...) in machine-interpretable form.
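The layered facet structure above can be sketched as a data structure. The following is a minimal sketch; the class and field names are illustrative assumptions, not the official KCO specification.

```python
from dataclasses import dataclass, field

@dataclass
class ContentDescription:
    media_format: str = ""                              # multimedia characterization
    storage_location: str = ""
    keywords: list = field(default_factory=list)        # content classification
    propositional: dict = field(default_factory=dict)   # what the content means

@dataclass
class BusinessDescription:
    negotiation_protocol: str = ""
    pricing_scheme: str = ""
    contract_information: str = ""

@dataclass
class KnowledgeContentObject:
    content: bytes = b""
    content_description: ContentDescription = field(default_factory=ContentDescription)
    presentation: dict = field(default_factory=dict)    # spatio-temporal rendition
    community: list = field(default_factory=list)       # usage history
    business: BusinessDescription = field(default_factory=BusinessDescription)
    self_description: dict = field(default_factory=dict)  # active facets, ontologies used

# The KCO carries its content together with all its facets.
kco = KnowledgeContentObject(content=b"<lesson/>")
kco.content_description.keywords.append("education")
kco.community.append({"action": "created", "actor": "editor"})
print(kco.content_description.keywords)   # ['education']
```

The key design point is that all facets travel with the content itself, so a receiving system needs no out-of-band agreement to interpret the object.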

Architecture for distributed knowledge and content management

The METOKIS Architecture defines a middleware platform for building semantic information systems, providing components and services enabling interoperability amongst varied content management platforms. It is designed to deliver functionality that addresses the following objectives:

  • Content Level Interoperability
  • Knowledge Level Interoperability
  • Task Level Interoperability
  • Workflow and Collaborative Work Interoperation

 

 

The METOKIS platform consists of two key parts: the KCCA (Knowledge Content Carrier Architecture) Platform and the KCTP (Knowledge Content Transfer Protocol). The KCCA Platform acts as middleware supporting the building of content management applications.

Figure: KCCA Infrastructure

 

 

The following are the core components of the KCCA middleware:

  • KCCA Repository: provides interfaces to databases for the storage of content, metadata, ontologies and KCOs (Knowledge Content Objects). Metadata within the KCCA middleware is stored in an RDF database, with the possibility of integration with relational databases.
  • KCCA Middleware Components: provide the specific components and modules from which the actual middleware is built, including Authentication, Workflow Engine, Session Management, Inference Engine, Rule Layer and System Registry.
  • KCCA Services Container (Request Broker): provides support for system- and domain-level services such as digital rights management and registry services. It also includes services for accessing, querying and manipulating KCOs.
  • KCTP (Knowledge Content Transfer Protocol): a light-weight request/response protocol implemented by the KCCA Middleware that allows applications to perform operations on KCOs. It also provides access between the KCCA Middleware and external KCCA systems.
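A KCTP-style request/response exchange can be sketched as follows. The operation names ("store_kco", "get_kco"), the message shape and the in-memory repository are assumptions for illustration, not the actual protocol specification.

```python
import json

REPOSITORY = {}   # stand-in for the KCCA Repository

def kctp_handle(request_text):
    """Dispatch a light-weight request to an operation on KCOs."""
    request = json.loads(request_text)
    op, kco_id = request["operation"], request["kco_id"]
    if op == "store_kco":
        REPOSITORY[kco_id] = request["payload"]
        return json.dumps({"status": "ok"})
    if op == "get_kco":
        payload = REPOSITORY.get(kco_id)
        status = "ok" if payload is not None else "not_found"
        return json.dumps({"status": status, "payload": payload})
    return json.dumps({"status": "unknown_operation"})

# A client stores and then retrieves a KCO by identifier.
kctp_handle(json.dumps({"operation": "store_kco", "kco_id": "kco-1",
                        "payload": {"keywords": ["news"]}}))
reply = json.loads(kctp_handle(json.dumps({"operation": "get_kco",
                                           "kco_id": "kco-1"})))
print(reply["payload"]["keywords"])   # ['news']
```

Because requests and responses are plain serialised messages, the same exchange works between an application and its local KCCA middleware or between two remote KCCA systems.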

Economic Framework and Knowledge Service Methodology

Business models and the economic basis for knowledge sharing: in a nutshell, the most convincing argument for the ontology-based approach suggested by METOKIS lies in the innovation drivers rather than the traditional improvement drivers.

Knowledge Services

Knowledge worker productivity is a critical issue. Conservative estimates indicate that just under a tenth of knowledge workers' time is wasted due to failure to find documents, having to recreate documents in different formats, and similar tasks. The cost of this lost time is thousands of Euros per knowledge worker per annum. Improving productivity means optimising the way content is moderated, whether through "IT agents" (such as search engines, web services or publishing tools), "human agents" (such as work colleagues, experts or journalists), or a blend of both.

 

Figure: Business drivers for ontology enhanced digital content

 

The Knowledge Service Methodology provides a framework for specific "solution blueprints" for optimising this moderating layer. Given the range of possible IT-human agent combinations, a prescriptive "one size fits all" method is clearly not feasible. Instead, a methodology is needed that outlines the stages any economically feasible solution must pass through. From each stage of this methodology hang libraries of methods, some generic and some tailored to specific organisational needs.

To optimise the moderating layer, then, context needs to be embedded in such a way that the KCCA, the Semantic Web and IT agents can make use of it. Context comes from consensus, and the only economically viable way of embedding it is by modelling the emergent consensus. The implication for metadata is that the most economically viable and tractable metadata is automatically generated rather than human-authored.
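The point about automatically generated metadata can be illustrated with a minimal sketch: candidate keywords derived from term frequency. The stop-word list and threshold are illustrative choices, standing in for richer automatic metadata generation.

```python
from collections import Counter

# Illustrative stop-word list; a real system would use a fuller one.
STOP_WORDS = {"the", "a", "of", "and", "to", "is", "in", "for"}

def auto_keywords(text, top_n=3):
    """Return the most frequent non-stop-words as candidate keywords."""
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

doc = ("The trial protocol defines the trial schedule and the "
       "trial endpoints for the protocol review.")
print(auto_keywords(doc))   # first keywords: 'trial', 'protocol'
```

Even this crude automation costs nothing per document once written, which is the economic contrast with hand-authored metadata made above.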

The Knowledge Service Methodology

The Knowledge Service Methodology provides a process to move towards this goal. There are six steps to the methodology, as follows:

  • Step 1: Map system - "Landscape"
    Identify project consensus points. Identify actors. Identify communication channels ("formal" and "emergent"). Perform content & tools audit.
  • Step 2: Seed system - "Sow"
    Given the map of the knowledge system, identify those individuals who have the greatest effect on the group's consensus.
  • Step 3: Encourage emergence - "Grow"
    Encourage emergence through conversation-based social software such as blogs and instant messaging, and through document-based social software such as wikis and discussion boards. Support human intervention in this process, such as face-to-face networking.
  • Step 4: Remove obstacles to emergence - "Prune"
    Isolate and remove unwanted behaviours of both human and IT agents in the system.
  • Step 5: Develop taxonomies - "Harvest"
    Use emerging taxonomies for content, user and task to construct models on which IT agents can act.
  • Step 6: Bridge consensus & action - "Plough"
    Feed the resulting models and constraints back into the KCCA. Return to Step 2.

The Knowledge Service Methodology is supported by a preceding analysis of business models and economic considerations of knowledge sharing in various business settings.

Validation of the KCO / KCCA concepts in three independent use cases

One of the basic assumptions of METOKIS was, and still is, that in the knowledge economy many activities will be supported by computing applications, and that these activities will therefore take place in a virtual space which is essentially shaped by our knowledge about the activities in question and about the "things" we manipulate through them. Three very different application partners were willing to test this assumption by defining their application cases using the methods and tools provided by METOKIS:

  • A production workflow for educational content (Partners empolis and Klett)
  • Supporting news dissemination to special interest groups (KVIEW and Templeton)
  • A tool for defining, managing and visualising new clinical trials (Salzburg Research, Ymega)

Each of the applications was required to validate the following elements:

  • Modelling - how well were task taxonomies and KCOs suited to the domain models of each application case?
  • Methodology - how well were the business and service modelling methods suited to each of the applications?
  • Interoperation - how well was the KCCA suited to exchange of information between heterogeneous information systems?

We adapted the Goal-Question-Metric (GQM) method of Rombach and Basili to the needs of our validation by devising specific GQM questions for each of the application cases.
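The adapted GQM structure can be sketched as a simple data structure. The example goal, question and metrics below are invented for illustration, not taken from the actual METOKIS validation plan.

```python
# A GQM plan refines one goal into questions, and each question
# into measurable metrics (illustrative example).
gqm_plan = {
    "goal": "Assess how well KCOs suit the domain model of the application",
    "questions": [
        {
            "question": "Can all domain entities be expressed as KCO facets?",
            "metrics": ["fraction of entities mapped to facets",
                        "number of modelling workarounds needed"],
        },
    ],
}

def metrics_for(plan):
    """Flatten a GQM plan into the list of metrics to collect."""
    return [m for q in plan["questions"] for m in q["metrics"]]

print(len(metrics_for(gqm_plan)))   # 2
```

Flattening the plan this way makes explicit which measurements each application case had to collect for the validation.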

Educational Workflow - dynamic publishing of digital learning materials

The objective of the KLETT use case was to develop a continuous workflow for the production of digital learning modules (CBTs) and to support the process of defining a new CBT (Computer Based Training) module, in order to improve the reuse of existing content for new CBTs. The system was implemented using the Orenge tool suite (Publication Build) from partner empolis.

Figure: Using and Re-using objects for the production of digital learning modules.

 

See the Demonstration.

Semantically enabled News Services

The Rapid Browser system is designed to acquire and present news feeds from disparate sources and to enable users to find, share and act upon news items. With the semantic extensions afforded by KCOs and the KCCA, the end user is able to filter news according to source and subject, defined against a domain taxonomy. Based upon their role, Rapid Browser users have access rights, determined by contract, to different feeds. Using a set of pre-defined actions (corresponding to a task ontology), they may create, share and manipulate news items. Such actions include support for editing and publishing workflows. For example, a group moderator may create topics or agenda items, publish topics to a blog for feedback, and use the system to automatically publish a newsletter, i.e. the export of a news filter based upon the agreed agenda topics. The KCO Business Description may be used to trigger rights warnings to a user about possible misuse of content, and the KCCA is used to enable research across multiple Rapid Browser KCO repositories.
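Filtering news by source and subject against a domain taxonomy can be sketched as follows. The taxonomy, item fields and function are hypothetical illustrations, not the Rapid Browser API.

```python
# Illustrative domain taxonomy: topics map to the subjects they cover.
TAXONOMY = {
    "economy": {"economy", "markets", "banking"},
    "energy": {"energy", "oil", "gas"},
}

news_items = [
    {"source": "feed-a", "subject": "markets", "title": "Index rises"},
    {"source": "feed-b", "subject": "oil", "title": "Prices fall"},
    {"source": "feed-a", "subject": "oil", "title": "New pipeline"},
]

def filter_news(items, source=None, topic=None):
    """Keep items from the given source whose subject falls under topic."""
    subjects = TAXONOMY.get(topic, set()) if topic else None
    return [i for i in items
            if (source is None or i["source"] == source)
            and (subjects is None or i["subject"] in subjects)]

print([i["title"] for i in filter_news(news_items, source="feed-a", topic="energy")])
# ['New pipeline']
```

Because subjects are resolved through the taxonomy rather than matched literally, a filter on "energy" also catches items tagged "oil" or "gas".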

Figure: Scenario for creating agendas for executives meeting

 

See the Demonstration.

Clinical Trial Protocol Management

The objective of the Clinical Trials use case was to support the move from purely document-driven design of such studies to a knowledge based design. We built a system for the semantically enhanced specification of clinical trial protocols. Our exploitation plans foresee that a fully-fledged system will be developed iteratively and incrementally from the current prototype.


Figure: Visualization of a complex task

 

See the Demonstration.

Exploitation plans and further work

The partners of the METOKIS project intend to exploit the knowledge within their business domains. Empolis plans a direct uptake of the technologies in new releases of its Publication Build V4.0 system. The innovation lies, on the one hand, in the ontology-based consolidation of user tasks and the assortment of content objects, and on the other hand in the fact that these Knowledge Content Objects are aggregated not only from local sources but potentially also from external media repositories which are accessible via the web and managed by the METOKIS Knowledge Content Carrier Architecture.

KnowledgeView and Templeton College will create a Knowledge & News Service Platform for creating and operating information services. It provides a means for editorially active knowledge workers to share and act upon news and other information, and to increase the productivity of their interactions with one another. KnowledgeView intends to develop this as a directly operated, hosted service facility using extensions to its RAPID Browser product family.

YMEGA Establishment will build its Congruens System 43 for clinical trials. The clinical trial system is the platform which will be enhanced to handle KCOs. The system will consist of two main features: a protocol-controlled clinical trial management system (a protocol section and generator prototype that generates the forms, reports and data interfaces necessary for trial management from protocol data), and a comprehensive, integrated metadata management system (a metadata section and generator prototype).

Klett will use the knowledge gained in the project to build concrete domain and task ontologies in order to support its business processes of highly modular delivery of knowledge products.

SRFG and ISTC-CNR will use the theoretical results (DPO - DOLCE + D&S Plan Ontology) and the practical outcomes (KCCA middleware, KCO implementations) of the project for further research activities. SRFG is also able to transfer significant aspects of this knowledge into two national competence centres - Salzburg NewMediaLab and eTourism Center Salzburg.

All partners, under the lead of the MCM Institute St. Gallen, are also developing a methodological handbook that introduces the Knowledge Service Methodology and integrates it with business modelling and an economic model describing incentives for knowledge sharing. The handbook is aimed at practitioners who wish to implement domain and task ontologies within their ICT environment. It will include a section on the design view (how to structure the organisation to implement the KCCA) and a section on the moderator view (how to support the change management process within an organisation). The handbook also describes the three application cases, including their underlying business models. In Austria, the project has already led to a two-year follow-on research project combining GRID computing, semantic web services and intelligent objects based on the KCO model.

To summarise, the METOKIS project has defined Knowledge Content Objects and developed an infrastructure for their manipulation, and it has shown by way of three use cases and attendant examples, how such objects can be exchanged by different information systems serving different purposes.

As a "trail blazer" project, we have concluded that the majority of results should be made available to interested parties, particularly in conjunction with concrete exploitation projects (commercial and academic). The partners are too small, and the knowledge created too diverse, to seriously envisage software patents; engaging in such activities would, in our opinion, be counter-productive. We have therefore chosen to make results available to the public where this helps disseminate the intelligent objects vision, and to keep control over such dissemination by making the knowledge accessible only on a case-by-case basis, depending on each partner's assessment of the costs and benefits of doing so.