Demo – Paper 527

RelVis: Benchmarking OpenIE Systems

Rudolf Schneider, Tom Oberhauser, Tobias Klatt, Felix A. Gers and Alexander Löser



Download paper (preprint)

Abstract

We demonstrate RelVis, a toolkit for benchmarking Open Information Extraction (OIE) systems. RelVis enables the user to perform a comparative analysis of OIE systems such as ClausIE, OpenIE 4.2, Stanford OpenIE, and PredPatt. It features an intuitive dashboard for exploring the annotations created by these systems and for evaluating the impact of five common error classes. Our comprehensive benchmark comprises four data sets with a total of 4,522 labeled sentences and 11,243 binary or n-ary OIE relations.
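
As a rough illustration of the kind of triple-level scoring such a benchmark involves, the sketch below compares system extractions against gold relations and reports micro-averaged precision and recall. The data layout, the exact-match criterion on (subject, relation, object), and the function name are illustrative assumptions, not RelVis's actual matching or error-class analysis.

# Minimal sketch of triple-level OIE scoring. Assumes gold and system
# extractions are given as (subject, relation, object) tuples per sentence;
# exact string matching is a simplification for illustration only.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def score(gold: List[List[Triple]], predicted: List[List[Triple]]):
    """Micro-averaged precision and recall over all sentences."""
    tp = fp = fn = 0
    for gold_triples, pred_triples in zip(gold, predicted):
        gold_set, pred_set = set(gold_triples), set(pred_triples)
        tp += len(gold_set & pred_set)   # correct extractions
        fp += len(pred_set - gold_set)   # spurious extractions
        fn += len(gold_set - pred_set)   # missed gold relations
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

if __name__ == "__main__":
    gold = [[("RelVis", "benchmarks", "OIE systems")]]
    pred = [[("RelVis", "benchmarks", "OIE systems"),
             ("RelVis", "features", "a dashboard")]]
    p, r = score(gold, pred)
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=1.00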
