Task Description

1 Description


One of the essential functions of natural language is to talk about spatial relationships between objects. Linguistic constructs can express highly complex, relational structures of objects, spatial relations between them, and patterns of motion through space relative to some reference point.
Learning how to map this information from text onto a formal representation is a challenging problem. At present, no well-defined framework for automatic spatial information extraction exists that can handle all of these issues. In our recent paper [1], we introduced the task of spatial role labeling and proposed an annotation scheme that is language-independent and facilitates the application of machine learning techniques. Our framework consists of a set of spatial roles based on the theory of holistic spatial semantics, with the intent of covering all aspects of spatial concepts, including both static and dynamic spatial relations. As a basic example, consider the following sentence:


Give me [the gray book](trajector) [on](spatial indicator) [the big table](landmark).


The phrase headed by the token book refers to a trajector object, the phrase headed by the token table plays the role of a landmark, and the two are related by the spatial expression on, denoted as the spatial indicator. The spatial indicator (often a preposition) establishes the type of the spatial relation.


Analogously to semantic role labeling, we define spatial role labeling as the task of automatically labeling words or phrases in a sentence with a set of spatial roles. More specifically, it involves identifying and classifying the spatial arguments of the spatial expressions mentioned in a sentence.
Since a number of spatial relations can be inferred from a sentence, a spatial pivot, called the spatial indicator, is assumed for each relation. The spatial arguments are related to this pivot. The basic arguments/roles are the trajector and the landmark. While the main spatial elements are identified and classified at the sentence level, the next step is to map the spatial relations to a kind of formal spatial semantics, which can be performed at different levels of granularity.


The shared task is defined in three parts.
– The first part considers labeling the spatial indicators and their trajector(s)/landmark(s). In other words, at this step the extraction of the individual roles is evaluated.
– The second part is a kind of link prediction task, and the goal is to extract triples of the form (spatial-indicator, trajector, landmark). The evaluation is based on the correctly predicted triples.
– The third part concerns the formulation of the semantics of spatial information. At the most coarse-grained level this includes labeling the spatial relations with region, direction, and distance tags to indicate the type of spatial relation expressed in the sentence [2]. In this task we consider the classification of the general type only; hence each relation (extracted triple) will be classified according to its general spatial semantics.
See a more detailed example below.


A woman(tr) and a child(tr) are walking over(sp-ind) the square(lm).


Additional tags:

Motion: walking, General type: direction, Specific type: relative, Spatial value: above (dynamic), Path: middle, FoR: relative
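
For illustration only, the roles and the additional semantic tags attached to such a relation can be gathered into a simple record. The following is a minimal sketch; the field names are hypothetical, merely mirror the tags above, and do not prescribe any official data format.

from dataclasses import dataclass
from typing import Optional

# Hypothetical container mirroring the annotated example above; the field
# names are illustrative only and do not prescribe the task's data format.
@dataclass
class SpatialRelation:
    trajector: str                        # e.g. "a woman", "a child"
    spatial_indicator: str                # e.g. "over"
    landmark: str                         # e.g. "the square"
    general_type: str                     # one of {"region", "direction", "distance"}
    specific_type: Optional[str] = None   # e.g. "relative"
    spatial_value: Optional[str] = None   # e.g. "above"
    motion: Optional[str] = None          # e.g. "walking" (dynamic relation)
    path: Optional[str] = None            # e.g. "middle"
    frame_of_reference: Optional[str] = None  # e.g. "relative"

# The example sentence yields one relation per trajector:
relations = [
    SpatialRelation("a woman", "over", "the square", "direction",
                    specific_type="relative", spatial_value="above",
                    motion="walking", path="middle", frame_of_reference="relative"),
    SpatialRelation("a child", "over", "the square", "direction",
                    specific_type="relative", spatial_value="above",
                    motion="walking", path="middle", frame_of_reference="relative"),
]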

Spatial role labeling is a key task for applications that are required to answer questions about, or have to reason about, spatial relationships. Examples include systems that perform text-to-scene conversion, generation of textual descriptions from visual data, robot navigation and giving directional instructions, geographical information systems (GIS), and many others. The recent research trend in this area and the establishment of workshops such as “Computational Models of Spatial Language Interpretation” (CoSLI 2010) indicate an increasing interest in this kind of task.

2 Training/Test data


The main annotated corpus is a subset of the IAPR TC-12 Benchmark. It contains images taken by tourists, with descriptions in different languages. The texts describe objects in the image and their absolute and relative positions. This makes the corpus a rich resource for spatial information. However, the descriptions are not limited to spatial information: they are less domain-specific and contain free explanations about the images. We have annotated the textual descriptions with the basic spatial roles of trajector, landmark and their corresponding spatial indicators for the first task, in addition to the other two parts related to dynamic and formal spatial semantics. A smaller amount of additional data, drawn from various resources in different domains, is also annotated. Roles are assigned both to phrases and to their headwords. For the first task, 325 sentences were annotated by two annotators. Since the inter-annotator agreement was very good (kappa = 0.89), we continued with one annotator. The annotators received a one-hour explanation and a set of written instructions.


3 Evaluation methodology


The systems will be evaluated based on precision, recall and F1-measure over classifying each individual spatial element, such as trajector, landmark, etc. In addition, systems will be evaluated on how well they are able to retrieve tuples of (trajector, spatial-indicator, landmark), again in terms of precision, recall and F1-measure. A stricter evaluation will also be carried out in which systems are evaluated on how well they retrieve the spatial relations together with their general semantics, i.e. {region, direction, distance}.
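
As an informal sketch of the tuple-level scores (the exact matching criteria of the official scorer are not fixed here; exact matching of the tuples is an assumption made only for this illustration), precision, recall and F1 over predicted tuples could be computed as follows.

def precision_recall_f1(predicted, gold):
    """Precision/recall/F1 over sets of spatial triples.

    Each triple is assumed to be a hashable tuple such as
    (trajector, spatial_indicator, landmark); exact matching is an
    assumption for illustration, not the official matching criterion.
    """
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy usage on the sentence from Section 5 (two gold relations, one predicted):
gold = [("umbrellas", "in", "park"), ("umbrellas", "on the right", "park")]
pred = [("umbrellas", "in", "park")]
print(precision_recall_f1(pred, gold))  # precision 1.0, recall 0.5, F1 ~ 0.67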


4 Availability to participants


The original corpus is available free of charge and without copyright restrictions. Currently the main corpus has been annotated for about 1400 sentences, producing about 2000 relations. Moreover, about 600 sentences from other corpora are annotated. The additional corpora have been selected from domains where the extraction of spatial relations is an important issue. Currently we release a trial version of the data containing 600 annotated sentences selected from the CLEF-Image benchmark.


5 Current trial data format

The data is in one XML file. Each sentence has an identifier. In each sentence, all role-playing words are assigned identifiers according to their specific role of trajector/landmark/spatial indicator. The identifier of each role has the form (t|l|s).w.(index of the related word in the sentence).
The sentences are tokenized using whitespace as the separator, and the token index is the numerical part of the identifier, starting from 0. Spatial relations, or links, are assigned identifiers too, and relate the roles to each other. Below is one example copied from the data. For more examples and details about the general annotation scheme see [1].


<SENTENCE id='s11'>
<CONTENT>there are red umbrellas in a park on the right .</CONTENT>
<TRAJECTOR id='tw3'>umbrellas</TRAJECTOR>
<LANDMARK id='lw6'>park</LANDMARK>
<SPATIAL_INDICATOR id='sw4'>in</SPATIAL_INDICATOR>
<RELATION id='r0' sp='sw4' tr='tw3' lm='lw6' general_type='region'/>
<SPATIAL_INDICATOR id='sw7'>on the right</SPATIAL_INDICATOR>
<RELATION id='r1' sp='sw7' tr='tw3' lm='lw6' general_type='direction'/>
</SENTENCE>
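
To make the identifier convention concrete, the following sketch reads sentences in the format shown above with Python's standard library and resolves each role identifier back to its token index. The file name trial.xml is only a placeholder, and the tag/attribute spellings follow the example above; this is not an official reader.

import xml.etree.ElementTree as ET

# Minimal sketch of a reader for the trial format illustrated above
# (assuming the file has a single root element wrapping the SENTENCE elements).
tree = ET.parse("trial.xml")                          # placeholder file name
for sentence in tree.iter("SENTENCE"):
    tokens = sentence.find("CONTENT").text.split()    # blank-separated tokens
    roles = {}                                        # id -> (token index, surface form)
    for tag in ("TRAJECTOR", "LANDMARK", "SPATIAL_INDICATOR"):
        for elem in sentence.iter(tag):
            rid = elem.get("id")                      # e.g. 'tw3'
            index = int(rid.split("w")[-1])           # numerical part of the identifier
            roles[rid] = (index, elem.text.strip())   # e.g. (3, 'umbrellas')
    for rel in sentence.iter("RELATION"):
        triple = (roles[rel.get("tr")], roles[rel.get("sp")], roles[rel.get("lm")])
        print(rel.get("id"), rel.get("general_type"), triple)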


References
1. P. Kordjamshidi, M. van Otterlo, and M. F. Moens. Spatial role labeling: Task definition and annotation scheme. In LREC, 2010.
2. P. Kordjamshidi, M. van Otterlo, and M. F. Moens. From language towards formal spatial calculi. In Workshop on Computational Models of Spatial Language Interpretation (CoSLI 2010, at Spatial Cognition 2010), 2010.
3. P. Kordjamshidi, M. van Otterlo, and M. F. Moens. Spatial role labeling: Towards extraction of spatial relations from natural language. ACM Transactions on Speech and Language Processing (TSLP), 8(3), Dec. 2011.