Entity-Relation Extraction as Multi-turn Question Answering
- Problems identified in the field
- task formalization level
- Triples themselves have limited power for expressing knowledge; e.g., in the Musk case, the hierarchical dependency among time, location, position and person needs to be expressed in a higher-dimensional space
- algorithm level
- Input: a raw sentence with two marked mentions
Output: whether a relation holds between the two mentions
- hard for neural models to capture all the lexical, semantic and syntactic cues in this formalization
- (1) entities are far away
(2) one entity is involved in **multiple triplets**;
or (3) relation spans have overlaps
- related work
- Extracting Entities and Relations
- pipelined approach
- Advantage: **flexibility** of integrating different data sources and learning algorithms
- Disadvantage: suffers significantly from error propagation
- joint approach
- through various dependencies
- constraints solved by integer linear programming
- card-pyramid parsing
- global probabilistic graphical models
- structured perceptron with efficient beam search
- table-filling approach
- search orders in decoding and global features
- shared parameters: end-to-end approaches that extract entities and their relations using neural network models
- neural tagging model; multi-class classification model based on tree LSTMs
- multi-level attention CNNs
- seq2seq models to generate entity-relation triples
- reinforcement learning or Minimum Risk Training
- a global loss function to jointly train the two models under the framework of Minimum Risk Training
- hierarchical reinforcement learning
- Machine Reading Comprehension, predicting answer spans given context
- mainly about extracting text spans in passages given queries
- One line of work simplifies this into two multi-class classification tasks (predicting the start and end positions of the answer)
- Another line, for multi-passage MRC: either directly concatenate the passages, or first rank the passages and then run single-passage MRC on the selected passage
- Also useful: pretraining methods like BERT or ELMo
- Trend: a tendency of casting non-QA NLP tasks as QA tasks
- Examples: BiDAF, QANet
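To make the "two multi-class classification tasks" framing concrete, here is a minimal sketch of start/end span prediction over per-token logits. All numbers and the example sentence are made up for illustration; a real MRC model (e.g. BERT-based) would produce the logits.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def predict_span(start_logits, end_logits):
    """Pick the highest-probability (start, end) pair with start <= end."""
    p_start = softmax(np.asarray(start_logits, dtype=float))
    p_end = softmax(np.asarray(end_logits, dtype=float))
    best, best_p = (0, 0), -1.0
    for i in range(len(p_start)):
        for j in range(i, len(p_end)):
            p = p_start[i] * p_end[j]
            if p > best_p:
                best, best_p = (i, j), p
    return best

# Fabricated logits favouring the single-token span "SpaceX" (index 2).
tokens = ["Musk", "founded", "SpaceX", "in", "2002"]
print(predict_span([0.1, 0.2, 3.0, 0.1, 0.5],
                   [0.1, 0.3, 2.5, 0.2, 0.4]))  # -> (2, 2)
```

The exhaustive start/end search is quadratic in sentence length but trivially cheap at typical passage sizes.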
- This paper's work
- Sources of inspiration
- identifying the relation between two predefined entities: the task of relation extraction is formalized as a single-turn QA task
Levy et al. (2017) and McCann et al. (2018)
- Idea
- model hierarchical tag dependency in multi-turn QA, identifying answer spans from the context
each entity type and relation type is characterized by a question answering template, and entities and relations are extracted by answering template questions
- the question query encodes important prior information
- jointly modeling entity and relation
- exploit the well developed machine reading comprehension (MRC) models
- multi-step reasoning to construct entity dependencies
- advantages
- capture the **hierarchical dependency of tags**, progressively obtaining the entities we need for the next turn, closely akin to **the multi-turn slot-filling dialogue system**
- the question query encodes important **prior information** for the relation class we want to identify
- the QA framework provides a natural way to simultaneously extract entities and relations: most MRC models support outputting special NONE tokens, indicating that there is no answer to the question
- dataset
- ACE04, ACE05 and the CoNLL04 corpora
- a newly developed Chinese dataset, RESUME
The task is to extract biographical information of individuals from raw texts; constructing a structured knowledge base from RESUME requires four or five turns of QA
* Key characteristic: one person can work for **different** companies during **different** periods of time, and one person can hold **different** positions in **different** periods of time for the **same** company
- model
- Decomposed into two subtasks: a multi-answer task for head-entity extraction + a single-answer task for joint relation and tail-entity extraction
- Stage 1: head-entity extraction. To extract this starting entity, we **transform each entity type to a question** using EntityQuesTemplates
- The entities extracted at this stage are not necessarily head entities
- Stage 2: relation and tail-entity extraction. A chain of relations is defined for the multi-turn QA, since some extractions depend on other extractions
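The two-stage loop can be sketched as follows. The `answer()` function is a toy keyword-lookup stand-in for the real MRC model, and the questions, sentence and relation name are all hypothetical, just to show how the head entity found in turn 1 is slotted into the turn-2 question.

```python
SENT = "Musk founded SpaceX in 2002."

def answer(question, context):
    # Toy stand-in for an MRC model (the paper uses a BERT-based one):
    # a fixed lookup instead of learned span extraction.
    lookup = {
        ("Who is mentioned in the text?", SENT): ["Musk"],
        ("Which company did Musk work for?", SENT): ["SpaceX"],
    }
    return lookup.get((question, context), [])

def extract_triples(context):
    triples = []
    # Turn 1: head-entity extraction (a multi-answer task).
    for head in answer("Who is mentioned in the text?", context):
        # Turn 2: joint relation + tail-entity extraction, with the head
        # entity inserted into a relation-specific question template.
        question = f"Which company did {head} work for?"
        for tail in answer(question, context):
            triples.append((head, "works_for", tail))
    return triples

print(extract_triples(SENT))  # -> [('Musk', 'works_for', 'SpaceX')]
```

A NONE answer from the MRC model (here, the empty list) simply produces no triple, which is how entity and relation extraction are unified in one framework.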
- Generating Questions using Templates
- type-specific
- natural language questions or pseudo-questions
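A minimal sketch of what type-specific templates might look like; the concrete wording below is illustrative, not the paper's exact templates.

```python
# Entity-type templates used in turn 1 (hypothetical wording).
EntityQuesTemplates = {
    "person": "Who is mentioned in the text?",
    "company": "Which companies are mentioned in the text?",
}

# Relation templates take the head entity found in an earlier turn.
RelationQuesTemplates = {
    "works_for": "Which company did {head} work for?",
}

q = RelationQuesTemplates["works_for"].format(head="Musk")
print(q)  # -> Which company did Musk work for?
```

A pseudo-question variant would simply be the type labels themselves (e.g. `"person"` or `"works_for; Musk"`) instead of fluent sentences.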
- Extracting Answer Spans via MRC
- Backbone: BERT. Traditional MRC models are adjusted for the multi-turn QA setting: predict a BMEO (beginning, inside, ending and outside) label for each token
- Training and Test
- $\mathcal{L} = (1 - \lambda)\,\mathcal{L}(\text{head-entity}) + \lambda\,\mathcal{L}(\text{tail-entity, rel})$
- The two subtasks share parameters during training; at test time, head-entities and tail-entities are extracted separately; λ controls the tradeoff between the two subtasks
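The BMEO tagging output can be decoded into (possibly multiple) answer spans; a minimal sketch, with hypothetical tokens and labels standing in for real model predictions:

```python
def decode_bmeo(tokens, labels):
    """Collect every span tagged B (M ...) E; O resets any open span."""
    spans, start = [], None
    for i, lab in enumerate(labels):
        if lab == "B":
            start = i                       # open a new span
        elif lab == "E" and start is not None:
            spans.append(" ".join(tokens[start:i + 1]))
            start = None                    # close the span
        elif lab == "O":
            start = None                    # abandon any unfinished span
        # "M" labels simply continue an open span
    return spans

tokens = ["Elon", "Musk", "founded", "Space", "X"]
labels = ["B", "E", "O", "B", "E"]          # fabricated tagger output
print(decode_bmeo(tokens, labels))  # -> ['Elon Musk', 'Space X']
```

Tagging instead of predicting a single start/end pair is what lets one question return multiple answers, which the multi-answer head-entity turn requires.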
- Reinforcement Learning
- The answer extracted in one turn affects downstream turns, and hence later accuracies
- Since reinforcement learning has worked well for multi-turn dialogue generation (Mrkšić et al., 2015; Li et al., 2016; Wen et al., 2016), it is applied here as well
- Action: selecting a text span in each turn
- Policy: probability of selecting a certain span given the question and the context
  $p(y(w_1, \ldots, w_n) = \text{answer} \mid \text{question}, s) = p(w_1 = \mathrm{B}) \times p(w_n = \mathrm{E}) \prod_{i \in [2, n-1]} p(w_i = \mathrm{M})$
- Reward: for a given sentence, the number of correctly retrieved triples; maximize the expected reward $E_\pi[R(w)]$, approximated by sampling from the policy $\pi$
- Gradient computed via the likelihood-ratio trick:
  $\nabla E(\theta) \approx [R(w) - b]\, \nabla \log \pi(y(w) \mid \text{question}, s)$
  where $b$ is the baseline value (the average of all previous rewards); each correct answer in a turn earns reward +1, and the final reward accumulates over all turns
- Policy network initialization: the pre-trained head-entity and tail-entity extraction model
- experience replay strategy:for each batch, half of the examples are simulated and the other half is randomly selected from previously generated examples.
- A curriculum learning strategy is used on the RESUME dataset: gradually increase the number of turns from 2 to 4 during training
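The policy and the likelihood-ratio update can be checked numerically. The per-label probabilities below are made up; they only illustrate that the span probability is a product of per-token BMEO probabilities and that the gradient is scaled by reward minus baseline.

```python
import numpy as np

def span_log_prob(p_b_first, p_e_last, p_m_middle):
    """log p(span) = log p(w1=B) + log p(wn=E) + sum_i log p(wi=M)."""
    return (np.log(p_b_first) + np.log(p_e_last)
            + np.sum(np.log(p_m_middle)))

def reinforce_grad_scale(reward, baseline):
    """Likelihood ratio: grad E(theta) ~ (R - b) * grad log pi."""
    return reward - baseline

# A 4-token span with fabricated label probabilities:
# B=0.9 (first token), M=0.7 and M=0.6 (middle), E=0.8 (last token).
prob = np.exp(span_log_prob(0.9, 0.8, [0.7, 0.6]))
print(round(float(prob), 4))            # -> 0.3024
# Reward 2 (two correct triples) against a running-average baseline 1.25.
print(reinforce_grad_scale(2.0, 1.25))  # -> 0.75
```

Working in log space avoids underflow when spans are long, which matters because the policy multiplies one probability per token.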
- Experimental Results (SOTA results)
- Metrics: micro-F1 scores, precision and recall
- Results on the newly constructed RESUME dataset
- First establish baselines: tagging+relation, entity+dependency
- Entities: BERT tagging models; relations: a CNN applied to the representations output by BERT transformers
- This task is akin to dependency parsing at the tag level rather than the word level
- Concretely: a BERT tagging model assigns tagging labels to each word, then the SOTA dependency parsing model Biaffine is adapted to construct dependencies between tags (jointly trained)
- Results on the widely used ACE04, ACE05 and CoNLL04 corpora
- Ablation Studies
- Effect of Question Generation Strategy: natural language questions outperform pseudo-questions, since they provide more fine-grained semantic information
- Effect of Joint Training: λ was tested at 10 values in increments of 0.1; entity extraction is not best at λ = 0, indicating that the relation extraction part improves entity extraction
- Case Study: compared with the SOTA MRT model, the proposed model can identify entities that are far apart, and can also handle sentences containing two pairs with the same relation
- Future directions
- could easily integrate reinforcement learning (just as in multi-turn dialog systems)