
Image Paragraph Paper Collection

A Hierarchical Approach for Generating Descriptive Image Paragraphs (CVPR 2017)
Jonathan Krause, Justin Johnson, Ranjay Krishna, Li Fei-Fei

Dataset: http://cs.stanford.edu/people/ranjaykrishna/im2p/index.html

 

Workflow:

1. Decompose the input image by detecting objects and other regions of interest.

2. Aggregate features across these regions to produce a pooled representation that richly expresses the image semantics.

3. Feed this feature vector into a hierarchical recurrent neural network composed of two levels: a sentence RNN and a word RNN.

4. The sentence RNN receives the image features, decides how many sentences to generate in the resulting paragraph, and produces an input topic vector for each sentence.

5. The word RNN uses each topic vector to generate the words of a single sentence (a sketch of the full pipeline follows this list).
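
A minimal sketch of how these steps fit together, written in plain Python; the detector, pooler, sentence_rnn, and word_rnn objects and their interfaces are hypothetical placeholders for the components described in the sections below, not the paper's actual code.

```python
# Hedged sketch of the overall pipeline; every component interface here is assumed.
def generate_paragraph(image, detector, pooler, sentence_rnn, word_rnn,
                       max_sentences=6, max_words=30):
    # 1. Detect objects / regions of interest and get one D-dim vector per region.
    region_vectors = detector(image)                 # shape (M, D)
    # 2. Pool region vectors into a single vector expressing the image semantics.
    v_pool = pooler(region_vectors)                  # shape (P,)
    paragraph = []
    state = sentence_rnn.init_state(v_pool)
    for _ in range(max_sentences):
        # 3-4. The sentence RNN decides whether to continue and emits a topic vector.
        p_stop, topic, state = sentence_rnn.step(v_pool, state)
        if p_stop > 0.5:                             # stop once STOP is more likely
            break
        # 5. The word RNN expands the topic vector into the words of one sentence.
        paragraph.append(word_rnn.generate(topic, max_words=max_words))
    return " ".join(paragraph)
```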

 

Region Detector:

CNN+RPN

Resize the image --> pass it through a CNN to obtain feature maps --> a region proposal network (RPN) processes the resulting feature maps --> regions of interest are projected onto the convolutional feature maps --> the corresponding region of the feature map is resized to a fixed size using bilinear interpolation and processed by two fully-connected layers to give a vector of dimension D for each region.

Given a dataset of images and ground-truth regions of interest, the region detector can be trained in an end-to-end fashion for object detection and for dense captioning.
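
A rough sketch of such a region detector, assuming PyTorch and torchvision; the backbone, the RPN interface, the RoI size, and the dimension D are illustrative assumptions rather than the exact dense-captioning detector, and torchvision's roi_align stands in for the bilinear-interpolation resize.

```python
import torch.nn as nn
import torchvision

class RegionDetector(nn.Module):
    # Hedged sketch: backbone and rpn are placeholder modules supplied by the caller.
    def __init__(self, backbone, rpn, feat_channels=512, roi_size=7, D=4096):
        super().__init__()
        self.backbone = backbone            # CNN that produces a feature map
        self.rpn = rpn                      # region proposal network (placeholder)
        self.roi_size = roi_size
        self.fc = nn.Sequential(            # two fully-connected layers -> D-dim vector
            nn.Linear(feat_channels * roi_size * roi_size, D), nn.ReLU(),
            nn.Linear(D, D), nn.ReLU(),
        )

    def forward(self, image):                # image: (1, 3, H, W)
        fmap = self.backbone(image)          # (1, C, H', W')
        boxes = self.rpn(fmap)               # list with one (M, 4) tensor of proposals
        # Project proposals onto the feature map and resize each region to a fixed
        # size with bilinear interpolation (RoIAlign).
        rois = torchvision.ops.roi_align(
            fmap, boxes, output_size=self.roi_size,
            spatial_scale=fmap.shape[-1] / image.shape[-1])
        return self.fc(rois.flatten(1))      # (M, D): one vector per region
```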

 

Region Pooling:

The pooled vector vp is computed as the elementwise maximum over the projected region vectors, vp = max_i (Wpool * vi + bpool), where Wpool and bpool are learned parameters and {v1, ..., vM} is the set of vectors produced by the region detector.
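
A minimal sketch of this pooling step, assuming PyTorch; the dimensions D and P are illustrative values, not necessarily the paper's.

```python
import torch.nn as nn

class RegionPooler(nn.Module):
    def __init__(self, D=4096, P=1024):
        super().__init__()
        self.proj = nn.Linear(D, P)          # Wpool and bpool as a single linear layer

    def forward(self, region_vectors):       # region_vectors: (M, D), one row per region
        projected = self.proj(region_vectors)        # (M, P)
        v_pool, _ = projected.max(dim=0)             # elementwise max over regions
        return v_pool                                # (P,) pooled image representation
```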

 

Hierarchical Recurrent Network:

Why Hierarchical?

1. It reduces the length of time over which the recurrent networks must reason.

2. Because each generated paragraph is decomposed into a number of sentences, both the sentence and word RNNs need only reason over much shorter time-scales, which makes learning an appropriate representation much more tractable.

Sentence RNN: takes the pooled region vector vp as input and produces a sequence of hidden states h1, h2, ..., hS, one for each sentence in the paragraph. Each hidden state is used in two ways: to produce a distribution pi that decides whether to stop, and to produce the topic vector ti for the i-th sentence of the paragraph, which is the input to the word RNN.

Word RNN: the same as the LSTM components used in standard image captioning models.
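
A rough PyTorch sketch of the sentence RNN; the hidden size, the two-layer topic projection, and unrolling for a fixed maximum number of sentences are assumptions made for illustration, and the word RNN (a standard LSTM captioner conditioned on ti) is omitted.

```python
import torch
import torch.nn as nn

class SentenceRNN(nn.Module):
    def __init__(self, pool_dim=1024, hidden=512, topic_dim=1024, max_sentences=6):
        super().__init__()
        self.cell = nn.LSTMCell(pool_dim, hidden)
        self.stop = nn.Linear(hidden, 2)             # logits for pi: CONTINUE vs STOP
        self.topic = nn.Sequential(                  # topic vector ti for the word RNN
            nn.Linear(hidden, topic_dim), nn.ReLU(),
            nn.Linear(topic_dim, topic_dim))
        self.max_sentences = max_sentences

    def forward(self, v_pool):                       # v_pool: (P,) pooled image vector
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros(1, self.cell.hidden_size)
        stop_logits, topics = [], []
        for _ in range(self.max_sentences):
            # The same pooled vector is fed at every step; h encodes sentence i.
            h, c = self.cell(v_pool.unsqueeze(0), (h, c))
            stop_logits.append(self.stop(h))         # pi (unnormalized)
            topics.append(self.topic(h))             # ti
        return torch.cat(stop_logits), torch.cat(topics)
```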

 

Training and Sampling:

The training loss l(x, y) for an example (x, y) is a weighted sum of two cross-entropy terms: a sentence loss lsent on the stopping distribution pi, and a word loss lword on the word distribution pij.
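
A minimal sketch of this combined loss, assuming PyTorch; the weights lambda_sent and lambda_word and the label conventions are placeholders for illustration, not the paper's exact settings.

```python
import torch.nn.functional as F

def paragraph_loss(stop_logits, stop_labels, word_logits, word_labels,
                   lambda_sent=5.0, lambda_word=1.0):
    # Sentence loss lsent: cross-entropy on the stopping distribution pi
    # (stop_labels[i] is 0 for CONTINUE, 1 for STOP).
    l_sent = F.cross_entropy(stop_logits, stop_labels)
    # Word loss lword: cross-entropy on the word distribution pij over the vocabulary,
    # flattened across sentences and time steps.
    l_word = F.cross_entropy(word_logits.view(-1, word_logits.size(-1)),
                             word_labels.view(-1))
    return lambda_sent * l_sent + lambda_word * l_word
```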

 


Recurrent Topic-Transition GAN for Visual Paragraph Generation (ICCV 2017)
Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, Eric Xing
RTT-GAN

Towards Diverse and Natural Image Descriptions via a Conditional GAN (ICCV 2017)

Diverse and Coherent Paragraph Generation from Images (ECCV 2018)

github: https://github.com/metro-smiles/CapG_RevG_Code

The authors propose to augment paragraph generation techniques with "coherence vectors," "global topic vectors," and modeling of the inherent ambiguity of associating paragraphs with images, via a variational auto-encoder formulation.

Training for Diversity in Image Paragraph Captioning (EMNLP 2018)

github: https://github.com/lukemelas/image-paragraph-captioning