Notes on Facial Expression Recognition Datasets
CK and CK+
It contains 97 subjects who posed in a lab setting for the six universal expressions and the neutral expression. Its extension, CK+, contains 123 subjects; the new videos were shot in a similar environment.
Reference: P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops CVPR4HB’10, 2010, pp. 94–101.
Website:
Modalities: Visual
Notes: CK contains only static images, while CK+ also includes videos. Expression labels are discrete categories.

JAFFE
It contains 213 images of 10 Japanese female subjects. However, it has a limited number of samples and subjects, and was created in a lab-controlled environment.
Website: http://www.kasrl.org/jaffe.html
Modalities: Visual
Notes: Only 213 expression images. Expression labels are discrete categories.

HUMAINE Database
Data files containing emotion labels, gesture labels, speech labels and FAPs, all readable in ANVIL (the labels and other annotation information can only be opened with the ANVIL tool).
Modalities: Audio + visual + gesture
Website: http://emotion-research.net/download/pilot-db/
Notes: The downloaded dataset contains only the videos, without labels or other annotation information.

RECOLA Database
34 subjects in total: 14 male, 20 female.
Reference: F. Ringeval, A. Sonderegger, J. Sauer, and D. Lalanne, “Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions,” in Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Shanghai, China, 2013, pp. 1–8.
Website: http://diuf.unifr.ch/diva/recola/index.html
Modalities: Audio + visual + EDA, ECG (physiological modalities)
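RECOLA's expression annotations are continuous arousal-valence values distributed as CSV files. A minimal sketch of loading such a file with the standard library; the column names and file layout here are assumptions for illustration, not the official RECOLA schema:

```python
import csv
import io

# Hypothetical example: RECOLA-style annotations are CSV files of
# continuous arousal/valence ratings. The column names and layout
# below are assumptions for illustration, not the official schema.
sample = io.StringIO(
    "time,arousal,valence\n"
    "0.04,0.12,-0.30\n"
    "0.08,0.15,-0.28\n"
)

def load_ratings(fh):
    """Parse (time, arousal, valence) rows into a list of float tuples."""
    reader = csv.DictReader(fh)
    return [(float(row["time"]), float(row["arousal"]), float(row["valence"]))
            for row in reader]

ratings = load_ratings(sample)
print(len(ratings))  # 2
```

In practice the per-rater CSV files would be read from disk and aligned by timestamp before averaging into gold-standard traces.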
Notes: The dataset contains 34 videos in total; expression labels are continuous arousal-valence values, stored in CSV files.

MMI
The database consists of over 2900 videos and high-resolution still images of 75 subjects. It is fully annotated for the presence of AUs in videos (event coding), and partially coded on frame-level, indicating for each frame whether an AU is in either the neutral, onset, apex or offset phase. A small part was annotated for audio-visual laughters. The database is freely available to the scientific community.
Reference:
a) Induced Disgust, Happiness and Surprise: an Addition to the MMI Facial Expression Database
M. F. Valstar, M. Pantic. Proceedings of Int’l Conf. Language Resources and Evaluation, Workshop on EMOTION. Malta, pp. 65 - 70, May 2010.
b) Web-based database for facial expression analysis,M. Pantic, M. F. Valstar, R. Rademaker, L. Maat. Proceedings of IEEE Int’l Conf. Multimedia and Expo (ICME’05). Amsterdam, The Netherlands, pp. 317 - 321, July 2005.
Modalities: Visual (video)
Website: http://mmifacedb.eu/
http://ibug.doc.ic.ac.uk/research/mmi-database/
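MMI's AU annotations are stored as XML. A rough sketch of pulling AU codes out of such a file; the element and attribute names below are invented for illustration, and the real MMI schema may differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: an MMI-style annotation file listing the AUs
# present in one session. Element and attribute names are invented
# for illustration; the real MMI schema may differ.
sample_xml = """
<session id="S001">
  <au code="AU6" phase="onset"/>
  <au code="AU12" phase="apex"/>
</session>
"""

def annotated_aus(xml_text):
    """Return the AU codes listed in a session element."""
    root = ET.fromstring(xml_text)
    return [au.get("code") for au in root.findall("au")]

print(annotated_aus(sample_xml))  # ['AU6', 'AU12']
```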
Notes: This is a large dataset, with over 2900 videos in total; the labels are mainly AU annotations, stored in XML files.

NVIE (a dataset collected by USTC)
The USTC NVIE dataset includes both a posed expression database and a spontaneous expression database; the experiments here use the spontaneous one. The spontaneous database was elicited by selected videos and captured under three illumination conditions (frontal, left, and right lighting), with 103 subjects under frontal lighting, 99 under left, and 103 under right. Under each illumination condition, every subject displays at least three of the six expressions (happiness, anger, sadness, fear, disgust, surprise), and the neutral frame and apex frame of each expression have already been extracted.
Reference: S. Wang, Z. Liu, S. Lv, Y. Lv, et al., “A Natural Visible and Infrared Facial Expression Database for Expression Recognition and Emotion Inference,” IEEE Transactions on Multimedia, vol. 12, no. 7, pp. 682–691, 2010.
Website: http://nvie.ustc.edu.cn/
Modalities: Visual (images)
Notes: Labels are provided in Excel files and include the intensity of each expression class (e.g., the intensity of disgust), as well as arousal-valence labels.

RU-FACS Database
This database consists of spontaneous facial expressions from multiple views, with ground truth FACS codes provided by two facial expression experts.
We have collected data from 100 subjects, 2.5 minutes each. This database constitutes a significant contribution towards the 400–800 minute database recommended in the feasibility study for fully automating FACS. To date, the upper faces of 20% of the subjects have been human FACS-coded.
Reference: M. S. Bartlett, G. Littlewort, M. G. Frank, C. Lainscsek, I. R. Fasel, and J. R. Movellan, “Automatic recognition of facial actions in spontaneous expressions,” Journal of Multimedia, vol. 1, no. 6, pp. 22–35, 2006.
Website: http://mplab.ucsd.edu/grants/project1/research/rufacs1-dataset.html
Notes: The labels are FACS codes (only some of the videos are labeled); the dataset has not yet been released to researchers.

Belfast Naturalistic Database
The Belfast database consists of a combination of studio recordings and TV programme grabs labelled with particular expressions. The number of TV clips in the database is small.
Modalities: Audio-visual (video)
Reference: E. Douglas-Cowie, R. Cowie, and M. Schröder, “A New Emotion Database: Considerations, Sources and Scope,” in ISCA ITRW on Speech and Emotion, 2000, pp. 39–44.
Website: http://sspnet.eu/2010/02/belfast-naturalistic/
Notes: The dataset consists of videos, which include speech, and supports speech emotion recognition.

GEMEP Corpus
The GEneva Multimodal Emotion Portrayals (GEMEP) is a collection of audio and video recordings featuring 10 actors portraying 18 affective states, with different verbal contents and different modes of expression.
Modalities: Audio-visual
Reference: T. Bänziger and K. Scherer, “Introducing the Geneva Multimodal Emotion Portrayal (GEMEP) Corpus,” in Blueprint for Affective Computing: A Sourcebook, K. Scherer, T. Bänziger, and E. Roesch, Eds. Oxford, England: Oxford University Press, 2010.
Website: http://www.affective-sciences.org/gemep
http://sspnet.eu/2011/05/gemep-fera/
Notes: This dataset was used in the FERA2011 challenge; labels are mainly categorical.

Paleari
Reference: M. Paleari, R. Chellali, and B. Huet, “Bimodal emotion recognition,” in Proceeding of the Second International Conference on Social Robotics ICSR’10, 2010, pp. 305–314.
I could not find an official website for this dataset. After reading the abstract of the reference above, I found that the paper does not actually introduce an expression dataset. The paper is on Springer, and through the campus network only the abstract and the first section are accessible.

VAM Corpus
The VAM corpus consists of 12 hours of recordings of the German TV talk-show “Vera am Mittag” (Vera at Noon), segmented into broadcasts, dialogue acts and utterances, respectively. This audio-visual speech corpus contains spontaneous and very emotional speech recorded from unscripted, authentic discussions between the guests of the talk-show.
Modalities: Audio-visual
Reference: M. Grimm, K. Kroschel, and S. Narayanan, “The Vera am Mittag German audio-visual emotional speech database,” in IEEE International Conference on Multimedia and Expo (ICME’08), 2008, pp. 865–868.
Website: http://emotion-research.net/download/vam
Notes: The dataset mainly consists of speech videos; labels are continuous values along three dimensions: valence (negative vs. positive), activation (calm vs. excited), and dominance (weak vs. strong).

SSPNet Conflict Corpus (strictly speaking, not an expression recognition dataset)
The “SSPNet Conflict Corpus” includes 1430 clips (30 seconds each) extracted from 45 political debates televised in Switzerland. The clips are in French.
Modalities: Audio-visual
Reference: S. Kim, M. Filippone, F. Valente, and A. Vinciarelli, “Predicting the Conflict Level in Television Political Debates: An Approach Based on Crowdsourcing, Nonverbal Communication and Gaussian Processes,” in Proceedings of the ACM International Conference on Multimedia, pp. 793–796, 2012.
Website: http://www.dcs.gla.ac.uk/vincia/?p=270
Notes: The dataset mainly consists of videos of political debates; the label is the conflict level.

SEMAINE Database
The database contains approximately 240 character conversations, and recording is still ongoing. Currently approximately 80 conversations have been fully annotated for a number of dimensions in a fully continuous way using FeelTrace.
Website: http://semaine-db.eu/
Modalities: Audio-visual
Reference: G. McKeown, M. F. Valstar, R. Cowie, M. Pantic, and M. Schröder, “The SEMAINE database: Annotated multimodal records of emotionally coloured conversations between a person and a limited agent,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 5–17, April 2012.
Notes: Videos elicited through human-machine conversation; labels are continuous emotion-dimension values rather than categories.

AFEW Database (Acted Facial Expressions in the Wild)
Acted Facial Expressions in the Wild (AFEW) is a dynamic temporal facial expression corpus consisting of clips extracted from movies in close-to-real-world conditions.
Reference: Abhinav Dhall, Roland Goecke, Simon Lucey, Tom Gedeon, Collecting Large, Richly Annotated Facial-Expression Databases from Movies, IEEE Multimedia 2012.
Website: https://cs.anu.edu.au/few/AFEW.html
Modalities: Audio-visual (movie clips)
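AFEW labels each clip with one of the six basic expressions plus neutral. A small index mapping for these seven classes; the exact class-name spellings are an assumption, not necessarily those used in the XML annotations:

```python
# The seven AFEW/SFEW classes: six basic expressions plus neutral.
# The exact string spellings are an assumption; the XML annotations
# may use different capitalisation or naming.
AFEW_CLASSES = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

# Map class names to integer indices, e.g. for training a classifier.
LABEL_TO_INDEX = {name: i for i, name in enumerate(AFEW_CLASSES)}

print(LABEL_TO_INDEX["Happy"])  # 3
```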
Notes: The dataset consists of expression video clips cut from movies; expression labels are the six basic expressions plus neutral, with annotation information stored in XML files.
AFEW is the dataset used in the Emotion Recognition in the Wild Challenge (EmotiW) series, which has been held annually since 2013.
EmotiW website: https://cs.anu.edu.au/few/

SFEW Database (Static Facial Expressions in the Wild)
Static Facial Expressions in the Wild (SFEW) has been developed by selecting frames from AFEW.
Reference: Abhinav Dhall, Roland Goecke, Simon Lucey, and Tom Gedeon. Static Facial Expressions in Tough Conditions: Data, Evaluation Protocol And Benchmark, First IEEE International Workshop on Benchmarking Facial Image Analysis Technologies BeFIT, IEEE International Conference on Computer Vision ICCV2011, Barcelona, Spain, 6-13 November 2011
Website: https://cs.anu.edu.au/few/AFEW.html
Modalities: Visual
Notes: This dataset consists of static frames with expressions extracted from the AFEW dataset; expression labels are the six basic expressions plus neutral, with annotation information stored in XML files.

AVEC Dataset Series
AVEC is an emotion recognition challenge held annually since 2011; its recognition models mainly use continuous (dimensional) emotion representations. AVEC2012 used the dimensions Arousal, Valence, Expectancy, and Power; AVEC2013 used Valence and Arousal; AVEC2014 used Valence, Arousal, and Dominance.
AVEC2013 and AVEC2014 introduced depression recognition.
Modalities: Audio-visual
Website:
http://sspnet.eu/avec2011/
http://sspnet.eu/avec2012/
http://sspnet.eu/avec2013/
http://sspnet.eu/avec2014/
Reference: M. Valstar, B. W. Schuller, J. Krajewski, R. Cowie, and M. Pantic, “AVEC 2014: The 4th International Audio/Visual Emotion Challenge and Workshop,” in Proceedings of the ACM International Conference on Multimedia, Orlando, Florida, USA, November 3–7, 2014.
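The affective dimensions vary from one AVEC edition to the next; encoding them in a small lookup table makes the differences explicit (years and dimension names taken from the notes in this section; AVEC2011 is omitted because its dimensions are not listed here):

```python
# Affective dimensions annotated in each AVEC challenge, per the
# notes in this section (only the years whose dimensions are stated).
AVEC_DIMENSIONS = {
    2012: ["Arousal", "Valence", "Expectancy", "Power"],
    2013: ["Valence", "Arousal"],
    2014: ["Valence", "Arousal", "Dominance"],
}

for year, dims in sorted(AVEC_DIMENSIONS.items()):
    print(year, ", ".join(dims))
```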
Notes: The labels mainly target emotion dimensions and are provided in CSV files.

LIRIS-ACCEDE Dataset
The LIRIS-ACCEDE dataset consists of three main parts:
Discrete LIRIS-ACCEDE - Induced valence and arousal rankings for 9800 short video excerpts extracted from 160 movies. Estimated affective scores are also available.
Continuous LIRIS-ACCEDE - Continuous induced valence and arousal self-assessments for 30 movies. Post-processed GSR measurements are also available.
MediaEval 2015 affective impact of movies task - Violence annotations and affective classes for the 9800 excerpts of the discrete LIRIS-ACCEDE part, plus for additional 1100 excerpts used to extend the test set for the MediaEval 2015 affective impact of movies task.
Modalities: Audio-visual
Website:
http://liris-accede.ec-lyon.fr/index.php
Reference:
Y. Baveye, E. Dellandréa, C. Chamaret, and L. Chen, “LIRIS-ACCEDE: A Video Database for Affective Content Analysis,” IEEE Transactions on Affective Computing, 2015.
Y. Baveye, E. Dellandrea, C. Chamaret, and L. Chen, “Deep Learning vs. Kernel Methods: Performance for Emotion Prediction in Videos,” in 2015 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), 2015
M. Sjöberg, Y. Baveye, H. Wang, V. L. Quang, B. Ionescu, E. Dellandréa, M. Schedl, C.-H. Demarty, and L. Chen, “The mediaeval 2015 affective impact of movies task,” in MediaEval 2015 Workshop, 2015
Notes: This dataset provides both discrete (categorical) and dimensional emotion annotations.