Subject: Call-for-Participation - Data Released: MediaEval 2018 Predicting Media Memorability Task

[Apologies for cross-postings]



*******************************************************
2nd CALL FOR PARTICIPATION - DATA RELEASED
Predicting Media Memorability Task
2018 MediaEval Benchmarking Initiative for Multimedia Evaluation
Website: http://www.multimediaeval.org/mediaeval2018/memorability/
*******************************************************
Register here: https://docs.google.com/forms/d/e/1FAIpQLSfw11pDSAJb92K6lLH0DU3r85NMOj1Ww2A5R01iqQE985fqdg/viewform
*******************************************************

The Predicting Media Memorability Task focuses on the problem of predicting how memorable a video will be. It requires participants to automatically predict memorability scores for videos, i.e., scores that reflect the probability of a video being remembered.

Participants will be provided with an extensive dataset of videos with memorability annotations, along with pre-extracted state-of-the-art visual features. The ground truth has been collected through recognition tests and therefore reflects objective measures of memory performance. In contrast to previous work on image memorability prediction, where memorability was measured a few minutes after memorization, the dataset comes with both ‘short-term’ and ‘long-term’ memorability annotations. Because memories continue to evolve in long-term memory, in particular during the first day following memorization, we expect the long-term annotations to be more representative of long-term memory performance, which is the measure of interest in numerous applications.

Participants will be required to train computational models capable of inferring video memorability from visual content. Optionally, descriptive titles attached to the videos may be used. Models will be evaluated through standard evaluation metrics used in ranking tasks.
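For illustration, ranking tasks of this kind are commonly scored with Spearman's rank correlation between predicted and ground-truth scores; the metrics actually used will be specified on the task website. A minimal Python sketch (all numbers are made up for illustration):

    # Spearman's rank correlation between predicted and ground-truth
    # memorability scores; the values below are illustrative only.
    from scipy.stats import spearmanr

    predicted = [0.91, 0.42, 0.77, 0.58, 0.85]     # hypothetical model outputs
    ground_truth = [0.88, 0.35, 0.81, 0.49, 0.90]  # hypothetical annotations

    rho, p_value = spearmanr(predicted, ground_truth)
    print(f"Spearman's rho: {rho:.3f} (p = {p_value:.3f})")

A rank-based metric is natural here because applications typically need videos ordered by memorability rather than exact score values.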


***********************
Target communities
***********************
Researchers will find this task interesting if they work in areas related to human perception and scene understanding, such as image and video interestingness, memorability, attractiveness, aesthetics prediction, event detection, multimedia affect and perceptual analysis, multimedia content analysis, and machine learning (though the list is not exhaustive).


***********************
Data
***********************
The dataset is composed of 10,000 short (soundless) videos extracted from raw footage used by professionals when creating content. The videos are shared under Creative Commons licenses that allow their redistribution. They come with a set of pre-extracted features, including Dense SIFT, HoG descriptors, LBP, GIST, color histograms, MFCC, fc7-layer activations from AlexNet, and C3D features.
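For orientation, a simple baseline could regress memorability scores directly from one of these pre-extracted features. The Python sketch below assumes the features and annotations have been exported to NumPy arrays; the file names, array shapes, and the choice of a support vector regressor are illustrative assumptions, not the task's actual distribution format.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVR

    # Hypothetical exports: fc7 activations and short-term memorability scores.
    X = np.load("fc7_features.npy")        # assumed shape: (n_videos, 4096)
    y = np.load("short_term_scores.npy")   # assumed shape: (n_videos,)

    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = SVR(kernel="rbf", C=1.0)       # any standard regressor would do
    model.fit(X_train, y_train)
    print("Validation R^2:", model.score(X_val, y_val))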


******************************
Workshop
******************************
Participants in the task are invited to present their results during the annual MediaEval Workshop, which will be held 29-31 October 2018 at EURECOM, Sophia Antipolis, France. Working notes proceedings will be published with CEUR Workshop Proceedings (ceur-ws.org).


******************************
Important dates (tentative)
******************************
Development data release: 24 May
Test data release: 25 June
Runs due: 1 October
Working notes papers due: 17 October
MediaEval Workshop, Sophia Antipolis, France: 29-31 October


***********************
Task coordination
***********************
Romain Cohendet, Technicolor, France (romain.cohendet at technicolor.com)
Claire-Hélène Demarty, Technicolor, France (claire-helene.demarty at technicolor.com)
Quang-Khanh-Ngoc Duong, Technicolor, France
Bogdan Ionescu, University Politehnica of Bucharest, Romania
Mats Sjöberg, Aalto University, Finland
Thanh-Toan Do, ARC Centre of Excellence for Robotic Vision (ACRV), The University of Adelaide, Australia