Mastodon posts dataset


The Aristotle University of Thessaloniki (hereinafter, AUTH) created the Mastodon posts dataset within the context of the project TEMA (Trusted Extremely Precise Mapping and Prediction for Emergency Management; Grant Agreement No. 101093003; Call: HORIZON-CL4-2022-DATA-01; Topic: HORIZON-CL4-2022-DATA-01-01) that is funded by the European Commission-European Union (starting date: 1 December 2022; end date: 30 November 2026).

General description of the dataset: The dataset comprises 766 social media posts in Greek from the platform “Mastodon”, spanning the 2023 wildfires in Greece. Each post was annotated internally with the eight Plutchik emotions. To obtain the post texts, use the Mastodon API (https://docs.joinmastodon.org/api/) together with the provided post IDs.
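As a minimal sketch of how the post texts could be retrieved from the provided IDs, assuming the standard Mastodon v1 statuses endpoint (the instance URL and ID below are illustrative, not part of the dataset):

```python
def status_url(instance: str, status_id: str) -> str:
    """Build the Mastodon API endpoint that returns a single post by its ID."""
    return f"{instance.rstrip('/')}/api/v1/statuses/{status_id}"

def fetch_posts(instance: str, ids):
    """Fetch the HTML content of each post; requires the third-party
    'requests' package and network access to the hosting instance."""
    import requests
    posts = {}
    for status_id in ids:
        r = requests.get(status_url(instance, status_id), timeout=10)
        if r.status_code == 200:
            posts[status_id] = r.json().get("content", "")
    return posts

# Illustrative instance and ID:
print(status_url("https://mastodon.social", "109349842000000000"))
```

Note that posts may have been deleted or made private since collection, so missing IDs should be expected when hydrating the dataset.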

The Mastodon posts dataset is publicly available for scientific purposes. To obtain access to the dataset and annotation files, read this Dataset-licence-agreement, sign a printed copy of the agreement and send a scanned copy to georgebouchagiar@csd.auth.gr with:

Subject: Mastodon posts dataset download request

Please send me a username and password in order to download the Mastodon posts dataset.

Within a few business days, you will receive a reply email with username/password and detailed instructions. Please use only academic email addresses. Do not hesitate to ask any questions or to make suggestions regarding this database.

MOBISERV-AIIA Database

Introduction

The Eating and Drinking Activity Recognition Database (MOBISERV-AIIA Database) was created to fulfill the need for a benchmarking database for the evaluation of specialized meal intake (i.e., eating and drinking) activity recognition algorithms. The captured video data constitute a database gathered during the AIIA Laboratory’s research under the Collaborative European Project MOBISERV FP7-248434, An Integrated Intelligent Home Environment for the Provision of Health, Nutrition and Mobility Services to the Elderly.

Database Overview

The meal intake procedure was captured in a controlled laboratory environment using 4 Sony XCD-V60CR digital video cameras, positioned so as to capture the participant’s frontal, upper frontal, 45° and profile views. An instance of each camera viewing angle is shown in the figure. All sequences were recorded at a resolution of 640×480 pixels, at a frame rate of 15 frames per second, and were saved in AVI format.

Twelve persons (6 females and 6 males) of Caucasian origin, aged between 22 and 39 years old and with different facial characteristics (eye glasses, beard, etc.), participated in the recordings. Each person participated in 4 distinct meal intake sessions, and each session was recorded twice: once with the participant wearing a blouse or shirt with long sleeves, and once wearing a t-shirt or blouse with short sleeves. With 4 camera views and 2 recordings per session, 8 videos are available for each person for each session. This results in a database consisting of 384 video sequences.
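The stated database size follows directly from the recording setup described above; a quick sanity check of the arithmetic:

```python
# 12 participants, 4 sessions each, and 8 videos per session
# (4 camera views x 2 sleeve conditions), as described above.
participants = 12
sessions_per_participant = 4
camera_views = 4
sleeve_conditions = 2  # long sleeves, short sleeves
videos_per_session = camera_views * sleeve_conditions  # 8
total_videos = participants * sessions_per_participant * videos_per_session
print(total_videos)  # 384
```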


Scenario

The meal intake scenario that each participant performed involves eating with a spoon, with a knife and fork, with a fork only and with the hands, as well as drinking from a cup, a glass and a straw, in the following order:

  • He/she enters the room, sits in front of the frontal camera, starts eating cereals with a spoon (4 iterations) and drinks water from a cup (4 iterations).
  • The bowl is replaced with a plate and the cup with a glass of water. The person continues eating using a knife and a fork (4 iterations) and drinks from the glass (1-2 iterations).
  • He/she eats the same food using only the fork (the hand holding the fork is changed, 4 iterations). He/she drinks again (1-2 iterations).
  • The person puts a straw in his/her glass of water and continues eating with his/her hand (4 iterations) and drinking from straw (4 iterations in total).
  • The person, also, eats with his/her hands a loaf of bread, a croissant or a sandwich (4 bites).
  • Finally, the person spends some time doing something else besides eating, such as thinking/wondering, paying attention, talking, reading, writing, touching his/her hair, ear, glasses or chin, or yawning.
  • He/she stands up and exits the room.

Annotation

Along with the recorded videos, two annotation txt files are available for each session. The first txt file refers to the frontal view video and annotates the movement performed by the participant in each video frame during the meal intake procedure. The movement types are described in the ‘annotation.doc’ file that accompanies the txt files.

The second txt file contains information for the synchronization of all video files with respect to the frontal view video file. The four numbers, written in a column, define the difference in frames between a given frame in the video captured by the frontal view camera and the corresponding frames in the videos captured by the other cameras, in the following order: frontal camera, 45° camera, profile camera, upper frontal camera.
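A hedged sketch of applying these synchronization offsets. The exact sign convention and file layout are assumptions: here each of the four offsets is read as one integer per line, in the documented camera order, and an offset is added to a frontal-view frame index to obtain the matching frame in another view.

```python
# Camera order as documented in the synchronization file.
CAMERA_ORDER = ("frontal", "45deg", "profile", "upper_frontal")

def read_offsets(lines):
    """Parse the four per-camera frame offsets (one integer per line)."""
    return dict(zip(CAMERA_ORDER, (int(line) for line in lines)))

def matching_frame(frontal_frame, camera, offsets):
    """Map a frontal-view frame index to the given camera's frame index
    (assumed convention: camera_frame = frontal_frame + offset)."""
    return frontal_frame + offsets[camera]

offsets = read_offsets(["0", "3", "-2", "1"])  # illustrative values
print(matching_frame(100, "profile", offsets))  # 98
```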

How to Download the Dataset

The MOBISERV-AIIA Database is publicly available for scientific purposes. To obtain access to the dataset and annotation files, read the MOBISERVE_Dataset_licence_agreement, sign a printed copy of the agreement and send a scanned copy to georgebouchagiar@csd.auth.gr with:

Subject: MOBISERV-AIIA Database download request

Please send me a username and password in order to download the MOBISERV-AIIA Database.

Within a few business days, you will receive a reply email with username/password and detailed instructions. Please use only academic email addresses. Do not hesitate to ask any questions or to make suggestions regarding this database. The source of the dataset should be acknowledged in all publications in which it is used by referencing the following paper:
A. Iosifidis, E. Marami, A. Tefas, I. Pitas and K. Lyroudia, “The MOBISERV-AIIA Eating and Drinking multi-view database for vision-based assisted living”, Journal of Information Hiding and Multimedia Signal Processing, vol. 6, no. 2, pp. 254-273, 2015.

MOBISERV FP7-248434
Collaborative European Project:
An Integrated Intelligent Home Environment for the Provision of Health, Nutrition and Mobility Services to the Elderly

AUTH UAV Gesture Dataset

AUTH has created the “AUTH UAV Gesture Dataset” in the context of the “AERIAL-CORE” collaborative project, funded by the European Union’s Horizon 2020 research and innovation programme. The AUTH UAV Gesture Dataset contains videos depicting human subjects performing 6 classes of gestures. This dataset can be employed for training/evaluating gesture recognition machine learning models. It incorporates parts of the “UAV-Gesture” dataset (from the University of South Australia) and of the “NTU RGB+D” dataset (from the ROSE Lab). AUTH made the AUTH UAV Gesture Dataset available to researchers only, in order to advance relevant academic research and assist the exchange of information that promotes science. If one uses any part of these datasets in his/her work, he/she is kindly asked to cite the following three papers:

  • F. Patrona, I. Mademlis, I. Pitas, “An Overview of Hand Gesture Languages for Autonomous UAV Handling”, in Proceedings of the Workshop on Aerial Robotic Systems Physically Interacting with the Environment (AIRPHARO), 2021 (DOI: 10.1109/AIRPHARO52252.2021.9571027).
  • A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang, “NTU RGB+D: A large-scale dataset for 3D human activity analysis”, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 (DOI: 10.1109/CVPR.2016.115).
  • A. G. Perera, Y. Wei Law, and J. Chahl, “UAV-GESTURE: A dataset for UAV control and gesture recognition”, in Proceedings of the European Conference on Computer Vision (ECCV), 2018 (DOI: 10.1007/978-3-030-11012-3_9).

In order to access the datasets created/assembled by Aristotle University of Thessaloniki, please complete and sign this license agreement. Subsequently, email it to Prof. Ioannis Pitas (using “AerialCore – AUTH UAV Gesture Dataset availability” as e-mail subject) so as to receive FTP credentials for downloading.

Digital libraries for 3D human face

This resource provides links to representative repositories of 3D faces. In the last few years, some 3D face datasets have been acquired and made publicly available. Typically, these datasets have been designed targeting a specific application. For example, some datasets have many identities and so are mostly used for 3D face recognition, while others also include categorized expressions and Action Units (AU) and can be used for facial expression analysis. In the following, the main features of the 3D face datasets that are most widely used in the literature are summarized.