Flood region segmentation on drone images

Climate change has increased the frequency and severity of flooding, presenting significant challenges for Natural Disaster Management (NDM). Effective emergency response depends on the timely detection of flooded areas and critical objects. Advanced Deep Neural Networks (DNNs) are applied for flood region segmentation and object detection, specifically focusing on persons, vehicles, and house roofs in high-resolution drone imagery. Drones offer real-time data acquisition over extensive and inaccessible areas, capturing the dynamic nature of floods. For object detection, state-of-the-art models like YOLOv6 and DETR are utilized to accurately identify essential objects. For flood region segmentation, PSPNet and CNN-I2I are examined for their effectiveness in mapping submerged areas. Performance metrics such as mean Average Precision (mAP) and Intersection over Union (IoU) are used to comprehensively evaluate model effectiveness. The results underscore the potential of integrating drone technology with deep learning to enhance NDM strategies, enabling rapid decision-making and reducing the impacts of flooding events.
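As a concrete illustration of the IoU metric mentioned above, the following minimal Python sketch computes IoU between two binary segmentation masks (the masks, shapes, and function name are illustrative and not taken from the lecture material):

    import numpy as np

    def iou(pred: np.ndarray, gt: np.ndarray) -> float:
        """Intersection over Union of two binary segmentation masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return float(inter) / float(union) if union > 0 else 1.0

    # Toy 4x4 masks: prediction and ground truth overlap in 1 of 7 pixels.
    pred = np.zeros((4, 4)); pred[0:2, 0:2] = 1
    gt = np.zeros((4, 4)); gt[1:3, 1:3] = 1
    print(iou(pred, gt))  # 1/7, about 0.143

For detection, mAP is computed analogously by thresholding box IoU and averaging precision over recall levels and classes.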

Lecture by Evgenios Vlachos, AUTH

Forest fire detection and fire/burnt region segmentation on drone images

With climate change on the rise, new challenges for Natural Disaster Management (NDM) arise, leading to rapid advancements in Deep Neural Networks (DNNs), specifically in wildfire scenarios. Forest fire detection, fire segmentation, and burnt area segmentation are critical tasks that require DNNs to achieve precise decision-making in near real-time. Given the complex and dynamic conditions of wildfires, the majority of data is sourced from drone imagery, which facilitates more efficient detection and monitoring of fire behavior. Additionally, due to the spatial variability of fire, specific metrics such as image-level mean Average Precision (ImAP) can provide better insight into the capabilities of DNNs. Computer vision methodologies can boost results significantly through efficient image pre-processing (e.g., HSV, RGBS). These concepts, in addition to the already powerful state-of-the-art DNNs (e.g., PIDNet, CNN-I2I), can enable real-time DNN inference, providing vital input to NDM strategies.
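As a rough sketch of the color-space pre-processing mentioned above, the snippet below converts a drone frame to HSV and thresholds fire-like colors with OpenCV; the file path and threshold values are illustrative assumptions, not values from the lecture:

    import cv2
    import numpy as np

    img = cv2.imread("drone_frame.jpg")          # placeholder path; BGR by OpenCV convention
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)   # hue/saturation/value space

    # Illustrative fire-like color range; real thresholds must be tuned per dataset.
    lower = np.array([0, 120, 180])              # reddish-orange, high saturation/brightness
    upper = np.array([35, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)

    # The mask can serve as an extra input channel or to filter candidate
    # regions before running a segmentation DNN.
    candidates = cv2.bitwise_and(img, img, mask=mask)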

Lecture by Matthaios-Dimitrios Tzimas

Human-Centered AI for Autonomous Vehicles

Intelligent/autonomous vehicles, such as self-driving cars, intelligent robots and Unmanned Aerial Vehicles (UAVs), must seamlessly interact with humans, e.g., their drivers/operators/pilots or people in their vicinity, whether they are obstacles to be avoided (e.g., pedestrians) or targets to be followed and interacted with (e.g., when filming a performing athlete). Furthermore, intelligent vehicles and robots have been increasingly employed to assist humans in real-world applications (e.g., autonomous transportation, warehouse logistics, or infrastructure inspection). To this end, autonomous vehicles should be equipped with advanced vision systems that allow them to understand and interact with humans in their surrounding environment.

This lecture overviews human-centric AI methods that can be utilized to facilitate visual interaction between humans and autonomous vehicles (e.g., through gestures captured by RGB cameras), in order to ensure their safe and successful cooperation in real-world scenarios. Such methods should: a) demonstrate increased visual perception accuracy to understand human visual cues, b) be robust to input data variations, in order to successfully handle illumination/background/scale changes that are typically encountered in real-world scenarios, and c) produce timely predictions to ensure safety, which is a critical aspect of autonomous vehicles’ applications. Deep learning and neural networks play an important role towards this end; the lecture covers the following topics: a) human pose/posture estimation from RGB images, b) human action/activity recognition from RGB images/skeleton data, and c) gesture recognition from RGB images/skeleton data. Finally, embedded execution is extremely important, as it facilitates vehicle autonomy, e.g., in communication-denied environments. Application areas include driver/operator/pilot activity recognition, gesture-based control of autonomous vehicles, and gesture recognition for traffic management. The lecture will offer an overview of all the above, plus other related topics, and will stress the related algorithmic aspects. Some issues in embedded CNN computation (e.g., through fast convolution algorithms) will be overviewed as well.
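To make the skeleton-based recognition setting concrete, here is a minimal, hypothetical PyTorch sketch of a gesture classifier over 2-D keypoint sequences; the architecture, joint count, and class count are assumptions chosen for illustration, not the models discussed in the lecture:

    import torch
    import torch.nn as nn

    class SkeletonGestureNet(nn.Module):
        """Toy classifier: an LSTM over per-frame skeleton keypoints (illustrative only)."""
        def __init__(self, num_joints=17, num_classes=10, hidden=128):
            super().__init__()
            self.lstm = nn.LSTM(input_size=num_joints * 2,  # (x, y) per joint
                                hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, x):                # x: (batch, frames, joints, 2)
            b, t = x.shape[:2]
            x = x.reshape(b, t, -1)          # flatten keypoints per frame
            out, _ = self.lstm(x)
            return self.head(out[:, -1])     # classify from the last time step

    # A batch of 8 clips, 30 frames each, 17 2-D keypoints per frame.
    logits = SkeletonGestureNet()(torch.randn(8, 30, 17, 2))
    print(logits.shape)                      # torch.Size([8, 10])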

Electrical Infrastructure Inspection

This lecture overviews the use of drones for electrical infrastructure inspection and maintenance. Various types of inspection, e.g., using visual cameras, LIDAR, or thermal cameras, are reviewed. The primary application area is electric line inspection. Line detection and tracking, as well as drone perching, are examined. Human action recognition and co-working assistance are also overviewed.
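As a simple classical baseline for the line detection step, the following OpenCV sketch extracts long, thin segments with Canny edges and a probabilistic Hough transform; the file path and all parameters are illustrative and would need per-dataset tuning:

    import cv2
    import numpy as np

    img = cv2.imread("powerline_frame.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
    edges = cv2.Canny(img, 50, 150)          # edge map; thresholds are illustrative

    # Long, thin segments are candidate power lines.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=200, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            print(f"candidate line: ({x1},{y1}) -> ({x2},{y2})")

In practice, DNN-based detectors can replace or refine such classical baselines, but candidates like these can seed line tracking across frames.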

Elongated object detection and segmentation

The application of computer vision to industrial inspection poses a unique challenge in identifying elongated objects that extend beyond the image frame. This lecture offers a comprehensive overview of detection and segmentation techniques, with a particular emphasis on recent advancements in deep learning-based approaches. Throughout the lecture, we delve into the capabilities of these algorithms, showcasing their potential in enhancing the inspection of pipelines and powerlines. By doing so, we aim to demonstrate how these advanced techniques can substantially reduce the human workload and alleviate stress in industrial inspection processes.
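One common way to handle objects that extend beyond a single view is tiled (sliding-window) inference with overlapping tiles; the sketch below assumes a hypothetical segment_tile callable standing in for any tile-wise segmentation model:

    import numpy as np

    def tiled_segmentation(image, segment_tile, tile=512, overlap=64):
        """Run a tile-wise segmenter over a large image and stitch the masks.

        segment_tile is a hypothetical callable returning a binary uint8 mask
        of the same shape as its input tile; any segmentation DNN fits here.
        """
        h, w = image.shape[:2]
        mask = np.zeros((h, w), dtype=np.uint8)
        step = tile - overlap
        for y in range(0, max(h - overlap, 1), step):
            for x in range(0, max(w - overlap, 1), step):
                y2, x2 = min(y + tile, h), min(x + tile, w)
                tile_mask = segment_tile(image[y:y2, x:x2])
                # Union over overlaps keeps elongated objects connected at tile seams.
                mask[y:y2, x:x2] = np.maximum(mask[y:y2, x:x2], tile_mask)
        return mask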

ENFIELD Hackathon 2025: AI for Energy Efficiency (ENFIELD educational material)

The ENFIELD Hackathon 2025 was organized in the framework of the Horizon project ENFIELD and took place in Tallinn, Estonia, on May 29 and 30, 2025.

The Hackathon brought together students and young researchers to develop AI-driven solutions for energy efficiency. Using real-world electricity consumption and weather data, teams competed to predict building energy consumption with precision, efficiency, and clarity.
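To illustrate the flavor of the prediction task, the sketch below fits a regressor on synthetic consumption/weather data; every feature name and value is an invented stand-in, not the Hackathon dataset:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    temp = rng.normal(10, 8, n)        # synthetic outdoor temperature, degrees C
    hour = rng.integers(0, 24, n)      # hour of day
    # Synthetic consumption: colder weather and daily rhythm drive usage.
    kwh = 50 - 1.5 * temp + 3 * np.sin(hour / 24 * 2 * np.pi) + rng.normal(0, 2, n)

    X = np.column_stack([temp, hour])
    X_tr, X_te, y_tr, y_te = train_test_split(X, kwh, random_state=0)
    model = GradientBoostingRegressor().fit(X_tr, y_tr)
    print("MAE (kWh):", mean_absolute_error(y_te, model.predict(X_te)))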

The event included mentorship, workshops, and public demos, fostering innovation in sustainable AI.