Introduction to Containers with Docker


Lecture by Prof. Lorenzo Carnevale.

Uploaded by Aristotle University of Thessaloniki.

Link to video

Introduction to Cloud/Edge Computing

Over time, Cloud Computing has proven to be a disruptive technology that is still in full development. Several drawbacks of the Cloud have driven improvements in its crucial aspects, such as performance, security and privacy. Today, Edge Computing tries to address these issues and make them less problematic and more manageable. Starting from the NIST definitions (IaaS, PaaS and SaaS), the talk looks at the last decade of ICT evolution, preparing systems for new ICT challenges and implementations, such as AI algorithms running on top of them.

Lecture by Prof. Lorenzo Carnevale.

Link to video

Uploaded by Aristotle University of Thessaloniki.

Attention and Transformer Networks in Computer Vision Lecture

Abstract

This lecture explores various transformer-based approaches that have emerged as powerful alternatives to Convolutional Neural Networks (CNNs) in Computer Vision and have revolutionized the way visual information is analyzed. While CNNs have been the dominant architecture for visual tasks, they face challenges in capturing long-range dependencies and handling variable-sized inputs. Recent research has shown promising results by combining convolutional layers with attention mechanisms. Specifically, attention mechanisms allow the model to focus on relevant regions of the image, enabling better contextual understanding.
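
As a rough illustration of the attention mechanism mentioned above, the following minimal sketch implements scaled dot-product self-attention over a handful of token embeddings; the array shapes and toy values are illustrative assumptions, not taken from the lecture.

    # Minimal sketch of scaled dot-product self-attention (illustrative only).
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (num_tokens, d) arrays of queries, keys and values.
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                    # token-to-token similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
        return weights @ V                               # attention-weighted sum of values

    # Toy self-attention over 4 patch tokens with 8-dimensional embeddings.
    rng = np.random.default_rng(0)
    tokens = rng.standard_normal((4, 8))
    out = scaled_dot_product_attention(tokens, tokens, tokens)
    print(out.shape)  # (4, 8)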

Specific transformer architectures designed for computer vision tasks are introduced, including the Vision Transformer (ViT), which replaces the traditional convolutional layers with self-attention mechanisms, facilitating better global information integration. Furthermore, the Detection Transformer (DETR) introduces transformers to object detection, achieving impressive results by utilizing a set-based representation of objects. Similarly, the Segmentation Transformer (SETR) and the Segment Anything Model (SAM) leverage transformer architectures for semantic segmentation tasks, demonstrating improved performance in capturing fine-grained details. The lecture also explores self-supervised transformers such as DINO, which uses self-distillation to learn representations without the need for explicit labels. Finally, the Video Vision Transformer (ViViT) extends the transformer architecture to video understanding, capturing spatiotemporal dependencies and achieving state-of-the-art performance. Several applications are presented, notably in industrial surveillance (pipeline image segmentation and pipeline detection) as well as in natural disaster management (e.g., forest fire detection, detection of flooded houses).

 

Figure: Typical Vision Transformer architecture.
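
A minimal sketch of the ViT front end pictured above is given below: the image is split into non-overlapping patches, each patch is linearly projected to a token embedding, and a [CLS] token plus positional embeddings are added before the sequence enters a standard Transformer encoder. The hyperparameters (224x224 input, 16x16 patches, 768-dimensional tokens) are common ViT-Base defaults and are assumptions here, not values stated in the lecture.

    # Hedged sketch of the ViT patch-embedding stage (PyTorch).
    import torch
    import torch.nn as nn

    class PatchEmbedding(nn.Module):
        def __init__(self, img_size=224, patch_size=16, in_ch=3, dim=768):
            super().__init__()
            # A strided convolution cuts the image into non-overlapping patches
            # and linearly projects each patch to a token embedding.
            self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)
            num_patches = (img_size // patch_size) ** 2
            self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))

        def forward(self, x):
            tokens = self.proj(x).flatten(2).transpose(1, 2)   # (B, num_patches, dim)
            cls = self.cls_token.expand(x.shape[0], -1, -1)    # prepend the [CLS] token
            return torch.cat([cls, tokens], dim=1) + self.pos_embed

    # The resulting token sequence is fed to a plain Transformer encoder.
    embed = PatchEmbedding()
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
        num_layers=2)                            # depth kept small for the sketch
    x = torch.randn(1, 3, 224, 224)              # dummy image batch
    print(encoder(embed(x)).shape)               # (1, 197, 768)

Using a strided convolution for the patch projection is simply an efficient way to apply the same linear map to every patch; it does not reintroduce CNN-style hierarchical feature extraction.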

 

Lecture by N. M. Militsis and Prof. Ioannis Pitas

Link to the video.

AI University Education

This lecture overviews the impact of AI on education. First, the impact of AI on citizen education at high-school level is reviewed. Then, the following question is debated: is AI Science and Engineering a separate scientific discipline? The extent and geographical distribution of AI studies worldwide are presented. Furthermore, the impact of AI on the teaching of various other scientific disciplines at university level is analyzed. Finally, the implications of LLMs (e.g., ChatGPT) for education are also overviewed, notably their ability to answer exam questions, including mathematical and programming ones.

Deep learning algorithms for intelligent support of workers

This lecture overviews the use of deep learning-based methods and algorithms for supporting human workers in industrial environments. Deep learning algorithms are increasingly employed in the industrial sector, primarily as part of advanced systems (e.g., intelligent machines/robots), since they offer effective and reliable solutions for ensuring human workers’ safety and reducing their stress, as well as for increasing the efficiency of the required infrastructure inspection and maintenance activities. In this direction, deep learning techniques can be used to monitor human workers and issue warnings in cases of dangerous behavior while simultaneously ensuring their privacy, to facilitate collaboration between human workers and robots/machines, and to automate inspection and maintenance activities by autonomously detecting objects of interest (e.g., damages) and/or recognizing dangerous events/situations. The lecture will offer: a) a brief introduction to the most important deep learning tools (Multi-Layer Perceptrons, Convolutional Neural Networks, Transformers, etc.), b) an in-depth analysis of how these tools are used to develop algorithms for object detection, image segmentation, human pose estimation and human gesture recognition, and c) real-world application examples in two industrial settings: industrial pipeline inspection and electric power infrastructure inspection.
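
The lecture's own models and industrial datasets are not reproduced here; as a generic, hedged stand-in for the object-detection step described above, the sketch below runs a COCO-pretrained torchvision detector on a single inspection image (the file name is hypothetical) and keeps only the confident detections.

    # Hedged sketch: generic pretrained object detection, not the lecture's models.
    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import convert_image_dtype

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()   # COCO-pretrained detector
    img = read_image("inspection_frame.jpg")                    # hypothetical input image
    img = convert_image_dtype(img, torch.float)                 # uint8 -> float in [0, 1]
    with torch.no_grad():
        pred = model([img])[0]                                  # dict with boxes, labels, scores
    keep = pred["scores"] > 0.5                                 # simple confidence threshold
    print(pred["boxes"][keep], pred["labels"][keep])

In a real inspection pipeline, the pretrained detector would be replaced or fine-tuned for domain-specific classes (e.g., damages), which is outside the scope of this sketch.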

Lecture by Dr. Christos Papaioannidis.

Link to video

Visual detection of elongated objects

The application of computer vision to industrial inspection poses a unique challenge in identifying elongated objects that extend beyond the image frame. This lecture offers a comprehensive overview of detection and segmentation techniques, with a particular emphasis on recent advancements in deep learning-based approaches. Throughout the lecture, we delve into the capabilities of these algorithms, showcasing their potential in enhancing the inspection of pipelines and powerlines. By doing so, we aim to demonstrate how these advanced techniques can substantially reduce the human workload and alleviate stress in industrial inspection processes.
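
As a hedged illustration of the segmentation side of this task, the sketch below runs a generic pretrained semantic-segmentation network from torchvision on a power-line image (the file name is hypothetical); the lecture's specialized elongated-object models are not specified here, so this is only a stand-in.

    # Hedged sketch: generic semantic segmentation, standing in for the
    # pipeline/powerline segmentation networks discussed in the lecture.
    import torch
    from torchvision.io import read_image
    from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

    weights = DeepLabV3_ResNet50_Weights.DEFAULT
    model = deeplabv3_resnet50(weights=weights).eval()          # generic pretrained segmenter
    preprocess = weights.transforms()                           # resizing + normalization
    img = read_image("powerline_frame.jpg")                     # hypothetical input image
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))["out"]     # (1, num_classes, H, W)
    mask = logits.argmax(dim=1)                                 # per-pixel class labels
    print(mask.shape)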

Lecture by Dimitrios Psarras.

Link to video