Exploring Image Processing and Computer Vision: A Workshop Overview

Discover the world of image processing and computer vision through a comprehensive workshop covering topics like object detection, vision language models, and deployment to FastAPI.

Video Summary

The workshop on image processing and computer vision opened with an introduction by Mingal, a data science manager at FWD Insurance. The session covered computer vision, image processing with OpenCV, object detection, vision language models, and deployment with FastAPI. Computer vision, as explained, is the practice of teaching computers to perceive and understand visual data, using algorithms to recognize objects within images. Images are represented as numerical values: grayscale pixels span 0 to 255, and color images comprise RGB channels.

The workshop provided hands-on exercises in a notebook environment, guiding participants through accessing the notebook via Google Colab or a local instance. Attendees were led through importing libraries, reading images, and exploring OpenCV's functionality. The primary objective was to offer practical insight into image processing and computer vision concepts.

The discussion then walked through common image processing functions in Python using OpenCV, covering the full sequence: loading an image, checking its shape, printing specific channels, plotting the image, resizing it, adjusting brightness and contrast, converting it to grayscale, applying edge detection, equalizing the histogram, and thresholding the result.

Moreover, the conversation emphasized the importance of understanding image processing concepts and the mathematical operations behind feature extraction. It also covered applying histogram equalization to images, including to selected regions, along with rotation, translation, and other image operations. Object detection was a key focus, with discussion of the stages of classification, localization, and detection, and the value of pre-trained models like SSD for efficient object detection.

The dialogue extended to using pre-trained models for detection, concentrating on object detection with the COCO dataset. Pre-trained models are machine learning models trained on extensive annotated datasets such as COCO; they let researchers save time and resources by building on existing models rather than starting from scratch.

Furthermore, the conversation touched on the significance of scaling images for precise object detection, the possible use of special sensors for height and age estimation, and the applicability of COCO-trained models to video analysis. The process of importing libraries, defining models, loading images, and running inference was covered in detail.

Object detection, modifying the labels of bounding boxes, fine-tuning models, and multimodal models like CLIP were discussed. Object detection involves identifying objects in images using pre-trained models, or fine-tuning them for specific tasks. Changing the class labels of bounding boxes requires fine-tuning the model, while multimodal models integrate information from different mediums, such as text and images, to improve performance.

The conversation also covered using pre-trained models like CLIP from Hugging Face for image classification, including how class probabilities are derived with the softmax operation. Accessing Google Cloud Platform for code execution and model training, and transferring datasets from GCP Storage to Colab, were highlighted, with an emphasis on using Jupyter Lab for running code and training models.

The deployment process was briefly covered, emphasizing the use of vision language models for combined image and text analysis. Building a Docker image for deployment involves running commands that bundle all dependencies into a single container image. The docker build command packages the required image and environment components; the solution comprises a backend with an object detection endpoint that returns information based on its input.

Functions for loading images, object detection, and CLIP prediction were defined, and the Dockerfile was built from an existing base container, installing the necessary packages and exposing a port for interaction. A requirements file listed essential packages such as Transformers and FastAPI.

The steps to run a Docker container for object detection were laid out: defining the image, installing dependencies, copying files, and executing commands. Using a GPU for faster processing was demonstrated, along with querying the container using the Python requests library. The speaker also pointed to resources on Google Drive with more detailed content on object detection.

Keypoints

00:00:26

Workshop Introduction

The workshop on image processing, computer vision, and vision language models begins with Mingal, a data science manager at FWD Insurance, leading the session. He has extensive experience implementing AI models and building data pipelines across various cloud platforms.

00:01:30

Workshop Overview

Mingal introduces the workshop modules, starting with an introduction to computer vision, followed by image processing using OpenCV, object detection with a pre-trained model, and an introduction to vision language models and deployment to FastAPI.

00:03:54

Definition of Computer Vision

Computer vision is defined as a branch of AI that teaches computers to see and understand visual data, enabling them to identify objects like pedestrians, traffic lights, and vehicles in images. Algorithms play a crucial role in this process, converting visual data into actionable insights.

00:05:02

Comparison with Human Vision

While humans use their eyes and brain to interpret visual information, computers rely on sensors and algorithms to process images. Deep learning models, inspired by the brain's neural networks, analyze images as numerical values, allowing mathematical operations to extract meaningful information.

00:05:46

Image Encoding

Grayscale images are encoded with values ranging from 0 to 255, where lower numbers represent darker shades. Color images typically use RGB or RGBA channels, with other encodings like HSL also available.
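
As a minimal illustration of this encoding (a sketch using NumPy, which OpenCV images are built on; the array values here are arbitrary):

```python
import numpy as np

# A grayscale image is a 2-D array of intensities in [0, 255];
# a color image adds a third (channel) dimension.
gray = np.zeros((4, 4), dtype=np.uint8)      # all-black 4x4 grayscale image
gray[0, 0] = 255                             # one white pixel (255 = brightest)

color = np.zeros((4, 4, 3), dtype=np.uint8)  # 4x4 color image, 3 channels
color[..., 0] = 255                          # max out the first channel

print(gray.shape, color.shape)               # (4, 4) and (4, 4, 3)
```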

00:06:21

Accessing Notebooks

To access the notebooks, users can open them on the workshop site or through a shared drive. In Google Colab, users can connect to an instance and change the runtime type to T4 for GPU access. It's important to disconnect when done to release the assigned compute instance.

00:11:44

Introduction to OpenCV library

The discussion introduces the OpenCV library, which is widely used for image processing. OpenCV is purpose-built for image processing and has ports available in other languages, such as Java. The speaker highlights that OpenCV offers a wide range of functions and features for image manipulation.

00:12:19

Python coding with OpenCV

In the context of Python coding with OpenCV, the speaker explains the typical setup: importing the necessary libraries, namely cv2 (OpenCV), numpy (as np), and matplotlib.pyplot (as plt). The process involves reading images, whether from local files or URLs, and displaying them with Python code.

00:13:00

Image reading and processing

The process of reading images uses an image-read helper built on OpenCV, which checks whether the image source is a URL and downloads it if necessary. The image is then stored in a variable for further processing. The speaker emphasizes the importance of understanding image dimensions and channels, particularly in RGB images.
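
The workshop's exact helper isn't reproduced here, but a hypothetical version along the lines described might look like this (urllib and cv2.imdecode for URLs, cv2.imread for local paths):

```python
import urllib.request

import cv2
import numpy as np

def read_image(source: str) -> np.ndarray:
    """Read an image from a URL or a local path (hypothetical helper)."""
    if source.startswith(("http://", "https://")):
        # Download the raw bytes and decode them into an OpenCV (BGR) array.
        with urllib.request.urlopen(source) as resp:
            data = np.frombuffer(resp.read(), dtype=np.uint8)
        return cv2.imdecode(data, cv2.IMREAD_COLOR)
    return cv2.imread(source)  # local file

img = read_image("photo.jpg")  # placeholder file name
print(img.shape)               # (height, width, 3) for a color image
```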

00:14:03

Image visualization and plotting

To visualize images in Python, the speaker demonstrates matplotlib's plt.imshow function. The order of color channels may need to be adjusted for correct display, since OpenCV loads images in BGR order while matplotlib expects RGB. By reversing the channel order, the speaker shows how to plot images with accurate colors.
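
A short sketch of that channel-order fix (the file name is a placeholder):

```python
import cv2
from matplotlib import pyplot as plt

img = cv2.imread("photo.jpg")               # OpenCV loads images as BGR
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # reorder channels for matplotlib

plt.imshow(rgb)                             # matplotlib expects RGB
plt.axis("off")
plt.show()
```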

00:16:55

Image Processing Basics

In the discussion, the speaker explains the concept of image processing basics. They mention that an image's shape can provide information about its height and width. The speaker demonstrates how to print the height and width of an image, emphasizing the importance of understanding image properties.

00:17:17

Image Resizing

The speaker delves into image resizing, highlighting the cv2.resize function. They demonstrate how to downscale an image by specifying the desired dimensions, such as converting a 6,000x4,000 image to 800x600, and print the output.
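
A minimal sketch of that resize step (note that cv2.resize takes the target size as (width, height)):

```python
import cv2

img = cv2.imread("photo.jpg")        # placeholder file name
print(img.shape)                     # e.g. (4000, 6000, 3): height, width, channels

small = cv2.resize(img, (800, 600))  # target size is (width, height)
print(small.shape)                   # (600, 800, 3)
```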

00:18:20

Understanding Functions in Python

The speaker addresses the common question of how to understand functions in Python. They suggest using the 'help' function to gain insights into a specific function's purpose and parameters. By providing an example with the resize function, the speaker demonstrates how to access information about a function's usage and expected inputs.

00:19:22

Image Interpolation and Scaling

The discussion shifts to image interpolation and scaling, where the speaker explains the role of interpolation in downscaling: interpolation (for example, linear interpolation) determines how neighboring pixels are combined into the smaller output. The speaker also demonstrates defining a scale ratio for resizing images and explains the impact of interpolation on image quality.
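
A sketch of ratio-based resizing with an explicit interpolation choice (INTER_AREA is a common pick for downscaling; the ratio here is arbitrary):

```python
import cv2

img = cv2.imread("photo.jpg")

# Resize by a scale ratio instead of absolute dimensions; the interpolation
# flag controls how neighboring pixels are combined.
scale = 0.5
half = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
print(half.shape)
```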

00:20:16

Image Enhancement Techniques

The speaker introduces various image enhancement techniques, including adjusting brightness, contrast, and blurring. They explain how to modify image properties like brightness and contrast using functions like 'convertScaleAbs' and 'blur'. By showcasing these techniques, the speaker highlights the versatility of image processing functions for enhancing visual elements.
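
A sketch of those three adjustments (the alpha/beta values are illustrative):

```python
import cv2

img = cv2.imread("photo.jpg")

# convertScaleAbs computes alpha * pixel + beta: alpha scales contrast,
# beta shifts brightness.
brighter = cv2.convertScaleAbs(img, alpha=1.0, beta=50)  # +50 brightness
punchier = cv2.convertScaleAbs(img, alpha=1.5, beta=0)   # 1.5x contrast
blurred = cv2.blur(img, (5, 5))                          # 5x5 averaging blur
```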

00:21:18

Color Conversion in Image Processing

The speaker discusses color conversion in image processing, specifically converting color images to grayscale. They demonstrate the use of the 'cvtColor' function to convert a color image to grayscale and emphasize the simplicity of the conversion process. By showcasing the transformation from color to grayscale, the speaker illustrates a fundamental aspect of image processing.
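
The conversion itself is a one-liner:

```python
import cv2

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # 3 channels -> 1 channel
print(img.shape, gray.shape)                  # (h, w, 3) vs (h, w)
```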

00:21:54

Image Processing Basics

Image processing involves operations like extracting features from images using mathematical functions. In this case, a grayscale image is discussed, where the color channels are reduced to just height and width parameters. Operations like edge detection can be applied to extract vertical or horizontal lines from the image.
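
A sketch of edge extraction on a grayscale image (Sobel for directional edges, Canny for a combined edge map; the thresholds are illustrative):

```python
import cv2

gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

# Sobel responds to intensity change along one axis: dx=1 picks up vertical
# edges, dy=1 horizontal edges.
vertical = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
horizontal = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

edges = cv2.Canny(gray, 100, 200)  # combined edge map
```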

00:23:14

Historical Perspective on Image Processing

Image processing techniques have roots in mathematics from 30 to 40 years ago. Traditional methods involved hand-coding mathematical operations to extract features. Deep learning now automates this process, emphasizing the importance of understanding image representation as numerical data.

00:23:40

Image Enhancement Techniques

Various image enhancement techniques like histogram equalization and thresholding are discussed. Histogram equalization spreads out pixel intensities to improve image clarity, while thresholding extracts specific pixel ranges to enhance image details.
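
Both techniques are single OpenCV calls (the threshold value is illustrative):

```python
import cv2

gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

equalized = cv2.equalizeHist(gray)  # spread intensities across the 0-255 range

# Keep pixels above 127 (set them to 255), zero out the rest.
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
```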

00:25:48

Advanced Image Processing Operations

Beyond basic techniques, operations like image rotation, translation, and blending are mentioned. These advanced operations allow for manipulating images by rotating, moving, or blending them. Additionally, extracting specific color channels from images is highlighted as a valuable skill in image processing.
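
A sketch of these operations (angles, offsets, and blend weights are arbitrary):

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")
h, w = img.shape[:2]

# Rotation: build a 2x3 affine matrix around the image center, then warp.
M_rot = cv2.getRotationMatrix2D((w / 2, h / 2), 45, 1.0)
rotated = cv2.warpAffine(img, M_rot, (w, h))

# Translation: shift 100 px right and 50 px down.
M_shift = np.float32([[1, 0, 100], [0, 1, 50]])
shifted = cv2.warpAffine(img, M_shift, (w, h))

# Blending: weighted sum of two same-sized images.
other = cv2.imread("photo2.jpg")
blended = cv2.addWeighted(img, 0.7, other, 0.3, 0)
```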

00:26:00

Practical Exercises and Application

Practical exercises involving image blending and extracting specific color channels are suggested for hands-on learning. These exercises aim to reinforce understanding of image processing concepts by applying them in real-world scenarios. Participants are encouraged to explore these exercises independently for a deeper understanding of image processing techniques.

00:27:03

Image Processing Operations

Image processing involves applying functions to images to avoid manual mathematical calculations, simplifying tasks like downscaling images by selecting appropriate pixels.

00:27:50

Object Detection Overview

Object detection progresses through stages like classification, where images are labeled (e.g., identifying a cat), followed by localization to determine the object's position within a bounding box.

00:28:02

Object Detection Applications

Object detection finds applications in image tagging, scene understanding, content moderation, navigation, facial recognition, and self-driving cars, enabling tasks like obstacle recognition and detour planning.

00:30:00

Semantic Segmentation

Semantic segmentation involves extracting detailed masks of objects in images, useful for tasks like medical image diagnosis or identifying specific regions in photos.

00:30:39

Preventing Duplicates in Object Detection

To prevent duplicate object detection, unique classes or attributes must be defined for each object, ensuring distinct identification based on characteristics like appearance or behavior.

00:32:12

Object Detection Model Training

When training an object detection model, assigning class labels to different individuals is crucial. However, it's important to note that there isn't a pre-trained model that can identify everyone accurately. Models are typically trained for specific classes, such as identifying people, buses, or stop signs. The model's confidence score indicates how certain it is about its predictions, considering factors like image quality and object visibility.

00:33:12

Model Architecture Complexity

The architecture of object detection models can be complex, involving various layers with specific functions. Understanding the theoretical concepts behind each layer and the need for different layers is essential. While the workshop may focus more on practical applications, it's beneficial to grasp the backbone model's role as a pre-trained image classification network and how additional components like SSD (Single Shot Detector) enhance localization and detection capabilities.

00:34:33

Object Size and Image Processing

Object detection models like SSD address challenges posed by objects appearing at different sizes and aspect ratios. To handle this, SSD employs techniques like image pyramids to scale images at various resolutions for better object detection. Additionally, Anchor Boxes are used to predict bounding box locations accurately, ensuring objects are correctly identified regardless of their size or orientation.

00:35:48

Modern Object Detection Algorithms

Modern object detection algorithms, such as the latest version of a popular model, offer advanced capabilities beyond simple classification. These algorithms can detect, segment, mask, and track objects, providing a comprehensive solution for various tasks. While more complex, these algorithms offer enhanced functionality and versatility, making them valuable tools for diverse applications.

00:36:53

Notebook for Image Detection

The notebook for image detection involves running a pre-trained model to perform object detection tasks. It covers the fundamentals of object detection, utilizing pre-trained models to identify and localize objects in images. While the workshop may not delve into fine-tuning specific algorithms, participants can explore additional resources like documentation on modern object detection algorithms for further understanding.

00:37:02

Pre-trained Model Definition

A pre-trained model is a machine learning model already trained on a large dataset, such as the COCO dataset, with parameters and weights determined by previous training. This model is used to detect objects within images based on common objects and contexts.

00:37:44

COCO Dataset Overview

The COCO dataset is a collection of annotated images created by researchers, where objects like cats are hand-labeled with bounding boxes. This dataset allows pre-trained models to recognize objects like cats in images without the need for additional training.

00:39:20

Benefits of Using Pre-trained Models

Utilizing pre-trained models saves time and resources by leveraging existing model patterns, such as recognizing features like color, shape, and texture. Researchers can apply these models to new tasks without starting from scratch, enhancing efficiency in object detection.

00:40:24

Available Pre-trained Models

Various pre-trained models for object detection are accessible, like those listed on the PyTorch website under object detection. These models come with pre-trained weights and parameters, offering a range of options for researchers to choose from based on their specific needs.

00:41:09

Identifying Height and Age with Object Detection

Object detection can potentially infer height and age based on skin texture and distance detection capabilities. While height estimation may require special sensors for accurate scaling, age estimation could be based on training models to recognize age-related skin characteristics.

00:42:02

Downscaling Images for Object Detection

Images in datasets like COCO may need to be downscaled to account for objects appearing smaller at a distance. Downscaling ensures that the model can accurately detect objects of varying sizes, compensating for differences in scale within the dataset.

00:42:29

Introduction to COCO Dataset and Video Processing

Models trained on the COCO dataset are also effective for detecting objects in videos, since video is processed as a sequence of images. Regardless of distance or size, the model can detect objects accurately. Video processing involves analyzing multiple frames per second, treating each frame as an image.

00:43:03

Importing Libraries and Defining Models

To begin, various libraries are imported to avoid writing code from scratch. The notebook covers importing the necessary libraries and defining the model, SSD300 (Single Shot Detector) with a VGG16 backbone, for object detection.

00:44:01

Loading SSD Model and Parameters

The SSD model, specifically SSD300 VGG16, is loaded along with its pre-trained parameters. The model architecture includes the VGG backbone for feature extraction and classification layers for object detection.
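
With a recent torchvision, loading the model looks roughly like this:

```python
from torchvision.models.detection import SSD300_VGG16_Weights, ssd300_vgg16

# Load SSD300 with a VGG16 backbone and COCO-trained weights, then switch to
# eval mode so the model behaves correctly at inference time.
weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights)
model.eval()
```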

00:45:11

Image Loading and Preprocessing

Images are loaded using a load-image helper function and converted to tensors for the deep learning model. Images are normalized by adjusting mean values to match the training data from the COCO dataset, ensuring compatibility with the model.

00:45:56

Inference and Object Detection

During inference, the model predicts object classes by running the processed image through the network. The model decodes the predictions using a dictionary to identify detected objects and their corresponding classes.

00:46:52

Analyzing Object Detection Results

The detection results consist of bounding boxes and confidence scores for identified objects. By examining the output, the model can accurately classify objects with high confidence levels, providing insights into the detected classes and their locations.

00:47:32

Object Detection Process

The process of object detection involves loading a pre-trained model, loading an image, preprocessing the image with normalization transforms to convert it into a tensor, obtaining the label-ID-to-label mapping, passing the image through the model for detection, and finally getting a list of detections with their corresponding values.
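
A hedged end-to-end sketch of that pipeline using torchvision, pulling the last few keypoints together (the notebook's exact code isn't reproduced; file name and threshold are placeholders):

```python
import torch
from PIL import Image
from torchvision.models.detection import SSD300_VGG16_Weights, ssd300_vgg16

weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()
preprocess = weights.transforms()        # resize/normalize to match training
categories = weights.meta["categories"]  # label-ID -> label-name mapping

img = Image.open("photo.jpg").convert("RGB")
batch = [preprocess(img)]                # detection models take a list of tensors

with torch.no_grad():                    # inference only, no gradients
    detections = model(batch)[0]         # dict with boxes, labels, scores

for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score > 0.5:                      # confidence threshold
        print(categories[int(label)], float(score), box.tolist())
```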

00:48:41

Modifying Labels of Bounding Boxes

To change the labels of bounding boxes in object detection, one can filter the detections themselves by adjusting the confidence score threshold. Directly modifying the classes, however, requires fine-tuning the model: training it on the specific objects or faces, assigning labels accordingly, and feeding in the images during training.

00:50:48

Fine-Tuning Pre-Trained Models

When using a pre-trained model for object detection, the process involves loading the model, loading an image, preprocessing the image, and passing it through the model for detection. Fine-tuning a pre-trained model for specific objects or classes requires additional steps like training the model on new data and adjusting labels accordingly.

00:50:48

Introduction to Multimodal Models

Multimodal models integrate information from multiple sources such as text, images, videos, and audio to make predictions. By combining data from different modalities, these models offer a richer understanding of the input, leading to improved performance and more comprehensive predictions.

00:51:26

Benefits of Multimodal Models

Multimodal models provide a holistic view of data by incorporating information from various sources, resulting in enhanced performance and better predictions. They can analyze text and images together to generate more insightful outputs, showcasing a deep understanding of the input data.

00:52:57

Introduction to CLIP Model

CLIP is a multimodal model that combines text and image inputs to make predictions. By training on diverse data sources, CLIP can understand and interpret both textual and visual information, enabling it to generate accurate and contextually relevant outputs.

00:53:00

Image and Text Model

The model discussed combines image and text data for predictions without conventional CNN training. It can process text and image together, providing a fuzzy understanding of the relationship between the two. The model is pre-trained on 400 million text-image pairs.

00:54:00

Accessing T4 GPU

Participants can access T4 GPUs on Google Colab for training models. The competition also provides T4 GPUs through Vertex AI Workbench notebook instances on GCP; each team has a dedicated instance under its team name. Participants can evaluate model accuracy using the notebooks available in the basic lessons.

00:56:30

Vision Language Model

The Vision Language Model (VLM) combines text and image data by tokenizing text into subwords and embedding them with corresponding images in a shared vector space. This approach allows the model to recognize similarities between text and images without explicit labeling, enabling image classification and generation without the kind of extensive labeled training used for the COCO dataset.

00:58:38

Importing Libraries for Hugging Face

To work with Hugging Face, one needs to import the Transformers library, specifically the CLIPProcessor and CLIPModel classes. These are essential for tasks like image classification.

00:59:12

Image Preprocessing and Model Loading

After importing the necessary libraries, the next step involves loading an image pre-processor and model into the system. This process allows for the transformation and analysis of images using pre-trained models.

00:59:27

Image Classification

The model enables image classification by analyzing the content of the image and providing a probability score. For example, when given an image of a cat, the model can accurately classify it as a cat with a high probability percentage.

00:59:49

Classifier Functionality

The classifier function of the model allows for the detection of specific objects or entities within an image. It provides a probability score indicating the likelihood of the presence of the detected object.

01:01:07

Softmax Layer for Probability Calculation

To make sense of the output probabilities, the model passes them through a softmax layer. This layer normalizes the values to ensure they sum up to one, providing a clearer understanding of the relative probabilities.
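
Pulling the CLIP keypoints together, a minimal zero-shot classification sketch with Transformers might look like this (the checkpoint name and labels are assumptions):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")
labels = ["a photo of a cat", "a photo of a human"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax normalizes
# them into probabilities that sum to one.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```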

01:02:03

Break Announcement

A five-minute break was announced for participants to refresh before continuing with the exercise. Instructions were given on accessing the cloud platform during the break.

01:04:22

Accessing GPU on Colab

To access a GPU on Colab, click on 'Connect' and change the runtime to T4 GPU. This should provision you with a GPU if one is available. Keep in mind that Google may disconnect you if you use too many resources, since it's a free service.

01:07:27

Porting Data from GCP Storage to Colab

To port audio and image datasets from GCP Storage to Colab, consider creating a private link to download the data. Alternatively, upload the data to platforms like Hugging Face or Google Drive for better integration with Colab. Ensure you have the necessary permissions and access to the data.

01:09:56

Completing Workshop Exercises

Although there may not be enough time to complete the exercises during the workshop, participants are encouraged to attempt them later. Answers can be posted for reference. It's crucial to ensure access to the workbench on GCP with your team for collaborative work.

01:10:17

Accessing Jupyter Lab

Ensure you have access to your Jupyter Lab instance to run code and assess your work. It is crucial for running the pre-trained models covered in the workshop. Access is provided by the competition, so there is no need to pay or provide credit card information.

01:11:13

Importance of GPU

Access to a GPU is essential for running certain models effectively. While a good PC at home can suffice, using the provided GPU instance is recommended for optimal performance and to ensure fairness for all participants.

01:12:17

Deploying models

Deploying models can be challenging without a GPU. Models discussed include image processing, object detection, and vision language models that combine text and image data to make predictions. The vision language model can even make educated guesses on unseen data.

01:14:00

Building Docker image

To build a Docker image, access the Vertex AI environment, open the notebook, and run the docker build command (of the form docker build -t vlm-app .) in the terminal. This command pulls the necessary packages and environment to create a single container image for deployment.

01:15:36

Docker Build Command

To build all the necessary components for deployment into a single image, run the docker build command with a tag, e.g. docker build -t vlm-app . This ensures that dependencies are packaged together for easy deployment.

01:16:22

Deployment Process

The deployment process involves deploying a solution as a backend, such as an object detection endpoint. Users can interact with this endpoint to receive information like image locations or object details.

01:17:59

Endpoint Functionality

The endpoint combines CLIP classification with object detection to determine the location of objects. Users can query a hello-world test endpoint with an item ID to confirm the service responds with the relevant information.

01:18:04

Instance Running

For terminal tasks, the instance runs quickly and does not require the tab to remain open. However, for long-running tasks like notebooks or training codes, the computer must remain active to prevent interruptions.

01:19:25

Application Definition

The application script defines functions, URLs, and necessary components like models and image preprocessors. It preloads the model for efficient operation and includes functions like image loading.

01:20:28

Function for Object Detection

The speaker discusses a pre-trained object detection function. Scripts are copied from Notebook 2, using the model, weights, and preprocessor to return a dictionary of detections with the location of each object in the image.

01:21:23

Function for Clip Prediction

Another function called clip predict is introduced, which uses labels like cat and human to predict the content of an image. It returns the probability of what is inside the image based on the given labels. This function combines detection and clip prediction, utilizing pre-trained models.

01:22:25

URL Definition for Querying

The speaker explains defining a URL for querying, where passing data in the form of a dictionary with image URL, labels, and threshold allows the model to predict the content of the image. This structure corresponds to the model's input requirements for processing.
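
A minimal sketch of how such an endpoint might be wired up in FastAPI (the route name, field names, and stubbed helpers are hypothetical stand-ins for the functions described above):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    image_url: str          # where to fetch the image from
    labels: list[str]       # candidate labels for CLIP prediction
    threshold: float = 0.5  # minimum detection confidence to keep

# Stubs standing in for the load-image, detection, and clip-predict helpers.
def load_image(url: str):
    ...

def detect_objects(image, threshold: float) -> list:
    return []

def clip_predict(image, labels: list[str]) -> dict:
    return {}

@app.post("/predict")
def predict(req: PredictRequest):
    image = load_image(req.image_url)
    detections = detect_objects(image, threshold=req.threshold)
    probabilities = clip_predict(image, req.labels)
    return {"detections": detections, "probabilities": probabilities}
```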

01:24:03

Summary of Functions

In summary, the speaker emphasizes the need for functions that return specific information, such as the location of objects in an image or the probability of content based on labels. These functions are essential for targeting specific areas of interest or determining the content within an image.

01:24:12

Dockerfile Explanation

The speaker delves into the Dockerfile, explaining that it builds on an existing base container with the necessary packages. They add requirements specific to their model, install them, expose a port for interaction, copy the relevant files, and run the container. This process simplifies launching the model with its required dependencies.
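
A hedged sketch of a Dockerfile matching that description (the base image, file names, and port are assumptions, not the workshop's exact file):

```dockerfile
# Start from an existing container image with Python preinstalled.
FROM python:3.10-slim

WORKDIR /app

# Install the model-specific requirements.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and expose the port used for interaction.
COPY app.py .
EXPOSE 8000

# Launch the FastAPI app with uvicorn when the container starts.
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```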

01:25:42

Introduction to the Requirements File

The requirements file lists the packages the application needs. These include Transformers from Hugging Face, FastAPI, and other components necessary for the application to function properly.
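
A plausible requirements.txt along those lines (only Transformers and FastAPI are confirmed by the talk; the rest are typical companions):

```
transformers
fastapi
uvicorn
torch
torchvision
```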

01:26:16

Running Docker commands

Executing Docker commands involves steps like copying files, installing dependencies, and performing housekeeping tasks. The initial run may take time to download resources, but subsequent runs will utilize cached data for faster execution.

01:27:49

Utilizing GPU for faster processing

Using a GPU accelerates processing speed significantly, enabling tasks like image processing at 30 or 60 frames per second. Without a GPU, processing can be slower but still achievable using the CPU.

01:28:01

Running the Docker container

Launching the Docker container involves specifying ports, exposing the application externally, and running necessary setup processes. Despite encountering warnings during setup, these can often be safely ignored as the model is being configured.

01:28:26

Testing the Docker setup

Verifying the Docker setup involves querying a specific URL using a Python package called 'requests'. By sending an image to the URL, the application responds with a status code and bounding box details for object detection.
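
A sketch of such a test query (the URL, route, and payload shape are assumptions matching the endpoint sketched earlier):

```python
import requests

payload = {
    "image_url": "https://example.com/cat.jpg",  # placeholder image
    "labels": ["cat", "human"],
    "threshold": 0.5,
}

resp = requests.post("http://localhost:8000/predict", json=payload)
print(resp.status_code)  # e.g. 200
print(resp.json())       # bounding boxes and label probabilities
```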

01:29:44

Running the application locally

Running the application locally provides a simpler interface for testing compared to cloud-based instances. While initial startup may take time, local testing allows for direct interaction with the application without the need for complex commands.

01:31:06

Code availability on GitHub

Although the code is accessible on the drive, its availability on GitHub may be pending. Users are encouraged to check the GitHub repository for updates on code availability and access to the project.

01:31:22

GitHub Link Sharing

The speaker mentioned sharing a GitHub link during the workshop and instructed the organizer to distribute it. The code contained in the link was already available in Google Drive for reference.

01:32:18

Request for Guide

A participant requested a guide for the workshop material. The speaker noted that the workshop was only one and a half hours long and said the answers would be uploaded later for reference.

01:32:42

Recommendation for Further Reading

The speaker recommended that participants refer to the CV/VLM material on Google Drive for detailed content on object detection. They noted that the workshop covered only a portion of the topic and suggested exploring the additional context available.

01:33:55

Conclusion of Workshop

The session was concluded due to time constraints. The speaker wished participants good luck and advised them to access the notebooks or computer instances to run the code step by step for better understanding. They also encouraged exploring other materials related to the workshop.

01:35:26

Closing Remarks

The workshop officially ended, thanking participants for their engagement and wishing everyone a safe night. The speaker expressed gratitude to the expert, M, for sharing their knowledge with the Guardians. The session concluded with well wishes for continued learning and hacking endeavors.
