
Trustworthy and Responsible AI: A Comprehensive Overview

Explore the key discussions on trustworthy and responsible AI from the event organized by the Distributed and Pervasive Systems Group at the University of Tartu. Learn about the importance of trust in AI systems and the principles of responsible AI.

Video Summary

The event on trustworthy and responsible AI, organized by the Distributed and Pervasive Systems Group of the Institute of Computer Science at the University of Tartu, featured discussions on the importance of trust in AI systems. Speakers highlighted the need for AI to consistently perform intended functions, adhere to clear rules, and avoid biased or harmful outcomes. The event emphasized the principles of responsible AI, such as fairness, transparency, and privacy, with a focus on societal and ethical implications.

The event was divided into two sections, covering topics like monitoring AI capabilities, exploring ML model robustness, and improving explainability and fairness in AI. The event also addressed the challenges of AI under attack in the context of 6G networks. The speakers discussed the evolution of AI technologies, the role of sensors in monitoring and controlling disruptive technologies like fire, electricity, and trains, and the importance of trustworthy computing in ensuring the reliability and security of AI systems.

Scientists historically used artifacts like dynamometer cars to measure train properties and optimize designs. Today, AI models are built using machine learning pipelines whose components are distributed across applications. Machine learning supports various applications like self-driving cars and online bookshops. Introducing AI sensors and dashboards can abstract the complexity of estimating trustworthiness, with sensors instrumented in each step of the pipeline. Challenges include determining where to place sensors for properties like resilience and fairness.

AI dashboards involve users in tuning models, with human oversight ensuring trustworthiness. Trustworthiness is an ongoing process involving various stakeholders, with AI sensors and dashboards providing tools for monitoring and enhancing AI capabilities. In the near future, autonomous cars equipped with sensors will be prevalent, allowing for interaction through dashboards to monitor their operations. Trustworthiness in AI technologies is crucial, with considerations like fairness, performance, robustness, and resilience varying based on applications.

Sensors play a vital role in capturing trustworthy AI properties, aiding in monitoring AI behavior. Ensuring safety, reliability, and ethical use of AI models in critical domains is essential, leading to a shift towards trustworthy AI. Processes for developing machine learning models need to prioritize trustworthiness from the outset, involving stakeholders in defining objectives and metrics. Experiments analyzing the resiliency and accountability of machine learning models in emergency communication systems highlight the importance of defending against data poisoning attacks, such as label flipping attacks, to maintain model accuracy and performance.

Machine learning models are vulnerable to data poisoning attacks, with random forest being the most robust. Label sanitization is an effective defense strategy. Fall detection models are sensitive to attacks and require caution with untrusted data sources. Accountability and explainability are crucial for heart attack detection models built on the PTB-XL dataset. Post-hoc explainable AI methods provide local explanations for model decisions. Quantitative evaluation methods assess the reliability of explanations, but qualitative evaluation is also necessary for meaningful explanations.

Integrating feedback and considering trade-offs is important in developing machine learning models. The conversation covers the importance of ensuring that machine learning training data is representative of the test data when evaluating the effectiveness of data poisoning, and explains Layer-wise Relevance Propagation (LRP) as an explainable AI method for neural networks. It also discusses the importance of trust in AI systems and the risks associated with overtrust, emphasizing the need for user involvement in the design process and the interdisciplinary approach required to ensure trustworthy AI.

The discussion covers the challenges of defining trust, the impact of social considerations, and the measurement of trust. Various approaches and tools for building trust in AI systems are highlighted, along with the importance of user perception and feedback in creating trustworthy environments. The conversation reconvenes after a break, introducing a research engineer from France who discusses AI and cybersecurity. The engineer presents an AI platform for security analysis, focusing on anomaly detection in IoT networks. The platform uses explainable AI techniques to improve model resilience and performance. A deep learning model combining autoencoders and CNN achieves high accuracy in detecting cyber attacks. The engineer emphasizes the importance of balancing model explainability and resilience against adversarial attacks.

Federated Learning and Algorithmic Fairness: A Comprehensive Overview

The discussion highlighted the challenges of algorithmic fairness in federated learning, emphasizing the impact of data diversity on model accuracy and fairness. Researchers use simulation to create non-IID datasets, with label skew sampling being the predominant method. However, this approach overlooks other forms of data skew, leading to lower accuracy and fairness in heterogeneous data. The need for more diverse testing conditions and evaluation of algorithms in federated learning was emphasized to address these issues.

The speaker discusses the complexity of quantifying fairness in federated learning settings and then shifts to the network perspective in 6G networks, highlighting potential attacks on AI and solutions to mitigate them. The team at University College Dublin works on AI security, privacy, blockchain, and network softwarization. They collaborate on EU projects like Confidential 6G and Robust 6G, addressing issues such as data poisoning and evasion attacks. Metrics like impact and complexity are used to measure the scale of attacks, with a focus on detecting poisoning attacks using feature attribution clustering. The conversation covered the use of SHAP to identify feature contributions in models, the application of SHAP in federated learning to detect poisoned models, the use of HDBSCAN to differentiate between benign and poisoned models, and the development of a 'scaffolding attack' to deceive XAI methods.

The presentation also highlighted the use of random perturbation and different datasets to simulate attacks, the use of the Hellinger distance to measure statistical distance, and the comparison of control and adversarial model distances. The discussion emphasized the importance of AI security in 6G networks and the need for ongoing research and innovative solutions.


Keypoints

00:00:05

Event Introduction

The event on trustworthy and responsible AI is organized by the Distributed and Pervasive Systems Group of the Institute of Computer Science, University of Tartu. Abd Rashid and Rasin are the co-moderators for the evening.


00:01:01

Trustworthiness in AI

Trustworthiness in AI is compared to having a reliable friend, ensuring AI systems consistently perform intended functions, adhere to clear rules, and avoid biased or harmful decisions. Users must have confidence in AI systems to produce fair outcomes.


00:01:46

Responsible AI

Responsible AI is an extension of responsible computing, emphasizing ethical and accountable development, deployment, and application of AI technologies. It requires adherence to principles like fairness, transparency, and privacy, with a focus on societal and ethical implications.


00:02:53

Event Structure

The event is divided into two sections. The first session covers talks on engaging and monitoring AI capabilities, exploring ML model robustness, and understanding trust in AI from the users' perspective. The second session focuses on improving explainability, fairness, diversity, and AI security in the context of 6G networks.


00:04:00

Speaker Introduction

Associate Professor Huber Flores, a docent at the University of Helsinki, will be presenting on AI sensors and dashboards for gauging and monitoring AI capabilities. His research interests span mobile and pervasive computing, distributed systems, and mobile cloud computing.


00:04:09

AI Advancements

Artificial intelligence, machine learning, and deep learning are popular techniques, with advancements allowing the construction of AI models like ChatGPT. Other techniques, such as genetic algorithms, also contribute to AI development by enabling learning with minimal data and faster training.


00:05:48

Advancements in AI Tools

Recent advancements in AI tools such as TensorFlow and federated learning frameworks like FEDn and Flower have revolutionized the field. With just a few lines of code, developers can now easily create neural networks and train AI models in a distributed manner. This has been highlighted by Geoffrey Hinton, a prominent figure in AI, who noted the significant improvement in performance due to enhanced processing capabilities.


00:06:28

Concerns about AI Trustworthiness

As AI applications started exhibiting human-like interactions due to massive computing power, concerns arose regarding their trustworthiness. An open letter was issued about the inability to verify AI software using traditional methods, leading to discussions on the need for trustworthy computing. This concept, dating back to Bill Gates' memo, emphasizes the importance of AI being secure, reliable, and private, prompting regulatory entities to define properties for AI trustworthiness.


00:09:14

Regulations for AI Trustworthiness

Regulatory entities are drafting regulations to define properties that AI must possess to ensure trustworthiness. These regulations aim to maintain control over technology and promote national and international sovereignty in AI usage. The strategic importance of AI has led to the establishment of guidelines to safeguard users worldwide, regardless of their location.


00:09:16

Historical Perspective on Disruptive Technologies

Throughout history, humans have encountered disruptive technologies, but have managed to control them through sensor-like mechanisms for data collection and measurement. Sensors play a crucial role in quantifying and characterizing properties, objects, and aspects, enabling humans to measure and tune emerging technologies over time.


00:10:04

Examples of Controlled Disruptive Technologies

Historical examples like fire and electricity illustrate how humans learned to control disruptive technologies through monitoring, tuning, and ultimately mastering them. Fire, once harnessed and understood, became a controllable tool for various purposes. Similarly, the harnessing of electricity from natural events, as demonstrated by Benjamin Franklin's famous experiment, showcases how humans have learned to control and utilize once disruptive forces for their benefit.


00:11:04

Evolution of Trains

Trains have been a disruptive technology, allowing people to move faster than walking. The first train prototype was initially designed as a portable engine powered by steam. Gradually, train technology evolved over time, incorporating sensors to measure properties like pull force and material usage, leading to optimized train designs.


00:13:30

AI Model Construction

AI models are built using a standard machine learning pipeline that involves data ingestion, preparation, algorithm selection, training, deployment, and user interaction. This iterative process continuously improves the model as it learns from new data contributions.
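
To make these stages concrete, here is a minimal sketch of such a pipeline in scikit-learn, with synthetic data standing in for real ingested data (the steps and model choice are illustrative, not the speaker's actual system):

```python
# Minimal sketch of a standard ML pipeline (hypothetical data and steps):
# ingestion, preparation, algorithm selection, training, and use.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)  # data ingestion
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("prepare", StandardScaler()),                       # data preparation
    ("model", RandomForestClassifier(random_state=0)),   # algorithm selection
])
pipeline.fit(X_train, y_train)                           # training
print("test accuracy:", pipeline.score(X_test, y_test))  # stand-in for deployment/interaction
```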


00:15:09

Client-Server Architecture

The client-server architecture consists of applications interacting with a database to store information. Components like online bookstores have functionalities such as book searching, retrieval, and payment. These components are distributed across servers, with specialized individuals constructing and evolving the architecture for improved user recommendations and guidance.


00:16:21

Machine Learning Architecture

The machine learning component in the system takes user interactions stored in the database to train machine learning models. These models provide recommendations to users, contributing to a larger evolving architecture. With distributed machine learning, the system has shifted from centralized to Federated models, where a global model receives contributions from clients to improve overall performance.


00:17:30

Applications of Machine Learning

Machine learning supports various applications like self-driving cars, drone delivery, and chatbots. Existing applications such as online book shops and Netflix utilize machine learning for personalized recommendations based on user preferences and characteristics. The use of AI in applications like self-driving cars enhances user trust and performance evaluation.


00:19:02

AI Sensors and Dashboards

Introducing AI sensors and dashboards helps abstract the complexity of calculating trustworthiness in applications. Sensors for fairness, explainability, resilience, and robustness provide valuable insights into the decision-making process of AI systems. Instrumenting AI sensors in the pipelines of AI models enables continuous monitoring and information provision to users.


00:20:10

Challenges in Sensor Instrumentation

A key challenge lies in determining where to instrument AI sensors within the pipelines of AI models. Deciding on the placement of sensors for measuring properties like resilience or fairness involves considerations of implementation type and metrics used. The challenge is to identify the optimal step in the pipeline for sensor deployment to effectively monitor and evaluate AI model performance.
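
As a conceptual sketch of what such a sensor could look like (an assumption-laden illustration, not the speaker's implementation), a fairness sensor placed after the training step might report a demographic parity gap that a dashboard then displays; the function name and metric choice here are hypothetical:

```python
# Conceptual sketch of an "AI sensor" instrumented at one pipeline step:
# a fairness sensor reporting the demographic parity gap after training.
import numpy as np

def fairness_sensor(y_pred, group):
    """Hypothetical sensor: demographic parity difference between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Simulated predictions and a binary protected attribute.
rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=500)
group = rng.integers(0, 2, size=500)

reading = fairness_sensor(y_pred, group)
print(f"fairness sensor reading (demographic parity gap): {reading:.3f}")
# A dashboard would poll such sensors at each pipeline step and surface
# the readings to users for monitoring and tuning.
```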


00:21:23

Types of Sensors for Different Applications

Different types of sensors are used depending on the application. For example, in a movie-watching scenario, fairness might be important, while in autonomous driving, performance metrics like parking and driving in different environments are crucial.


00:22:00

AI Dashboards and Human Involvement in Model Tuning

AI dashboards not only display model capabilities but also involve users in tuning the models. Human oversight, where users can provide feedback, is essential. However, challenges like complex properties and potential biases require AI sensors to negotiate trust scores for applications.


00:23:28

Human Oversight and Trustworthiness in AI

Human oversight in AI involves appointing specialized groups, not individuals, to implement feedback for model tuning. This approach aims to prevent biases and ensure specific knowledge is applied. Trustworthiness in AI is an ongoing process involving computer scientists, data scientists, developers, and various stakeholders.


00:24:00

Trustworthiness as an Ongoing Process

Trustworthiness in AI is a continuous process that requires collaboration among different professionals to define and ensure trustworthiness. AI sensors and dashboards simplify understanding and provide tools for monitoring and evaluating AI capabilities.


00:25:10

Future Vision of AI Applications

The future vision includes autonomous cars equipped with advanced models and sensors that can be monitored through dashboards. This vision is being realized through ongoing technological advancements and prototype testing, such as the current SPATIAL platform demonstration.


00:26:29

Question on Trustworthiness in Different Applications

Trustworthiness varies across applications, with each having unique definitions and requirements. Understanding and defining trustworthiness is crucial for ensuring the reliability and effectiveness of AI technologies.


00:27:02

Importance of Sensor Properties in Different Applications

Sensor properties vary based on the type of application, with some sensors being more crucial for specific applications. For example, in recommender systems the focus is on fairness metrics, while in autonomous technologies, performance, robustness, and resilience are key factors. Tailoring sensor properties to the application enhances functionality and effectiveness.


00:27:57

Incorporating Trustworthiness Mechanisms in Applications

Instrumenting applications with trustworthiness mechanisms is essential for ensuring reliability and ethical use of AI. While certain sensor properties may seem secondary for some applications, considerations like fairness become crucial when enhancing trustworthiness in AI implementations.


00:28:01

Relevance of Sensors in Data Gathering for Trustworthy AI

Sensors play a vital role in data collection for trustworthy AI, enabling the capture of specific aspects related to trustworthy AI properties. Utilizing sensors and dashboards facilitates monitoring and provides insights into AI behavior, emphasizing the significance of sensors in the current technological landscape.


00:29:23

Research on Integrating Machine Learning Models in Critical Infrastructures

Michelle Burer, a researcher at Fraunhofer FOKUS in Berlin, conducted experiments on integrating machine learning models into emergency communication systems. The research focused on analyzing the resiliency and accountability of embedded models in critical infrastructures, highlighting the importance of ensuring safety, reliability, and ethical use of AI in such domains.


00:30:02

Motivation for Ensuring Trustworthy AI in Critical Domains

Ensuring trustworthy AI in critical domains is essential due to the increasing reliance on machine learning models. Developers are shifting towards trustworthy AI to prioritize features like fairness, explainability, transparency, resiliency, and accountability. This shift emphasizes the need to consider trustworthiness from the initial stages of machine learning model development to ensure safety and ethical use of AI.


00:31:02

High-Level Process for Developing Trustworthy AI

A high-level process for developing trustworthy AI involves defining trustworthiness objectives with key stakeholders, selecting or defining metrics to measure these objectives, optimizing machine learning models towards these metrics, and involving relevant stakeholders in the evaluation process. This iterative approach ensures that trustworthiness considerations are integrated into the machine learning development process from the outset.


00:32:30

Introduction to Machine Learning Models for Emergency Communication System

The speaker introduces the topic of machine learning models in the context of an emergency communication system for critical infrastructure. They discuss the architecture of the system, which includes IoT sensors at the caller site, machine learning models for detecting emergencies, and initiating voice over IP emergency calls.


00:33:52

Experiment on Fall Detection Model Resiliency

The speaker presents an experiment on analyzing the resiliency of a fall detection machine learning model against data poisoning attacks. They utilized the UniMiB SHAR dataset, a multivariate time-series dataset with measurements of activities of daily living and falls. Five different machine learning models were tested for identifying falls and their resiliency against data poisoning attacks.


00:35:09

Understanding Data Poisoning Attacks

The speaker explains data poisoning attacks as data modification attacks during the training stage of machine learning models. Attackers manipulate data to decrease model performance or cause misclassification. They specifically mention label flipping attacks where labels are changed, and discuss defense strategies like label sanitization to mitigate the effects of such attacks.
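
A minimal sketch of both sides of this scenario, assuming a random label-flipping attack on binary labels and a simple k-nearest-neighbor form of label sanitization (the talk does not specify the exact sanitization variant used):

```python
# Sketch of a random label-flipping attack and a kNN-based label
# sanitization defense (one common variant, used here as an assumption).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def flip_labels(y, rate, rng):
    y = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y[idx] = 1 - y[idx]            # flip a fraction of the binary labels
    return y

def sanitize_labels(X, y, k=5):
    """Relabel each point by the majority vote of its k nearest neighbors."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    return knn.predict(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] > 0).astype(int)              # clean labels
y_poisoned = flip_labels(y, rate=0.2, rng=rng)
y_cleaned = sanitize_labels(X, y_poisoned)
print("poisoned label error:", (y_poisoned != y).mean())
print("after sanitization:  ", (y_cleaned != y).mean())
```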


00:37:14

Results of Data Poisoning Attack and Defense Strategies

The speaker shows the impact of data poisoning attacks on data distribution and the effectiveness of label sanitization defense. With increased poisoning rates, the original data distribution is heavily altered. The defense strategy seems to work in maintaining the original distribution up to a certain poisoning rate. Model accuracy for the machine learning models is also presented, showing the impact of data poisoning on model performance.


00:37:59

Data Poisoning Attacks on Machine Learning Models

All five models analyzed in the study were found to be very sensitive to data poisoning attacks. Starting from a rate of 10%, the models became unusable for the application domain. Only the Random Forest model showed more robustness, withstanding a poisoning rate of up to 30%. The label sanitization defense was highly effective, reversing the effects of data poisoning attacks up to a 30% rate and maintaining the original performance of the classification task.
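
A hedged sketch of how such a resiliency sweep can be run, using synthetic data and three stand-in models (the study tested five models on the fall-detection dataset):

```python
# Sketch of a resiliency sweep: train several models on training sets
# poisoned at increasing rates and track test accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

models = {"random_forest": RandomForestClassifier(random_state=0),
          "logreg": LogisticRegression(max_iter=1000),
          "svm": SVC()}

for rate in [0.0, 0.1, 0.2, 0.3]:
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(rate * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # label-flipping poisoning
    accs = {name: m.fit(X_tr, y_poisoned).score(X_te, y_te)
            for name, m in models.items()}
    print(rate, {k: round(v, 3) for k, v in accs.items()})
```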


00:39:14

Takeaways for Fall Detection Scenario

Fall detection models are sensitive to data poisoning attacks, especially when using data from untrusted sources. Label sanitization proves to be a promising defense strategy against random label swapping attacks. Simple experiments can be conducted to analyze the resiliency characteristics of machine learning models, and these should be incorporated into the model selection process.


00:40:02

Heart Attack Detection Using Machine Learning Models

In the context of developing machine learning models for heart attack detection, the study utilized the PTB-XL dataset containing 12-lead ECG waveform data from approximately 19,000 patients. The dataset was annotated by two cardiologists to indicate normal behavior or signs of a heart attack. Post-hoc explainable AI methods were employed to generate local explanations for indications of a heart attack in the ECG data.


00:41:00

Model Development for Heart Attack Detection

A convolutional neural network model was developed to analyze ECG data for indications of a heart attack. The model performed one-dimensional convolution on the input ECGs to output a prediction score indicating the probability of a heart attack. The model's performance characteristics were deemed satisfactory for the intended use case.


00:41:56

Explainability in Heart Attack Detection Models

To achieve explainability in the heart attack detection model, the study employed post-hoc explainable AI methods such as LRP and SHAP. Three concrete explanation approaches were developed, including a base explanation method that identified relevant input features indicating a heart attack. Quantitative evaluation methods were used to compare the effectiveness of the different explanation approaches.


00:43:32

Feature Data Classification Performance

The model's classification performance is expected to decrease more when the features identified as relevant are removed than when random features are removed. To validate this assumption, the team also manipulated random data features as a baseline: if the explanations carried no signal, the model's performance would be similar whether relevant or random features were perturbed.


00:43:54

Stability Analysis of XAI Methods

In the second analysis, the team focused on the stability of a single XAI method. They aimed to determine whether the XAI method provided consistent explanations for similar input data. This analysis required defining similarity measures in both the input space and the explanation space.


00:44:24

Consistency Analysis of XAI Methods

The team conducted a consistency analysis to compare two XAI methods. By generating explanations for ECG data using the different methods and measuring the differences between them, they aimed to identify any discrepancies in the results. The analysis revealed differences in identifying the most relevant points between the two methods.


00:45:36

Stability Analysis Input Space

For the stability analysis in the input space, the team emphasized the importance of defining similarity measures specific to the application domain. Analyzing ECG data required finding similar ECGs, which posed challenges due to biological variability. To address this, the team separated one patient's ECG into overlapping windows to establish similarity in the input space.
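
A small sketch of that windowing construction on a synthetic signal (window and stride lengths are illustrative, not the study's exact values):

```python
# Sketch: derive "similar" inputs for the stability analysis by slicing one
# patient's ECG into overlapping windows.
import numpy as np

def overlapping_windows(signal, window, stride):
    return np.stack([signal[s:s + window]
                     for s in range(0, len(signal) - window + 1, stride)])

ecg = np.sin(np.linspace(0, 60, 5000))   # stand-in for one ECG channel
windows = overlapping_windows(ecg, window=1000, stride=250)
print(windows.shape)                     # (17, 1000): neighbors overlap by 75%
```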


00:46:36

Similarity Measure in Explanation Space

In the analysis of the explanation space, the team utilized the Frobenius norm, akin to the Euclidean distance in a high-dimensional space. By comparing the average explanation distances for the different methods, they found that LRP appeared more stable than SHAP, indicating lower distances and greater stability in explanations.
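
A sketch of the distance computation, assuming each explanation is a matrix with one relevance value per lead and time step (the shapes and noise level are illustrative):

```python
# Sketch: Frobenius-norm distance between explanation matrices for two
# similar inputs; a lower average distance across window pairs indicates
# a more stable XAI method.
import numpy as np

def explanation_distance(expl_a, expl_b):
    return np.linalg.norm(expl_a - expl_b, ord="fro")

rng = np.random.default_rng(0)
expl_a = rng.normal(size=(12, 1000))                       # stand-in relevances, 12 leads
expl_b = expl_a + rng.normal(scale=0.1, size=(12, 1000))   # explanation of a similar window
print(f"Frobenius distance: {explanation_distance(expl_a, expl_b):.2f}")
```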


00:47:11

Comparison of XAI Method Metrics

The team compared the lead-importance metrics of the LRP and SHAP methods to identify the most relevant ECG leads. While some agreement was observed in identifying important leads, discrepancies were prevalent across different cases. LRP and SHAP focused on different leads, showcasing the need for further analysis and understanding of the results.


00:48:10

Quantitative and Qualitative Evaluation

The team emphasized the importance of incorporating both quantitative and qualitative evaluations to assess the effectiveness of XAI explanations comprehensively. While quantitative evaluations provide insights into reliability and meaningfulness, qualitative feedback from users is crucial for improving explanations and defining trade-offs between different analysis methods.


00:49:12

Summary of Presentation

In summary, the presentation highlighted the vulnerability of machine learning models to data poisoning attacks from untrusted sources. The quantitative evaluation of XAI methods enables comparison of reliability and meaningfulness, but incorporating qualitative evaluations is essential for enhancing explanations and understanding user feedback for improvement.


00:49:34

Importance of Evaluation in Machine Learning

Evaluation is crucial in machine learning to generate good explanations in the application domain. It is recommended to integrate cross-aspect considerations from the beginning of model development to ensure comprehensive evaluation.


00:50:40

Methodology for Evaluating Faithfulness in Machine Learning

To evaluate the faithfulness of the different methods, each XAI method was applied to test samples. The 10% most relevant points were identified, and their activation in the neural network was turned off by setting their values to zero, effectively dropping out specific features for evaluation.
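
A minimal sketch of this deletion-style test, with a toy linear model standing in for the neural network so that the relevances are exact:

```python
# Sketch of the described faithfulness test: zero out the 10% most relevant
# input points according to an explanation and compare predictions before
# and after (model and explanation here are stand-ins).
import numpy as np

def deletion_test(predict, x, relevance, frac=0.10):
    k = int(frac * x.size)
    top = np.argsort(relevance.ravel())[-k:]   # indices of the most relevant points
    x_masked = x.copy().ravel()
    x_masked[top] = 0.0                        # "turn off" those features
    return predict(x), predict(x_masked.reshape(x.shape))

# Toy stand-ins: a linear "model" and its exact per-feature contributions.
w = np.linspace(-1, 1, 100)
predict = lambda x: float(w @ x.ravel())
x = np.ones(100)
relevance = w * x                              # for a linear model: contribution = w_i * x_i
before, after = deletion_test(predict, x, relevance)
print(f"score before: {before:.3f}, after deleting top 10%: {after:.3f}")
```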


00:51:42

Data Distribution and Model Performance

Ensuring that training data has a similar distribution to test data is essential for model performance. If the training data is not representative, the model's performance may decline when faced with test data from a different distribution.


00:53:27

Role of LRP in Explainable AI

Layer-wise relevance propagation (LRP) is an explainable AI method for neural networks. It calculates how relevance from output neurons propagates back to input neurons, estimating the relevance of input features. LRP keeps the total relevance per layer consistent, providing a direct measure of the importance of input features.
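
A sketch of the LRP epsilon-rule for a single dense layer, showing the per-layer conservation property (this is one standard LRP rule; the talk does not say which variant was used):

```python
# Sketch of the LRP epsilon-rule for one dense layer: relevance from the
# output neurons is redistributed to the inputs in proportion to each
# input's contribution, so total relevance per layer is (approximately)
# conserved when biases are zero.
import numpy as np

def lrp_epsilon(x, W, b, relevance_out, eps=1e-6):
    z = W @ x + b                        # forward pre-activations
    denom = z + eps * np.sign(z)         # stabilized denominator
    s = relevance_out / denom
    return x * (W.T @ s)                 # relevance attributed to inputs

rng = np.random.default_rng(0)
x = rng.normal(size=5)
W = rng.normal(size=(3, 5))
b = np.zeros(3)
r_out = np.maximum(W @ x + b, 0)         # e.g. use output activations as relevance
r_in = lrp_epsilon(x, W, b, r_out)
print("relevance out:", r_out.sum(), "relevance in:", r_in.sum())  # nearly equal
```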


00:55:30

Presentation on Users' Trust in AI

Associate Professor Sonia S presented a human-computer interaction perspective on users' trust in AI. She emphasized the importance of understanding users' perspectives on trust in technology and user experience evaluation.


00:56:04

Introduction to Trustworthy AI

The speaker begins by outlining the challenge and motivations behind addressing trustworthy AI. They emphasize the importance of understanding how users perceive trustworthy AI. The discussion will cover various perspectives, directions, and research insights accumulated over more than 10 years.


00:57:00

EU Ethical and Trustworthy AI Principles

The EU has set ethical and trustworthy AI principles that focus on obeying laws, respecting regulations, and acting ethically. Providers are required to ensure AI models are robust technically and socially. While technical aspects have been extensively researched, the social perspective, especially user trust, remains a challenge that needs deeper exploration.


00:58:27

Human-Centered Perspective on AI

The discussion highlights the importance of a human-centered perspective in AI development. It stresses the need to understand user interactions with technology to ensure that AI models are perceived as trustworthy. The European Union's emphasis on this perspective indicates a shift towards prioritizing user-centric design in AI technologies.


01:00:17

Impact of AI on Society

The evolving nature of AI has led to its widespread use, influencing societal norms and behaviors. The dynamic nature of AI-generated outputs can significantly impact user decisions, creating concerns about the potential risks posed by AI technologies. Providers must assess AI systems' performance to ensure they align with the intended purposes and mitigate unintended consequences.


01:02:31

Security Risks and Regulations for AI Providers

The discussion highlights the security risks associated with AI usage, emphasizing the need for more regulations to govern its implementation. AI providers face financial penalties for non-compliance with ethical standards, potentially leading to product launch restrictions in the European Union single market.


01:03:30

Undesired Biases and Misuse of AI Systems

AI systems have demonstrated undesired biases and potential for misuse, such as influencing voters and creating misinformation. Regulations and principles aim to minimize these risks and ensure AI models behave as intended, impacting ethics, individuals, and well-being.


01:05:33

Ensuring Trustworthiness of AI Models

To avoid risky situations and ensure user trust, new mechanisms beyond current safety functions are necessary. Developing mechanisms that invoke user safety, understanding, and trust in AI models is crucial to assuring users of their reliability and intended behavior.


01:07:14

Risks and Overtrust in AI Usage

Risks in AI usage include data breaches, privacy concerns, misuse, distrust, and cybersecurity issues. Overtrust, a significant problem, stems from past reliance on AI models without questioning potential risks. Users' overreliance on AI for decision-making can lead to shirking responsibility, posing challenges for AI providers.


01:09:13

Mitigating Risks in AI Algorithms

To mitigate risks in AI algorithms, it is crucial to balance user trust. Users should not blindly trust algorithms as excessive trust can impact their performance. Calibration of trust levels is necessary to ensure that users who trust too little adopt the algorithms for decision-making, while those who trust too much are aware of potential risks.


01:10:11

Ensuring Quality of AI Algorithm Use

Ensuring the quality of AI algorithm use involves addressing three main pillars: technical features, social dimensions, and user characteristics. This requires an interdisciplinary approach to understand AI systems not just as technical entities but as socio-technical systems that interact within social structures, influencing performance.


01:11:03

Integrating User Perspectives in AI Design

Integrating user perspectives in AI design is crucial for building trustworthy systems. Lack of user participation in the design process leads to overlooking user impacts on system development. Engaging users throughout the design phases, from concept to development, is essential to ensure user trust in AI algorithms.


01:12:46

Educational Changes for AI Development

Educational changes are needed to equip engineers and computer scientists with human-computer interaction skills. The lack of interdisciplinary communication between social scientists and technical experts hinders the adoption of a holistic approach to AI development. Focusing on trustworthy AI approaches and creating safe spaces for user feedback are essential for ethical AI principles.


01:14:22

Enhancing Trust in AI Systems

Enhancing trust in AI systems requires a focus on multidisciplinary theoretical lenses on trust. Establishing a common definition of trust is crucial for comparing results. Additionally, implementing user experience evaluation methods and lenses can provide insights into user perspectives and improve the design of AI systems.


01:15:47

Importance of User Trust in AI Design

The speaker emphasizes the need to include user feedback in the design process to build trustworthy AI systems. They highlight the importance of conducting more studies on user trust to meet users' needs and understand the context in which AI is used.


01:16:20

Definition and Importance of Trust in AI Research

Trust is defined as the degree to which a user or stakeholder has confidence that a product or system will behave as intended. The speaker notes a bias in AI research studies, with a focus on Robotics and e-commerce in the USA and Germany. They stress the importance of defining trust and understanding its effects across applications for AI development.


01:18:00

Addressing Social Considerations of Trust in AI

To address social considerations of trust in AI, a human-computer trust perspective is essential. This involves balancing computer science with psychological and social aspects, incorporating cognitive science and sociology in an interdisciplinary approach. The speaker highlights the need to address functional, UX, machine-centric, human-centric, and value-centric aspects in measuring trust.


01:19:20

Measuring Trust in AI

Measuring trust in AI involves first establishing a common definition of trust. Various toolkits and metrics, such as the European Union's ALTAI (Assessment List for Trustworthy AI) guidelines, can help ensure AI trustworthiness. The speaker mentions the importance of addressing subjective perceptions of trust, including users' perceptions of risk, system competence, and benevolence. Physiological metrics like EEG can also be used to assess trust.


01:21:30

Human-Centric Approach in AI Research

The speaker's research focuses on a human-centric approach to AI, emphasizing the importance of human-centered aspects in building trustworthy AI systems. They mention various frameworks and methods like Shneiderman's trustworthy assessment process and the AI Human-Centered Trust Framework. The speaker concludes by highlighting the time and effort required to build and maintain trust in AI systems.


01:22:28

Introduction of Manh-Dung Nguyen

Manh-Dung Nguyen, a research engineer at Montimage in France, specializes in explainable AI and automated vulnerability detection, focusing on greybox fuzzing. He is associated with Montimage, a company founded in 2004 in Paris, France, known for its contributions to cybersecurity projects encompassing monitoring for 5G, IoT, and 6G networks. Montimage offers open-source tools for cybersecurity purposes, cyber threat intelligence, penetration testing, and Red Team services.


01:24:51

Overview of Montimage's Contributions

Montimage, established in 2004 in Paris, France, has been actively involved in cybersecurity and defense projects, monitoring for 5G, IoT, and 6G networks. They have developed open-source tools for cybersecurity and offer cyber threat intelligence, penetration testing, and Red Team services. Montimage's focus includes achieving high accuracy in models while considering trustworthiness characteristics like fairness and transparency.


01:25:26

AI-Based Security Applications by Montimage

Montimage has developed three AI-based security applications for intrusion detection and response. These applications involve traffic classification to identify normal user activities, detecting common cyber attacks in various environments, and root cause analysis to determine an incident's root cause and propose mitigation actions.


01:26:03

Challenges in Encryption and Data Analysis

Encryption has significantly impacted the cybersecurity landscape, with 95% of internet traffic being encrypted as of 2022. This encryption protects user privacy but also increases the complexity of security tools for network traffic analysis. The rise in data from IoT and mobile devices necessitates advanced machine learning techniques to detect hidden anomalies and process large amounts of data efficiently.


01:27:40

Significance of Explainable AI in Cybersecurity

Explainable AI is crucial in cybersecurity as advanced machine learning models are often considered black boxes. Providing insights into how AI models make decisions allows users to understand and manage AI outcomes effectively. Explainable AI methods and techniques offer transparency and help users comprehend the reasoning behind AI predictions.


01:28:46

Explanation of Adversarial Attacks on AI Models

Adversarial attacks on AI models involve manipulating input data to deceive the model, leading to incorrect predictions or classifications. Attackers can add noise to images to misclassify them, with examples like misclassifying a panda as a gibbon with high confidence. These attacks can be categorized into four main types, including transfer attacks and attacks that rely on information about the model itself. Adversarial attacks can be evaluated in two settings: untargeted, where the attacker can flip labels randomly, and targeted, where labels are flipped to specific targets.
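
As a worked illustration of the gradient-sign family behind the panda-to-gibbon example (not necessarily the attack discussed in the talk), here is an FGSM-style perturbation of a toy logistic model; the weights and input are synthetic:

```python
# Sketch of a gradient-based evasion (FGSM-style) attack on a toy logistic
# model: add a small perturbation in the direction of the loss-gradient sign
# so the prediction flips while the input barely changes.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)                   # stand-in trained weights
x = -w / np.linalg.norm(w) * 0.05          # an input classified as class 0
prob = lambda v: 1 / (1 + np.exp(-(w @ v)))

# For logistic loss with true label 0, the input-gradient direction is +w.
eps = 0.02
x_adv = x + eps * np.sign(w)               # FGSM: epsilon times gradient sign
print(f"p(class 1) before: {prob(x):.3f}, after: {prob(x_adv):.3f}")
```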


01:30:44

Impact of Adversarial Attacks on Model Performance

The relationship between adversarial attacks and model performance is crucial. While adversarial attacks can provide insights into model vulnerabilities, they can also degrade model accuracy and produce misleading explanations. Models vulnerable to attacks can have their accuracy reduced by manipulating inputs. Balancing explanation resilience and model performance is challenging, requiring careful consideration of trade-offs. Quantifying the explainability and resilience of models can help users choose the best approach for specific use cases.


01:32:12

Introduction to the Montimage AI Platform

The Montimage AI Platform, named MAIP, integrates various AI services into an open-source tool with an intuitive user interface. It leverages XAI methods to provide explanations for models and allows users to evaluate accountability metrics. The platform stands out by enabling users to perform adversarial attacks and assess model robustness. The architecture includes server-side components for data collection and analysis, written in Node.js and Python, and a client-side interface built with React for user interaction.


01:34:41

Use Case: Anomaly Detection for IoT and Mobile Devices

Anomaly detection for IoT and mobile devices is crucial due to vulnerabilities to cyber attacks like botnets, ransomware, and DDoS attacks. Combining techniques like stacked autoencoders and convolutional neural networks can enhance anomaly detection capabilities. By processing normal and malicious traffic separately, models can effectively detect anomalies. The output of these models can be concatenated into a single vector for comprehensive anomaly detection.
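
One possible realization of that combination, sketched in PyTorch with illustrative layer sizes (the actual architecture is not described in this much detail):

```python
# Sketch of the described design: a stacked autoencoder compresses traffic
# features, a 1-D CNN extracts local patterns, and their outputs are
# concatenated into a single vector for the final anomaly score.
import torch
import torch.nn as nn

class AnomalyDetector(nn.Module):
    def __init__(self, n_features=64):
        super().__init__()
        self.encoder = nn.Sequential(                  # encoder half of a stacked autoencoder
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU())
        self.cnn = nn.Sequential(                      # 1-D convolution over the feature vector
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16), nn.Flatten())    # -> 8 * 16 = 128 values
        self.head = nn.Linear(16 + 128, 1)             # concatenated vector -> anomaly logit

    def forward(self, x):
        z_ae = self.encoder(x)
        z_cnn = self.cnn(x.unsqueeze(1))               # add a channel dimension
        return self.head(torch.cat([z_ae, z_cnn], dim=1))

x = torch.randn(4, 64)                                 # batch of 4 flow-feature vectors
print(AnomalyDetector()(x).shape)                      # torch.Size([4, 1])
```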


01:35:42

CNN Performance Evaluation

The speaker discussed the performance evaluation of their deep learning model for anomaly detection using CNNs. They achieved 99% accuracy for botnet detection, 97% for infiltration detection, and successfully classified network traffic and user activity data as benign with their model.


01:37:02

Key Features for Anomaly Detection

The presentation highlighted the key features used in anomaly detection, with flow duration being the most important feature in three cases. Additionally, the number of packets with specific flags like reset or finish played a crucial role in detecting botnet activities attempting to avoid detection.


01:38:09

Resilience Against Blackbox Attacks

The speaker explained how they assessed resiliency against black-box attacks, including poisoning attacks at varying rates. Despite a decrease in accuracy with increased poisoning rates, their model maintained good performance, achieving 95% accuracy for infiltration detection even under a 40% poisoning rate.


01:40:12

Detecting Adversarial Attacks with XAI

The discussion focused on using XAI to detect adversarial attacks by analyzing changes in the top 20 most important features before and after an attack. The flow duration feature lost its significance post-attack, indicating potential changes in the training dataset or model and suggesting XAI as a way to detect such alterations.


01:41:10

Montimage AI Platform Development

The speaker presented the development of the Montimage AI platform for network traffic analysis and classification. The platform allows users to build, compare, and explain models with different configurations, apply XAI methods for explanations, perform adversarial attacks, and evaluate metrics. The tool is integrated into the SPATIAL framework and is being tested with various stakeholders.


01:42:26

Future Work and Conclusion

The presentation concluded with future work plans, including injecting more complex evasion attacks, applying defense strategies against evasion or data probing attacks, and continuing to evaluate the tool using real datasets. The speaker expressed gratitude and readiness for questions, highlighting the platform's potential for addressing social risks and utilizing technology for social benefits.


01:43:06

Introduction to Federated Learning

David Solans, a research scientist at Telefonica Research, introduces the topic of fairness and diversity in the context of Federated Learning. He explains that Federated Learning involves training AI models on data distributed among various devices, such as mobile phones, wearables, computers, servers, and sensors.


01:44:33

Comparison of Classic Approach and Federated Learning

In the classic approach to training AI models, data from different users with various devices is uploaded to a global server for training a global model. This approach offers high accuracy but raises privacy concerns as operators have access to potentially sensitive data. On the other hand, Federated Learning involves downloading a model from the server to individual client devices, training local models, and aggregating them to create a global model. This approach prioritizes privacy preservation and scalability.
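
A minimal FedAvg-style sketch of that loop, with synthetic least-squares clients (the local objective, learning rate, and round count are illustrative):

```python
# Sketch of the federated loop described above (FedAvg-style): the server
# sends the global model to clients, each trains locally, and the server
# aggregates the updates weighted by local dataset size.
import numpy as np

def local_train(global_w, X, y, lr=0.1, epochs=5):
    w = global_w.copy()
    for _ in range(epochs):                    # plain least-squares gradient steps
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(4)]
global_w = np.zeros(5)

for rnd in range(10):                          # federated rounds
    updates = [local_train(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    global_w = np.average(updates, axis=0, weights=sizes)   # FedAvg aggregation
print("global model after 10 rounds:", np.round(global_w, 3))
```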


01:46:47

Advantages of Federated Learning

The main advantage of Federated Learning is privacy preservation, as data is kept local to devices, ensuring no sharing with third parties. Additionally, it is scalable as it does not require a data center for model building. Moreover, Federated Learning allows for the creation of customized local models by sending the global model back to users for additional training, resulting in highly personalized models.


01:47:34

Examples of Federated Learning Applications

One prominent example of Federated Learning is the Google keyboard, where the next-word prediction model is trained using this approach. The model is trained on data stored on local devices, exchanging only model updates with a global server, which achieved a 24% increase in prediction accuracy. Another notable use case is in hospitals, where Federated Learning is utilized for medical data analysis and improving healthcare outcomes.


01:48:22

Federated Learning in Healthcare

Several hospitals may aim to collaboratively build a diagnosis model based on images, but legislative restrictions hinder data sharing. To address this, a solution called Federated Learning as a Service (FLaaS) was developed by Telefonica. FLaaS allows model training while data stays distributed among organizations, with a research-oriented prototype offering Federated Learning deployment on mobile devices.


01:49:02

Architecture of FLaaS

FLaaS comprises four main modules: an administrator interface for setting up learning processes, a FLaaS server orchestrating the process, a notification service enabling server-client interaction, and client devices installing the app. This architecture facilitates the deployment and management of Federated Learning on mobile devices.


01:50:23

Algorithmic Fairness in AI Models

Algorithmic fairness concerns the potential bias in AI models trained on data, impacting decisions that can affect individuals. Biased decisions based on protected attributes like gender, age, and race can lead to unfair outcomes. The Gender Shades project from MIT revealed significant accuracy differences in commercial AI products based on gender and race, highlighting the importance of addressing bias in AI systems.


01:52:50

Addressing Algorithmic Bias

To mitigate algorithmic bias, strategies include preprocessing data to rebalance it, modifying algorithms during training, and postprocessing predictions to ensure fairness. These measures aim to reduce disparities in AI model outcomes based on demographic attributes, emphasizing the need for conscious efforts to address bias in AI systems.


01:53:46

Fairness Perspectives in Federated Learning

Compared with centralized AI, different perspectives on fairness exist in federated learning. Fairness is crucial in selecting users for the process, focusing on client-selection group fairness and accuracy parity among demographic groups. Data is the main source of bias, with issues like representativity and unbalanced data affecting fairness.


01:54:31

Complexity of Assessing Fairness Characteristics

Assessing fairness characteristics in Federated learning becomes complex due to biases originating from data. Data diversity plays a significant role in addressing biases, with considerations for representativity and balanced data distribution among demographic groups.


01:55:15

Data Diversity in Federated Learning

Data diversity in Federated learning involves using IID datasets with equally split labels among devices. Non-IID settings pose challenges as local data distributions may differ from the global distribution. Researchers often simulate non-IID settings due to a lack of diverse datasets, using strategies like label skew, attribute skew, and quantity skew.


01:57:40

Challenges in Researching Data Diversity

Around 90% of research in federated learning uses label skew sampling, overlooking attribute skew and quantity skew. Tools like 'FedArtML' aim to standardize techniques for creating non-IID datasets. Increasing data heterogeneity tends to lower accuracy in federated learning, highlighting the importance of addressing diverse data challenges.
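
A common way to implement label skew sampling is a Dirichlet split over clients, sketched below (alpha and the client count are illustrative; smaller alpha produces stronger skew):

```python
# Sketch of label-skew sampling for simulating non-IID federated data:
# a Dirichlet distribution controls how unevenly each label is spread
# across clients.
import numpy as np

def label_skew_partition(labels, n_clients, alpha, rng):
    partitions = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(alpha * np.ones(n_clients))  # per-client share of class c
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, chunk in enumerate(np.split(idx, cuts)):
            partitions[client].extend(chunk)
    return partitions

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=600)
for client, part in enumerate(label_skew_partition(labels, n_clients=4, alpha=0.5, rng=rng)):
    print(f"client {client}: class counts {np.bincount(labels[part], minlength=3)}")
```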


01:58:53

Challenges of Data Heterogeneity in Models

The speaker highlighted that as data becomes more heterogeneous, models have more trouble converging accurately. Higher data heterogeneity leads to more problems in model convergence, affecting accuracy and fairness. The discussion emphasized the trade-off between accuracy and fairness in models, with more heterogeneity often resulting in more unfairness.


02:00:15

Importance of Testing Algorithms Under Diverse Conditions

The speaker stressed the importance of testing algorithms under diverse conditions in Federated Learning. It was mentioned that evaluating algorithms under varied conditions is crucial to understanding their behavior accurately. The need for research in Federated Learning to assess algorithms under diverse conditions was highlighted as a key point of the talk.


02:00:23

Question on Algorithmic Fairness and Data Sharing

A question was raised regarding the potential contradiction between using Federated Learning and ensuring algorithmic fairness. The query focused on the necessity of sharing data for model mixing and the implications for algorithmic fairness. The speaker acknowledged the challenge of achieving group fairness while restricting data sharing, suggesting the application of differential privacy to address this issue.


02:01:45

Effect of Model Diversity on Convergence and Utility

The discussion delved into the impact of model diversity on convergence and utility in Federated Learning. It was explained that more diverse models lead to lower convergence quality due to increased noise in aggregation. This results in noisier outcomes in terms of utility metrics like accuracy and F1 score. The cyclical nature of retraining noisy models further exacerbates the issue, affecting the overall quality of the models.


02:03:49

Challenges in Federated Learning

Existing algorithms are not suitable for handling the levels of data diversity required in federated learning. Quantifying fairness in federated learning settings requires more complexity than in centralized settings.


02:04:10

Introduction of Bartlomiej Siniarski from University College Dublin

Bartlomiej Siniarski from University College Dublin specializes in IoT networks, focusing on sensor-driven design, data collection, storage, and analysis.


02:04:26

Acknowledgment and Introduction by Bartlomiej Siniarski

Bartlomiej Siniarski expresses gratitude for being invited to speak and acknowledges the previous speakers. He highlights the importance of explainable AI and federated learning, focusing on the network perspective and upcoming challenges in 6G networks.


02:05:13

Overview of UCD NetLab Team

The UCD NetLab team, led by Madhusanka Liyanage, consists of four senior researchers, four postdocs, and 12 PhD students. They work on AI security and privacy, blockchain in 5G and beyond, network softwarization, and security automation.


02:06:36

EU Projects and Funding

The UCD team is part of various EU projects, including Confidential 6G, Robust 6G, and others like Inspire 5G Plus. The work presented is funded by the SPATIAL project, with collaborations with partners like Montimage and involvement in the 6G Flagship project.


02:07:46

Focus on AI in Future Networks

Future networks, especially in 6G, serve as enablers for progress but also open avenues for complex attacks. Applications range from drones to vehicular communication, utilizing 6G capabilities like precise location and higher bandwidth. The integration of AI-driven networks and explainable AI solutions will be crucial for addressing potential attacks and ensuring network security.


02:09:24

Challenges in AI Security in 6G Networks

In the discussion, it was highlighted that AI in 6G networks presents a significant playground for adversaries, leading to potential data theft and unwanted results. Adversaries are expected to launch attacks at various stages of training and testing, making it crucial to address these challenges. The role of AI in 6G is multifaceted, acting as both an enabler and a potential target, necessitating a strategic approach to deployment and security measures.


02:10:02

Types of Attacks on AI Systems

The conversation delved into the diverse range of attacks on AI systems, including evasion attacks and data poisoning. Notably, evasion attacks involve modifying input data, while data poisoning entails injecting malicious data into training sets. The team primarily focuses on network classification systems, where attacks like model inversion, label flipping, and denial of service attacks are prevalent.


02:11:18

Network Activity Classification Systems

The team closely collaborates on network activity classification systems, aiming to analyze user behavior in a controlled environment. They primarily combat poisoning attacks during the training and testing phases. In the training phase, attackers may modify existing data or introduce adversarial examples, such as generative adversarial networks. Specific attacks like random label flipping and targeted label poisoning are key areas of concern.


02:13:22

Misclassification in AI Models

The discussion emphasized the subtle nature of attacks aimed at misclassifying AI model outputs. Attackers strategically aim to alter decisions slightly, adding imperceptible noise to evade detection. Metrics like impact and complexity are crucial for measuring the effectiveness of poison attacks, where impact compares benign and compromised model accuracy, while complexity assesses the ratio of poison data to benign data.
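
A direct transcription of the two metrics as described (the talk's exact definitions may differ slightly):

```python
# Sketch of the two attack metrics as described: impact compares benign and
# compromised model accuracy; complexity is the ratio of poisoned to benign
# training data. Values below are illustrative.
def attack_impact(acc_benign, acc_poisoned):
    return acc_benign - acc_poisoned

def attack_complexity(n_poison, n_benign):
    return n_poison / n_benign

print("impact:", attack_impact(acc_benign=0.97, acc_poisoned=0.71))      # 0.26
print("complexity:", attack_complexity(n_poison=2000, n_benign=10000))   # 0.2
```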


02:15:12

Complexity of Generating Evasion Adversarial Samples

In the discussion, the complexity metric was of particular interest, especially its use in the generation of evasion adversarial samples. The speaker mentioned working with Montimage on this project, where they measured the impact of three different types of attacks: newly generated poison data, data generated by generative adversarial networks (GANs), and random label swapping with targeted poisoning.


02:16:09

Effectiveness of GAN Poisoning Attack

The GAN poisoning attack, which involves slightly modifying the output to make it hard to detect, was noted as one of the most effective ways to target machine learning models. Despite having a similar impact to other attacks, the GAN-generated data was significantly different, making it challenging to identify. This method was emphasized as a current prominent strategy for targeting ML models.


02:17:01

Poisoning Attack Detection via SHAP-Based Feature Attribution

The speaker discussed the use of SHAP-based feature attribution for detecting poisoning attacks in Federated Learning (FL). By analyzing which features contribute to specific outputs in percentages, SHAP helps identify maliciously injected poison models. This method involves evaluating each model's behavior at the aggregation server using SHAP before aggregation, distinguishing between benign and poisoned models.
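
A sketch of the detection idea, with simulated attribution profiles standing in for real per-client SHAP values (uses scikit-learn's HDBSCAN, available since version 1.3; the cluster parameters are illustrative):

```python
# Sketch of the described defense: represent each client model by its SHAP
# feature-attribution profile (simulated here), then cluster the profiles
# with HDBSCAN; models that fall outside the main cluster (label -1) are
# flagged as potentially poisoned.
import numpy as np
from sklearn.cluster import HDBSCAN   # requires scikit-learn >= 1.3

rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=0.05, size=(18, 10))    # similar attribution profiles
poisoned = rng.normal(loc=0.8, scale=0.05, size=(2, 10))   # shifted feature contributions
profiles = np.vstack([benign, poisoned])

labels = HDBSCAN(min_cluster_size=5).fit_predict(profiles)
print("cluster labels:", labels)
print("flagged as poisoned:", np.where(labels == -1)[0])   # points outside the main cluster
```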


02:19:33

XAI Targeting Attack: Modifying AI/ML Models for Wrong Explanations

A unique attack targeting eXplainable Artificial Intelligence (XAI) was introduced, involving modifying AI/ML models to create incorrect explanations for post-hoc explainers like SHAP and LIME. This attack, termed 'Fooling LIME and SHAP,' aims to deceive by presenting a poisoned model as normal. The speaker emphasized the significance of this attack in undermining the trustworthiness of XAI explanations.


02:21:00

Malicious Model Injection

An insider aims to inject a malicious model into the company's system without being detected, for example to give an advantage to one service over the others. The process includes generating data using LIME-style random perturbation and simulating attacks with the NSL-KDD and 5G-NIDD datasets, resulting in similar accuracy. The data is then fed into a black-box model with different distributions to calculate statistical distance using the Hellinger distance.
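
For reference, a small sketch of the Hellinger distance between two discrete distributions, the statistic used for the comparison:

```python
# Sketch of the Hellinger distance used to compare output distributions of
# the control and adversarial models:
#   H(P, Q) = (1 / sqrt(2)) * || sqrt(P) - sqrt(Q) ||_2
# It is 0 for identical distributions and 1 for disjoint ones.
import numpy as np

def hellinger(p, q):
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

print(hellinger([0.5, 0.3, 0.2], [0.5, 0.3, 0.2]))   # 0.0 -- identical
print(hellinger([0.7, 0.2, 0.1], [0.2, 0.3, 0.5]))   # larger distance
```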


02:22:00

Model Distance Analysis

The analysis of model distances reveals insights into the effectiveness of malicious data injection. The control model distance shows an increase as more malicious data is injected, indicating a greater distance. In contrast, the adversarial model distance initially decreases and then approaches zero with injected data, suggesting a subtle change. The normal model distance grows from 0.1 to 0.6 with more malicious data, while the adversarial model remains flat, indicating a well-hidden malicious model.


02:24:58

AI Role in 6G Networks

Bart highlights the dual role of AI in 6G networks, emphasizing the importance of timely attack detection and addressing challenges in Federated learning. This underscores the evolving landscape of AI security and the ongoing need for research and innovative solutions for a secure 6G future.


02:25:27

Event Conclusion and Networking

The event concludes with a call to continue discussions on trustworthy and responsible AI beyond the auditorium. Attendees are encouraged to network, share experiences, and explore posters and demos in the lobby. The organizers express gratitude to all contributors and attendees for making the event a success.

