Edition:
Author: Denis Rothman
Series:
ISBN: 1800208138, 9781800208131
Publisher: Packt Publishing - ebooks Account
Publication year: 2020
Number of pages: 0
Language: English
File format: ZIP (can be converted to PDF, EPUB, or AZW3 on request)
File size: 14 MB
If you would like the file for Hands-On Explainable AI (XAI) with Python: Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support, who will convert the file for you.
Please note that this book is the original English-language edition and has not been translated into Persian. The International Library website offers only original-language books and does not provide any books translated into or written in Persian.
Resolve the black box models in your AI applications to make them fair, trustworthy, and secure. Familiarize yourself with the basic principles and tools to deploy Explainable AI (XAI) into your apps and reporting interfaces.
Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. Describing the problem, the model, and the relationships among variables and their findings is often subtle, surprising, and technically complex.
Hands-On Explainable AI (XAI) with Python will enable you to work with specific hands-on machine learning Python projects strategically arranged to enhance your grip on AI results analysis. The analysis includes building models, interpreting results with visualizations, and integrating understandable AI reporting tools and different applications.
You will build XAI solutions in Python, TensorFlow 2, Google Cloud's XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source explainable AI tools for Python that can be used throughout the machine learning project life-cycle.
You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and integrate predictions using Python along with supporting machine learning model visualizations into user explainable interfaces.
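As a small illustration of the kind of analysis described above (this sketch is not taken from the book), a trained scikit-learn model's feature importances can be listed to see which variables most influence its predictions, using the library's built-in Iris dataset:

```python
# Illustrative sketch only, not code from the book: ranking the
# features of a model by their learned importance scores.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(iris.data, iris.target)

# Pair each feature name with its importance and sort, highest first.
ranked = sorted(
    zip(iris.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

Listing raw importances like this is only a starting point; the book's projects go further, pairing such rankings with visualizations and explainer tools (SHAP, LIME, WIT, and others) to turn them into explanations a stakeholder can act on.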
By the end of this artificial intelligence book, you will possess an in-depth understanding of the core concepts of explainable AI.
This book is not an introduction to Python programming or machine learning concepts. You must have some foundational knowledge and/or experience with machine learning libraries such as scikit-learn to make the most out of this book.
Some of the potential readers of this book include:
Cover Copyright Packt Page Contributors Table of Contents Preface
Chapter 1: Explaining Artificial Intelligence with Python Defining explainable AI Going from black box models to XAI white box models Explaining and interpreting Designing and extracting The XAI executive function The XAI medical diagnosis timeline The standard AI program used by a general practitioner Definition of a KNN algorithm A KNN in Python West Nile virus – a case of life or death How can a lethal mosquito bite go unnoticed? What is the West Nile virus? How did the West Nile virus get to Chicago? XAI can save lives using Google Location History Downloading Google Location History Google's Location History extraction tool Reading and displaying Google Location History data Installation of the basemap packages The import instructions Importing the data Processing the data for XAI and basemap Setting up the plotting options to display the map Enhancing the AI diagnosis with XAI Enhanced KNN XAI applied to the medical diagnosis experimental program Displaying the KNN plot Natural language explanations Displaying the Location History map Showing mosquito detection data and natural language explanations A critical diagnosis is reached with XAI Summary Questions References Further reading
Chapter 2: White Box XAI for AI Bias and Ethics Moral AI bias in self-driving cars Life and death autopilot decision making The trolley problem The MIT Moral Machine experiment Real life and death situations Explaining the moral limits of ethical AI Standard explanation of autopilot decision trees The SDC autopilot dilemma Importing the modules Retrieving the dataset Reading and splitting the data Theoretical description of decision tree classifiers Creating the default decision tree classifier Training, measuring, and saving the model Displaying a decision tree XAI applied to an autopilot decision tree Structure of a decision tree The default output of the default structure of a decision tree The customized output of a customized structure of a decision tree The output of a customized structure of a decision tree Using XAI and ethics to control a decision tree Loading the model Accuracy measurements Simulating real-time cases Introducing ML bias due to noise Introducing ML ethics and laws Case 1 – not overriding traffic regulations to save four pedestrians Case 2 – overriding traffic regulations Case 3 – introducing emotional intelligence in the autopilot Summary Questions References Further reading
Chapter 3: Explaining Machine Learning with Facets Getting started with Facets Installing Facets on Google Colaboratory Retrieving the datasets Reading the data files Facets Overview Creating feature statistics for the datasets Implementing the feature statistics code Implementing the HTML code to display feature statistics Sorting the Facets statistics overview Sorting data by feature order XAI motivation for sorting features Sorting by non-uniformity Sorting by alphabetical order Sorting by amount missing/zero Sorting by distribution distance Facets Dive Building the Facets Dive display code Defining the labels of the data points Defining the color of the data points Defining the binning of the x axis and y axis Defining the scatter plot of the x axis and the y axis Summary Questions References Further reading
Chapter 4: Microsoft Azure Machine Learning Model Interpretability with SHAP Introduction to SHAP Key SHAP principles Symmetry Null player Additivity A mathematical expression of the Shapley value Sentiment analysis example Shapley value for the first feature, "good" Shapley value for the second feature, "excellent" Verifying the Shapley values Getting started with SHAP Installing SHAP Importing the modules Importing the data Intercepting the dataset Vectorizing the datasets Linear models and logistic regression Creating, training, and visualizing the output of a linear model Defining a linear model Agnostic model explaining with SHAP Creating the linear model explainer Creating the plot function Explaining the output of the model's prediction Explaining intercepted dataset reviews with SHAP Explaining the original IMDb reviews with SHAP Summary Questions References Further reading Additional publications
Chapter 5: Building an Explainable AI Solution from Scratch Moral, ethical, and legal perspectives The U.S. census data problem Using pandas to display the data Moral and ethical perspectives The moral perspective The ethical perspective The legal perspective The machine learning perspective Displaying the training data with Facets Dive Analyzing the training data with Facets Verifying the anticipated outputs Using KMC to verify the anticipated results Analyzing the output of the KMC algorithm Conclusion of the analysis Transforming the input data WIT applied to a transformed dataset Summary Questions References Further reading
Chapter 6: AI Fairness with Google's What-If Tool (WIT) Interpretability and explainability from an ethical AI perspective The ethical perspective The legal perspective Explaining and interpreting Preparing an ethical dataset Getting started with WIT Importing the dataset Preprocessing the data Creating data structures to train and test the model Creating a DNN model Training the model Creating a SHAP explainer The plot of Shapley values Model outputs and SHAP values The WIT datapoint explorer and editor Creating WIT The datapoint editor Features Performance and fairness Ground truth Cost ratio Slicing Fairness The ROC curve and AUC The PR curve The confusion matrix Summary Questions References Further reading
Chapter 7: A Python Client for Explainable AI Chatbots The Python client for Dialogflow Installing the Python client for Google Dialogflow Creating a Google Dialogflow agent Enabling APIs and services The Google Dialogflow Python client Enhancing the Google Dialogflow Python client Creating a dialog function The constraints of an XAI implementation on Dialogflow Creating an intent in Dialogflow The training phrases of the intent The response of an intent Defining a follow-up intent for an intent The XAI Python client Inserting interactions in the MDP Interacting with Dialogflow with the Python client A CUI XAI dialog using Google Dialogflow Dialogflow integration for a website A Jupyter Notebook XAI agent manager Google Assistant Summary Questions Further reading
Chapter 8: Local Interpretable Model-Agnostic Explanations (LIME) Introducing LIME A mathematical representation of LIME Getting started with LIME Installing LIME on Google Colaboratory Retrieving the datasets and vectorizing the dataset An experimental AutoML module Creating an agnostic AutoML template Bagging classifiers Gradient boosting classifiers Decision tree classifiers Extra trees classifiers Interpreting the scores Training the model and making predictions The interactive choice of classifier Finalizing the prediction process Interception functions The LIME explainer Creating the LIME explainer Interpreting LIME explanations Explaining the predictions as a list Explaining with a plot Conclusions of the LIME explanation process Summary Questions References Further reading
Chapter 9: The Counterfactual Explanations Method The counterfactual explanations method Dataset and motivations Visualizing counterfactual distances in WIT Exploring data point distances with the default view The logic of counterfactual explanations Belief Truth Justification Sensitivity The choice of distance functions The L1 norm The L2 norm Custom distance functions The architecture of the deep learning model Invoking WIT The custom prediction function for WIT Loading a Keras model Retrieving the dataset and model Summary Questions References Further reading
Chapter 10: Contrastive XAI The contrastive explanations method Getting started with the CEM applied to MNIST Installing Alibi and importing the modules Importing the modules and the dataset Importing the modules Importing the dataset Preparing the data Defining and training the CNN model Creating the CNN model Training the CNN model Loading and testing the accuracy of the model Defining and training the autoencoder Creating the autoencoder Training and saving the autoencoder Comparing the original images with the decoded images Pertinent negatives CEM parameters Initializing the CEM explainer Pertinent negative explanations Summary Questions References Further reading
Chapter 11: Anchors XAI Anchors AI explanations Predicting income Classifying newsgroup discussions Anchor explanations for ImageNet Installing Alibi and importing the modules Loading an InceptionV3 model Downloading an image Processing the image and making predictions Building the anchor image explainer Explaining other categories Other images and difficulties Summary Questions References Further reading
Chapter 12: Cognitive XAI Cognitive rule-based explanations From XAI tools to XAI concepts Defining cognitive XAI explanations A cognitive XAI method Importing the modules and the data The dictionaries The global parameters The cognitive explanation function The marginal contribution of a feature A mathematical perspective The Python marginal cognitive contribution function A cognitive approach to vectorizers Explaining the vectorizer for LIME Explaining the IMDb vectorizer for SHAP Human cognitive input for the CEM Rule-based perspectives Summary Questions Further reading
Answers to the Questions Chapter 1, Explaining Artificial Intelligence with Python Chapter 2, White Box XAI for AI Bias and Ethics Chapter 3, Explaining Machine Learning with Facets Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP Chapter 5, Building an Explainable AI Solution from Scratch Chapter 6, AI Fairness with Google's What-If Tool (WIT) Chapter 7, A Python Client for Explainable AI Chatbots Chapter 8, Local Interpretable Model-Agnostic Explanations (LIME) Chapter 9, The Counterfactual Explanations Method Chapter 10, Contrastive XAI Chapter 11, Anchors XAI Chapter 12, Cognitive XAI
Other Books You May Enjoy
Index