Hands-On Mathematics for Deep Learning
Jay Dawani
Cover
Copyright Information
About Packt
Why subscribe?
Contributors
About the author
About the reviewers
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the color images
Conventions used
Get in touch
Reviews
Section 1: Essential Mathematics for Deep Learning
Linear Algebra
Comparing scalars and vectors
Linear equations
Solving linear equations in n-dimensions
Solving linear equations using elimination
Matrix operations
Adding matrices
Multiplying matrices
Inverse matrices
Matrix transpose
Permutations
Vector spaces and subspaces
Spaces
Subspaces
Linear maps
Image and kernel
Metric space and normed space
Inner product space
Matrix decompositions
Determinant
Eigenvalues and eigenvectors
Trace
Orthogonal matrices
Diagonalization and symmetric matrices
Singular value decomposition
Cholesky decomposition
Summary
Vector Calculus
Single variable calculus
Derivatives
Sum rule
Power rule
Trigonometric functions
First and second derivatives
Product rule
Quotient rule
Chain rule
Antiderivative
Integrals
The fundamental theorem of calculus
Substitution rule
Areas between curves
Integration by parts
Multivariable calculus
Partial derivatives
Chain rule
Integrals
Vector calculus
Derivatives
Vector fields
Inverse functions
Summary
Probability and Statistics
Understanding the concepts in probability
Classical probability
Sampling with or without replacement
Multinomial coefficient
Stirling's formula
Independence
Discrete distributions
Conditional probability
Random variables
Variance
Multiple random variables
Continuous random variables
Joint distributions
More probability distributions
Normal distribution
Multivariate normal distribution
Bivariate normal distribution
Gamma distribution
Essential concepts in statistics
Estimation
Mean squared error
Sufficiency
Likelihood
Confidence intervals
Bayesian estimation
Hypothesis testing
Simple hypotheses
Composite hypothesis
The multivariate normal theory
Linear models
Hypothesis testing
Summary
Optimization
Understanding optimization and its different types
Constrained optimization
Unconstrained optimization
Convex optimization
Convex sets
Affine sets
Convex functions
Optimization problems
Non-convex optimization
Exploring the various optimization methods
Least squares
Lagrange multipliers
Newton's method
The secant method
The quasi-Newton method
Game theory
Descent methods
Gradient descent
Stochastic gradient descent
Loss functions
Gradient descent with momentum
Nesterov's accelerated gradient
Adaptive gradient descent
Simulated annealing
Natural evolution
Exploring population methods
Genetic algorithms
Particle swarm optimization
Summary
Graph Theory
Understanding the basic concepts and terminology
Adjacency matrix
Types of graphs
Weighted graphs
Directed graphs
Directed acyclic graphs
Multilayer and dynamic graphs
Tree graphs
Graph Laplacian
Summary
Section 2: Essential Neural Networks
Linear Neural Networks
Linear regression
Polynomial regression
Logistic regression
Summary
Feedforward Neural Networks
Understanding biological neural networks
Comparing the perceptron and the McCulloch-Pitts neuron
The MP neuron
Perceptron
Pros and cons of the MP neuron and perceptron
MLPs
Layers
Activation functions
Sigmoid
Hyperbolic tangent
Softmax
Rectified linear unit
Leaky ReLU
Parametric ReLU
Exponential linear unit
The loss function
Mean absolute error
Mean squared error
Root mean squared error
The Huber loss
Cross entropy
Kullback-Leibler divergence
Jensen-Shannon divergence
Backpropagation
Training neural networks
Parameter initialization
All zeros
Random initialization
Xavier initialization
The data
Deep neural networks
Summary
Regularization
The need for regularization
Norm penalties
L2 regularization
L1 regularization
Early stopping
Parameter tying and sharing
Dataset augmentation
Dropout
Adversarial training
Summary
Convolutional Neural Networks
The inspiration behind ConvNets
Types of data used in ConvNets
Convolutions and pooling
Two-dimensional convolutions
One-dimensional convolutions
1 × 1 convolutions
Three-dimensional convolutions
Separable convolutions
Transposed convolutions
Pooling
Global average pooling
Convolution and pooling size
Working with the ConvNet architecture
Training and optimization
Exploring popular ConvNet architectures
VGG-16
Inception-v1
Summary
Recurrent Neural Networks
The need for RNNs
The types of data used in RNNs
Understanding RNNs
Vanilla RNNs
Bidirectional RNNs
Long short-term memory
Gated recurrent units
Deep RNNs
Training and optimization
Popular architecture
Clockwork RNNs
Summary
Section 3: Advanced Deep Learning Concepts Simplified
Attention Mechanisms
Overview of attention
Understanding neural Turing machines
Reading
Writing
Addressing mechanisms
Content-based addressing mechanism
Location-based addressing mechanism
Exploring the types of attention
Self-attention
Comparing hard and soft attention
Comparing global and local attention
Transformers
Summary
Generative Models
Why we need generative models
Autoencoders
The denoising autoencoder
The variational autoencoder
Generative adversarial networks
Wasserstein GANs
Flow-based networks
Normalizing flows
Real-valued non-volume preserving
Summary
Transfer and Meta Learning
Transfer learning
Meta learning
Approaches to meta learning
Model-based meta learning
Memory-augmented neural networks
Meta Networks
Metric-based meta learning
Prototypical networks
Siamese neural networks
Optimization-based meta learning
Long Short-Term Memory meta learners
Model-agnostic meta learning
Summary
Geometric Deep Learning
Comparing Euclidean and non-Euclidean data
Manifolds
Discrete manifolds
Spectral decomposition
Graph neural networks
Spectral graph CNNs
Mixture model networks
Facial recognition in 3D
Summary
Other Books You May Enjoy
Leave a review - let other readers know what you think