Practical Deep Learning for Coders
In this course, containing over 30 hours of video content, we implement the astounding Stable Diffusion algorithm from scratch! That’s the killer app that made the internet freak out, and caused the media to say “you may never believe what you see online again”.
We’ve worked closely with experts from Stability.ai and Hugging Face (creators of the Diffusers library) to ensure we have rigorous coverage of the latest techniques. The course includes coverage of papers that were released after Stable Diffusion came out – so it actually goes well beyond even what Stable Diffusion includes! We also explain how to read research papers, and practice this skill by studying and implementing many papers throughout the course.
Stable diffusion, and diffusion methods in general, are a great learning goal for many reasons. For one thing, of course, you can create amazing stuff with these algorithms! To really take the technique to the next level, and create things that no-one has seen before, you need to deeply understand what’s happening under the hood. With this understanding, you can craft your own loss functions, initialization methods, multi-model mixups, and more, to create totally new applications that have never been seen before. Just as important: it’s a great learning goal because nearly every key technique in modern deep learning comes together in these methods. Contrastive learning, transformer models, auto-encoders, CLIP embeddings, latent variables, u-nets, resnets, and much more are involved in creating a single image.
To get the most out of this course, you should be a reasonably confident deep learning practitioner. If you’ve finished fast.ai’s Practical Deep Learning course then you’ll be ready! If you haven’t done that course, but are comfortable with building an SGD training loop from scratch in Python, being competitive in Kaggle competitions, using modern NLP and computer vision algorithms for practical problems, and working with PyTorch and fastai, then you’ll be ready to start the course. (If you’re not sure, then we strongly recommend getting started with Practical Deep Learning.)
Get started now!
Content summary
In this course, we’ll explore diffusion methods such as Denoising Diffusion Probabilistic Models (DDPM) and Denoising Diffusion Implicit Models (DDIM). We’ll get our hands dirty implementing unconditional and conditional diffusion models, experimenting with different samplers, and diving into recent tricks like textual inversion and Dreambooth.
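To make that concrete, here’s a minimal sketch (not the course’s exact code) of the DDPM forward noising process in PyTorch; the linear beta schedule and 1,000 steps are common illustrative choices, not the only ones covered:

```python
# Minimal sketch of the DDPM forward (noising) process: given clean data x0,
# sample a noisy x_t at a random timestep t. Schedule values are illustrative.
import torch

n_steps = 1000
beta = torch.linspace(1e-4, 0.02, n_steps)   # linear noise schedule
alpha_bar = torch.cumprod(1 - beta, dim=0)   # cumulative products of (1 - beta)

def noisify(x0, t):
    "Sample x_t ~ N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t) * I)."
    eps = torch.randn_like(x0)               # the noise the model learns to predict
    ab = alpha_bar[t].reshape(-1, 1, 1, 1)   # broadcast over image dimensions
    return ab.sqrt() * x0 + (1 - ab).sqrt() * eps, eps

x0 = torch.randn(16, 1, 28, 28)              # dummy batch standing in for images
t = torch.randint(0, n_steps, (16,))         # one random timestep per image
xt, eps = noisify(x0, t)
```

A noise prediction model is then trained to recover eps from xt and t, which is the heart of DDPM training.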
Along the way, we’ll cover essential deep learning topics like neural network architectures, data augmentation approaches, and various loss functions. We’ll build our own models from scratch, such as Multi-Layer Perceptrons (MLPs), ResNets, and Unets, while experimenting with generative architectures like autoencoders and transformers.
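As a taste of that from-scratch style, here’s a toy residual block in PyTorch; the real blocks built in the lessons differ in detail, and the shapes here are arbitrary:

```python
# Toy residual block: two convolutions plus an identity shortcut connection.
import torch
from torch import nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, ni):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(ni, ni, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(ni, ni, kernel_size=3, padding=1))

    def forward(self, x):
        return F.relu(x + self.convs(x))  # add the input back in (skip connection)

x = torch.randn(8, 16, 32, 32)            # arbitrary batch of feature maps
y = ResBlock(16)(x)                       # output has the same shape as the input
```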
Throughout the course, we’ll use PyTorch to implement our models, and will create our own deep learning framework called miniai. We’ll master Python concepts like iterators, generators, and decorators to keep our code clean and efficient. We’ll also explore deep learning optimizers like stochastic gradient descent (SGD) accelerated approaches and learning rate annealing, and learn how to experiment with the impact of different initialisers, batch sizes, and learning rates. And of course, we’ll make use of handy tools like the Python debugger (pdb) and nbdev for building Python modules from Jupyter notebooks.
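For instance, here’s a small stand-alone sketch (not miniai itself) of SGD with momentum combined with cosine learning rate annealing, using PyTorch’s built-in scheduler; the model and data are stand-ins:

```python
# SGD with momentum plus cosine learning-rate annealing on a dummy problem.
import torch
from torch import nn
import torch.nn.functional as F

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100)

for step in range(100):
    x, y = torch.randn(32, 10), torch.randn(32, 1)  # fake batch
    loss = F.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    opt.zero_grad()
    sched.step()   # anneal the learning rate once per step
```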
Finally, we’ll touch on fundamental concepts like tensors, calculus, and pseudo-random number generation to provide a solid foundation for our exploration. We’ll apply these concepts to machine learning techniques like mean shift clustering and convolutional neural networks (CNNs), and we’ll see how to track experiments with Weights and Biases (W&B).
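As one example, a single step of mean shift clustering with a Gaussian kernel fits in a few lines of PyTorch; this rough sketch uses dummy data and an arbitrary bandwidth:

```python
# One mean shift step: move every point toward the weighted mean of all points,
# with weights given by a Gaussian kernel over pairwise distances.
import math
import torch

def gaussian(d, bw):
    "Gaussian kernel weight for distances d with bandwidth bw."
    return torch.exp(-0.5 * (d / bw)**2) / (bw * math.sqrt(2 * math.pi))

def meanshift_step(X, bw=2.5):
    dist = torch.cdist(X, X)                           # pairwise distances
    weight = gaussian(dist, bw)
    return (weight @ X) / weight.sum(1, keepdim=True)  # weighted average

X = torch.randn(500, 2)      # dummy 2-D points
for _ in range(5):           # a few iterations pull points toward cluster modes
    X = meanshift_step(X)
```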
We’ll also deal with mixed precision training using both NVIDIA’s apex library and the Accelerate library from Hugging Face. We’ll investigate various types of normalization like Layer Normalization and Batch Normalization. By the end of the course, you’ll have a deep understanding of diffusion models and the skills to implement cutting-edge deep learning techniques.
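To give a sense of the Accelerate approach, here’s a hedged sketch of a mixed precision training loop; the model, data, and hyperparameters are stand-ins:

```python
# Mixed precision training loop using Hugging Face's Accelerate library.
import torch
from torch import nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")     # or "bf16" on recent GPUs
model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
ds = TensorDataset(torch.randn(320, 10), torch.randn(320, 1))  # dummy data
dl = DataLoader(ds, batch_size=32)
model, opt, dl = accelerator.prepare(model, opt, dl)  # wrap for mixed precision

for x, y in dl:
    loss = F.mse_loss(model(x), y)
    accelerator.backward(loss)  # handles gradient scaling under fp16
    opt.step()
    opt.zero_grad()
```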
Get started now!
Topics covered
Here’s a list of all the stuff you’ll learn in detail and build from scratch in this course. (When we say “from scratch”, we mean that you’ll rely on nothing apart from Python and its standard library.)
- Diffusion foundations
    - Denoising Diffusion Probabilistic Models (DDPM)
        - Forward and reverse processes
        - Implementing a noise prediction model using a neural network
        - Visualizing noisy images at different timesteps
    - Denoising Diffusion Implicit Models (DDIM)
    - DDPM/DDIM improvements
        - Alternative noise schedules
        - Pre-conditioning
    - Implementation and performance of different samplers
        - Euler sampler
        - Ancestral Euler sampler
        - Heun’s method
        - LMS sampler
    - Implementing an unconditional stable diffusion model
    - Creating a conditional stable diffusion model
    - Inverse problems
    - Textual inversion
    - Dreambooth
- Hugging Face’s Diffusers library
    - Pre-trained pipelines
    - Image-to-image pipelines
    - Guidance scale
    - Negative prompts
    - Callbacks
    - Working with Hugging Face datasets
- Deep learning optimizers
    - Stochastic gradient descent (SGD) accelerated approaches
    - Learning rate annealing
    - PyTorch learning rate schedulers
        - Cosine Annealing
        - OneCycleLR
    - Experimenting with batch sizes and learning rates
    - Working with PyTorch optimizers
- Python concepts
    - Organizing and simplifying code
    - Iterators and generators in Python
    - Dunder methods
    - Python data model
    - Python debugger (pdb)
    - Using nbdev to create Python modules from Jupyter notebooks
    - try-except blocks
    - decorators
    - getattr, **kwargs, and delegates
- Basic foundations
    - Tensors
    - Linear classifier using a tensor
    - Matrix multiplication using Python and Numba
    - Comparing APL with PyTorch
    - Frobenius norm
    - Broadcasting in deep learning and machine learning code
    - Matrix multiplication
    - Einstein summation notation and torch.einsum (see the sketch after this list)
    - GPU acceleration and CUDA
    - Numba
    - Calculus
        - Derivatives and infinitesimals
        - Finite differencing
        - Analytic derivatives
    - Loss functions
        - Contrastive loss function
        - Perceptual loss
        - log_softmax() function and cross entropy loss
    - Pseudo-random number generation
        - Wichmann-Hill algorithm
        - Random state in deep learning
- Neural network architectures
    - Multi-Layer Perceptron (MLP) implementation
    - Gradients and derivatives
    - Chain rule and backpropagation
    - PyTorch for calculating derivatives
    - ReLU and linear function classes
    - ResNets
- Generative architectures
    - Autoencoders
        - Convolutional autoencoders
        - Variational autoencoders
    - Unets
        - Experimenting with cross connections in Unets
    - CLIP text encoders and image encoders
    - Transformers
        - Self-attention and multi-headed attention
    - Rearrange function
    - Time embedding and sinusoidal embeddings
    - Creating a super-resolution U-Net model
    - Gradually unfreezing pre-trained networks
    - Style transfer
    - Neural Cellular Automata
        - Circular padding
        - Gradient normalization
- Deep learning techniques
    - Data augmentation techniques
        - Random erasing
        - TrivialAugment
        - Test time augmentation
    - Dropout for improving model performance
    - Test time dropout for measuring model confidence
- PyTorch
    - PyTorch’s nn.Module and nn.Sequential
    - Creating custom PyTorch modules
    - Implementing optimizers, DataLoaders, and Datasets
    - PyTorch hooks
- Learner framework
    - Building a flexible training framework
    - Callbacks and exceptions (CancelFitException, CancelEpochException, CancelBatchException)
    - Metrics and MetricsCB callback
    - DeviceCB callback
    - Refactoring code with context managers
    - set_seed function
    - Callback class and TrainLearner subclass
    - HooksCallback and ActivationStats
    - torcheval library
- Machine Learning Techniques and Tools
    - Mean shift clustering
        - Gaussian kernel
    - Norms
    - Log sum exp trick
    - Convolutional Neural Networks (CNNs)
        - Convolutions and kernels
        - Im2col technique
        - Padding and stride in CNNs
        - Receptive field
        - Building a CNN from scratch
    - Weights and Biases (W&B) for experiment tracking
    - Fréchet Inception Distance (FID) metric
    - Kernel Inception Distance (KID) metric
    - Mixed precision training
    - Accelerate library from Hugging Face
    - Collation function
- Initialization and normalization
    - Histograms of activations
    - Glorot (Xavier) initialization
    - Variance, standard deviation, and covariance
    - General ReLU activation function
    - Layer-wise Sequential Unit Variance (LSUV)
    - Layer Normalization and Batch Normalization
    - Instance Norm and Group Norm
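As promised next to the torch.einsum item in the list above, here’s a quick illustration of Einstein summation notation: batched matrix multiplication written as an index expression (the shapes are arbitrary):

```python
# Einstein summation notation: batched matrix multiplication via torch.einsum.
import torch

a = torch.randn(64, 28, 32)
b = torch.randn(64, 32, 10)
c = torch.einsum('bik,bkj->bij', a, b)       # same result as a @ b
assert torch.allclose(c, a @ b, atol=1e-5)
```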
Get started now!
(The “topics covered” list was taken from the concatenation of the topic list of each lesson, using GPT 4 with this prompt: “The input text contains a markdown list of topics discussed in a number of deep learning and stable diffusion lessons. The topics from each lesson have been concatenated together into this list, therefore it may contain duplicates (or near dupes) and isn’t well organised. Create an organised markdown list which groups similar topics together (using a hierarchy or markdown list items as appropriate) and combine duplicate or very similar topics.” The “content summary” section was taken from the “topics covered” list, using the GPT 4 prompt “Summarise the following markdown course outline using 3-4 paragraphs of informal prose in the style of Jeremy Howard. Don’t follow the same order as the topics in the outline, but instead arrange them such that the most foundational and key topics come first.”)