[Research Log] Nebula V2

date: Apr 25, 2023
slug: nebula-v2
status: Published
tags: Research
summary: Nebula v2 is here. Recent optimizations include five new loss functions and a new caching mechanism.
type: Post

5 New Loss Functions for Multi-Modal Workflows

Nebula v2 is here. Recent optimizations include five new loss functions and a new caching mechanism:
The five new loss functions:

- SmoothL1Loss
- MultiLabelSoftMarginLoss
- PoissonNLLLoss
- KLDivLoss
- NLLLoss

Alongside the losses, a handful of optimizations landed; a short sketch of each follows the list.

- Cache unique values: when you compute the unique values in y_true, cache them to avoid recomputing them on subsequent calls to the determine_loss function. A dictionary mapping dataset IDs to their unique values works well.
- Use PyTorch functions instead of NumPy: since the input tensors are PyTorch tensors, avoid converting them to NumPy arrays. This reduces memory overhead and increases speed; replace NumPy functions with their PyTorch counterparts wherever possible.
- Use in-place operations: prefer in-place variants such as scatter_ over scatter to avoid allocating additional tensors and to save memory.
- Use built-in PyTorch loss functions: rather than implementing your own, use the built-ins for better performance and reliability. For example, use torch.nn.functional.mse_loss instead of a custom MSELoss class.
- Use the PyTorch autograd profiler: measure the execution time of your operations to find bottlenecks and decide where to optimize.
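To make the list concrete, here is a minimal sketch exercising each of the five losses through their torch.nn modules. The tensor shapes and dummy data are illustrative assumptions, not Nebula internals.

```python
import torch
import torch.nn as nn

pred = torch.randn(8, 4)                            # raw scores / log-rates
target = torch.randn(8, 4)                          # regression targets
multi_label = torch.randint(0, 2, (8, 4)).float()   # {0,1} multi-label targets
class_idx = torch.randint(0, 4, (8,))               # class indices for NLLLoss
log_probs = torch.log_softmax(pred, dim=1)          # NLLLoss/KLDivLoss expect log-probs

print(nn.SmoothL1Loss()(pred, target))
print(nn.MultiLabelSoftMarginLoss()(pred, multi_label))
print(nn.PoissonNLLLoss()(pred, target.abs()))      # targets must be non-negative counts
print(nn.KLDivLoss(reduction="batchmean")(log_probs, torch.softmax(target, dim=1)))
print(nn.NLLLoss()(log_probs, class_idx))
```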
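The log doesn't show determine_loss itself, so what follows is a hypothetical sketch of the caching idea: a module-level dictionary keyed by dataset ID, with torch.unique standing in for a NumPy round-trip (which also covers the PyTorch-over-NumPy tip). The selection logic inside determine_loss is invented for illustration.

```python
import torch

_unique_cache: dict[str, torch.Tensor] = {}  # dataset ID -> cached unique labels

def unique_labels(dataset_id: str, y_true: torch.Tensor) -> torch.Tensor:
    """Return the unique values in y_true, computing them at most once per dataset."""
    if dataset_id not in _unique_cache:
        # torch.unique keeps everything on-device; no NumPy conversion needed.
        _unique_cache[dataset_id] = torch.unique(y_true)
    return _unique_cache[dataset_id]

def determine_loss(dataset_id: str, y_true: torch.Tensor):
    """Hypothetical loss picker: the real dispatch rules live in Nebula."""
    n_classes = unique_labels(dataset_id, y_true).numel()
    return torch.nn.NLLLoss() if n_classes > 2 else torch.nn.MultiLabelSoftMarginLoss()
```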
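The in-place tip in one picture: scatter allocates a fresh tensor, while scatter_ writes into the existing one. The tensors below are made up for illustration.

```python
import torch

base = torch.zeros(4, 5)
idx = torch.tensor([[0], [2], [1], [3]])
src = torch.ones(4, 1)

out = base.scatter(1, idx, src)  # allocates and returns a new tensor
base.scatter_(1, idx, src)       # writes into base; no extra allocation
```

One caveat worth remembering: in-place ops can interfere with autograd when applied to tensors that are needed for gradient computation, so they are safest on buffers and intermediate scratch tensors.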
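And the built-in-losses tip: a hand-rolled mean-squared error is easy to replace with torch.nn.functional.mse_loss.

```python
import torch
import torch.nn.functional as F

pred = torch.randn(8, 4, requires_grad=True)
target = torch.randn(8, 4)

# Instead of a custom MSELoss class computing ((pred - target) ** 2).mean():
loss = F.mse_loss(pred, target)  # well-tested built-in with the same semantics
loss.backward()
```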
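Finally, a minimal autograd-profiler run; the Linear model here is a stand-in for illustration, not part of Nebula.

```python
import torch

model = torch.nn.Linear(128, 64)  # stand-in model
x = torch.randn(32, 128)

with torch.autograd.profiler.profile() as prof:
    for _ in range(10):
        model(x)

# Aggregate table of the slowest ops; look here for bottlenecks.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```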
 

Contribute to Nebula!


© APAC AI 2022 - 2024