Duskfall Crew

Mastering LoRA: Unleashing the Power of Large Datasets

Large Dataset LoRA Tips and Tricks



Are you one of those people who drops 300 or more images into a training folder and then wonders why training takes so incredibly long? First, let me be clear: I'm not here to dictate your learning rates or schedulers; those are better explained by experts in the field, and I'm still learning them myself. What I do want to cover in this post is how to cut down the lengthy training times you'll see on LoRA training GUIs and notebooks running a UNet learning rate of roughly 5e-4. With a few adjustments, you can get a reasonable-quality LoRA without compromising what it learns.



Ok so what's the deal?

When it comes to LoRA training in the anime space, most recommendations revolve around a 5e-4 UNet learning rate and a 1e-4 text encoder learning rate. That combination promises strong learning and high-quality LoRAs, but keep in mind there are many other approaches to choosing learning rates. If you struggle with the nitty-gritty details because math isn't your strong suit, fear not; I'm here to help. A dataset of 50-100 images (or even fewer) can still yield remarkable LoRAs and put you ahead of the competition, but some of us are burdened with an overwhelming amount of data and countless creative ideas.
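
If you're working in a Colab notebook or a kohya-based script, those two numbers usually boil down to a pair of config values along the lines of the sketch below. This is only a rough illustration, assuming variable names in the style of kohya's sd-scripts (unet_lr / text_encoder_lr); your notebook's cells may label them differently, so check against your own setup.

```python
# Rough sketch of the commonly recommended anime LoRA learning rates.
# Variable names mirror kohya sd-scripts options; your notebook may differ.
unet_lr = 5e-4          # UNet learning rate (0.0005)
text_encoder_lr = 1e-4  # text encoder learning rate (0.0001)
```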

Dataset Size Comparison

Here's a small guide to batch size and epochs. I won't talk in raw step counts, because most kohya-based LoRA scripts use bucketed training and work out the step count for you from your dataset size, batch size, and epochs.


50-100 Images (Dataset Size)

If you want to cut execution time without disrupting your Colab or rental session, set your batch size to around 3 and your epochs to at least 7, but be careful not to exceed 10.
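
To get a feel for what those numbers mean in steps, here's a back-of-the-envelope estimate. It's only a sketch: it assumes one repeat per image, bucketing (which groups images by aspect ratio) can nudge the real count slightly, and the helper name is mine, not something from any particular notebook.

```python
import math

def estimate_steps(num_images: int, batch_size: int, epochs: int, repeats: int = 1) -> int:
    """Rough total optimizer steps: ceil(images * repeats / batch size) per epoch."""
    steps_per_epoch = math.ceil(num_images * repeats / batch_size)
    return steps_per_epoch * epochs

# 100 images at batch size 3:
print(estimate_steps(100, batch_size=3, epochs=7))   # 238 steps
print(estimate_steps(100, batch_size=3, epochs=10))  # 340 steps
```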


Keep in mind that these recommendations are based on the specific learning rates I was taught. It's ultimately a matter of preference, and there may be alternative approaches that suit your needs better.


100-300 Images

Now, let's dive into the interesting part. If you stick with a batch size of 2-3 and 10 epochs in Holo's notebook or other scripts, you might start getting warnings that you have too many steps. This becomes more pronounced as you approach or exceed 300 items in your dataset with everything else left at default.


To address this, cap your epochs at 8 and set your batch size to 3 (you can experiment with 4 if you're comfortable managing loss and learning quality). That said, don't go below 5-6 epochs if you want good results.
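
Using the same rough arithmetic as before, here's why those warnings start appearing around 300 images, and how dropping to 8 epochs at batch size 3 pulls the step count back down. This again assumes one repeat per image; real bucketed counts may differ a little.

```python
import math

images = 300
default_steps  = math.ceil(images / 2) * 10  # batch size 2, 10 epochs -> 1500 steps
adjusted_steps = math.ceil(images / 3) * 8   # batch size 3,  8 epochs ->  800 steps
print(default_steps, adjusted_steps)
```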


Pushing the Boundaries: Exploring LoRA Training Beyond 400-500 Images

In my daring quest for knowledge, I've been testing the limits of LoRA training, but I haven't personally gone beyond roughly 400-500 images; past that threshold it's all uncertainty and experimentation. At the moment I'm working on a project with around 350-400 pictures.


To stretch my resources, I made some strategic adjustments. I dropped the number of epochs to 5, conserving my precious Colab credits until I find a way to run Vast and/or Runpod with Bmaltais' or Derrian's GUI, and I boldly cranked the batch size up to 4. That may cost a little style fidelity, but it's an acceptable trade-off since this run is really a LoRA binding test for future projects.


Attention, Colab Users:

If you're using the free version of Colab, this guide matters even more. Even with Colab Pro, disconnections can happen before the 5-hour mark, which makes the following information crucial.


A word of caution: around the 1.5-2 hour mark, Colab tends to act strangely and throw script errors for no apparent reason. These usually don't affect the training itself, but frankly it would be better if it simply disconnected instead, so you could spot the problem, deal with it, and avoid wasting valuable credits.


Venturing into the Realm of 500 and Beyond

Greetings, you enthusiasts of memes!


Ah, I recognize some familiar faces among you. We were recently chatting in the model-sharing space about how to get around those lengthy 6-8 hour training sessions on Colab.


Now, let's get down to business. At this scale, limit your epochs to a maximum of 5. If the style you're after doesn't need to be laid on especially thick or come with intense effects, a batch size of 5-6 should suffice.
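
If you want to sanity-check whether a run this size will even fit inside a Colab session, a crude time estimate helps. This is only a sketch under stated assumptions: the 3 seconds per step and 10 repeats per image below are placeholders picked for illustration, not measurements from this post; read the real s/it off your own training progress bar and use whatever repeat count your dataset is actually set to.

```python
import math

def estimate_hours(num_images: int, batch_size: int, epochs: int,
                   sec_per_step: float, repeats: int = 1) -> float:
    """Very rough wall-clock estimate: total optimizer steps * seconds per step."""
    steps_per_epoch = math.ceil(num_images * repeats / batch_size)
    return steps_per_epoch * epochs * sec_per_step / 3600

# 500 images, batch size 6, 5 epochs, with placeholder values of 10 repeats
# per image and ~3 s/step -- swap in your own numbers:
hours = estimate_hours(500, batch_size=6, epochs=5, sec_per_step=3.0, repeats=10)
print(round(hours, 1))  # ceil(5000/6) = 834 steps/epoch -> 4170 steps -> ~3.5 hours
```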


However, you'll need to dig into schedulers and learning rates. The larger your batch size, the less room you have left to fine-tune your timing, so thorough research and a real understanding of those settings becomes essential.


To be perfectly candid, large-scale training on Colab requires a solid grasp of the intricacies involved. So tread carefully and be prepared to face the challenges head-on. Hilarity may ensue!


Disclaimer:

I am not a certified, guide-writing-level LoRA trainer. These are techniques I've personally learned to work around certain challenges, offered as guidance for your own LoRA development.


As we progress, we can offer help navigating Colab training notebooks for your LoRA projects.


However, we highly recommend that users eventually familiarize themselves with Bmaltais' or Derrian Distro's GUI scripts, as they offer advanced functionality beyond the basics.


Please note that our approach may not always be flawless.


If you have any additional tips or tricks to share, please feel free to contribute!


How to Support Us:

Listen to the music that we've made that goes with our art: https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=b60d209385a74b38


WE ARE PROUDLY SPONSORED BY: https://www.piratediffusion.com/
