[MoE] Improve moe kernels for unsloth fine tuning #3812
base: nightly
Conversation
Summary of Changes

Hello @Datta0, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request focuses on significantly improving the performance and usability of Mixture-of-Experts (MoE) kernels, particularly for Qwen3 models, within the Unsloth framework. The core changes involve integrating Triton-optimized grouped GEMM operations, implementing a robust caching system for kernel configurations to reduce overhead, and ensuring better compatibility with `torch.compile` by handling tracing and dynamic kernel parameters. These enhancements aim to provide a more efficient and streamlined fine-tuning experience for MoE architectures.
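As a rough illustration of what a grouped GEMM computes in this context, here is a pure-PyTorch reference sketch under assumed shapes (the weight layout follows the `N = W.shape[0] // num_experts` convention visible in the diff below; the function name and signature are illustrative, not the PR's kernel API):

```python
import torch

def grouped_gemm_reference(X_sorted, W, m_sizes, num_experts):
    """Pure-PyTorch sketch of a grouped GEMM over per-expert token groups.

    X_sorted: [total_tokens, K] tokens grouped contiguously by expert.
    W:        [num_experts * N, K] stacked expert weights, so N = W.shape[0] // num_experts.
    m_sizes:  [num_experts] number of tokens routed to each expert.
    """
    N = W.shape[0] // num_experts
    outputs, start = [], 0
    for e, m in enumerate(m_sizes.tolist()):
        W_e = W[e * N:(e + 1) * N]                          # [N, K] weights for expert e
        outputs.append(X_sorted[start:start + m] @ W_e.T)   # [m, N] outputs for its tokens
        start += m
    return torch.cat(outputs, dim=0)                        # [total_tokens, N]
```

The Triton kernels fuse this per-expert loop into a single launch, which is where the per-expert `m_sizes` vector discussed in the reviews below comes in.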
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
N = W.shape[0] // num_experts
assert K == W.shape[1], f"K ({K}) must match W.shape[1] ({W.shape[1]})"

if fuse_mul_post:
```
Restore fused mul warning flag initialization
The fused-mul path now references `_FUSED_MUL_WARN`, but the module no longer defines the flag after the TMA refactor, so any call with `fuse_mul_post=True` hits a `NameError` before the kernel executes. This is a regression from the previous version, where the guard was initialized, and it will crash inference paths that rely on post-mul fusion.
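A minimal sketch of one way the guard could be restored, assuming the flag only gates a one-time warning (the helper name and warning text below are illustrative, not the module's actual code):

```python
# Module-level flag so the fused-mul warning is emitted at most once per process.
_FUSED_MUL_WARN = False

def _maybe_warn_fused_mul(logger):
    """Hypothetical helper: warn the first time the fused post-mul path is taken."""
    global _FUSED_MUL_WARN
    if not _FUSED_MUL_WARN:
        logger.warning("Unsloth: using the fused post-mul path (fuse_mul_post=True).")
        _FUSED_MUL_WARN = True
```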
```python
expert_mask = torch.nn.functional.one_hot(
    selected_experts, num_classes = num_experts
).permute(2, 1, 0)

token_counts_by_expert = expert_mask.sum(dim = 1).int()
```
Use per-expert token counts for grouped GEMM
In the Triton forward for Qwen3 MoE, `token_counts_by_expert` is computed as `expert_mask.sum(dim=1)`, which yields a `[num_experts, num_tokens]` matrix instead of the expected 1-D counts per expert. The grouped GEMM kernels read `m_sizes` as a length-`num_experts` vector, so this flattened matrix feeds arbitrary per-token entries into the kernel, producing incorrect routing and outputs whenever the Triton MoE path is enabled.
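A small standalone check of the shape mismatch described above (tensor names mirror the snippet under review; the sizes are arbitrary):

```python
import torch

num_tokens, top_k, num_experts = 4, 2, 8
selected_experts = torch.randint(0, num_experts, (num_tokens, top_k))

expert_mask = torch.nn.functional.one_hot(
    selected_experts, num_classes = num_experts
).permute(2, 1, 0)                      # [num_experts, top_k, num_tokens]

# sum(dim=1) only collapses the top_k axis, so the result is a
# [num_experts, num_tokens] matrix rather than the length-num_experts
# vector that the grouped GEMM reads as m_sizes.
print(expert_mask.sum(dim = 1).shape)   # torch.Size([8, 4])
```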
Code Review
This pull request introduces a valuable auto-tuning cache system for MoE Triton kernels, which will prevent re-tuning on each run and significantly improve startup times. The changes also enhance torch.compile compatibility and add a new Triton-optimized path for Qwen3-MoE models. While the overall direction is excellent, I've found a critical bug in the token routing logic within the new qwen3_moe_triton.py file that needs to be addressed. Additionally, there are a couple of smaller issues regarding TMA support detection and error message handling that I've flagged for improvement.
```python
gate_up_weights = torch.stack(
    gate_up_weights
)  # [num_experts, 2*intermediate_dim, hidden_dim]
down_weights = torch.stack(
    down_weights
)  # [num_experts, hidden_dim, intermediate_dim]

# Compute token counts and gather indices without array operations
expert_mask = torch.nn.functional.one_hot(
    selected_experts, num_classes = num_experts
).permute(2, 1, 0)

token_counts_by_expert = expert_mask.sum(dim = 1).int()

# Create gather indices for routing - avoid complex array operations
total_tokens = num_tokens * top_k
gather_indices = torch.zeros(
    total_tokens, dtype = torch.long, device = hidden_states.device
)
```
There appears to be a critical bug in the calculation of `token_counts_by_expert` and `gather_indices`.
- `token_counts_by_expert` on line 175 will have a shape of `(num_experts, num_tokens)` instead of the expected `(num_experts,)` required by the `grouped_gemm` kernel.
- The logic for creating `gather_indices` involves a Python loop with several GPU operations, which is inefficient for a forward pass. More importantly, it seems to produce token indices instead of the permutation indices that `grouped_gemm` expects for sorting tokens by expert.

A more standard, efficient, and correct approach is to use `torch.bincount` for the counts and `torch.argsort` for the indices.
Suggested change:
```diff
-gate_up_weights = torch.stack(
-    gate_up_weights
-)  # [num_experts, 2*intermediate_dim, hidden_dim]
-down_weights = torch.stack(
-    down_weights
-)  # [num_experts, hidden_dim, intermediate_dim]
-# Compute token counts and gather indices without array operations
-expert_mask = torch.nn.functional.one_hot(
-    selected_experts, num_classes = num_experts
-).permute(2, 1, 0)
-token_counts_by_expert = expert_mask.sum(dim = 1).int()
-# Create gather indices for routing - avoid complex array operations
-total_tokens = num_tokens * top_k
-gather_indices = torch.zeros(
-    total_tokens, dtype = torch.long, device = hidden_states.device
-)
+# Compute token counts and gather indices
+# Sort experts to get gather indices and token counts
+flat_selected_experts = selected_experts.view(-1)
+gather_indices = torch.argsort(flat_selected_experts, stable=True)
+token_counts_by_expert = torch.bincount(flat_selected_experts, minlength=self.num_experts)
```
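As a quick sanity check of the suggested routing math (standalone, with `self.num_experts` replaced by a local variable), the counts come out one per expert and the gather order groups tokens contiguously by expert id:

```python
import torch

num_tokens, top_k, num_experts = 6, 2, 4
selected_experts = torch.randint(0, num_experts, (num_tokens, top_k))

flat_selected_experts = selected_experts.view(-1)
gather_indices = torch.argsort(flat_selected_experts, stable=True)
token_counts_by_expert = torch.bincount(flat_selected_experts, minlength=num_experts)

assert token_counts_by_expert.shape == (num_experts,)
assert token_counts_by_expert.sum().item() == num_tokens * top_k
# Gathering in this order yields expert ids in non-decreasing order,
# i.e. tokens are grouped contiguously per expert.
sorted_ids = flat_selected_experts[gather_indices]
assert torch.all(sorted_ids[1:] >= sorted_ids[:-1])
```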
```python
def _check_tma_support():
    import triton.language as tl

    gpu_supports_tma = torch.cuda.get_device_capability()[0] >= 9
    # Check for both old experimental and new stable API names
    triton_has_tma_api = hasattr(tl, "make_tensor_descriptor") or hasattr(
```
The current implementation of `_check_tma_support` can lead to runtime errors. It checks for the existence of either `make_tensor_descriptor` or `_experimental_make_tensor_descriptor`, but the kernels now exclusively use `make_tensor_descriptor`. If a user has a Triton version with only the experimental API, `_SUPPORTS_TMA` will be `True`, but the kernel launch will fail with an `AttributeError`. The check should be tightened to `hasattr(tl, 'make_tensor_descriptor')` to match the kernels' expectation, or the kernels should be updated to handle both API versions.
Suggested change:
```diff
 def _check_tma_support():
     import triton.language as tl

     gpu_supports_tma = torch.cuda.get_device_capability()[0] >= 9
-    # Check for both old experimental and new stable API names
-    triton_has_tma_api = hasattr(tl, "make_tensor_descriptor") or hasattr(
+    # Kernels now use the stable `make_tensor_descriptor` API.
+    triton_has_tma_api = hasattr(tl, 'make_tensor_descriptor')
+    return gpu_supports_tma and triton_has_tma_api
```
```python
# Cache the results
_kernel_config_cache[cache_key] = configs
```
The check for an AttributeError related to TMA features is outdated. The code in the kernels was updated from _experimental_make_tensor_descriptor to make_tensor_descriptor. This check should be updated to look for the new function name to provide an accurate warning to the user when the fallback to default configs occurs.
Suggested change:
```diff
-# Cache the results
-_kernel_config_cache[cache_key] = configs
+if "AttributeError" in str(e) and "make_tensor_descriptor" in str(e):
+    logger.warning("Unsloth: Your Triton version might be incompatible with TMA features. Falling back to default configs.")
```
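For context, a hedged sketch of how such a fallback might wrap the autotuning step; the helpers `_autotune_configs` and `_default_configs` are hypothetical placeholders, and only `_kernel_config_cache` and the warning come from the snippets above:

```python
import logging

logger = logging.getLogger(__name__)
_kernel_config_cache = {}

def _get_configs(cache_key, _autotune_configs, _default_configs):
    """Hypothetical wrapper: autotune once per shape, fall back to defaults on TMA-related failures."""
    if cache_key in _kernel_config_cache:
        return _kernel_config_cache[cache_key]
    try:
        configs = _autotune_configs(cache_key)
    except Exception as e:
        if "AttributeError" in str(e) and "make_tensor_descriptor" in str(e):
            logger.warning(
                "Unsloth: Your Triton version might be incompatible with TMA features. "
                "Falling back to default configs."
            )
        configs = _default_configs(cache_key)
    # Cache the results
    _kernel_config_cache[cache_key] = configs
    return configs
```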
[pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
This might come in handy for unslothai/unsloth-zoo#396
We do prefer using grouped_mm there, but this is a fallback for that before going to pure PyTorch mode.
Needs transformers v5 there.
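A hedged sketch of the fallback ordering described here (PyTorch's grouped matmul when present, otherwise these Triton kernels, otherwise a plain per-expert loop); the capability checks are illustrative placeholders, not Unsloth's actual dispatch logic:

```python
import importlib.util
import torch

def select_moe_backend() -> str:
    """Pick an MoE matmul path following the preference order from the comment above."""
    if hasattr(torch, "_grouped_mm"):
        # Preferred: a native grouped matmul op, when the PyTorch build provides one.
        return "grouped_mm"
    if importlib.util.find_spec("triton") is not None and torch.cuda.is_available():
        # Fallback: Triton grouped GEMM kernels like the ones in this PR.
        return "triton_grouped_gemm"
    # Last resort: pure PyTorch, looping over experts.
    return "pure_pytorch"

print(select_moe_backend())
```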