
Conversation

@Datta0 (Collaborator) commented Jan 1, 2026

This might come in handy for unslothai/unsloth-zoo#396.
We prefer using grouped_mm there, but this serves as a fallback before dropping down to pure PyTorch mode (a rough sketch of that ordering follows below).
It needs transformers v5 there.
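As a rough illustration of that preference order, here is a minimal sketch of backend selection; select_moe_backend and triton_grouped_gemm_available are hypothetical names used only for this example, and the torch._grouped_mm probe is an assumption about the preferred native path rather than the PR's actual dispatch code:

import torch

def triton_grouped_gemm_available() -> bool:
    # Hypothetical helper: True when Triton and a CUDA device are present.
    try:
        import triton  # noqa: F401
    except ImportError:
        return False
    return torch.cuda.is_available()

def select_moe_backend() -> str:
    # Preferred path: a native grouped matmul op, if this torch build exposes one.
    if hasattr(torch, "_grouped_mm"):
        return "grouped_mm"
    # Fallback added here: Triton grouped GEMM kernels.
    if triton_grouped_gemm_available():
        return "triton_grouped_gemm"
    # Last resort: a pure PyTorch per-expert loop.
    return "pytorch_loop"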

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @Datta0, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on significantly improving the performance and usability of Mixture-of-Experts (MoE) kernels, particularly for Qwen3 models, within the Unsloth framework. The core changes involve integrating Triton-optimized grouped GEMM operations, implementing a robust caching system for kernel configurations to reduce overhead, and ensuring better compatibility with "torch.compile" by handling tracing and dynamic kernel parameters. These enhancements aim to provide a more efficient and streamlined fine-tuning experience for MoE architectures.

Highlights

  • MoE Kernel Autotuning Cache: Introduced a new caching system to store and reuse auto-tuned configurations for Mixture-of-Experts (MoE) kernels, preventing redundant tuning at runtime (a sketch of the cache and tracing guard follows this list).
  • Triton Kernel Compatibility with torch.compile: Enhanced Triton grouped GEMM kernels with the "@allow_in_graph" decorator and a tracing detection mechanism ("_is_tracing") to ensure seamless integration and prevent issues when using "torch.compile".
  • Dynamic Kernel Parameter Handling: Modified Triton kernels to accept "NUM_TOKENS" and "NUM_SMS" as dynamic parameters rather than compile-time constants, and removed "NUM_TOKENS" from autotuning keys to reduce recompilation.
  • Qwen3 MoE Triton Integration: Added a new module ("qwen3_moe_triton.py") to specifically integrate and leverage Triton grouped GEMM kernels for Qwen3 Mixture-of-Experts models, including a fallback mechanism and pre-autotuning during model loading.
  • TMA Support Refinements: Updated Triton Tensor Memory Accelerator (TMA) descriptor creation to use "tl.make_tensor_descriptor" and temporarily disabled TMA due to ongoing compatibility work, while adding auto-detection for TMA parameters.
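A minimal sketch of the first two highlights (the config cache and the tracing guard), assuming a cache keyed on problem shapes; the function signature below is illustrative rather than the PR's actual API, though _kernel_config_cache and _is_tracing are names that appear elsewhere in this review:

import torch

# Illustrative cache: shape signature -> previously tuned kernel config.
_kernel_config_cache = {}

def _is_tracing() -> bool:
    # Avoid running the autotuner while torch.compile / Dynamo is tracing,
    # so the benchmarking loop never ends up inside the captured graph.
    return torch.compiler.is_compiling()

def get_kernel_config(num_experts, N, K, dtype, autotune_fn, default_config):
    # NUM_TOKENS is deliberately excluded from the key so that varying batch
    # sizes reuse the same config instead of triggering re-tuning.
    cache_key = (num_experts, N, K, str(dtype))
    if cache_key in _kernel_config_cache:
        return _kernel_config_cache[cache_key]
    if _is_tracing():
        # While tracing, return a safe default instead of benchmarking.
        return default_config
    config = autotune_fn()
    _kernel_config_cache[cache_key] = config
    return config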



@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


N = W.shape[0] // num_experts
assert K == W.shape[1], f"K ({K}) must match W.shape[1] ({W.shape[1]})"

if fuse_mul_post:


P1: Restore fused-mul warning flag initialization

The fused-mul path now references _FUSED_MUL_WARN but the module no longer defines the flag after the TMA refactor, so any call with fuse_mul_post=True hits a NameError before executing the kernel. This is a regression from the previous version where the guard was initialized and will crash inference paths that rely on post-mul fusion.
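A sketch of how the guard could be reinstated; only the _FUSED_MUL_WARN flag itself comes from the comment above, while the helper name and warning text below are illustrative:

import logging

logger = logging.getLogger(__name__)

# Module-level flag so the fused-mul notice is emitted at most once per process.
_FUSED_MUL_WARN = False

def _maybe_warn_fused_mul():
    global _FUSED_MUL_WARN
    if not _FUSED_MUL_WARN:
        # Illustrative message; the real wording lives in the kernel module.
        logger.warning("Unsloth: fuse_mul_post=True is using the fused multiply epilogue.")
        _FUSED_MUL_WARN = True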


Comment on lines +179 to +183
expert_mask = torch.nn.functional.one_hot(
    selected_experts, num_classes = num_experts
).permute(2, 1, 0)

token_counts_by_expert = expert_mask.sum(dim = 1).int()


P1: Use per-expert token counts for grouped GEMM

In the Triton forward for Qwen3 MoE, token_counts_by_expert is computed as expert_mask.sum(dim=1) which yields a [num_experts, num_tokens] matrix instead of the expected 1‑D counts per expert. The grouped GEMM kernels read m_sizes as a length‑num_experts vector, so this flattened matrix feeds arbitrary per‑token entries into the kernel, producing incorrect routing and outputs whenever the Triton MoE path is enabled.
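A quick standalone shape check (toy sizes, illustrative only) that reproduces the mismatch described above:

import torch

num_tokens, top_k, num_experts = 4, 2, 8
selected_experts = torch.randint(0, num_experts, (num_tokens, top_k))

expert_mask = torch.nn.functional.one_hot(selected_experts, num_classes = num_experts).permute(2, 1, 0)
print(expert_mask.sum(dim = 1).shape)  # torch.Size([8, 4]) -> [num_experts, num_tokens], not 1-D

counts = torch.bincount(selected_experts.view(-1), minlength = num_experts)
print(counts.shape)                    # torch.Size([8])    -> one count per expert, as m_sizes expects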


@gemini-code-assist (Contributor, bot) left a comment

Code Review

This pull request introduces a valuable auto-tuning cache system for MoE Triton kernels, which will prevent re-tuning on each run and significantly improve startup times. The changes also enhance torch.compile compatibility and add a new Triton-optimized path for Qwen3-MoE models. While the overall direction is excellent, I've found a critical bug in the token routing logic within the new qwen3_moe_triton.py file that needs to be addressed. Additionally, there are a couple of smaller issues regarding TMA support detection and error message handling that I've flagged for improvement.

Comment on lines +170 to +189

gate_up_weights = torch.stack(
    gate_up_weights
) # [num_experts, 2*intermediate_dim, hidden_dim]
down_weights = torch.stack(
    down_weights
) # [num_experts, hidden_dim, intermediate_dim]

# Compute token counts and gather indices without array operations
expert_mask = torch.nn.functional.one_hot(
    selected_experts, num_classes = num_experts
).permute(2, 1, 0)

token_counts_by_expert = expert_mask.sum(dim = 1).int()

# Create gather indices for routing - avoid complex array operations
total_tokens = num_tokens * top_k
gather_indices = torch.zeros(
    total_tokens, dtype = torch.long, device = hidden_states.device
)

critical

There appears to be a critical bug in the calculation of token_counts_by_expert and gather_indices.

  1. token_counts_by_expert on line 175 will have a shape of (num_experts, num_tokens) instead of the expected (num_experts,) required by the grouped_gemm kernel.
  2. The logic for creating gather_indices involves a Python loop with several GPU operations, which is inefficient for a forward pass. More importantly, it seems to produce token indices instead of the permutation indices that grouped_gemm expects for sorting tokens by expert.

A more standard, efficient, and correct approach is to use torch.bincount for the counts and torch.argsort for the indices.

Suggested change

Before (current code in the diff):

gate_up_weights = torch.stack(
    gate_up_weights
) # [num_experts, 2*intermediate_dim, hidden_dim]
down_weights = torch.stack(
    down_weights
) # [num_experts, hidden_dim, intermediate_dim]

# Compute token counts and gather indices without array operations
expert_mask = torch.nn.functional.one_hot(
    selected_experts, num_classes = num_experts
).permute(2, 1, 0)

token_counts_by_expert = expert_mask.sum(dim = 1).int()

# Create gather indices for routing - avoid complex array operations
total_tokens = num_tokens * top_k
gather_indices = torch.zeros(
    total_tokens, dtype = torch.long, device = hidden_states.device
)

After (suggested replacement for the counting/routing block):

# Compute token counts and gather indices
# Sort experts to get gather indices and token counts
flat_selected_experts = selected_experts.view(-1)
gather_indices = torch.argsort(flat_selected_experts, stable=True)
token_counts_by_expert = torch.bincount(flat_selected_experts, minlength=self.num_experts)
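As a standalone sanity check (toy sizes, illustrative only), the argsort permutation groups the flattened token slots by expert id, and the bincount prefix sums give the per-expert segment boundaries that a grouped GEMM layout assumes:

import torch

num_tokens, top_k, num_experts = 4, 2, 8
selected_experts = torch.randint(0, num_experts, (num_tokens, top_k))

flat_selected_experts = selected_experts.view(-1)
gather_indices = torch.argsort(flat_selected_experts, stable = True)
token_counts_by_expert = torch.bincount(flat_selected_experts, minlength = num_experts)

sorted_experts = flat_selected_experts[gather_indices]   # token slots ordered by expert id
offsets = torch.cumsum(token_counts_by_expert, dim = 0)  # expert segment boundaries
assert torch.equal(sorted_experts, torch.sort(flat_selected_experts).values)
assert offsets[-1].item() == num_tokens * top_k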

Comment on lines +43 to +48
def _check_tma_support():
    import triton.language as tl

    gpu_supports_tma = torch.cuda.get_device_capability()[0] >= 9
    # Check for both old experimental and new stable API names
    triton_has_tma_api = hasattr(tl, "make_tensor_descriptor") or hasattr(

high

The current implementation of _check_tma_support can lead to runtime errors. It checks for the existence of either make_tensor_descriptor or _experimental_make_tensor_descriptor, but the kernels now exclusively use make_tensor_descriptor. If a user has a Triton version with only the experimental API, _SUPPORTS_TMA will be True, but the kernel launch will fail with an AttributeError. The check should be made stricter to only hasattr(tl, 'make_tensor_descriptor') to match the kernel's expectation, or the kernels should be updated to handle both API versions.

Suggested change

Before (current code in the diff):

def _check_tma_support():
    import triton.language as tl

    gpu_supports_tma = torch.cuda.get_device_capability()[0] >= 9
    # Check for both old experimental and new stable API names
    triton_has_tma_api = hasattr(tl, "make_tensor_descriptor") or hasattr(

After (suggested):

def _check_tma_support():
    import triton.language as tl

    gpu_supports_tma = torch.cuda.get_device_capability()[0] >= 9
    # Kernels now use the stable `make_tensor_descriptor` API.
    triton_has_tma_api = hasattr(tl, 'make_tensor_descriptor')
    return gpu_supports_tma and triton_has_tma_api

Comment on lines +192 to +193
# Cache the results
_kernel_config_cache[cache_key] = configs

medium

The check for an AttributeError related to TMA features is outdated. The code in the kernels was updated from _experimental_make_tensor_descriptor to make_tensor_descriptor. This check should be updated to look for the new function name to provide an accurate warning to the user when the fallback to default configs occurs.

Suggested change

Before (current code in the diff):

# Cache the results
_kernel_config_cache[cache_key] = configs

After (suggested):

if "AttributeError" in str(e) and "make_tensor_descriptor" in str(e):
    logger.warning("Unsloth: Your Triton version might be incompatible with TMA features. Falling back to default configs.")
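For orientation, a sketch of the kind of try/except this suggestion sits inside; the structure and the _autotune_kernel / _default_configs helpers are stand-ins for illustration, not the file's actual code:

import logging

logger = logging.getLogger(__name__)
_kernel_config_cache = {}

def _autotune_kernel(key):
    # Hypothetical stand-in that mimics a wrapped Triton failure message.
    raise RuntimeError("AttributeError: module 'triton.language' has no attribute 'make_tensor_descriptor'")

def _default_configs():
    # Hypothetical safe defaults used when autotuning fails.
    return {"BLOCK_M": 64, "BLOCK_N": 64}

cache_key = ("qwen3-moe", 128, 768, "bfloat16")  # illustrative key

try:
    configs = _autotune_kernel(cache_key)
except Exception as e:
    if "AttributeError" in str(e) and "make_tensor_descriptor" in str(e):
        logger.warning(
            "Unsloth: Your Triton version might be incompatible with TMA features. "
            "Falling back to default configs."
        )
    configs = _default_configs()

# Cache the results
_kernel_config_cache[cache_key] = configs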
