Kernels

This is the repository card for kernels-community/flash-mla, pushed to the Hub. It was built to be used with the kernels library. This card was automatically generated.

How to use

```python
# make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/flash-mla")

# `__version__` is a string attribute, not a callable
print(kernel_module.__version__)
```

Available functions

  • __version__
  • FlashMLASchedMeta
  • get_mla_metadata
  • flash_mla_with_kvcache
  • flash_attn_varlen_func
  • flash_attn_varlen_qkvpacked_func
  • flash_attn_varlen_kvpacked_func
  • flash_mla_sparse_fwd
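
Functions like `flash_mla_with_kvcache` operate on a paged KV cache addressed through a block table. The sketch below is purely illustrative (plain NumPy, no GPU): it shows the general paged-cache idea, where each sequence's cache is split into fixed-size blocks and a per-sequence block table maps logical block indices to physical block ids. The block size, array shapes, and helper function here are assumptions for illustration, not the kernel's actual API.

```python
import numpy as np

BLOCK_SIZE = 64  # tokens per physical cache block (illustrative assumption)

def gather_kv_for_token(kv_cache, block_table, seq_idx, token_pos):
    """Look up the cached KV vector for one token of one sequence.

    kv_cache:    (num_physical_blocks, BLOCK_SIZE, head_dim) pool of blocks
    block_table: (num_seqs, max_blocks_per_seq) logical -> physical mapping
    """
    logical_block = token_pos // BLOCK_SIZE
    offset = token_pos % BLOCK_SIZE
    physical_block = block_table[seq_idx, logical_block]
    return kv_cache[physical_block, offset]

# Physical pool: 8 blocks of BLOCK_SIZE tokens each, head_dim 4.
kv_cache = np.arange(8 * BLOCK_SIZE * 4, dtype=np.float32).reshape(8, BLOCK_SIZE, 4)

# Sequence 0 owns physical blocks 3 then 5 (deliberately non-contiguous).
block_table = np.array([[3, 5]])

# Token 70 -> logical block 1, offset 6 -> physical block 5, offset 6.
v = gather_kv_for_token(kv_cache, block_table, seq_idx=0, token_pos=70)
assert np.array_equal(v, kv_cache[5, 6])
```

The point of the indirection is that a sequence's cache need not be contiguous in memory, so blocks can be allocated and freed independently as sequences grow.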

Benchmarks

A benchmarking script is available for this kernel. Run `kernels benchmark kernels-community/flash-mla`.

Downloads last month: 860