This is the repository card of kernels-community/flash-mla, pushed to the Hub. It was built to be used with the kernels library. This card was automatically generated.
## How to use
```python
# make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/flash-mla")

# `__version__` is a string attribute of the loaded module, not a callable
print(kernel_module.__version__)
```
## Available functions

- `__version__`
- `FlashMLASchedMeta`
- `get_mla_metadata`
- `flash_mla_with_kvcache`
- `flash_attn_varlen_func`
- `flash_attn_varlen_qkvpacked_func`
- `flash_attn_varlen_kvpacked_func`
- `flash_mla_sparse_fwd`
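Once loaded, the module exposes these entry points as attributes. A minimal sketch of checking which of them are present on the loaded module (the `try`/`except` is only so the snippet degrades gracefully on a machine without `kernels`, a GPU, or network access; the names come from the list above):

```python
# Sketch: inspect which of the card's entry points the loaded kernel
# module exposes. Assumes `kernels` is installed and the kernel can be
# fetched from the Hub; otherwise falls back to an empty list.
names = ("get_mla_metadata", "flash_mla_with_kvcache", "flash_attn_varlen_func")
try:
    from kernels import get_kernel

    flash_mla = get_kernel("kernels-community/flash-mla")
    available = [n for n in names if hasattr(flash_mla, n)]
except Exception:
    # `kernels` missing, no network, or no compatible GPU: nothing to report.
    available = []
print(available)
```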
## Benchmarks

A benchmarking script is available for this kernel. Run `kernels benchmark kernels-community/flash-mla`.