This is the repository card of {repo_id}, pushed to the Hub. It was built to be used with the `kernels` library. This card was automatically generated.
## How to use

```python
# Make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/sage-attention")  # <- change the ID if needed
per_block_int8 = kernel_module.per_block_int8
per_block_int8(...)
```
## Available functions

- `per_block_int8`
- `per_warp_int8`
- `sub_mean`
- `per_channel_fp8`
- `sageattn`
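As a minimal sketch, the main entry point `sageattn` can be loaded through the `kernels` library and called on CUDA tensors. The argument names (`q`, `k`, `v`, `is_causal`) and the `(batch, heads, seq_len, head_dim)` layout follow the upstream SageAttention project and are assumptions here, not guaranteed by this card; the call is guarded so it only runs when a CUDA device is available.

```python
import torch

if torch.cuda.is_available():
    from kernels import get_kernel

    kernel_module = get_kernel("kernels-community/sage-attention")

    # Attention inputs in (batch, heads, seq_len, head_dim) layout;
    # half precision is typical for this kernel.
    q = torch.randn(1, 8, 1024, 128, dtype=torch.float16, device="cuda")
    k = torch.randn(1, 8, 1024, 128, dtype=torch.float16, device="cuda")
    v = torch.randn(1, 8, 1024, 128, dtype=torch.float16, device="cuda")

    # Assumed signature, modeled on the upstream SageAttention API.
    out = kernel_module.sageattn(q, k, v, is_causal=False)
    print(out.shape)
```

The output is expected to have the same shape as `q`; check the original source code linked below for the authoritative signature.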
## Supported backends
- cuda
## CUDA Capabilities
- 8.0
- 8.9
- 9.0a
## Benchmarks
[TODO: provide benchmarks if available]
## Source code
[TODO: provide original source code and other relevant citations if available]
## Notes
[TODO: provide additional notes about this kernel if needed]