Issues: pytorch/ao
Bug: TypeError in SAM2ImagePredictor.predict() method [bug] (#1431, opened Dec 18, 2024 by dongxiaolong)
[Feature Request] W4A4 Quantization Support in torchao [topic: new feature, topic: performance] (#1406, opened Dec 12, 2024 by xxw11)
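As background for the W4A4 request above: "W4A4" means both weights and activations are quantized to 4 bits, which typically requires packing two signed 4-bit values into each stored byte. A minimal pure-Python sketch of that nibble packing (illustrative only, not a torchao kernel):

```python
def pack_int4_pair(a, b):
    # Pack two signed 4-bit values (each in [-8, 7]) into one byte:
    # the low nibble holds a, the high nibble holds b.
    assert -8 <= a <= 7 and -8 <= b <= 7
    return (a & 0xF) | ((b & 0xF) << 4)

def unpack_int4_pair(byte):
    # Recover the two signed values by sign-extending each nibble.
    def sign_extend(nibble):
        return nibble - 16 if nibble >= 8 else nibble
    return sign_extend(byte & 0xF), sign_extend((byte >> 4) & 0xF)
```

Real W4A4 kernels keep the values packed and unpack inside the GEMM inner loop; this sketch only shows the storage format.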
Does Torch Support NPU Architectures like Ascend MDC910B and Multi-GPU Quantization for Large Models? [multibackend] (#1405, opened Dec 12, 2024 by Lenan22)
Where is the overloaded function for torch.nn.functional.linear(aqt, original_weight_tensor, bias)? (#1397, opened Dec 10, 2024 by Lenan22)
[Feature Request] add universal GEMM kernels for ARM CPU to torchao/experimental (#1394, opened Dec 10, 2024 by metascroy)
AO and Automated Mixed Precision [question, topic: documentation] (#1390, opened Dec 8, 2024 by bhack)
EfficientTAM [topic: new feature, topic: performance] (#1384, opened Dec 5, 2024 by bhack)
unhashable type: non-nested SymInt [autoquant, bug] (#1381, opened Dec 4, 2024 by bhack)
[Feature Request] Add dynamic kernel selection to torchao/experimental (#1376, opened Dec 4, 2024 by metascroy)
[Feature Request] Support of int8_dynamic_activation_int8_weight with asymmetrically quantized weights (#1320, opened Nov 20, 2024 by sanchitintel)
int8_dynamic_activation_int8_weight uses zero-points for weight when activation is asymmetrically quantized (#1317, opened Nov 20, 2024 by sanchitintel)
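For context on the zero-point issues above: symmetric quantization maps zero to integer 0 and needs only a scale, while asymmetric quantization adds a nonzero zero-point so an uncentered real range still maps exactly onto the integer range. A minimal pure-Python sketch of the difference (illustrative only; not torchao's implementation):

```python
def quantize_symmetric(xs, qmax=127):
    # Symmetric: scale maps [-absmax, absmax] onto [-qmax, qmax];
    # the zero-point is implicitly 0.
    scale = max(abs(x) for x in xs) / qmax
    q = [max(-qmax, min(qmax, round(x / scale))) for x in xs]
    return q, scale

def quantize_asymmetric(xs, qmin=-128, qmax=127):
    # Asymmetric: scale maps [min, max] onto [qmin, qmax]; a nonzero
    # zero-point shifts the grid so real 0.0 lands on an exact integer.
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point=0):
    return [(v - zero_point) * scale for v in q]
```

Issue #1317's point is about which of these schemes applies to the weight tensor when the activation side is asymmetric; the sketch only shows what a zero-point is.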
torch.compile(sync_float8_amax_and_scale_history) not working with triton latest main (#1311, opened Nov 19, 2024 by goldhuang)
[AQT] Failed to move compiled module with AQT to a different device [bug] (#1309, opened Nov 19, 2024 by gau-nernst)