Issues: intel/auto-round
export.py L76: Saving quantized model to auto_awq format crashes with AssertionError
#388 opened Dec 16, 2024 by fbaldassarri
autoround tuning is not stable for llama3.1/3.3 70B and variants
#355 opened Nov 29, 2024 by wenhuach21
Bug: consecutive quantization of the same model with different configs
#352 opened Nov 27, 2024 by wenhuach21
[New feature request] support exporting to gguf at group_size 32
#288 opened Oct 23, 2024 by wenhuach21