Issues: microsoft/onnxruntime
[DO NOT UNPIN] onnxruntime-gpu v1.10.0 PyPI Removal Notice
#22747 · opened Nov 6, 2024 by sophies927
Inconsistent Results for Output v1_0 After ONNX Runtime Optimization (Flaky Test)
model:transformer (issues related to a transformer model: BERT, GPT2, Hugging Face, Longformer, T5, etc.)
#23143 · opened Dec 18, 2024 by Thrsu
Inconsistent Results After ONNX Runtime Optimization
model:transformer
#23142 · opened Dec 18, 2024 by Thrsu
[Bug] InvalidArgument Error After Optimizing Model with ONNX Runtime
model:transformer
#23138 · opened Dec 18, 2024 by Thrsu
[Bug] Inconsistent Results After ONNX Runtime Optimization
model:transformer
#23133 · opened Dec 17, 2024 by Thrsu
[Performance] Can onnxruntime delegate the entire process to another framework when using other providers?
ep:MIGraphX (issues related to the AMD MIGraphX execution provider) · performance (issues related to performance regressions)
#23132 · opened Dec 17, 2024 by yht183
[Performance] CreateSession takes a very long time to load a .onnx file when working with the FileFuzzer tool
performance
#23129 · opened Dec 17, 2024 by chenyihong0504
MultiHeadAttention op shall return attention probabilities
core runtime (issues related to core runtime)
#23124 · opened Dec 16, 2024 by amancini-N
Support pointer-generator in BeamSearch op
core runtime
#23123 · opened Dec 16, 2024 by amancini-N
[Feature Request] Support pointer-generator networks on T5 BeamSearch
feature request (request for unsupported feature or enhancement)
#23122 · opened Dec 16, 2024 by amancini-N
Understanding the max_mem option of the OrtArenaCfg class
core runtime
#23121 · opened Dec 16, 2024 by vsbaldeev
[Bug][CUDAProvider] No attribute with name 'activation' is defined
model:transformer
#23119 · opened Dec 16, 2024 by Cookiee235
[CUDAProvider] Graph Optimization outputs an invalid model
core runtime · model:transformer
#23118 · opened Dec 16, 2024 by Cookiee235
On Linux, ONNX Runtime consumes a large amount of anonymous memory that is never released
api:Java (issues related to the Java API)
#23117 · opened Dec 16, 2024 by jiangjiongyu
FuseReluClip: Unexpected data type for Clip 'min' input of 11
core runtime
#23116 · opened Dec 16, 2024 by Cookiee235
[Bug][CUDAExecutionProvider] INVALID_ARGUMENT: unsupported conv activation mode "Sigmoid"
#23114 · opened Dec 16, 2024 by Cookiee235
About NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin Builds
platform:jetson (issues related to the NVIDIA Jetson platform)
#23113 · opened Dec 15, 2024 by mfatih7
RuntimeError: Assertion `false` failed: No Adapter From Version $20 for GridSample
converter (related to ONNX converters) · converter:dynamo (issues related to supporting the PyTorch Dynamo exporter)
#23112 · opened Dec 15, 2024 by jikechao
[Documentation] How to modularize ONNX Runtime: first on CPU, then on CPU with the OpenVINO EP, then on NVIDIA GPU with the TensorRT EP, simply by adding new provider libraries and all their dependencies
documentation (improvements or additions to documentation; typically submitted using template) · ep:OpenVINO (issues related to the OpenVINO execution provider) · ep:TensorRT (issues related to the TensorRT execution provider)
#23104 · opened Dec 13, 2024 by jcdatin
Issue when starting services with multithreading
performance
#23094 · opened Dec 12, 2024 by MenGuangwen0411
[Documentation] Is there an execution provider in ONNX Runtime that supports Mali GPUs?
documentation · ep:tvm (issues related to the TVM execution provider)
#23089 · opened Dec 12, 2024 by rokslej
Conflicting constraint checks/descriptions for PoolAttributes
#23088 · opened Dec 12, 2024 by Cookiee235
AttributeError: 'NoneType' object has no attribute 'item'
model:transformer
#23086 · opened Dec 12, 2024 by Cookiee235
Cannot resolve operator 'LSTM' with the WebGL backend
ep:WebGPU (ort-web WebGPU provider) · ep:WebNN (WebNN execution provider) · platform:mobile (issues related to ONNX Runtime mobile; typically submitted using template) · platform:web (issues related to ONNX Runtime web; typically submitted using template)
#23083 · opened Dec 11, 2024 by mrdrprofuroboros
[Feature Request] Use the ONNX Runtime TensorRT execution provider with the lean TensorRT runtime
ep:TensorRT · feature request
#23082 · opened Dec 11, 2024 by jcdatin