Describe the issue
This simple model passes the validity check, so it appears to be a valid model. However, running inference on it crashes unexpectedly, and the crash message shows that MaxPool enforces a constraint between the pads and kernel_shape attributes ("Pad should be smaller than kernel"). When I further checked the official documentation (https://onnx.ai/onnx/operators/onnx__MaxPool.html), it showed that "the pad can take any value greater than or equal to 0".
I'm confused. Which check/description is wrong? 1. the model checker provided by onnx? 2. the documentation? 3. the exception message thrown by ONNX Runtime?
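For illustration, a minimal sketch of a model with the same property (the node, tensor names, shapes, and attribute values below are hypothetical, not taken from the attached model) passes onnx.checker even though the pads are larger than the kernel:

```python
import onnx
from onnx import TensorProto, helper

# Hypothetical minimal model: a MaxPool whose pads are larger than its
# kernel_shape in every spatial dimension.
node = helper.make_node(
    "MaxPool",
    inputs=["x"],
    outputs=["y"],
    kernel_shape=[2, 2],
    pads=[3, 3, 3, 3],  # begin/end pads of 3 vs. a 2x2 kernel
)
graph = helper.make_graph(
    [node],
    "pads_larger_than_kernel",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 1, 8, 8])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 1, 13, 13])],
)
model = helper.make_model(graph)

# The checker raises no error: per the operator documentation, pads only
# need to be >= 0, so this model is spec-valid.
onnx.checker.check_model(model)
print("onnx.checker accepted the model")
```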
Stacktrace
Traceback (most recent call last):
File "/share_container/optfuzz/ONNX/debug.py", line 83, in <module>
optimize_and_infer("../res/onnx_ut/onnx_models_gf/RuleTransformerL1/RuleTransformerL1_4.onnx")
File "/share_container/optfuzz/ONNX/debug.py", line 60, in optimize_and_infer
original_session = ort.InferenceSession(model_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/software/onnxruntime/build/Linux/Release/onnxruntime/capi/onnxruntime_inference_collection.py", line 465, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/software/onnxruntime/build/Linux/Release/onnxruntime/capi/onnxruntime_inference_collection.py", line 537, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /software/onnxruntime/onnxruntime/core/providers/cpu/nn/pool_attributes.h:78 onnxruntime::PoolAttributes::PoolAttributes(const onnxruntime::OpNodeProtoHelper<onnxruntime::ProtoHelperNodeContext>&, const string&, int) pads[dim] < kernel_shape[dim] && pads[dim + kernel_shape.size()] < kernel_shape[dim] was false. Pad should be smaller than kernel.
To reproduce
Step 1. Download the simple model via this link
Step 2. Run the script:
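The script attachment is not captured here; based on the stacktrace above, a minimal script consistent with it would be the following (the model path is a placeholder for the file downloaded in Step 1):

```python
import onnxruntime as ort

# Minimal reproduction consistent with the stacktrace: creating the
# InferenceSession is enough to hit the RUNTIME_EXCEPTION raised from
# pool_attributes.h during session initialization.
model_path = "RuleTransformerL1_4.onnx"  # the model downloaded in Step 1
session = ort.InferenceSession(model_path)
```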
Urgency
No response
Platform
Linux
OS Version
Ubuntu 20.04
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
5c1b7cc
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response
@xadupre Thank you for your reply. Indeed, reducing the padding is a workaround that avoids this crash.
After further investigation, I found that PyTorch allows padding larger than the kernel size and does not throw errors in similar cases. Thus, we may want to fix the ONNX Runtime source code to align with the ONNX specification.