
Error during inference with GPU enabled. #2014

Open
dongwentao1 opened this issue Nov 25, 2024 · 1 comment
@dongwentao1
W1125 12:07:24.788084 312 gpu_context.cc:278] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.2, Runtime API Version: 10.2
W1125 12:07:24.792352 312 gpu_context.cc:306] device: 0, cuDNN Version: 8.1.
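The warning lines above already point at a likely root cause: the driver supports CUDA 12.2, but the Paddle wheel was built against the CUDA 10.2 runtime, while the GPU reports compute capability 8.6 (Ampere). CUDA 10.2 predates Ampere; SM 8.6 support arrived in the CUDA 11.x line, so a 10.2 build cannot generate code for this GPU, and that kind of mismatch commonly surfaces as segfaults or cuSolver/cuDNN internal errors at inference time. A minimal sketch of the version check, using the numbers taken from the warning lines (the `MIN_RUNTIME_FOR_SM86` threshold is an assumption based on CUDA's public release notes, not something stated in this issue):

```python
# Version numbers copied from the warning lines above.
driver_api = (12, 2)          # "Driver API Version: 12.2"
runtime_api = (10, 2)         # "Runtime API Version: 10.2"
compute_capability = (8, 6)   # "GPU Compute Capability: 8.6"

# Assumption: SM 8.6 (Ampere consumer GPUs) first became a supported
# compile target in the CUDA 11.x toolkits.
MIN_RUNTIME_FOR_SM86 = (11, 1)

if compute_capability >= (8, 6) and runtime_api < MIN_RUNTIME_FOR_SM86:
    print("Mismatch: the CUDA runtime this wheel was built with is too old "
          "for an SM 8.6 GPU; a paddlepaddle-gpu build compiled against "
          "CUDA >= 11.x is needed.")
```

In a running environment, `paddle.version.cuda()` and `paddle.utils.run_check()` can be used to confirm which toolkit the installed wheel was actually built against.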


C++ Traceback (most recent call last):

No stack trace in paddle, may be caused by external reasons.


Error Message Summary:

FatalError: Segmentation fault is detected by the operating system.
[TimeInfo: *** Aborted at 1732536447 (unix time) try "date -d @1732536447" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x0) received by PID 312 (TID 0x7f5cd301c700) from PID 0 ***]

--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/logging/__init__.py", line 989, in emit
    stream.write(msg)
UnicodeEncodeError: 'ascii' codec can't encode character '\u2019' in position 576: ordinal not in range(128)
Call stack:
  File "/usr/local/lib/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/local/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 66, in _worker
    work_item.run()
  File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 55, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.6/site-packages/grpc/_server.py", line 553, in _unary_response_in_pool
    argument, request_deserializer)
  File "/usr/local/lib/python3.6/site-packages/grpc/_server.py", line 435, in _call_behavior
    response_or_iterator = behavior(argument, context)
  File "/usr/local/lib/python3.6/site-packages/paddle_serving_server/pipeline/pipeline_server.py", line 73, in inference
    resp = self._dag_executor.call(request)
  File "/usr/local/lib/python3.6/site-packages/paddle_serving_server/pipeline/dag.py", line 420, in call
    resp_channeldata.error_info))
Message: "(data_id=1 log_id=0) Failed to predict: [det] failed to predict. (data_id=1 log_id=1) [det|0] Failed to process(batch: [1]): (External) CUSOLVER error(7). \n [Hint: 'CUSOLVER_STATUS_INTERNAL_ERROR'. An internal cuSolver operation failed. This error is usually caused by a cudaMemcpyAsync() failure.To correct: check that the hardware, an appropriate version of the driver, and the cuSolver library are correctly installed. Also, check that the memory passed as a parameter to the routine is not being deallocated prior to the routine\u2019s completion.] (at /paddle/paddle/phi/backends/gpu/gpu_context.cc:548)\n. Please check the input dict and checkout PipelineServingLogs/pipeline.log for more details."
Arguments: ()
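The `--- Logging error ---` above is a secondary failure, not the crash itself: under Python 3.6 the logging stream falls back to an ASCII codec, so writing the error message (which contains U+2019, a right single quote) raises `UnicodeEncodeError` and the real message is only partially logged. A possible workaround, assuming the server runs in a container whose default locale is not UTF-8, is to force UTF-8 on Python's streams before starting the service:

```shell
# Force UTF-8 on Python's stdio/logging streams so non-ASCII characters
# in error messages (here U+2019) can be written to the log.
export PYTHONIOENCODING=utf-8
# If the image lacks a UTF-8 locale, set one as well (assumption: C.UTF-8
# is available in the base image).
export LC_ALL=C.UTF-8
```

This only restores the readability of the log; the underlying segfault/cuSolver error still needs the CUDA-version mismatch resolved.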
