Flask app keeps loading at the time of prediction (TensorRT)


Question

This is a continuation of the question "Facing issue while running a Flask app with a TensorRT model on Jetson Nano".

That issue is resolved, but when I run the Flask app it keeps loading and does not show the video.

Code:

import threading

import pycuda.driver as cuda

# get_engine() and allocate_buffers() are helper functions defined elsewhere

def callback():
    cuda.init()
    device = cuda.Device(0)
    ctx = device.make_context()
    onnx_model_path = './some.onnx'
    fp16_mode = False
    int8_mode = False
    trt_engine_path = './model_fp16_{}_int8_{}.trt'.format(fp16_mode, int8_mode)
    max_batch_size = 1
    engine = get_engine(max_batch_size, onnx_model_path, trt_engine_path, fp16_mode, int8_mode)
    context = engine.create_execution_context()
    inputs, outputs, bindings, stream = allocate_buffers(engine)
    ctx.pop()

##callback function ends


worker_thread = threading.Thread(target=callback())
worker_thread.start()

trt_outputs = do_inference(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)

def do_inference(context, bindings, inputs, outputs, stream, batch_size=1):
    print("start in do_inference")
    # Transfer data from CPU to the GPU.
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    # Run inference.
    print("before run inference in do_inference")
    context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    print("before output in do_inference")
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    print("before stream synchronize in do_inference")
    # Synchronize the stream.
    stream.synchronize()
    # Return only the host outputs.
    print("before return in do_inference")
    return [out.host for out in outputs]
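
For reference, the snippets above rely on get_engine and allocate_buffers, which are not shown in the question. Below is a minimal sketch of allocate_buffers following the HostDeviceMem pattern from NVIDIA's TensorRT Python samples; the asker's actual helpers may differ, so treat it only as an illustration of what inputs, outputs, bindings and stream contain:

import pycuda.driver as cuda
import tensorrt as trt

class HostDeviceMem:
    """Pairs a pinned host buffer with its matching device allocation."""
    def __init__(self, host_mem, device_mem):
        self.host = host_mem
        self.device = device_mem

def allocate_buffers(engine):
    # One host/device buffer pair per binding, plus a stream for async copies.
    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)   # pinned (page-locked) host memory
        device_mem = cuda.mem_alloc(host_mem.nbytes)    # matching device allocation
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
    return inputs, outputs, bindings, stream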

Answer

Your worker_thread creates the context (along with the engine, buffers, and stream) required by do_inference, and those objects are local to that thread. You should call do_inference inside callback():

def callback():
    cuda.init()
    device = cuda.Device(0)
    ctx = device.make_context()  # CUDA context is created on this worker thread
    onnx_model_path = './some.onnx'
    fp16_mode = False
    int8_mode = False
    trt_engine_path = './model_fp16_{}_int8_{}.trt'.format(fp16_mode, int8_mode)
    max_batch_size = 1
    engine = get_engine(max_batch_size, onnx_model_path, trt_engine_path, fp16_mode, int8_mode)
    context = engine.create_execution_context()
    inputs, outputs, bindings, stream = allocate_buffers(engine)
    # Run inference on the same thread that owns the CUDA context and buffers.
    trt_outputs = do_inference(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)
    # post-process the trt_outputs
    ctx.pop()
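
One more caveat, not part of the original answer: in the question the thread is created with threading.Thread(target=callback()), which calls callback in the main thread and hands its return value (None) to Thread. Below is a minimal sketch of starting the corrected callback on a worker thread and getting the outputs back, assuming callback puts trt_outputs on a module-level queue before ctx.pop() (that queue is an assumption added here for illustration):

import threading
import queue

result_queue = queue.Queue()  # assumed: callback() does result_queue.put(trt_outputs) before ctx.pop()

worker_thread = threading.Thread(target=callback)  # pass the function itself, do not call it
worker_thread.start()
worker_thread.join()              # wait until inference on the worker thread has finished
trt_outputs = result_queue.get()  # hand the outputs back to the Flask route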

