How to swap from GPU to a CPU only?

Problem description

Hi, I was wondering how I can run a machine learning code on my CPU instead of a GPU?

I have tried setting GPU to False in the settings file, but that hasn't fixed it.

GPU = False                                                                 # running on GPU is highly suggested
CLEAN = False                                                                # set to "True" if you want to clean the temporary large files after generating result
APP = "classification"                                                       # Do not change! mode choide: "classification", "imagecap", "vqa". Currently "imagecap" and "vqa" are not supported.
CATAGORIES = ["object", "part"]                                              # Do not change! concept categories that are chosen to detect: "object", "part", "scene", "material", "texture", "color"
map_location='cpu'

CAM_THRESHOLD = 0.5                                                          # the threshold used for CAM visualization
FONT_PATH = "components/font.ttc"                                            # font file path
FONT_SIZE = 26                                                               # font size
SEG_RESOLUTION = 7                                                           # the resolution of cam map
BASIS_NUM = 7       

Traceback (most recent call last):
  File "test.py", line 22, in <module>
    model = loadmodel()
  File "/home/joshuayun/Desktop/IBD/loader/model_loader.py", line 44, in loadmodel
    checkpoint = torch.load(settings.MODEL_FILE)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 574, in _load
    result = unpickler.load()
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 537, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 119, in default_restore_location
    result = fn(storage, location)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 95, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 79, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

Recommended answer

If I am not wrong, you are getting the above error at the line model = loadmodel(). I don't know what you are doing inside loadmodel(), but you can try the points below:

  • Set defaults.device to cpu. To be completely sure, add a torch.cuda.set_device('cpu')
  • Change torch.load(model_weights) to torch.load(model_weights, map_location=torch.device('cpu')) (see the sketch after this list)
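
For concreteness, a minimal sketch of what loadmodel() could look like with the second suggestion applied is shown below. The use of settings.GPU to pick the device is an assumption based on the settings file above, and the model-construction code that normally follows the torch.load call is omitted.

import torch
import settings  # the asker's settings module (assumed to provide GPU, MODEL_FILE, ...)

def loadmodel():
    # Fall back to the CPU unless a GPU was requested in settings and is actually available
    # (settings.GPU is assumed to mirror the GPU flag in the settings file above).
    device = torch.device('cuda' if settings.GPU and torch.cuda.is_available() else 'cpu')
    # map_location remaps the CUDA storages saved in the checkpoint onto `device`
    # during deserialization, which is exactly what the RuntimeError asks for.
    checkpoint = torch.load(settings.MODEL_FILE, map_location=device)
    # ... build and return the model from `checkpoint` as before (omitted here) ...
    return checkpoint

With GPU = False in the settings file, the checkpoint is then deserialized onto the CPU and the error above no longer occurs.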
