Google Colab: Disk size with GPU backend
Question
I've been using Google Colab with the GPU backend. In December, when I last used it, the disk size for the GPU backend was more than 300 GB. Now running df -h
on the virtual machine shows this:
Filesystem Size Used Avail Use% Mounted on
overlay 69G 33G 33G 50% /
tmpfs 64M 0 64M 0% /dev
tmpfs 6.4G 0 6.4G 0% /sys/fs/cgroup
/dev/sda1 75G 37G 39G 49% /opt/bin
tmpfs 6.4G 12K 6.4G 1% /var/colab
shm 5.9G 4.0K 5.9G 1% /dev/shm
tmpfs 6.4G 0 6.4G 0% /proc/acpi
tmpfs 6.4G 0 6.4G 0% /proc/scsi
tmpfs 6.4G 0 6.4G 0% /sys/firmware
Do you know if something has changed? I searched the web for news about this but couldn't find any. Before, the overlay filesystem was 359 GB.
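As an aside, if you want to check the available space programmatically rather than reading df -h output, Python's standard library reports the same numbers. This is a minimal sketch; shutil.disk_usage is plain standard-library Python, nothing Colab-specific:

```python
import shutil

# Query the root filesystem, the same mount df reports as "overlay" in Colab
usage = shutil.disk_usage("/")
total_gb = usage.total / 1e9
free_gb = usage.free / 1e9
print(f"total: {total_gb:.1f} GB, free: {free_gb:.1f} GB")
```

Running this in a Colab cell makes it easy to log the disk size across sessions and confirm whether the allocation has actually shrunk.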
Thanks in advance for any clues.
Best,
B.
Answer
It seems it is a new issue. I found this on Github: https://github.com/googlecolab/colabtools/issues/919.
What's ironic about this problem is that one proposed solution is to mount Google Drive, so I bought 200 GB of Google Drive storage. However, disk space is still an issue: apparently, once Google Drive is mounted, it starts caching files in /root/.config/Google/DriveFS/[uniqueid]/content_cache. The cache has no size limit; it never evicts or replaces anything, it just accumulates until it fills the whole disk and makes the code crash. :(
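To see how much of the disk the DriveFS cache is actually holding, you can walk the cache directory and sum file sizes. This is a sketch under the assumption that the cache lives at the path reported above; the [uniqueid] component varies per session, so it is matched with a glob, and the path may change between DriveFS versions:

```python
import glob
import os

def dir_size_bytes(path):
    """Total size, in bytes, of all files under path."""
    return sum(
        os.path.getsize(os.path.join(dirpath, name))
        for dirpath, _, names in os.walk(path)
        for name in names
    )

# Match the per-session [uniqueid] directory with a wildcard (assumed layout)
for cache_dir in glob.glob("/root/.config/Google/DriveFS/*/content_cache"):
    print(f"{cache_dir}: {dir_size_bytes(cache_dir) / 1e9:.2f} GB cached")
```

If the cache turns out to be the culprit, deleting its contents between runs (at the cost of re-downloading files from Drive) is one possible stopgap, though whether DriveFS tolerates that mid-session is not something the linked issue confirms.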