This is the error I get. I'm connecting over SSH.
Last login: Tue Sep 4 03:54:13 2018 from 10.5.0.7
[u19304@c009 ~]$ source activate en
(en) [u19304@c009 ~]$ cd bum
(en) [u19304@c009 bum]$ python3 bumcpu.py
Traceback (most recent call last):
  File "bumcpu.py", line 211, in <module>
    training_set = DLibdata(train=True)
  File "/home/u19304/bum/loaddata.py", line 46, in __init__
    self.train_data = torch.load('trn.pt')
  File "/home/u19304/.conda/envs/en/lib/python3.6/site-packages/torch/serialization.py", line 358, in load
    return _load(f, map_location, pickle_module)
  File "/home/u19304/.conda/envs/en/lib/python3.6/site-packages/torch/serialization.py", line 542, in _load
    result = unpickler.load()
  File "/home/u19304/.conda/envs/en/lib/python3.6/site-packages/torch/serialization.py", line 508, in persistent_load
    data_type(size), location)
RuntimeError: $ Torch: not enough memory: you tried to allocate 2GB. Buy new RAM! at /opt/conda/conda-bld/pytorch-cpu_1532576596369/work/aten/src/TH/THGeneral.cpp:204
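For anyone trying to reproduce this: the failure happens in the plain torch.load call at line 46 of loaddata.py. Below is a minimal sketch of that step plus an illustrative on-disk size check; the 'trn.pt' name is from my setup, and the size check is just standard library code I added for diagnosis, not part of my script.

import os
import torch

path = 'trn.pt'

# The deserialized tensors need roughly their on-disk size in host RAM
# once unpickled, so compare this number against the node's free memory.
size_gb = os.path.getsize(path) / 1024 ** 3
print('%s is %.2f GB on disk' % (path, size_gb))

# torch.load unpickles the whole file into host memory in one pass;
# map_location='cpu' only redirects GPU-saved tensors to CPU storage,
# it does not shrink the allocation that is failing here.
train_data = torch.load(path, map_location='cpu')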