After one successful full run, the second run hits a CUDA out-of-memory error at the SSL stage #1875

Open
Roy2029 opened this issue Dec 18, 2024 · 0 comments
Roy2029 commented Dec 18, 2024

Yesterday I got the full pipeline to run end to end. Today I started a new experiment with a different dataset, but step 2 of the one-click formatting pipeline (SSL self-supervised feature extraction) failed with a CUDA out-of-memory error. This is puzzling: if VRAM were truly insufficient, why did the first run succeed? What's more, the longest slice in the second dataset is only half as long as the longest slice in the first, and the dataset as a whole is smaller too. And even if VRAM really were the bottleneck, the SSL step exposes no batch-size hyperparameter to reduce. Any guidance would be appreciated; the error message is below:

D:\down\GBC\填词\UVR5工作路径\分类训练素材\训练集\BGM广播片段_vocals.wav_main_vocal.wav_0009705600_0009952000.wav|训练集|JA|初回限定版に付属するブルーレイには、YouTubeで公開されてきたミュージックビデオが全て収録されている豪華仕様になっています。
Traceback (most recent call last):
  File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\GPT_SoVITS\prepare_datasets\2-get-hubert-wav32k.py", line 112, in <module>
    name2go(wav_name,wav_path)
  File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\GPT_SoVITS\prepare_datasets\2-get-hubert-wav32k.py", line 85, in name2go
    ssl=model.model(tensor_wav16.unsqueeze(0))["last_hidden_state"].transpose(1,2).cpu()#torch.Size([1, 768, 215])
  File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 1091, in forward
    encoder_outputs = self.encoder(
  File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 738, in forward
    layer_outputs = layer(
  File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 589, in forward
    hidden_states, attn_weights, _ = self.attention(
  File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 488, in forward
    attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 875.66 MiB already allocated; 12.60 MiB free; 886.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
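
For what it's worth, the hint at the end of the traceback can be tried without changing the project code: set PYTORCH_CUDA_ALLOC_CONF before CUDA initializes, and release cached allocator blocks between clips. A minimal sketch of what that could look like around the failing call (the extract_ssl wrapper and the 128 MiB split size are my own assumptions, not part of 2-get-hubert-wav32k.py):

import os

# Must be set before torch initializes CUDA; 128 MiB is an assumed value,
# not a value recommended by the project.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch

def extract_ssl(model, tensor_wav16):
    # Mirrors the failing line from name2go(): one HuBERT forward pass,
    # then move the features off the GPU right away.
    with torch.no_grad():  # inference only; skip autograd buffers
        ssl = model.model(tensor_wav16.unsqueeze(0))["last_hidden_state"].transpose(1, 2).cpu()
    torch.cuda.empty_cache()  # hand cached blocks back between clips
    return ssl

On a 4 GiB card, reducing fragmentation this way may or may not be enough; it only addresses the "reserved >> allocated" case the error message describes.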
