D:\down\GBC\填词\UVR5工作路径\分类训练素材\训练集\BGM广播片段_vocals.wav_main_vocal.wav_0009705600_0009952000.wav|训练集|JA|初回限定版に付属するブルーレイには、YouTubeで公開されてきたミュージックビデオが全て収録されている豪華仕様になっています。
Traceback (most recent call last):
File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\GPT_SoVITS\prepare_datasets\2-get-hubert-wav32k.py", line 112, in
name2go(wav_name,wav_path)
File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\GPT_SoVITS\prepare_datasets\2-get-hubert-wav32k.py", line 85, in name2go
ssl=model.model(tensor_wav16.unsqueeze(0))["last_hidden_state"].transpose(1,2).cpu()#torch.Size([1, 768, 215])
File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 1091, in forward
encoder_outputs = self.encoder(
File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 738, in forward
layer_outputs = layer(
File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 589, in forward
hidden_states, attn_weights, _ = self.attention(
File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\down\GPT-SoVITS-v2-240821\GPT-SoVITS-v2-240821\runtime\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 488, in forward
attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 875.66 MiB already allocated; 12.60 MiB free; 886.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Yesterday I ran the full pipeline successfully. Today I started a new experiment with a different dataset, but the second step of the "one-click formatting" (一键三连) pipeline, the SSL self-supervised feature extraction, aborted with a GPU out-of-memory error. This is puzzling: if VRAM were really insufficient, why did the first run go through? Moreover, the longest clip in the second dataset is only half as long as the longest clip in the first, and the dataset as a whole is smaller too. Even if VRAM is the bottleneck, this SSL step exposes no batch-size hyperparameter to adjust. Any advice would be appreciated; the full error message is pasted above.
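Not an official fix, just a minimal sketch of two mitigations that follow directly from the error text above: limiting allocator fragmentation via PYTORCH_CUDA_ALLOC_CONF, and catching the per-clip OOM so the offending clip can be retried on the CPU instead of killing the whole extraction pass. The names `model` and `tensor_wav16` are taken from the traceback; whether 2-get-hubert-wav32k.py already wraps this call in `no_grad` and where exactly such a helper would be called are assumptions.

```python
import os

# Must be set before torch initializes CUDA. The error message itself suggests
# max_split_size_mb when reserved memory far exceeds allocated memory.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch


def extract_ssl(model, tensor_wav16):
    """Run the HuBERT forward pass for one clip, falling back to CPU on OOM.

    `model` and `tensor_wav16` mirror the names visible in the traceback;
    how this slots into 2-get-hubert-wav32k.py is an assumption.
    """
    try:
        with torch.inference_mode():  # keep no autograd buffers around
            ssl = model.model(tensor_wav16.unsqueeze(0))["last_hidden_state"]
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()  # drop cached blocks before retrying
        # Hypothetical fallback: process just this clip on the CPU.
        # Note: if the script runs the model in half precision, the CPU path
        # may additionally need a .float() conversion.
        model.to("cpu")
        with torch.inference_mode():
            ssl = model.model(tensor_wav16.cpu().unsqueeze(0))["last_hidden_state"]
        model.to("cuda")  # move back for the next clip
    return ssl.transpose(1, 2).cpu()  # (1, 768, frames), as in the script's line 85
```

The environment variable can also be set outside the code, e.g. `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in the same shell before launching the WebUI. Catching `torch.cuda.OutOfMemoryError` per clip means a single oversized clip no longer aborts the entire SSL extraction step.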