I checked several places and `pad_token_id` is 0 everywhere I looked, but the error is still raised. Could someone help me sort this out?
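A hedged sketch of one common workaround: when `generate` complains about a missing `pad_token_id`, passing one explicitly (falling back to `eos_token_id` when the tokenizer defines no pad token) usually resolves it. `resolve_pad_token_id` is a hypothetical helper, not part of transformers; the commented usage mirrors the `generate` call from this issue.

```python
def resolve_pad_token_id(tokenizer_pad_id, eos_id):
    """Return a usable pad token id, falling back to the eos token id
    when the tokenizer does not define a pad token."""
    return tokenizer_pad_id if tokenizer_pad_id is not None else eos_id

# Hypothetical usage with the snippet from this issue:
# pred = model.generate(**inputs, max_new_tokens=16, repetition_penalty=1.1,
#                       pad_token_id=resolve_pad_token_id(tokenizer.pad_token_id,
#                                                         tokenizer.eos_token_id))
```

If the configs really do set `pad_token_id = 0`, passing it explicitly at least rules out the tokenizer/config lookup as the source of the error.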
Required prerequisites
System information
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
os.environ["CUDA_VISIBLE_DEVICES"] = '0,1,2,3'
tokenizer = AutoTokenizer.from_pretrained("./", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("./", device_map="auto", trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:2')
pred = model.generate(**inputs, max_new_tokens=16, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
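One hedged observation on the snippet above: with `device_map="auto"` the model's layers may be sharded across GPUs, so hard-coding `inputs.to('cuda:2')` can produce a device-mismatch error if the input embeddings land elsewhere. A safer pattern, sketched here with a hypothetical helper, is to move the inputs to the device of the model's first parameter:

```python
import torch
from torch import nn

def first_param_device(model: nn.Module) -> torch.device:
    """Device of the model's first parameter; inputs should be moved here
    rather than to a hard-coded GPU index."""
    return next(model.parameters()).device

# Hypothetical replacement for the hard-coded line in this issue:
# inputs = inputs.to(first_param_device(model))
```

This does not address the `pad_token_id` error directly, but it removes one source of spurious failures when reproducing on a different GPU layout.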
Problem description
Reproducible example code
The Python snippets:
Command lines:
Extra dependencies:
Steps to reproduce:
Traceback
No response
Expected behavior
No response
Additional context
No response
Checklist