# python – NLTK word_tokenize raises a LookupError I don't understand – how do I fix it?

I was working on my first sentiment analysis project and tried to tokenize some text with `nltk.word_tokenize(example)`.

I also tried calling the function a different way, but that failed too. I searched for the error message and couldn't make sense of it, so I suspect it might be an issue with the library itself. Could someone help me figure out what's going wrong?

```python
import nltk

example = df['Text'][50]
print(example)
type(example)
nltk.word_tokenize(example)   # raises LookupError
nltk.word_tokenize()          # also fails (the function requires a text argument)
```

```json
{
"name": "LookupError",
```