import torch
from torchtext import data  # in torchtext >= 0.9 this API moved to torchtext.legacy.data

# batch_size = 32: each batch contains 32 instances.
# Training in batches generally gives better performance than learning from a single example at a time.
# device = torch.device("cuda"): use the GPU. If you do not have a GPU environment, use torch.device("cpu") instead.
# repeat = False: do not repeat the iterator for multiple epochs.
# sort_key = lambda ...: a key used to sort examples so that examples with similar lengths are batched
# together, minimizing padding. The sort_key passed to the Iterator constructor overrides the sort_key
# attribute of the Dataset, or defers to it if None.
train_iter, test_iter = data.BucketIterator.splits(
    (train, test), batch_size=32, device=torch.device("cuda"), repeat=False, sort_key=lambda x: len(x.Text))
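To see why sorting by length before batching reduces padding, here is a minimal, torchtext-free sketch of the bucketing idea. The helper names `make_batches` and `padding_cost` are hypothetical, introduced only for illustration; they are not part of torchtext's API.

```python
def make_batches(examples, batch_size, sort_key=None):
    """Group examples into batches, optionally sorting them first (the bucketing idea)."""
    if sort_key is not None:
        examples = sorted(examples, key=sort_key)
    return [examples[i:i + batch_size] for i in range(0, len(examples), batch_size)]

def padding_cost(batches):
    """Total padding tokens when each batch is padded to its own longest example."""
    return sum(max(len(x) for x in b) * len(b) - sum(len(x) for x in b) for b in batches)

# Toy "sentences" of alternating short and long lengths.
texts = [[0] * n for n in [3, 50, 4, 48, 5, 47, 6, 49]]

unsorted_cost = padding_cost(make_batches(texts, batch_size=4))
sorted_cost = padding_cost(make_batches(texts, batch_size=4, sort_key=len))
print(unsorted_cost, sorted_cost)  # → 184 12
```

With sorting, short examples end up together and long examples end up together, so each batch only pads to its own maximum length rather than to a global outlier; this is exactly what the `sort_key` argument above achieves.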