python - How to run a PyTorch model in a normal, non-parallel way?




I am going through this script, and there is a code block that takes two options into account, DataParallel and DistributedDataParallel, here:

if not args.distributed:
    if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
        model.features = torch.nn.DataParallel(model.features)
        model.cuda()
    else:
        model = torch.nn.DataParallel(model).cuda()
else:
    model.cuda()
    model = torch.nn.parallel.DistributedDataParallel(model)

What if I don't want either of these options, and I want to run the model without DataParallel? How do I do it?

How do I define my model so that it runs as a plain nn model, without parallelizing anything?

  • DataParallel is a wrapper object that parallelizes computation across multiple GPUs on the same machine, see here (a short usage sketch follows this list).
  • DistributedDataParallel is a wrapper object that lets you distribute data across multiple devices, see here.
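
As a rough illustration of how each wrapper is applied (a minimal sketch, using a toy model; the process-group setup that DistributedDataParallel needs is only indicated in comments):

import torch
import torch.nn as nn

# Toy model, purely for illustration.
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))

# DataParallel: single process, splits each input batch across
# the GPUs visible on this machine.
dp_model = nn.DataParallel(model).cuda()

# DistributedDataParallel: typically one process per GPU; it
# requires an initialized process group before wrapping, e.g.
#   torch.distributed.init_process_group(backend='nccl', ...)
#   ddp_model = nn.parallel.DistributedDataParallel(model.cuda())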

If you don't want either of them, you can simply remove the wrapper and use the model as is:

if not args.distributed:
    if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
        model.features = model.features  # no-op: wrapper removed
        model.cuda()
    else:
        model = model.cuda()
else:
    model.cuda()
    model = model  # no-op: wrapper removed

This keeps code modifications to a minimum. Of course, since parallelization is of no interest to you, you could drop the whole if statement, along the lines of:

if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
    model.features = model.features
model = model.cuda()
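
Since model.features = model.features is a no-op once the wrapper is gone, this reduces to a single line:

model = model.cuda()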

Note that this code assumes you are running on a GPU.
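
If you also want the script to fall back to the CPU when no GPU is available, a common device-agnostic pattern (not part of the original script) is:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
# Inputs must live on the same device in the training loop:
#   inputs, targets = inputs.to(device), targets.to(device)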




