- From: Murat Maga <maga@uw.edu>
- To: "shapeworks-users@sci.utah.edu" <shapeworks-users@sci.utah.edu>, "Elizabeth Jurrus" <liz@sci.utah.edu>
- Subject: [shapeworks-users] Downsampling groomed volumes
- Date: Mon, 22 Feb 2016 18:19:09 +0000
Hello,
After three days of non-stop crunching on a 32-core system with OpenMP, the
particle distribution hasn't finished yet (it still has two more splits to
go). I don't want to cancel the job, but I also need some results to look at
for an upcoming conference.
I am wondering whether I can simply downsample the groomed files and submit a
new job with the new parameters. Or, stated differently, which has more
impact on compute time at the particle distribution stage: the size of the
volume, or the specific parameters shown below?
<iterations_per_split>1000</iterations_per_split>
<starting_regularization>100</starting_regularization>
<ending_regularization>0.1</ending_regularization>
<number_of_particles>1024</number_of_particles>
<optimization_iterations>5000</optimization_iterations>
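(For illustration only, not ShapeWorks code: the downsampling being asked about could be sketched as block averaging over a synthetic volume with NumPy. The 2x factor and the array standing in for a groomed distance transform are assumptions.)

```python
import numpy as np

def downsample(volume, factor=2):
    """Downsample a 3D volume by block averaging.

    Each axis is cropped to a multiple of `factor` before averaging,
    so odd-sized volumes are handled without error.
    """
    z, y, x = volume.shape
    v = volume[:z - z % factor, :y - y % factor, :x - x % factor]
    return v.reshape(v.shape[0] // factor, factor,
                     v.shape[1] // factor, factor,
                     v.shape[2] // factor, factor).mean(axis=(1, 3, 5))

# Synthetic 64^3 volume standing in for a groomed file
vol = np.random.rand(64, 64, 64).astype(np.float32)
small = downsample(vol, 2)
print(small.shape)  # (32, 32, 32)
```

Note that averaging a binary segmentation produces fractional values near the boundary, so a re-threshold (or resampling a distance transform instead) would be needed before re-grooming.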
Best,
M