- From: Shireen Elhabian <shireen@sci.utah.edu>
- To: Murat Maga <maga@uw.edu>
- Cc: "shapeworks-users@sci.utah.edu" <shapeworks-users@sci.utah.edu>
- Subject: Re: [shapeworks-users] Downsampling groomed volumes
- Date: Thu, 25 Feb 2016 12:04:08 -0700
Hi Murat,
Regarding the parameters you listed:
m_procrustes_interval: allows ShapeWorks to run Procrustes every given number of
iterations; if it is set to zero, Procrustes will be performed at each iteration.
m_recompute_regularization_interval: this is how often the covariance matrix
regularization parameter is recomputed. This regularization parameter decays
exponentially with the iterations, where the starting_regularization and
ending_regularization parameters control that decay along with the recompute
interval.
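To make the decay concrete, here is a minimal sketch of one possible schedule,
assuming a simple geometric (exponential) interpolation between the two
regularization values; the exact formula ShapeWorks uses internally may differ,
and the numbers are just the ones from your parameter file below.

# Illustration only: a plausible exponential decay of the covariance
# regularization from starting_regularization to ending_regularization,
# recomputed every recompute_interval iterations. Not the ShapeWorks formula.
def regularization_schedule(start=100.0, end=0.1, total_iters=5000,
                            recompute_interval=1):
    values = []
    current = start
    for it in range(total_iters):
        if it % recompute_interval == 0:
            # geometric interpolation from start down to end
            current = start * (end / start) ** (it / (total_iters - 1))
        values.append(current)
    return values

sched = regularization_schedule()
print(sched[0], sched[len(sched) // 2], sched[-1])  # ~100.0, ~3.2, 0.1

With recompute_interval = 1 (as in your log) the value is updated every
iteration; a larger interval simply holds it piecewise constant between
recomputes.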
m_procrustes_scaling: a flag to factor out scaling from the shape model. If set
to one, the Procrustes step solves for rotation and translation as well as
scaling. For some datasets, scaling is actually a shape feature to be modeled;
in such cases, this flag needs to be set to zero.
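For intuition, here is a generic similarity-Procrustes sketch (numpy-based, and
not the ShapeWorks implementation) showing what the with/without-scaling choice
amounts to when aligning one particle set to another:

import numpy as np

# Generic Procrustes alignment of one particle set to another: rotation,
# translation, and (optionally) a global scale. Illustration only.
def procrustes_align(source, target, with_scaling=True):
    """source, target: (num_particles, 3) arrays of corresponding points."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    A, B = source - mu_s, target - mu_t          # center both point sets
    U, S, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                           # best-fit rotation
    # with m_procrustes_scaling = 1 a scale factor is also estimated;
    # with 0, size differences stay in the shape model
    scale = (S * np.diag(D)).sum() / (A ** 2).sum() if with_scaling else 1.0
    t = mu_t - scale * (R @ mu_s)
    return scale * (source @ R.T) + t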
m_adaptivity_mode: this controls the spread of the particles, with 0 meaning
that particles are evenly spread over the surface of each given shape sample.
m_keep_checkpoints: a flag indicating whether to save intermediate results
while performing the optimization; we mostly use this for debugging purposes.
Notes:
- the coordinates reported: lpts (particle positions in local/sample-wise
coordinates) and wpts (particle positions in world coordinates after
Procrustes). Hence, if the input shapes are already aligned, the particles in
the lpts and wpts files should coincide up to numerical precision (basically,
the transformation matrix from each sample to the Procrustes mean should be
very close to identity).
- ShapeWorks reports eigenvalues, while an SVD of the wpts reports singular
values, which are the square roots of the eigenvalues. For a scree plot, the
eigenvalues should be used rather than the singular values.
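To illustrate both notes, here is a small sketch relating an SVD of the wpts to
eigenvalues and scree percentages; the file layout and path (one *.wpts file
per sample, one "x y z" particle per line, under output/) are assumptions on my
side, so adjust them to your run.

import glob
import numpy as np

files = sorted(glob.glob("output/*.wpts"))            # hypothetical location
X = np.array([np.loadtxt(f).ravel() for f in files])  # samples x (3 * particles)

# center across samples; without centering, the leading mode mostly reflects
# the mean shape rather than the shape variability
Xc = X - X.mean(axis=0)

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# eigenvalues are the squared singular values (up to a constant normalization
# such as 1/(n - 1), depending on how the covariance is defined); the
# explained-variance percentages are the same either way
eigvals = s ** 2
percent = 100.0 * eigvals / eigvals.sum()
cumulative = np.cumsum(percent)
for i, (ev, p, c) in enumerate(zip(eigvals, percent, cumulative)):
    print(f"mode {i} : {ev:.6g}, {p:.6g}%, {c:.6g}%")

Using the singular values directly in the percentages (or skipping the
centering step) will give fractions that look quite different from what
ShapeWorks reports, which may be part of the difference you are seeing.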
hope this helps …
best regards
Shireen
> On Feb 24, 2016, at 10:24 AM, Murat Maga <maga@uw.edu> wrote:
> 
> Hi Shireen,
> 
> In my experience downsampled volumes didn't proceed any faster than full
> sized ones. But anyway, the full resolution execution finished. Increasing
> the iterations made it much longer, but the results are more consistent, so
> that's good.
> 
> I am wondering if you can elaborate a bit more on the Procrustes fitting
> done in SW.
> 
> I see in the log file the settings were:
> m_procrustes_interval = 0
> m_recompute_regularization_interval = 1
> m_procrustes_scaling = 1
> m_adaptivity_mode = 0
> m_keep_checkpoints = 0
> 
> I thought when the Procrustes interval is set to 0, it doesn't do Procrustes
> alignment. Am I mistaken?
> If they are indeed Procrustes aligned, what are the coordinates reported?
> Are they the tangent approximations? If I do an SVD of the reported
> coordinates, I get a very different shape variance decomposition than SW
> reports. Below are the first 11 modes from SW:
> 
> mode 0 : 1359.87, 33.5901%, 33.5901%
> mode 1 : 599.133, 14.7992%, 48.3893%
> mode 2 : 359.544, 8.88109%, 57.2704%
> mode 3 : 265.742, 6.56408%, 63.8344%
> mode 4 : 216.873, 5.35698%, 69.1914%
> mode 5 : 191.874, 4.73947%, 73.9309%
> mode 6 : 166.742, 4.11869%, 78.0496%
> mode 7 : 112.695, 2.78369%, 80.8333%
> mode 8 : 71.9164, 1.77641%, 82.6097%
> mode 9 : 70.1653, 1.73315%, 84.3428%
> mode 10 : 64.4909, 1.59299%, 85.9358%
> 
> What I get is:
> 75.3%
> 1.33%
> 1.254%
> ...
> 
> I appreciate any insight you can provide.
> Best,
> M
> 
> -----Original Message-----
> From: Shireen Elhabian [mailto:shireen@sci.utah.edu]
> Sent: Monday, February 22, 2016 10:24 AM
> To: shapeworks-users@sci.utah.edu; Murat Maga
> Subject: Re: [shapeworks-users] Downsampling groomed volumes
> 
> Hi Murat,
> 
> It would be the volume size ... reducing it will hugely speed things up ...
> however, you might want to at least visually QC the downsampled volumes to
> make sure that this step didn't introduce holes in the resulting shapes or
> erode thin structures ....
> 
> best regards
> Shireen
> 
> On 02/22/2016 11:19 AM, Murat Maga wrote:
> > Hello,
> >
> > After three days of non-stop crunching on a 32-core system with OpenMP,
> > the particle distribution hasn't finished yet (it still has two more
> > splits to go). I don't want to cancel the job, but I also need some
> > results to look at for an upcoming conference.
> >
> > I am wondering if I can simply downsample the groomed files and submit a
> > new job with the new parameters? Or, stated differently, what would have
> > more impact on the compute time at the particle distribution stage: the
> > size of the volume, or the specific parameters (shown below)?
> >
> > <iterations_per_split>1000</iterations_per_split>
> > <starting_regularization>100</starting_regularization>
> > <ending_regularization>0.1</ending_regularization>
> > <number_of_particles>1024</number_of_particles>
> > <optimization_iterations>5000</optimization_iterations>
> >
> > Best,
> > M