Re: [IV3D-USERS] Implementing a Reader Parallel to UVF


  • From: tom fogal <tfogal@sci.utah.edu>
  • To: iv3d-users@sci.utah.edu
  • Subject: Re: [IV3D-USERS] Implementing a Reader Parallel to UVF
  • Date: Fri, 13 Jun 2014 19:45:13 +0200

On 06/13/2014 12:20 PM, Mathieu Malaterre wrote:
> - GetRange(), I used hardcoded values. I did check those are correct
> on the whole volume: [-600, 8000]

these are expected to be per-brick ranges. I *think* it should only affect culling, so you might be okay with making it "too wide" like this.

It's easy to compute with std::minmax_element, though, and with your current data set sizes that wouldn't be *too* painful.
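
Off the top of my head, something like this would do it (untested; it assumes the brick's voxels are already in memory as 32-bit signed ints, per your description):

#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// {min, max} of one brick's voxels, widened to double for GetRange-style use.
std::pair<double, double> brick_range(const std::vector<int32_t>& voxels) {
  if (voxels.empty()) { return std::make_pair(0.0, 0.0); }
  auto mm = std::minmax_element(voxels.begin(), voxels.end());
  return std::make_pair(static_cast<double>(*mm.first),
                        static_cast<double>(*mm.second));
}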

> - GetBitWidth() returns 32, as I am dealing with signed integer, and
> GetComponentCount() returns 1.

My guess is that this path is broken. For technical (OpenGL) reasons, data wider than 16bit are hard. There was some code for doing greater-than-16bit data, long ago, I think under a define of BIAS_SCALE in the GLSL shaders. However, it was never enabled by default and has likely atrophied.

I could verify if you turned on messages/warnings/errors and sent us a debug log.
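
If you can live with it, the easy workaround is to requantize to 16 bits inside your reader and have GetBitWidth() return 16. With the [-600, 8000] range you quoted, the data fit with room to spare; an untested sketch (the +600 bias is just your reported minimum):

#include <cstddef>
#include <cstdint>
#include <vector>

// shift [-600, 8000] into [0, 8600] and store as unsigned 16-bit.
std::vector<uint16_t> requantize(const std::vector<int32_t>& in) {
  std::vector<uint16_t> out(in.size());
  for (size_t i = 0; i < in.size(); ++i) {
    out[i] = static_cast<uint16_t>(in[i] + 600);
  }
  return out;
}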

> - GetBrickOverlapSize() returns for now UINTVECTOR3(0,0,0) (is this correct?)

you tell me ;-) What is called BrickOverlap internally is commonly referred to as "ghost data" in the literature and in other software. Certainly, if your bricks have no ghost data, then (0,0,0) is the correct thing to return. That said, a volume renderer cannot do proper interpolation at brick boundaries without ghost data, so you'll be able to see the brick boundaries if that is the case.
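
Concretely, if you did keep one layer of ghost voxels on every side of every brick, you would report something like the following (i don't remember offhand whether Tuvok counts overlap per side or in total, so compare against what UVFDataset reports for a stock UVF):

// one ghost voxel per brick face (per-side convention assumed); n_voxels for
// each brick then includes the ghost layers, so a brick with 64x64x42 unique
// voxels would be stored and reported as 66x66x44.
UINTVECTOR3 GetBrickOverlapSize() const { return UINTVECTOR3(1, 1, 1); }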

> - GetDomainSize() only returns for ts=0 / lod=0 a domain of (64,64,42)
> and assert(0) for any other one (called only when I zoom in).

you could probably return the voxel sizes from the other LODs and be fine.
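
i.e. fill in the other levels instead of assert(0)'ing, something like this (untested, and i might be off on the exact signature); the in-between levels presumably just halve in each dimension:

UINT64VECTOR3 GetDomainSize(const size_t lod, const size_t /*ts*/) const {
  switch (lod) {
    case 0:  return UINT64VECTOR3(64, 64, 42);    // the level you return today
    case 4:  return UINT64VECTOR3(512, 512, 336); // your full-resolution level
    default: return UINT64VECTOR3(64, 64, 42);    // placeholder -- put the real
                                                  // sizes of the middle levels here
  }
}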

> - GetBrick is a very silly implementation, which does
> std::ifstream::read (on LOD 0 file) into the std::vector<uint8_t>
> params. reading went ok (.good()), vector was resized to proper actual
> file length.

oh, i wouldn't have expected a resize() to be necessary. This might depend on the renderer you use, though; I think our newer renderer sizes things correctly before it passes the vector into GetBrick.
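
For reference, the shape of what you describe would be roughly this (untested; i might be off on the exact overload, and "lod0.raw" just stands in for whatever your file is really called):

#include <cstdint>
#include <fstream>
#include <vector>

bool MyDataset::GetBrick(const BrickKey& /*key*/,
                         std::vector<uint8_t>& data) const {
  std::ifstream lod0("lod0.raw", std::ios::binary); // stand-in filename
  if (!lod0) { return false; }
  lod0.seekg(0, std::ios::end);
  const std::streamoff bytes = lod0.tellg();
  lod0.seekg(0, std::ios::beg);
  data.resize(static_cast<size_t>(bytes));          // size to the actual file length
  lod0.read(reinterpret_cast<char*>(data.data()), bytes);
  return lod0.good();
}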

> - MaxGradientMagnitude() returns for now:
> -std::numeric_limits<float>::max() (could not figure out a decent
> default implementation).

This probably ends up being used in scaling calculations for doing proper lighting. To remove it from the equation for now, I would just turn lighting off after you load your volume.
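
If you later want a real value there instead of a placeholder, a brute-force central-differences pass over the finest level is enough. A rough, untested sketch (the flat x-fastest array and the volume-in-memory assumption are mine, not anything Tuvok requires):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// maximum gradient magnitude via central differences over an nx*ny*nz volume.
float MaxGradMag(const std::vector<int32_t>& vol, size_t nx, size_t ny, size_t nz) {
  auto at = [&](size_t x, size_t y, size_t z) {
    return static_cast<float>(vol[(z*ny + y)*nx + x]);
  };
  float maxmag = 0.0f;
  for (size_t z = 1; z+1 < nz; ++z) {
    for (size_t y = 1; y+1 < ny; ++y) {
      for (size_t x = 1; x+1 < nx; ++x) {
        const float gx = (at(x+1,y,z) - at(x-1,y,z)) * 0.5f;
        const float gy = (at(x,y+1,z) - at(x,y-1,z)) * 0.5f;
        const float gz = (at(x,y,z+1) - at(x,y,z-1)) * 0.5f;
        maxmag = std::max(maxmag, std::sqrt(gx*gx + gy*gy + gz*gz));
      }
    }
  }
  return maxmag;
}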

> And upon construction I am doing:
>
> // cstor
> {
>   m_pHist1D.reset();
>   m_pHist1D.reset(new Histogram1D(
>     std::min(MAX_TRANSFERFUNCTION_SIZE, 1<<GetBitWidth())));
>   for (size_t i = 0; i < m_pHist1D->GetSize(); i++) {
>     m_pHist1D->Set(i, 1);
>   }
>   m_pHist2D.reset();
>   VECTOR2<size_t> vec(256, std::min(MAX_TRANSFERFUNCTION_SIZE,
>                                     1<<GetBitWidth()));
>   m_pHist2D.reset(new Histogram2D(vec));
>   for (size_t y = 0; y < m_pHist2D->GetSize().y; y++) {
>     for (size_t x = 0; x < m_pHist2D->GetSize().x; x++) {
>       m_pHist2D->Set(x, y, 1);
>     }
>   }
>
>   // add two bricks
>   {
>     BrickMD bmd;
>     FLOATVECTOR3 vBrickExtents(64, 64, 42);
>     UINTVECTOR3 vBrickVector(64, 64, 42);
>
>     bmd.extents = vBrickExtents;
>
>     FLOATVECTOR3 vBrickCorner(32, 32, 21);
>     bmd.center = vBrickCorner;
>     bmd.n_voxels = vBrickVector;
>
>     const BrickKey k = BrickKey(0, 0, 0);
>     AddBrick(k, bmd);
>   }
>   {
>     BrickMD bmd;
>     FLOATVECTOR3 vBrickExtents(512, 512, 336);
>     UINTVECTOR3 vBrickVector(512, 512, 336);
>
>     bmd.extents = vBrickExtents;
>
>     FLOATVECTOR3 vBrickCorner(256, 256, 168);
>     bmd.center = vBrickCorner;
>     bmd.n_voxels = vBrickVector;
>
>     const BrickKey k = BrickKey(0, 4, 0);
>     AddBrick(k, bmd);
>   }
> }

If you have 4 LODs, you should add a minimum of 4 bricks. But I think it would give you an empty (as opposed to white) volume if that was the case.

IIRC the default IV3D extents are normalized so the data lie within the unit cube, so extents of e.g. (512,512,336) might cause craziness. I might be wrong on that; adding a print statement and loading a standard UVF would be an easy way to check me. Ditto for the bricks' centers.
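
So the brick registration would look roughly like this instead (untested; the unit-cube extents and origin-centered bricks are the conventions i *think* a stock UVF uses, hence the suggestion to print and compare; i've also numbered lod 0 as the finest level here, flip that if your convention is the reverse):

// one brick per level; every level covers the same region of space,
// only n_voxels changes from level to level.
const float maxdim = 512.0f;  // largest full-resolution dimension
const FLOATVECTOR3 extents(512.0f/maxdim, 512.0f/maxdim, 336.0f/maxdim);

UINTVECTOR3 nvox(512, 512, 336);
for (size_t lod = 0; lod < 4; ++lod) {
  BrickMD bmd;
  bmd.extents  = extents;                         // fits inside the unit cube
  bmd.center   = FLOATVECTOR3(0.0f, 0.0f, 0.0f);  // centered on the origin (check!)
  bmd.n_voxels = nvox;
  AddBrick(BrickKey(0, lod, 0), bmd);
  nvox = UINTVECTOR3(nvox.x/2, nvox.y/2, nvox.z/2); // next, coarser level
}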

> This is where I am at now. Any suggestion on how to diagnose what the
> issue could be would be very much appreciated.

"Simplify data management" is a laudable goal, but the parallel reader stuff was originally designed around the "my data are too large to convert to UVF" use case. Terabytes, for example.

A lot of the mechanisms that let IV3D perform well rely on certain guarantees (16-bit or smaller data, for example) that your situation does not seem to meet.

I don't want to discourage you (as I said, "simplify data management" is a laudable goal), but is there a reason the conversion pipeline isn't working for you that we could potentially fix?

-tom

On Thu, Jun 12, 2014 at 5:45 PM, TF (mobile) <tfogal@sci.utah.edu> wrote:
it's out of date, sorry.

you probably want to derive from LinearIndexDataset (i might be slightly off
on that name), assuming your data are completely regular.

you do (probably) want to mix in FileBackedDataset, though (assuming your data
are backed by files). See UVFDataset for an example.
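
i.e. roughly this shape (class names from memory, so check them against the headers):

// skeleton only -- UVFDataset shows the full set of methods to implement.
class MyDataset : public LinearIndexDataset, public FileBackedDataset {
public:
  explicit MyDataset(const std::string& filename);
  // Dataset/BrickedDataset side: GetBrick, GetDomainSize, GetBitWidth, etc.
  // plus the FileBackedDataset hooks (e.g. the Create() the docs mention).
};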

-tom

On Jun 12, 2014 4:22 PM, Mathieu Malaterre <mathieu.malaterre@gmail.com>
wrote:

Dear all,

I am staring at the documentation from:

https://github.com/SCIInstitute/ImageVis3D/blob/master/doc/import.adoc#reader-skeleton

The documentation states:

"Derive the class from FileBackedDataset."

However, FileBackedDataset does not derive from Dataset, so it is
impossible to implement a Dataset *::Create() function. Is the
documentation outdated, or am I missing something here?

Thanks.



