Re: [IV3D-USERS] Implementing a Reader Parallel to UVF


  • From: Mathieu Malaterre <mathieu.malaterre@gmail.com>
  • To: iv3d-users@sci.utah.edu
  • Subject: Re: [IV3D-USERS] Implementing a Reader Parallel to UVF
  • Date: Mon, 16 Jun 2014 12:52:10 +0200

Hi Tom,

On Fri, Jun 13, 2014 at 7:45 PM, tom fogal <tfogal@sci.utah.edu> wrote:
> On 06/13/2014 12:20 PM, Mathieu Malaterre wrote:
>> - GetRange(), I used hardcoded values. I did check those are correct
>> on the whole volume: [-600, 8000]
>
> these are expected to be per-brick ranges. I *think* it should only affect
> culling, so you might be okay with making it "too wide" like this.

That's odd, because the signature takes neither `ts` nor `lod`:

virtual std::pair<double,double> GetRange() const = 0;

Are you sure this is meant to be per-brick?

> It's easy to compute with std::minmax_element, though, and with your current
> data set sizes that wouldn't be *too* painful.

OK, I'll do the finer computation. I assume you meant for LOD 0.
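For the record, here is a minimal sketch of that computation with std::minmax_element, assuming the LOD-0 samples are already decoded into a contiguous int16 buffer (ComputeRange is a hypothetical helper, not part of the tuvok interface):

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Compute the [min, max] range over all samples of the volume,
// as suggested, using std::minmax_element.
std::pair<double, double> ComputeRange(const std::vector<int16_t>& samples)
{
  if (samples.empty()) { return std::make_pair(0.0, 0.0); }
  const auto mm = std::minmax_element(samples.begin(), samples.end());
  return std::make_pair(static_cast<double>(*mm.first),
                        static_cast<double>(*mm.second));
}
```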

>
>> - GetBitWidth() returns 32, as I am dealing with signed integer, and
>> GetComponentCount() returns 1.
>
>
> My guess is that this path is broken. For technical (OpenGL) reasons, data
> wider than 16bit are hard. There was some code for doing greater-than-16bit
> data, long ago, I think under a define of BIAS_SCALE in the GLSL shaders.
> However, it was never enabled by default and has likely atrophied.

Using 16 bits definitely helps here. Thanks a lot for the quick help.
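For completeness, the bias/scale idea Tom mentions could be sketched on the CPU side like this (BiasScaleTo16 is a hypothetical helper, not the actual GLSL BIAS_SCALE code; it assumes every sample lies within [lo, hi]):

```cpp
#include <cstdint>
#include <vector>

// Squeeze samples of a wider type into uint16_t with a bias (lo)
// and a scale, so 16-bit-only render paths can consume the data.
std::vector<uint16_t> BiasScaleTo16(const std::vector<int32_t>& in,
                                    int32_t lo, int32_t hi)
{
  std::vector<uint16_t> out;
  out.reserve(in.size());
  const double scale = (hi > lo) ? 65535.0 / (hi - lo) : 1.0;
  for (int32_t v : in) {
    // assumes lo <= v <= hi; clamp first if that is not guaranteed
    out.push_back(static_cast<uint16_t>((v - lo) * scale));
  }
  return out;
}
```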

> I could verify if you turned on messages/warnings/errors and sent us a debug
> log.

Ah good point. I'll use that next time.

>
>> - GetBrickOverlapSize() returns for now UINTVECTOR3(0,0,0) (is this
>> correct?)
>
>
> you tell me ;-) What is called BrickOverlap internally is commonly referred
> to as "ghost data" in the literature or even other software. Certainly, if
> your bricks have no ghost data, then the (0,0,0) is the correct thing to
> return. That said, a volume renderer cannot do proper interpolation at
> boundaries without this ghost data, so you'll be able to see the block
> boundaries if that is the case.

OK. I am not very interested in volume rendering for now (mainly the
2x2 MPR view), so (0,0,0) is fine with me. Thanks for the explanation.
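For anyone reading along, the relationship is simple: with ghost data each brick stores extra voxels on every face, so the stored size per axis grows by twice the overlap (a sketch of the bookkeeping, not tuvok code):

```cpp
#include <cstdint>

// Stored brick size per axis: the core voxels plus one layer of
// `overlap` ghost voxels on each of the two faces. With overlap 0
// (as in my reader) the stored size equals the core size, and seams
// can appear at brick boundaries in 3D views.
uint32_t PaddedBrickSize(uint32_t coreSize, uint32_t overlap)
{
  return coreSize + 2 * overlap;
}
```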

>
>> - GetDomainSize() only returns for ts=0 / lod=0 a domain of (64,64,42)
>> and assert(0) for any other one (called only when I zoom in).
>
>
> you could probably return the voxel sizes from the other LODs and be fine.

OK, I will do that next.
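A sketch of what I have in mind, assuming LOD 0 is the finest level and each subsequent LOD halves it (flip the direction if the convention is the opposite; DomainSizeForLOD is a hypothetical helper):

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

// Voxel dimensions of a given LOD, derived by repeatedly halving
// the finest level and clamping each axis at 1 voxel.
std::array<uint64_t, 3> DomainSizeForLOD(std::array<uint64_t, 3> lod0,
                                         unsigned lod)
{
  for (unsigned i = 0; i < lod; ++i) {
    for (auto& d : lod0) { d = std::max<uint64_t>(1, d / 2); }
  }
  return lod0;
}
```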

>
>> - GetBrick is a very silly implementation, which does
>> std::ifstream::read (on LOD 0 file) into the std::vector<uint8_t>
>> params. reading went ok (.good()), vector was resized to proper actual
>> file length.
>
>
> oh, i would have expected a resize() to be necessary.

I am doing a resize here. I took care to multiply by sizeof( sample )
since the output is stored in a vector<uint8_t>. So I have something like:


bool MF3::GetBrick(const BrickKey& bk, std::vector<uint8_t>& vals) const
{
  [...]
  const std::string& fn = Internals->Filename;
  std::ifstream is( fn.c_str(), std::ios::binary );
  const size_t n = 64 * 64 * 42 * sizeof( int16_t );
  vals.resize( n );
  is.read( reinterpret_cast<char*>(&vals[0]), n );
  return true;
}


> This might depend on
> the renderer you use; I think our newer renderer sizes things correctly
> before it passes it into GetBrick.

vals was handed to me with a size of 67108864 bytes, which did not make
much sense to me... I'm resizing anyway.
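In case it is useful, a slightly more defensive version of that read, which sizes the output from the expected voxel count and checks that the stream actually delivered all the bytes (ReadBrickRaw is a hypothetical helper; the hardcoded 64*64*42 from my snippet would become the `voxels` argument):

```cpp
#include <cstdint>
#include <istream>
#include <sstream>  // only needed for the usage example below
#include <vector>

// Read exactly `voxels` int16 samples into a byte vector, resizing
// first (so whatever size the caller handed in is ignored) and
// verifying the read succeeded.
bool ReadBrickRaw(std::istream& is, size_t voxels,
                  std::vector<uint8_t>& vals)
{
  const size_t bytes = voxels * sizeof(int16_t);
  vals.resize(bytes);
  is.read(reinterpret_cast<char*>(vals.data()),
          static_cast<std::streamsize>(bytes));
  return static_cast<size_t>(is.gcount()) == bytes;
}
```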

>
>> - MaxGradientMagnitude() returns for now:
>> -std::numeric_limits<float>::max() (could not figure out a decent
>> default implementation).
>
>
> This probably ends up being used in scaling calculations for doing proper
> lighting. To remove it from the equation for now, I would just turn
> lighting off after you load your volume.

OK, that makes sense.

>
>> And upon construction I am doing:
>>
>> // cstor
>> {
>>   m_pHist1D.reset();
>>   m_pHist1D.reset(new Histogram1D(
>>     std::min(MAX_TRANSFERFUNCTION_SIZE, 1 << GetBitWidth())));
>>   for (size_t i = 0; i < m_pHist1D->GetSize(); i++) {
>>     m_pHist1D->Set(i, 1);
>>   }
>>   m_pHist2D.reset();
>>   VECTOR2<size_t> vec(256, std::min(MAX_TRANSFERFUNCTION_SIZE,
>>                                     1 << GetBitWidth()));
>>   m_pHist2D.reset(new Histogram2D(vec));
>>   for (size_t y = 0; y < m_pHist2D->GetSize().y; y++) {
>>     for (size_t x = 0; x < m_pHist2D->GetSize().x; x++) {
>>       m_pHist2D->Set(x, y, 1);
>>     }
>>   }
>>
>>   // add two bricks
>>   {
>>     BrickMD bmd;
>>     FLOATVECTOR3 vBrickExtents(64, 64, 42);
>>     UINTVECTOR3 vBrickVector(64, 64, 42);
>>
>>     bmd.extents = vBrickExtents;
>>
>>     FLOATVECTOR3 vBrickCorner(32, 32, 21);
>>     bmd.center = vBrickCorner;
>>     bmd.n_voxels = vBrickVector;
>>
>>     const BrickKey k = BrickKey(0, 0, 0);
>>     AddBrick(k, bmd);
>>   }
>>   {
>>     BrickMD bmd;
>>     FLOATVECTOR3 vBrickExtents(512, 512, 336);
>>     UINTVECTOR3 vBrickVector(512, 512, 336);
>>
>>     bmd.extents = vBrickExtents;
>>
>>     FLOATVECTOR3 vBrickCorner(256, 256, 168);
>>     bmd.center = vBrickCorner;
>>     bmd.n_voxels = vBrickVector;
>>
>>     const BrickKey k = BrickKey(0, 4, 0);
>>     AddBrick(k, bmd);
>>   }
>> }
>
>
> If you have 4 LODs, you should add a minimum of 4 bricks. But I think it
> would give you an empty (as opposed to white) volume if that was the case.

Ok.

> IIRC the default IV3D extents normalizes the data to be within the unit
> cube, so extents of e.g. (512,512,336) might cause craziness. I might be
> wrong on that. Adding a print and loading a standard UVF would be an easy
> way to check me, though. Ditto for the bricks' centers.

To simplify, I'll set BrickMD.extents to (1,1,1) and BrickMD.center
to (0.5,0.5,0.5) and try again.
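If the unit-cube convention Tom describes holds, a variant that keeps the aspect ratio rather than forcing a cube would be (NormalizeExtents is a hypothetical helper; the exact IV3D convention is the thing to verify against a standard UVF):

```cpp
#include <algorithm>

struct Vec3f { float x, y, z; }; // stand-in for FLOATVECTOR3

// Scale physical extents so the longest axis becomes 1; the center
// of the normalized volume is then half the normalized extents.
Vec3f NormalizeExtents(Vec3f e)
{
  const float m = std::max(e.x, std::max(e.y, e.z));
  return Vec3f{ e.x / m, e.y / m, e.z / m };
}
```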

>
>> This is were I am at now. Any suggestion how to diagnose what the
>> issue could be would be very much appreciated.
>
>
> "Simplify data management" is a laudable goal, but the parallel reader stuff
> was originally designed around the "my data are too large to convert to UVF"
> use case. Terabytes, for example.
>
> A lot of the mechanisms for IV3D performing well are based around some
> guarantees---16bit or smaller data, for example---that your situation does
> not seem to hit.
>
> I don't want to discourage you---as I said, "simplify data management" is a
> laudable goal---but is there a reason that the conversion pipeline isn't
> working for you that we could potentially fix?

Doing a conversion does not make much sense to me. Opening the data
directly is my goal here (you can think of my data as some sort of
JPEG 2000 in 3D).

Thanks for your help (again!).


