Manta Interactive Ray Tracer Development Mailing List



Re: [MANTA] Re: Manta Problem


  • From: Dmitri Rubinstein <rubinste@graphics.cs.uni-sb.de>
  • To: Abe Stephens <abe@sci.utah.edu>
  • Cc: manta@sci.utah.edu
  • Subject: Re: [MANTA] Re: Manta Problem
  • Date: Thu, 10 May 2007 12:02:59 +0200

Abe Stephens wrote:

Dmitri Rubinstein wrote:
I am a big fan of SWIG and Python, and have already ported the OpenRT API to Python with SWIG.
James Bigler and I have a love-hate relationship with SWIG.

I am interested in porting my VRML scene graph library to use Manta for rendering. Currently it uses the OpenRT API. Here is my thesis about this: http://graphics.cs.uni-sb.de/~rubinste/works/diplom.pdf .
Because Manta is now able to visualize the Boeing model, I hope to be able to run my "Nancy in Boeing" demo (see the pictures in the thesis) with Manta.
Neat. This sounds like an interesting project, I think we'll be happy to help.

Manta provides the underlying rendering architecture and interface, the system doesn't currently have a robust scene graph. The Objects/Groups/Primitives in Manta provide targets for rays (for the most part, intersection methods), they don't provide a scene graph like OpenSG or Performer/Inventor. Manta could be embedded within a driving application (or used as a back-end to a scene graph library) which would be responsible for configuring and controlling the renderer using the basic containers. Programs like bin/manta or the various python scripts are examples of very simple driving applications.
RTSG (my scene graph library) provides the scene graph infrastructure; I only need to rewrite the rendering backend, which currently uses OpenRT. However, RTSG is also still in development, so there is no complete API for creating driving applications yet. Since RTSG was designed to be very modular, I plan to create an API in the future to dynamically load a rendering backend on demand, configure it, and run the rendering loop. Currently, driving programs look like this (a rough code sketch follows the list):

init RTSG
create scene object
load scene from file into the scene object
set up renderer (currently depends on the rendering backend)
associate renderer with scene
init scene with current time
rendering loop:
  process next frame with current time (this processes VRML events and notifies the renderer about changes in the scene graph)
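
To make that shape concrete, here is a minimal, hypothetical C++ sketch of such a driving program. None of the class or function names below come from RTSG or Manta; they only illustrate the loop structure, and the scene file name is made up.

#include <cstdio>
#include <string>

// Stand-in for an OpenRT- or Manta-backed renderer.
struct Renderer {
    void setup() { std::puts("renderer configured"); }  // backend-specific
    void renderFrame() { std::puts("frame rendered"); }
};

struct Scene {
    void loadFromFile(const std::string &path) { std::printf("loaded %s\n", path.c_str()); }
    void attachRenderer(Renderer &r) { renderer = &r; }
    // Would process VRML events for the given time and notify the
    // attached renderer about scene graph changes.
    void update(double /*timeSeconds*/) {}
    Renderer *renderer = nullptr;
};

int main() {
    Scene scene;                                // create scene object
    scene.loadFromFile("nancy_in_boeing.wrl");  // hypothetical file name
    Renderer renderer;
    renderer.setup();                           // depends on the backend
    scene.attachRenderer(renderer);
    double time = 0.0;
    scene.update(time);                         // init scene with current time
    for (int frame = 0; frame < 3; ++frame) {   // rendering loop
        time += 1.0 / 30.0;
        scene.update(time);                     // process VRML events, notify renderer
        renderer.renderFrame();
    }
    return 0;
}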


In the Boeing 777 demo, Manta was embedded within a larger Performer-based application which sent input to the renderer and processed the output frames. The interaction with Python is similar: there is a Python-specific image display mechanism, and Python scripts are able to send transactions to Manta to provide input.

I have not found any reference counting code. Do you allow shared objects in the scene? How can I delete objects correctly?
We've viewed safe object deletion as the responsibility of the application (or the application's scene graph, in your case). As long as the driving application changes renderer state safely using transactions or callbacks provided by the Manta interface, new object instances may be swapped in or out at any time. The application-level scene graph could implement reference counting if necessary, although none of the simple examples we've implemented have done so. The manta_new() and manta_delete() methods in Python manipulate the Python garbage collection mechanism.
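
The swap-and-delete pattern described here can be sketched roughly as follows. This is an illustrative reimplementation of the idea, not Manta's actual transaction API; all names are made up.

#include <functional>
#include <mutex>
#include <queue>

struct Object { virtual ~Object() = default; };

struct RendererState {
    Object *currentObject = nullptr;  // what the renderer traces against
};

class TransactionQueue {
public:
    // Called by the application thread at any time.
    void post(std::function<void(RendererState &)> t) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push(std::move(t));
    }
    // Called by the renderer between frames, when no rays are in
    // flight, so state changes never race with rendering.
    void applyAll(RendererState &state) {
        std::lock_guard<std::mutex> lock(mutex_);
        while (!pending_.empty()) {
            pending_.front()(state);
            pending_.pop();
        }
    }
private:
    std::mutex mutex_;
    std::queue<std::function<void(RendererState &)>> pending_;
};

int main() {
    RendererState state;
    TransactionQueue queue;
    Object *fresh = new Object;
    queue.post([fresh](RendererState &s) {
        delete s.currentObject;  // old object is no longer referenced by any ray
        s.currentObject = fresh; // swap in the new instance
    });
    queue.applyAll(state);       // renderer side, between frames
    delete state.currentObject;
}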

Is there any simple example of rigid body animation? Something like creating an object from triangles, changing its transformation matrix depending on time, and rendering in a loop.
Yes. I have some Python scripts which animate rigid bodies (textured spheres, I think). This is accomplished using serial animation callbacks. I can try to dig up an example and make it available.
This would be great. In order to adapt the current rendering backend I only need to know how to do a couple of things. I assume that a ray tracing backend has the following properties:
There are geometry objects which may require time-consuming construction of an acceleration structure such as a kd-tree. These objects can be instantiated in the scene multiple times with different transformation matrices. Instances are used for rigid body animation; objects are used to represent static geometry. In the case of geometry coordinate interpolation, objects will be rebuilt as well. An internal optimizer decides how to split the VRML scene into objects and instances optimally. (A sketch of this split follows the list.)

In order to provide material animation, I also need to know how to assign materials to objects and change material parameters, such as the diffuse color.
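
Assuming this model, the object/instance split might be sketched like this (hypothetical names, not Manta's or RTSG's API): the expensive acceleration structure is built once per object, while instances only carry a transform that is cheap to update every frame.

#include <array>
#include <cstdio>
#include <vector>

using Matrix4 = std::array<float, 16>;  // row-major 4x4 transform

struct Object {
    // Imagine a kd-tree build here: expensive, done once per object, and
    // redone only when the geometry itself changes (e.g. coordinate
    // interpolation).
    void buildAccelerationStructure() { std::puts("kd-tree built"); }
};

struct Instance {
    Object *object;     // shared static geometry
    Matrix4 transform;  // per-instance placement
};

int main() {
    Object body;
    body.buildAccelerationStructure();  // once

    const Matrix4 identity = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    std::vector<Instance> instances;
    instances.push_back({&body, identity});
    instances.push_back({&body, identity});  // same geometry, second placement

    // Rigid body animation: only the per-instance transform changes;
    // no acceleration structure rebuild is needed.
    for (Instance &i : instances)
        i.transform[3] += 0.1f;  // translate along x
}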


I have found the code to set object properties on construction by passing a string vector. Is there a possibility to change properties in this way after construction ?
Unfortunately not without reconstructing the object. The string-based constructors are a carry-over from older code bases which relied on command lines for configuration. I don't think they are very useful today. As I've needed various classes in Python I've added normal constructors and get/set methods for different fields. Let us know if you see any that are missing and they can be added quickly.

We'd be happy to discuss the trade-offs of different software architectures on top of Manta. I think the design is flexible, although there are some limitations.
The only limitation that I currently see is the absence of a parametrization interface like the one in OpenRT. Here is a simple example of a shader specification in an X3D scene (I took portions of it from here: http://www.bitmanagement.com/developer/contact/examples/shader/spec/X3DProgrammableShadersProposal.htm#X3DShaderNode):

Shape {
  appearance ShaderAppearance {
      material Material {
        ambientIntensity 0.416667
        diffuseColor 0.6 0.6 0.6
        specularColor 0.8 0.8 1
        shininess 0.4
      }
      fragmentShader FragmentShader {
        field SFColor baseColor 0.673469 0.420254 0.297996
        field SFVec3f lightPosition -10 0 10
        url "cg: ..."
      }
  }
  geometry Box { }
}

These are two material specifications. The first one uses predefined fields which specify the OpenGL lighting model; they can be easily mapped to a single material class, e.g. Phong. The second specification, however, allows defining shader parameters specific to the shader declared in the url. This would require a factory interface that dynamically creates the shader object instance from its declaration in the url (e.g. loads it from the specified library). Some API for setting parameter values by name is required as well. For example:

class Object {
  ...
  virtual bool setParameter(const std::string &name, const Params &params) = 0;
};
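
The factory half of this could look roughly as follows; this is purely illustrative and not an existing Manta interface. Shader implementations register a creator under the url used in the scene file, and the scene graph instantiates them by name:

#include <functional>
#include <map>
#include <memory>
#include <string>

class Params;  // the parameter container, see below

struct Object {
    virtual ~Object() = default;
    virtual bool setParameter(const std::string &name, const Params &params) = 0;
};

class ShaderFactory {
public:
    using Creator = std::function<std::unique_ptr<Object>()>;
    // Shader libraries register themselves under the url from the scene file.
    void registerShader(const std::string &url, Creator create) {
        creators_[url] = std::move(create);
    }
    // Returns nullptr if nothing was registered under this url.
    std::unique_ptr<Object> create(const std::string &url) const {
        auto it = creators_.find(url);
        return it != creators_.end() ? it->second() : nullptr;
    }
private:
    std::map<std::string, Creator> creators_;
};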

Params is a container of primitive values such as int, float, and string. If we also overload operator<<, we can do the following:

material->setParameter("color", Params() << 0.1 << 0.4 << 0.7);
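
One possible Params supporting exactly this usage is sketched below, assuming C++17's std::variant; it is one way to do it, not a fixed design:

#include <cstddef>
#include <string>
#include <variant>
#include <vector>

// A flat, ordered container of primitive values, filled via operator<<.
class Params {
public:
    using Value = std::variant<int, float, std::string>;

    Params &operator<<(int v) { values_.push_back(v); return *this; }
    // Literals like 0.1 are doubles; store them as float.
    Params &operator<<(double v) { values_.push_back(static_cast<float>(v)); return *this; }
    Params &operator<<(const std::string &v) { values_.push_back(v); return *this; }

    const Value &at(std::size_t i) const { return values_.at(i); }
    std::size_t size() const { return values_.size(); }

private:
    std::vector<Value> values_;
};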

Parameters can be parsed inside the material in the same way, e.g.:

bool SomeMaterial::setParameter(const std::string &name, const Params &params) {
  ParamParser p(params); // keeps a current index into params
  ...
  if (name == "color")
    if (!(p >> color)) // parse failed: wrong type or too few values
      return false;
  ...
  return true;
}
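
A matching ParamParser, building on the Params sketched above, keeps an index into the container and remembers whether every conversion so far has succeeded, so the !(p >> color) test works as written (Color is a made-up three-float struct):

#include <cstddef>
#include <variant>

struct Color { float r, g, b; };

class ParamParser {
public:
    explicit ParamParser(const Params &p) : params_(p) {}

    ParamParser &operator>>(float &out) {
        const float *v = index_ < params_.size()
                             ? std::get_if<float>(&params_.at(index_))
                             : nullptr;
        if (v) { out = *v; ++index_; }
        else   { ok_ = false; }  // wrong type or no values left
        return *this;
    }
    // A color is just three consecutive floats.
    ParamParser &operator>>(Color &c) { return *this >> c.r >> c.g >> c.b; }

    // Allows !(p >> x) to test for failure.
    explicit operator bool() const { return ok_; }

private:
    const Params &params_;
    std::size_t index_ = 0;
    bool ok_ = true;
};
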
I have used this approach in my own ray tracer. Another advantage is the reduction of SWIG-wrapped code: you no longer need to wrap every class with SWIG, only the factory and the parametrization interface.
The only drawback is that some parsing code is still required for each shader parameter. I also have ideas for how to avoid this.

Greetings

Dmitri


Abe








