- From: "Steven G. Parker" <sparker@cs.utah.edu>
- To: Hansong Zhang <hansong@sgi.com>
- Cc: "'manta@sci.utah.edu'" <manta@sci.utah.edu>
- Subject: Re: [MANTA] Manta output
- Date: Tue, 14 Jun 2005 17:04:32 -0600
On Jun 14, 2005, at 12:07 PM, Hansong Zhang wrote:
Hi Steve,
Abe has explained to a few of us that the ImageDisplay class is
supposed to be a one-stop link to display hardware. This seems
limiting given the many different ways downstream applications might
connect to Manta to obtain, process, and distribute the result.
That is absolutely not true. Manta is set up to have a series of
operations performed on the image - rendering, tone-mapping, and so
forth. The current version has two stages - rendering and display,
but the display can do anything it wants. The difficulty is that
Manta is pipelined, so rendering a frame and waiting for the result
is not an appropriate model. The power of it is that these
operations occur in parallel, so for example you could have a
parallel compression of the image stream for handing to vizserver.
Downstream applications can register their own components in this
pipeline. More on this below.
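To make the shape of that concrete, here is a rough C++ sketch. None
of these names (PipelineStage, ImageBuffer, CompressForVizserver) are
real Manta classes; they are just placeholders for the idea:

// Hypothetical sketch only; not Manta's actual interface.
#include <vector>

struct ImageBuffer {
  int width, height;
  std::vector<unsigned char> rgba;   // pixels produced by the render stage
};

// Each stage receives the buffer produced upstream and may transform,
// consume, or forward it.  Stages run in parallel with the rendering of
// the next frame, so they should not stall the renderer.
class PipelineStage {
public:
  virtual ~PipelineStage() {}
  virtual void process(const ImageBuffer& frame) = 0;
};

// A downstream application could register something like this to do a
// parallel compression of the image stream before handing it to vizserver.
class CompressForVizserver : public PipelineStage {
public:
  void process(const ImageBuffer&) {
    // compress the pixels in parallel and ship them to vizserver
  }
};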
In the end, Manta has to have an output interface. The current
interface is the ImageDisplay class. It may be more general to get
the "display" concept out of there and make Manta export opaque
buffers (including the image buffer). This is so that the
downstream apps don't have to subclass Manta to make use of the
output. This arrangement has to do with a ray tracing server based
on Manta, but that's a topic for another mail.
I am working on generalizations to the pipeline structure defined
above so that you can concatenate an arbitrary number of pipeline
stages. However, it is going slow because all of my Manta time is
going to answering e-mail :) In practice you won't want more than a
few stages, but it is easy to envision that you would want to have
tone-mapping, possibly multiple render stages, and so forth.
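The generalization looks roughly like this, building on the
hypothetical PipelineStage/ImageBuffer sketch above (again, invented
names, not the real interface):

#include <cstddef>
#include <vector>

// A channel owns an ordered list of stages (tone-mapping, display, ...)
// and hands each finished frame down the chain.
class Channel {
  std::vector<PipelineStage*> stages;
public:
  void append(PipelineStage* s) { stages.push_back(s); }

  // Called once per frame from the rendering side of the pipeline.
  void deliver(const ImageBuffer& frame) {
    for (std::size_t i = 0; i < stages.size(); i++)
      stages[i]->process(frame);
  }
};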
I too have thought a lot about the ray tracing server, and have some
definite ideas. I'd love to share them with you at some point.
I believe that when other apps like Catia connect to Manta's
output, they would want to do so at the buffer level (i.e. data
protocols, not class inheritance).
Class inheritance isn't quite the right term for it. The question is how
does the external application know that a buffer is ready. There are
two options:
1. The app says "render this frame", and waits for the result.
or
2. Manta makes a callback when the new frame is ready.
I believe that Manta should be able to do both. Right now it only
does #2. I have concerns about doing #1 efficiently, but the ability
should be there. Option 2 allows complex stages to use the parallel
CPU infrastructure without fighting Manta for processing resources.
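In interface terms the two options look roughly like this. This is
only a sketch to make the contrast concrete; neither signature is
anything Manta exports today:

// Option 1: the application drives a frame and blocks for the result.
ImageBuffer renderFrameAndWait(/* scene, camera, ... */);

// Option 2: the application registers a listener and the pipeline calls
// it, from its own threads, each time a finished buffer appears.  The
// listener should hand the buffer off quickly rather than doing heavy
// work inline.
class FrameListener {
public:
  virtual ~FrameListener() {}
  virtual void frameReady(const ImageBuffer& frame) = 0;
};
void addFrameListener(FrameListener* listener);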
In implementing #2, the question is how does Manta make the
callback. Right now it is through implementing an interface - the
ImageDisplay component (soon to be the PipelineComponent interface).
There are a few methods for setting up the pipeline, but for the most
part, one function gets called that presents the component with a
buffer. The component can do whatever it would like with that buffer
- post it to vizserver, post it to OpenGL, etc. The buffers can be
complicated - they may not even be images at all. However, the
pipeline component will be able to constrain the types of buffers
that it wants to deal with.
So I don't think of this as class inheritance. The fact that
inheritance is used is merely to provide the callback, but a C API
could be made to work with this.
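For example, a thin C layer could be made to sit on top of the
callback. manta_set_frame_callback is an invented name here; it would
just be an adapter component forwarding the buffer to a stored
function pointer:

/* Sketch of a C-style registration, not an existing API. */
typedef void (*manta_frame_callback)(const void* pixels,
                                     int width, int height,
                                     void* user_data);

static manta_frame_callback the_callback = 0;
static void* the_user_data = 0;

void manta_set_frame_callback(manta_frame_callback cb, void* user_data)
{
  the_callback = cb;
  the_user_data = user_data;
}

/* An internal adapter stage would invoke
   the_callback(&frame.rgba[0], frame.width, frame.height, the_user_data)
   from its buffer-ready method whenever a new frame arrives. */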
On a related note, it seems to me that Manta shouldn't concern
itself with (a)synchronous i/o at all. Manta may have a notion of
double/triple/...-buffered output (with ready flags), and it would
be up to whatever consumers come later to decide what to do
with the buffers.
The debate is how to do this all in the most efficient yet flexible
manner.
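A crude sketch of the double-buffered arrangement you describe, just
to make the discussion concrete (nothing here is a real Manta class,
and a real version would use proper atomics or a lightweight lock for
the ready flags):

#include <vector>

// Two frame slots with ready flags.  The producer (the display end of
// the pipeline) writes into whichever slot is free and never waits on
// the consumer; the consumer takes ready frames whenever it likes.
struct FrameSlot {
  std::vector<unsigned char> rgba;
  volatile bool ready;
  FrameSlot() : ready(false) {}
};

class DoubleBufferedOutput {
  FrameSlot slots[2];
  int writeIndex;
public:
  DoubleBufferedOutput() : writeIndex(0) {}

  // Producer side: publish a finished frame, dropping it if the
  // consumer has fallen behind.
  void publish(const std::vector<unsigned char>& pixels) {
    FrameSlot& slot = slots[writeIndex];
    if (slot.ready) return;          // consumer is behind; drop the frame
    slot.rgba = pixels;
    slot.ready = true;
    writeIndex = 1 - writeIndex;
  }

  // Consumer side: take a ready frame out of a slot if one exists.
  bool take(int index, std::vector<unsigned char>& out) {
    FrameSlot& slot = slots[index];
    if (!slot.ready) return false;
    out = slot.rgba;
    slot.ready = false;
    return true;
  }
};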
Steve