SCIRun User Mailing List


Re: [SCIRUN-USERS] SCIRun in headless mode

Chronological Thread 
  • From: Matthias Lange <>
  • To: Dan White <>
  • Cc:
  • Subject: Re: [SCIRUN-USERS] SCIRun in headless mode
  • Date: Mon, 22 Aug 2016 15:53:41 +0100

Hi Dan,

Thanks for your reply and effort! I will follow all the tickets you kindly created.

I do not need the GUI on the cluster. I have a working implementation and it will need to run in headless mode anyway.
The Python binding is not really necessary, as the SCIRun console is a Python environment, so I am happy with it. I was just wondering, because libSCIRunPythonAPI looks like it should be possible. This would make it easier to debug network/Python scripts, as an IDE could be used.


On 19 August 2016 at 23:11, Dan White <> wrote:


On 8/17/2016 10:27 AM, wrote:


Dear All,


I started using SCIRun 5.0 in scripting mode and made some observations. The aim is to run SCIRun 5.0 headless on an HPC. My observations are most likely due to my limited knowledge of SCIRun.


To use SCIRun on the cluster, I need to compile it there. However, the Qt4 version installed on the cluster is rather old. Therefore, I compiled a recent version separately. The path to the new Qt4 is then supplied to the CMake script of the Superbuild. However, when generating the CMake files, it turns out that new projects are generated in 'bin/SCIRun/cpm'. These projects do not pick up the path to the manually specified Qt4 and, as a result, fail. The good thing is that the problem can be avoided when compiling headless only.


The same projects in 'bin/SCIRun/cpm/' seem to have a different compiler path. If 'gcc' is specified as the compiler in the Superbuild project, then some of the projects in that folder use 'cc' as the compiler. I am not sure whether CMake has two different variables for the two compilers, but I suspect it is just a different configuration. Currently, I fix this problem with a symbolic link to gcc.

Do you need non-headless on this cluster? You might try starting the build from scratch, making sure CMake points to your newer Qt and the correct compiler. It should be straightforward.


After compiling SCIRun in headless mode, I found a few questions regarding the execution of SCIRun.

1.       Is it possible to specify the number of CPUs to use? A node on the HPC might have more CPUs than I applied for, so it would be good if SCIRun used only a set number of CPUs.

I made an issue for this during IBBM, it's not done yet but you can track it here:
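In the meantime, a possible workaround at the OS level (not a SCIRun feature, just a sketch for Linux clusters) is to pin the SCIRun process to a fixed set of CPUs from the embedded Python console:

```python
import os

def limit_cpus(n):
    """Pin the current process (and its threads) to the first n
    logical CPUs. Linux only; other platforms are left unchanged."""
    if not hasattr(os, "sched_setaffinity"):
        return None  # affinity control not available on this platform
    available = sorted(os.sched_getaffinity(0))
    os.sched_setaffinity(0, set(available[:n]))
    return len(os.sched_getaffinity(0))

# Example: restrict the process to at most 4 CPUs.
limit_cpus(4)
```

Note this only bounds which CPUs the process may run on; SCIRun itself may still create more threads than that.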

2.       Is it possible to save a network from the interactive console when compiled in headless mode? The 'scirun_save_network' command does not seem to exist and causes errors.

It's possible, it just wasn't implemented yet. I made an issue:

3.       Is the 'scirun_execute_all' command supposed to return immediately, i.e. before the execution is completed? If so, is there a way to synchronize with the execution?

This would be a feature request--the scirun_execute_all command is asynchronous and returns after launching the request. I will look into how to synchronize this; most likely I will return some sort of future-value object.
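The future-value idea would look roughly like the standard-library pattern below; since the real API does not return a future yet, the asynchronous execution is stood in for by a placeholder function:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fake_execute_all():
    """Placeholder for an asynchronous scirun_execute_all run."""
    time.sleep(0.1)  # pretend the network takes a while
    return "network finished"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fake_execute_all)  # returns immediately
    # ... other scripting work could happen here ...
    result = future.result()  # blocks until the execution completes
    print(result)  # prints "network finished"
```

A script would then call `future.result()` only at the point where it actually needs the execution to have finished.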

4.       After running SCIRun in verbose mode and then exiting, it seems as if everything in memory is printed to the screen; a large amount of nonsense appears.

Verbose mode is mainly for developers; I suggest not using it.

5.       When running SCIRun with the combination '-s -I', the quit command results in an endless output of the string "scirun>".

The interactive headless version has this known goofy bug. Another issue for me:

6.       Is it possible to use SCIRun directly from Python?

Theoretically you can import SCIRunPythonAPI while running the Python version built with SCIRun, but I tried this and ran into init and path errors. If it's important, you can make a feature request:
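For reference, a cautious way to probe whether the module is importable at all from a plain Python session (the module name is an assumption based on the library name libSCIRunPythonAPI; the path will vary per build tree):

```python
import importlib.util
import sys

def probe_module(name, extra_path=None):
    """Report whether a module can be found, optionally after adding
    a build directory (e.g. SCIRun's lib/ folder) to sys.path."""
    if extra_path is not None:
        sys.path.insert(0, extra_path)
    return importlib.util.find_spec(name) is not None

# Adjust the name and path to match your own build.
if probe_module("SCIRunPythonAPI"):
    print("SCIRunPythonAPI found; try importing it")
else:
    print("SCIRunPythonAPI not on sys.path")
```

This at least distinguishes a path problem (module not found) from an init problem (module found but fails on import).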

Thanks for the feedback!

SCIRun users mailing list:
To unsubscribe, email with "unsubscribe scirun-users" in the message body.

Archive powered by MHonArc 2.6.18.
