Introduction

Milagros is an SGI Onyx2 that is used primarily for driving the large immersive displays in the ACES Visualization Laboratory. Milagros is a very flexible graphics system that allows graphics resources to be allocated simultaneously to up to three users. Unlike Maverick, our other thin-client-based visualization system, Milagros is designed for high-performance local visualization that may require a large amount of graphics processing power. X-forwarding is allowed, but Milagros is not designed with a formal mechanism for robust remote visualization.

Architecture and Display system

Milagros is configured with 24 400 MHz R12000 processors, 25 GB of memory, and 6 IR2 graphics pipelines, or pipes. A pipe is comparable to a GPU, having its own graphics memory and rendering pipeline. Each of the 6 highly configurable pipes is numbered from 0-5 and is classified as either a fat or a thin pipe. Fat pipes have a larger amount of graphics memory than thin pipes and can host additional video ports. Pipes 0, 2, and 4 are fat pipes, while pipes 1, 3, and 5 are thin pipes.

A channel is a portion of a pipe’s graphics memory. The chunk of video memory allocated to a channel is used to drive a display for a video port. Thus, the number of channels supported by a pipe directly corresponds to the number of video ports on that pipe.  This means that users can assign multiple channels with variable resolutions to a single pipe and only be limited by the amount of available video memory and video ports on the pipe. This flexibility allows many different and complex display setups to be used in the ACES Visualization Laboratory.

There are several pre-set display configurations available. See the Display Modes section of this manual for more information.

For more information about switching display modes see the ACES Visualization Lab users guide.

System Access

The most common access point for viewing the output from the graphics pipes on Milagros is the 3-headed workstations in the ACES Visualization Laboratory. Each fat pipe is set up with its own keyboard and mouse, enabling the OS to host an X session, and 3 channels are created from that pipe to drive a single 3-headed workstation. This is the simplest example of a pipe/channel setup for a workstation. We have 3 such workstations capable of hosting this configuration, and one of them is designated as the main console. The video output of each channel on the main console is mirrored to the projection system for the front visualization wall. Milagros is capable of driving 3 projectors for the front-wall projection system as well as 10 projectors for the back-wall projection system simultaneously. You may use the whole graphics resource of the system by yourself, for a presentation or for running an immersive environment, or you may share the resource with up to 2 more users simultaneously by distributing the graphics resources over the 3 workstations.

SSH

To ensure a secure login session, users must connect to all TACC machines, including Milagros, using the secure shell program, ssh. Telnet is no longer allowed because of the security vulnerabilities associated with it. The "r" commands rlogin, rsh, and rcp, as well as ftp, are also disabled on this machine for similar reasons. These commands are replaced by the more secure alternatives included in SSH --- ssh, scp, and sftp.

Before any login sessions can be initiated using ssh, a working SSH client must be present on the local machine. See the TACC introduction to SSH for information on downloading and installing SSH. To initiate an ssh connection to Milagros, type the following on your local workstation:

ssh <login-name>@milagros.tacc.utexas.edu
 
You may need to specify the ssh protocol version:
ssh -2 <login-name>@milagros.tacc.utexas.edu

Note that the <login-name> is only needed if the user name on the machine being logged onto differs from the user name on the workstation.

Login Info

  1. Login Shell

The most important component of a user's environment is the login shell that interprets text on each interactive command line and statements in shell scripts. Each login has a line entry in the /etc/passwd file, and the last field contains the shell launched at login. To determine your login shell, execute:

grep <my_login_name> /etc/passwd {to see your login shell}

You can use the chsh command to change your login shell; instructions are in the man page. Available shells are listed in the /etc/shells file with their full paths. To change your login shell, execute:

cat /etc/shells

{select a <shell> from list}

chsh <username> <shell>

{use full path of the shell}

  2. User Environment

The next most important component of a user's environment is the set of environment variables. Many of the Unix commands and tools, such as the compilers, debuggers, profilers, editors, and just about all applications that have GUIs (Graphical User Interfaces), look in the environment for variables that specify information they may need to access. To see the variables in your environment execute the command:

env

{list of environment variables currently loaded}

The variables are listed as keyword/value pairs separated by an equal (=) sign, as illustrated below by the HOME and PATH variables.

HOME=/home/utexas/staff/username

PATH=/bin:/usr/bin:/usr/local/apps

(PATH has a colon (:) separated list of paths for its value.) It is important to realize that variables set in the environment (with setenv for C shells and export for Bourne shells) are "carried" to the environment of shell scripts and new shell invocations, while normal "shell" variables (created with the set command) are useful only in the present shell. Only environment variables are seen in the env (or printenv) command; execute set to see the (normal) shell variables.
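
To illustrate how programs pick these values up, here is a minimal C sketch using the standard getenv() call; the file name show_env.c is just an illustration, and it can be compiled with any of the C compilers described later in this guide (e.g., cc -o show_env show_env.c).

    /* show_env.c -- minimal sketch: how a program reads environment variables. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *home = getenv("HOME");   /* returns NULL if the variable is not set */
        const char *path = getenv("PATH");

        printf("HOME = %s\n", home ? home : "(not set)");
        printf("PATH = %s\n", path ? path : "(not set)");
        return 0;
    }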

  3. Startup Scripts

All Unix systems set up a default environment and provide administrators and users with the ability to execute additional Unix commands to alter the environment. These commands are "sourced"; that is, they are executed by your login shell, and the variables (both normal and environmental) as well as aliases and functions are included in the present environment. We recommend that you customize the login environment by inserting your "startup" commands in .cshrc_user, .login_user, and .profile_user files in your home directory.

The commands in the /etc/profile file address operating system behavior and set the initial PATH, ulimit, umask, and environment variables such as HOSTNAME. /etc/profile sources files ending in .sh. Many site administrators use these scripts to set up the environments for common user tools (vim, less, etc.) and system utilities (ganglia, modules, Globus, LSF, etc.).

TACC has to coordinate the environments on platforms of several operating systems: AIX, Linux, IRIX, Solaris, and Unicos. In order to efficiently maintain and create a common environment among these systems, TACC uses its own startup files in /usr/local/etc. A corresponding file in this etc directory is sourced by the .profile and .login files that reside in your home directory. (Please do not remove these files or the sourcing commands in them, even if you are a Unix guru.) Any commands that you put in your .login_user, .cshrc_user, or .profile_user file are sourced (if the file exists) at the end of the corresponding /usr/local/etc command files. If you accidentally remove your .login, .cshrc, or .profile, you can copy new ones from /usr/local/etc/start-up.

  4. Modules

TACC is constantly including updates and installing revisions for application packages, compilers, communications libraries, and tools and math libraries. To facilitate the task of updating and to provide a uniform mechanism for accessing different revisions of software, TACC uses the modules utility.

At login, a basic environment for the default applications, compilers, tools, and libraries is set by several modules commands. Your PATH, MANPATH, LIBPATH, directory locations (WORK, ARCHIVE, HOME, ...), alias (cdw, cda, ...) and license paths, are just a few of the environment variables and aliases created for you. This frees you from having to initially set them and update them whenever modifications and updates are made in system and application software.

Users who need 3rd party applications, special libraries, and tools for their development can quickly tailor their environment with only the applications and tools they need. (Building your own specific application environment through modules allows you to keep your environment free from the clutter of all the other application environments you don't need.)

Each of the major TACC applications has a modulefile that sets, unsets, appends to, or prepends to environment variables such as $PATH, $LD_LIBRARY_PATH, $INCLUDE_PATH, $MANPATH for the specific application. Each modulefile also sets functions or aliases for use with the application. A user need only invoke a single command,

module load <application>

module load <app.1> [<app.2>  …]

{list of modules to be loaded}

at each login to configure an application/programming environment properly.

If you often need an application environment, place the modules command in your .login_user and/or .profile_user shell startup file.

Basic modules needed on each TACC system are automatically loaded at login. You can list the modules currently loaded in your session with the command:

module list

{list of modules that are currently loaded}

You should see:

1) IRIX64     2) milagros     3) TACC

by default.

Most of the package directories are in /usr/local/apps ($APPS) and are named after the package name (<app>). In each package directory there are subdirectories that contain the specific version of the package. The APPS directory structure is shown in the diagram below:

[Diagram: TACC Applications Directory Structure, branching out from /usr/local/apps ($APPS)]

The details of the environment changes for a package are in its modulefile, e.g., /usr/local/opt/modules/modulefiles/fftw for FFTW. To see a list of available modules and a synopsis of a modulefile's operations, execute:

module avail

{lists modules available on the system}

module help

module help <app>

{lists module commands}

{lists environment changes performed for <app>}

During upgrades, new modulefiles are created to reflect the changes made to the environment variables. TACC will always announce upgrades and module changes in advance.

Another feature of modules is the ease in changing the environment for experimenting with new updates or backing down to older application versions. TACC will often make a link from <app>.new to the updated package modulefile (<app>.<new-version>) that has not become the default version yet. Also, the retired default modulefile is often linked to <app>.old. This makes it easier for users to change to new or old environments with the commands:

module swap <app>.new <app>.old

{unloads <app>.new and loads <app>.old}

For more information on modules and a description of how to build modulefiles, check out the man pages and the following URL:

http://www.tacc.utexas.edu/resources/userguides/modules/.

For information on customizing your login, go to the following URL:

http://www.tacc.utexas.edu/resources/userguides/login/.

File Systems

The TACC platforms have several different file systems with distinct storage characteristics. There are predefined, user-owned directories in these file systems for users to store their data. Of course, these file systems are shared with other users, so they are managed by either a quota limit, a purge policy (time-residency) limit, or a migration policy.

To determine the size of a file system, cd to the directory of interest and execute the "df" command with the syntax:

df -k .

or simply execute it without the "dot" to see all file systems. In the example below, the file system name appears on the left, and the used and available space (-k reports sizes in units of 1-KB blocks) appear in the middle columns, followed by the percent used:

% df -k .

File System     1k-blocks      Used           Available      Use%   Mounted on
/dev/dsk        562903666      342225440      215049190      62%    /home

To determine the amount of space occupied in a user-owned directory, cd to the directory and execute the du command with the -s option (s=summary):

du -s

To determine quota limits and usage on $HOME, execute the quota command:

quota -v

Important directories

The file systems and directories that are important to you on Milagros are:

Directory     Physical location

$HOME         /home/utexas/{your institution}/{your username}
$WORK         /san/vis/work/utexas/{your institution}/{your username}
$ARCHIVE      /archive/utexas/{your institution}/{your username}
$SCRATCH      /tmp/

 

$HOME Directories

The system automatically changes to a user's home directory at login, and this is the recommended location to store your source code and build your executables.

A user's home directory is the place to store files that are routinely used in development and day-to-day work. If the output files from production runs are small, then it is reasonable to store them in $HOME. Home directories are backed up daily; so, if you accidentally remove a critical file, submit a request using the consulting form in the TACC User Portal to recover the last saved version (include the full path name of the file(s) or directory, as well as the machine name).

Since the home file system is of limited size, a 500 megabyte quota limit is imposed on every user (the quota limit is machine specific).

Use $HOME to reference your home directory in scripts.

Use cd to change to $HOME.

$WORK Directories

Store large files and perform most of your job runs in this file system. This file system is accessible from all the nodes; however, older files will be purged.

The work file system is configured with fast disks on TACC machines and should, therefore, be used when I/O performance significantly affects program performance. Work can also be used to store large files temporarily. The files in work ARE NOT backed up and are temporary. Files that are corrupted or accidentally removed are not recoverable.

PLEASE NOTE: TACC staff may delete files from work if the work file system becomes full and directories consume an inordinately large amount of disk space. A full work file system inhibits use of the file system for ALL users.

Use $WORK to reference your work directory in scripts.

Use cdw to change to $WORK.

SAN Directory

The TACC SAN is a Storage Area Network that is accessible from Milagros. Space on the SAN is an allocatable resource; that is, space is not automatically allocated to a project, and the Principal Investigator must request space on this file system.

The /san/vis/<project_name> directories are for projects that have been awarded (allocated) long-term space. The present configuration has ~4TB space for persistent, project-oriented storage. For more details read:

More on SAN

$ARCHIVE Directories

For long term file storage, use the archive file system ($ARCHIVE). This file system physically resides on an SGI Origin 2000 (archive.tacc.utexas.edu), a machine dedicated to supporting the archive file system. This file system has "archive" characteristics. The access speed is low relative to the work directory.

A user's archive directory is available on all TACC HPC computers and is mounted at /archive. It appears as a normal UNIX file system but is managed by DMF, SGI's Data Migration Facility. Files that have not been accessed in a long time are moved offline (migrated) to tape via two StorageTek 9310 robots. DMF automatically and transparently performs the archival and retrieval of files from the tape robot system. When an off-line file is accessed, DMF automatically retrieves the file while the process that is accessing the files waits. Under normal circumstances it takes less than a minute for the robots to start streaming a file's data back to the disk and for the user's process to continue.

Use $ARCHIVE to reference your archive directory in scripts.

Use cda to change to $ARCHIVE.

Programming

 

This chapter provides an overview of the development environment on Milagros, including the available compilers and APIs (Application Programming Interfaces) that might be of interest to users wishing to develop HPC or visualization applications.

 

 

Compilation

 

Compiling and Running Serial Programs

 

By default, the Milagros programming environment uses SGI’s MIPSpro C/C++ and Fortran compilers. The following section highlights the important aspects of using the MIPSpro compilers, including commands that can be used for both compiling and linking (making an executable from a .o object file). The tables below list the syntax for serial and parallel program compilation.

 

Compiling Serial Programs

Compiler                            File Suffix                 Example

cc   (MIPSpro C compiler)           .h, .c                      cc [options] file[s].suffix
CC   (MIPSpro C++ compiler)         .h, .C, .cc, .cpp, .cxx     CC [options] file[s].suffix
f77  (MIPSpro Fortran 77 compiler)  .f, .for                    f77 [options] file[s].suffix
f90  (MIPSpro Fortran 90 compiler)  .f90, .fpp                  f90 [options] file[s].suffix
gcc  (GNU C compiler)               .h, .c                      gcc [options] file[s].suffix
g++  (GNU C++ compiler)             .h, .cc, .cpp               g++ [options] file[s].suffix
g77  (GNU Fortran 77 compiler)      .f                          g77 [options] file[s].suffix

Appropriate program-name suffixes are required for each compiler. By default, the executable name is a.out; and it may be renamed with the -o option. To run an executable, simply type the name of the executable on the command line (and hit return). When compiling and linking code in a single command, include the linker options at the end of the command as illustrated below:

Compile/Link code: prog.c or prog.f90, naming the executable prog

C         cc -o prog [options] prog.c [linker options]
C++       CC -o prog [options] prog.cc [linker options]
Fortran   f90 -o prog [options] prog.f90 [linker options]

To run the above compiled program interactively, execute:

./prog

The relative path expression "./" tells the shell to look in the present working directory for the executable. It is often used to make sure that an executable of the same name in another directory (as determined by the PATH environment variable) is not executed. Also, if the "." is not in the PATH variable it is necessary to use "./" for the shell to find the executable.
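
For reference, a minimal C source file of the kind the commands above assume might look like the sketch below; the file name prog.c and the printed message are only illustrations. Compile it with, for example, cc -o prog prog.c and run it as ./prog.

    /* prog.c -- minimal example program for the compile/link commands above. */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from Milagros\n");
        return 0;
    }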

Additional information, including descriptions of compiler and linker options, can be found in the man pages for each compiler (e.g. man cc).

Compiling OpenMP Programs

The following table shows how to compile OpenMP programs on milagros using the MIPSpro compilers.

Compiler   prog.c, prog.cpp, prog.f or prog.f90

cc         cc -mp -MP:open_mp=ON [options] prog.c [linker options]
CC         CC -mp -MP:open_mp=ON [options] prog.cpp [linker options]
f77        f77 -mp -MP:open_mp=ON [options] prog.f [linker options]
f90        f90 -mp -MP:open_mp=ON [options] prog.f90 [linker options]
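
As a quick check of the OpenMP environment, a minimal C program along the following lines can be built with the cc line above (the file name omp_hello.c is just an illustration). Set the OMP_NUM_THREADS environment variable before running it to control the number of threads.

    /* omp_hello.c -- minimal OpenMP sketch.
       Compile with:  cc -mp -MP:open_mp=ON -o omp_hello omp_hello.c          */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* Each thread in the parallel region reports its own ID. */
        #pragma omp parallel
        {
            printf("Hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }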

 

 

Compiling Parallel Programs with MPI

On Milagros, the MIPSpro compilers support the compilation of MPI programs. The following table shows the command lines for building MPI programs on Milagros.

Compiling and linking Parallel Programs with MPI

 

Compiler   Type     Example

cc         32-bit   cc -n32 prog.c -lmpi
cc         64-bit   cc -64 prog.c -lmpi
CC         32-bit   CC -n32 prog.cpp -lmpi++ -lmpi
CC         64-bit   CC -64 prog.cpp -lmpi++ -lmpi
f77        32-bit   f77 -n32 -LANG:recursive=on compute.f -lmpi
f77        64-bit   f77 -64 -LANG:recursive=on compute.f -lmpi
f90        32-bit   f90 -n32 -LANG:recursive=on compute.f -lmpi
f90        64-bit   f90 -64 -LANG:recursive=on compute.f -lmpi
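
A minimal MPI program in C, built with one of the cc lines above, might look like the sketch below; the file name mpi_hello.c is just an illustration, and the mpirun invocation in the comment assumes SGI's MPI runtime.

    /* mpi_hello.c -- minimal MPI sketch.
       Compile with, e.g.:  cc -n32 mpi_hello.c -lmpi
       Run with, e.g.:      mpirun -np 4 ./a.out                              */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime         */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank           */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes     */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                         /* shut down the MPI runtime     */
        return 0;
    }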

 

Basic Optimization for Serial and Parallel Programming using OpenMP and MPI

Below are some of the common compiler options that control code generation in relation to optimization and debugging.

Compiler Options   Description

-O[0, 1, 2, 3]     Controls the degree of optimization performed. -O0 disables optimization, while -O3 enables the compiler to optimize aggressively.
-g[0, 1, 2, 3]     Controls the amount of debugging information produced.

 

Loading Libraries

Some of the more useful load flags/options are listed below.

  • Use the -l loader option to link in a library at load time: e.g.
    f90 prog.f -lname
  • This links in the library libname.a, provided it is found in the linker's default library search path or in a directory given by the environment variable LD_LIBRARY_PATH.
  • To add a library directory to the library search path, use the -L option, e.g.
    f90 prog.f -L/mydirectory/lib -lname
  • In the above example, the libname.a library linked in by the user is not in the default search path, so the "-L" option must be specified to point to the libname.a directory.

Graphics and Visualization APIs

 

This section provides a brief introduction to the more commonly used graphics and visualization APIs available on Milagros and how to get started using them. However, this is not intended to be a programming manual. For programming information/tutorials, see the accompanying links and man pages.

 

          OpenGL

 

OpenGL is a standardized, hardware accelerated, platform independent API for interactive 2D/3D computer graphics. Because of OpenGL’s low-level, cross platform nature, it serves as the foundation for many other high-level APIs such as SGI’s Performer and Inventor toolkits.

 

GLUT (the OpenGL Utility Toolkit) is a window system independent library for easing the development of cross platform OpenGL applications.

 

*          For more information about both OpenGL and GLUT visit the OpenGL homepage at www.opengl.org.

 

*          On Milagros type “man GLUT” for the GLUT 3.6 man page.

 

*          For information about a specific OpenGL/GLUT function, see the corresponding man page. For example, to see the man page for the OpenGL function glViewport: type “man glViewport”.

 

*          OpenGL and GLUT header files are located in /usr/include/GL/.

 

*          OpenGL version 1.1 is installed in /usr/lib/ and /usr/lib32/.

 

*          GLUT 3.6 is installed in /usr/lib32/ and /usr/lib64/.

 

*          GLUT 3.7 is installed in /usr/freeware/lib32/ and /usr/freeware/lib64/.

 

*          Example command line for building an OpenGL application on Milagros using the CC compiler and linking with the default libraries in /usr/lib32/:

 

                                    CC <source files> -lglut -lGLU -lGL -lXmu -lX11
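
A minimal GLUT source file that can be built with the command line above might look like the sketch below (the file name gl_tri.c is just an illustration). It opens a double-buffered window and draws a single shaded triangle.

    /* gl_tri.c -- minimal OpenGL/GLUT sketch.
       Build with, e.g.:  CC gl_tri.c -o gl_tri -lglut -lGLU -lGL -lXmu -lX11 */
    #include <GL/glut.h>

    /* Redraw callback: clear the window and draw one shaded triangle. */
    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
            glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
        glEnd();
        glutSwapBuffers();
    }

    int main(int argc, char *argv[])
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(512, 512);
        glutCreateWindow("OpenGL test");
        glutDisplayFunc(display);
        glutMainLoop();   /* enters the GLUT event loop and never returns */
        return 0;
    }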

 

                   OpenGL Extensions

 

OpenGL supports an extension mechanism for making the latest vendor- and hardware-specific capabilities available through the API. The following table lists all available OpenGL extensions on Milagros.

 

GL_EXT_abgr

GL_EXT_blend_color

GL_EXT_blend_logic_op

GL_EXT_blend_minmax

GL_EXT_blend_subtract

GL_EXT_convolution

GL_EXT_copy_texture

GL_EXT_histogram

GL_EXT_packed_pixels

GL_EXT_polygon_offset

GL_EXT_subtexture

GL_EXT_texture

GL_EXT_texture3D

GL_EXT_texture_object

GL_EXT_vertex_array

GL_SGI_color_matrix

GL_SGI_color_table

GL_SGI_texture_color_table

GL_SGIS_detail_texture

GL_SGIS_fog_function

GL_SGIS_multisample

GL_SGIS_point_line_texgen

GL_SGIS_point_parameters

GL_SGIS_sharpen_texture

GL_SGIS_texture_edge_clamp

GL_SGIS_texture_filter4

GL_SGIS_texture_lod

GL_SGIS_texture_select

GL_SGIX_calligraphic_fragment

GL_SGIX_clipmap

GL_SGIX_fog_offset

GL_SGIX_instruments

GL_SGIX_interlace

GL_SGIX_ir_instrument1

GL_SGIX_flush_raster

GL_SGIX_list_priority

GL_SGIX_reference_plane

GL_SGIX_shadow

GL_SGIX_shadow_ambient

GL_SGIX_sprite

GL_SGIX_subdiv_patch

GL_SGIX_texture_add_env

GL_SGIX_texture_lod_bias

GL_SGIX_texture_scale_bias

GL_SGIX_depth_texture

 

 

          OpenGL Multipipe SDK

 

SGI’s OpenGL Multipipe SDK is an API for writing configurable, multi-pipe OpenGL applications. Note: OpenGL Multipipe SDK should not be confused with OpenGL Multipipe. OpenGL Multipipe is an IRIX utility for allowing existing single-pipe OpenGL applications to utilize multiple graphics pipes without having to alter or recompile source code.

 

*          For more information about the OpenGL Multipipe utility see

            www.sgi.com/products/software/multipipe/ or type

            “man Multipipe” on Milagros.

 

*          For more information about OpenGL Multipipe SDK  see        

      www.sgi.com/products/software/multipipe/sdk/.

 

*          Sample source code and makefiles for OpenGL Multipipe SDK are located in /usr/share/Multipipe/.

 

          Performer

 

Performer is SGI’s C/C++ API designed for high performance rendering. Like OpenGL Multipipe SDK, Performer allows the user to utilize multiple graphics pipes. However, Performer does this through a high-level scenegraph based API.

 

Note: Performer is intended for applications where performance takes precedence over ease of coding. If peak performance is not required, then other high-level, single-pipe APIs like Open Inventor or Java3D might be more suitable.

 

The perfly demo installed on Milagros is an excellent example of Performer’s capabilities. For information about using perfly see the man page.

 

*          For more information on Performer visit www.sgi.com/products/software/performer/ or type “man Performer” on Milagros.

 

*          For information about a specific Performer function, see the corresponding man page. For example, to see the man page for the function pfInit: type “man pfInit”.

 

*          A suite of compiled demos is available in

                        /usr/local/demos/Performer/bin/.

 

*          Sample source code and makefiles are available in /usr/share/Performer/.

 

          Inventor

 

Inventor is SGI’s scenegraph based, object-oriented, C++ API for developing interactive 3D applications. Note: unlike Performer, Inventor applications can only be run on a single pipe.

 

*          For more information about Inventor go to
                        http://oss.sgi.com/projects/inventor/ or type “man Inventor” on Milagros.

 

*          For information about a specific Inventor function/class, see the corresponding man page. For example, to see the man page for the C++ class SoCamera: type “man SoCamera”.

 

*          Sample source code and makefiles are located in /usr/share/src/Inventor/.

 

          Volumizer

 

Volumizer is a cross platform, OpenGL based, high-level API designed specifically for volume rendering of large datasets.

 

*          For more information about OpenGL Volumizer see

            http://www.sgi.com/products/software/volumizer/.

 

*          Sample source code and makefiles for Volumizer 1.1 are located in /usr/share/Volumizer/.

 

*          Sample source code and makefiles for Volumizer 2.0 are located in /usr/share/Volumizer2/.

 

          Java3D

 

Java3D is a single-pipe, high-level API that allows interactive 3D graphics to be incorporated into Java applications and applets. 

 

*          For more information about Java3D visit the Java3D homepage or type “man Java3D” on Milagros.

 

*          Java3D demos with source code and makefiles are located in /usr/demos/j3d/.

 

API Summary

 

The following table lists the APIs discussed in this chapter along with their support for multiple graphics pipes, high-level scene management (i.e., a scenegraph), and language availability.

 

API                     Multipipe   Scenegraph   Language

OpenGL                  No          No           C/C++
OpenGL Multipipe SDK    Yes         No           C/C++
Performer               Yes         Yes          C/C++
Inventor                No          Yes          C++
Volumizer               Yes         Yes          C++
Java3D                  No          Yes          Java

 

Display Modes

Milagros supports a variety of video modes because application software varies widely. For example, an application that loads a large data file containing large geometry descriptions would want to use the multi-pipe mode so that more graphics pipelines are available, increasing the amount of graphics memory and the effective graphics capability. As discussed in an earlier section of this document, each graphics pipe contains a pool of graphics memory, and graphics channels are created to host your console or X session by taking chunks of graphics memory from the pipe. With this scenario, you can easily think of 3 modes:

Mode 1 (default mode): a single pipe creating multiple channels,

Mode 2: a single pipe creating a single channel, or multiple pipes each creating a single channel,

and

Mode 3: multiple pipes creating a single channel. (This is called Monster mode and has very specialized uses. It is not a commonly used mode and will not be discussed here; please contact the visualization staff directly if you need further assistance with it.)

The Milagros main console defaults to using 3 channels from 1 pipe. Depending on the mode, these 3 channels can come from the same pipe or from different pipes.

Mode 1 is the most common mode for users who do not need high-performance rendering. Command-line applications and GUI applications that do not depend heavily on huge geometry structures fall into this category. In fact, most of the applications you use, including many OpenGL applications, are in this category, which is why this is the default mode of the Milagros console. All 3 channels on your console come from the same graphics pipeline, much like most dual-headed personal computer systems. In the ACES Visualization Laboratory, this mode is referred to as Single-Pipe Mode.

Mode 2 is much less common than Mode 1, but it is very useful for high-performance applications with heavy geometry loads. When you run efficiently written multi-pipe-capable software, this mode lets you utilize more of the system's graphics hardware and increases throughput. The console, continuous over 3 screens, is driven by 3 different pipes, which gives applications more video memory and more parallelism for geometry processing, so larger geometry sets can be displayed faster. That is the advantage of this mode. The disadvantage is that users may need to write their applications for parallel processing; an application poorly written for this mode can perform worse than it would in Single-Pipe Mode. You may avoid this complication by using libraries, such as CAVElib or “OpenGL Multipipe”, to manage multiple displays for your application. In the ACES Visualization Laboratory, this mode is referred to as Multi-Pipe Mode. You should also know that, within your X session, you cannot drag application windows across contiguous channels (screens). This issue is discussed in the next section.

Launching applications in multi-pipe

Graphics memory is part of a graphics pipeline, so if you are using 6 pipes, you have 6 physically separate pools of graphics memory. The window manager can only handle windows within the same graphics buffer. This means that you have to know on which channel (screen) you want your windows displayed. There are at least 2 basic ways to specify the channels to be used by your application software.

  1. Issuing a command on the channel you want to use:

If you move the mouse to the far right edge of the main desktop screen, the mouse pointer will move over to the consecutive channels. Each console (desktop) has a Toolchest menu at its top left corner. You can go to:

  Toolchest
  Desktop -> “open unix shell”
 
              

Any application you invoke through the Toolchest or through a shell you have opened this way will run on that same console. Some applications know how to display to specific channels on their own; an example is CAVElib applications, such as vGeo, which may take over all display devices.

  2. Specifying the display channel by using an environment variable:

Within a shell, you can specify on which pipe (desktop) your application software will start by issuing:

  > setenv DISPLAY localhost:0.{PipeID}
  where {PipeID} is the integer ID of the desired pipe

You can then issue a command to launch your application on the pipe you specified.
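
Applications can also select a pipe themselves instead of relying on the DISPLAY variable. As a minimal illustration (the file name open_pipe.c and the display string ":0.2" are just examples; build with cc and -lX11), an Xlib program can open a specific screen directly:

    /* open_pipe.c -- minimal Xlib sketch: open a specific pipe (X screen).
       Build with, e.g.:  cc open_pipe.c -o open_pipe -lX11                   */
    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        /* ":0.2" means X server 0, screen (pipe) 2; change the number as needed. */
        Display *dpy = XOpenDisplay(":0.2");
        if (dpy == NULL) {
            fprintf(stderr, "could not open display :0.2\n");
            return 1;
        }
        printf("opened display with %d screen(s)\n", ScreenCount(dpy));
        XCloseDisplay(dpy);
        return 0;
    }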

Display Mode Example

On Milagros, several pre-defined display modes are available for users through a GUI. The components of each display mode are:

  1. Pipe- single, multi
  2. Channel- single, multi
  3. Resolution per Channel- 1280x1024(SXGA) @60Hz, 1024x768(XGA) @96Hz
  4. Geometry Mode- flat screen projection mode, curved screen projection mode
  5. Stereo Mode- single-eyed view, stereo view

As a default, Single-Pipe Stereo mode is used on Milagros. It is called “Single-Pipe Stereo” because the channels that compose the main console all come from the same pipe and stereo view mode is enabled (with proper stereo application software). In the ACES Visualization Laboratory, the main console is mirrored by default to the front-side projection wall. The projector screens on the wall are arranged as:

Screen arrangement on the front-side curved wall (view from the center of the lab):

Left 1/3 of screen    Blend    Center 1/3 of screen    Blend    Right 1/3 of screen
Screen 2                       Screen 1                         Screen 3
 

Milagros has 6 graphics pipes. As the arrangement above shows, no more than 3 pipes/channels are used on the front-side projection wall. The output from the remaining pipes is often displayed on the back-side projection wall.

 

Rear Projected Tiles on the back-side wall (view from the center of the lab):

Screen 4    Screen 6    Screen 8    Screen 10    Screen 12
Screen 5    Screen 7    Screen 9    Screen 11    Screen 13

Given the screen arrangement above, the details of single-pipe mode are:

Screen ID   Pipe/Channel ID   Geometry   Channel resolution   Refresh Rate   Stereo sync

Screen 1    P0C0              Flat       1024x768(XGA)        96Hz           Stereo
Screen 2    P0C1              Flat       1024x768(XGA)        96Hz           Stereo
Screen 3    P0C2              Flat       1024x768(XGA)        96Hz           Stereo
Screen 4    P1C0              Flat       1280x1024(SXGA)      60Hz           Mono
Screen 5    P1C1              Flat       1280x1024(SXGA)      60Hz           Mono
Screen 6    P2C0              Flat       1280x1024(SXGA)      60Hz           Mono
Screen 7    P2C1              Flat       1280x1024(SXGA)      60Hz           Mono
Screen 8    P3C0              Flat       1280x1024(SXGA)      60Hz           Mono
Screen 9    P3C1              Flat       1280x1024(SXGA)      60Hz           Mono
Screen 10   P4C0              Flat       1280x1024(SXGA)      60Hz           Mono
Screen 11   P4C1              Flat       1280x1024(SXGA)      60Hz           Mono
Screen 12   P5C0              Flat       1280x1024(SXGA)      60Hz           Mono
Screen 13   P5C1              Flat       1280x1024(SXGA)      60Hz           Mono

In the above example, each Pipe/Channel ID pair is routed to a particular Screen ID. Although the above configuration is used as the default in the lab, you may route the video output of any channel to any display device in the lab.

Switching Display Modes

As a default, the Milagros display mode is set to single-pipe/multi-channel for the front screens. For normal use, you do not have to change the display mode. If you do need to switch it, a GUI utility provides the pre-defined display modes commonly needed in the lab. Depending on the display mode you choose, you may also need to switch display modes on the projector side. If this is the case, use the utility on the Trimension handset, located at the main workstation. Further explanation of Trimension is covered in the ACES Visualization Lab user guide.

To start the mode switching utility, type:

        > ~vislab/Demos/Video_Modes

in a shell.  This will give you a toolchest menu with selections of available video modes.

Demos

We have several demos on Milagros, built around our users’ projects. Demos are available to all users.

To start the demo, type:

        > ~vislab/Demos/sc

in a shell.  This will give you a toolchest menu with selections of available demos.

 

Applications, Libraries and Software Packages

Users can run application software on Milagros either at a console in the ACES Visualization Lab or through X-forwarding (see the section Launching applications in multi-pipe under Display Modes). More details on the software listed here can be found in the UNIX man pages and at the application websites listed below.

  1. Amira is commercial visualization software with a relatively easy-to-use GUI that allows visualization and processing of 3D data sets in medicine, biology, physics, and engineering. It includes automatic and interactive segmentation tools, reconstruction algorithms to create polygonal models from segmented objects, and many other tools relevant to CT and volumetric data sets. More information about Amira can be found at http://www.amiravis.com.

To set up your environment properly to run Amira, type:

> module load amira

Invoke Amira with:

> amira

or

> /usr/local/Amira/bin/start

  2. AVS 5.6 is a visualization tool tailored for scientific researchers from Advanced Visual Systems, Inc. AVS is built on a visual programming paradigm that makes it easy to visualize scientific data without requiring extensive programming knowledge or expertise in advanced visualization techniques. A large collection of vendor-supplied and public domain modules is available from the International AVS Center at the North Carolina Supercomputing Center. AVS has standard subsystems for handling image, geometry, volume, and chemistry data and a graph manipulation package for plotting.

To set up your environment properly to run AVS, type:

> module load avs

Invoke AVS with:

> avs

Online references: more information about AVS 5 can be found on TACC’s online pages at http://www.tacc.utexas.edu/resources/software/applications.php#AVS%205, and users can find online man pages by typing man avs at the prompt. Help is also available from within the package.

University of Texas at Austin students, faculty, staff, and other authorized agents of the university can get a free copy of AVS 5.6 by completing the form located at http://www.tacc.utexas.edu/resources/software/avs/

  3. AVS/Express is a general-purpose data-flow visualization tool. AVS/Express also provides an object-oriented environment for developing interactive visualization applications for use with data from science and engineering sources. Beginning users should refer to the Getting Started and Using AVS/Express guides.

To set up the environment properly to run AVS/Express, type:

> module load avs_express

Invoke AVS/Express with:

> avs_express

Online references: more information about AVS/Express can be found on TACC’s online pages at http://www.tacc.utexas.edu/resources/software/applications.php#AVS%205, and users can find online man pages by typing man avs at the prompt. Help is also available from within the package.

University of Texas at Austin students, faculty, staff, and other authorized agents of the university can get a free copy of AVS 5.6 by completing the form located at http://www.tacc.utexas.edu/resources/software/avs/

  4. Ferret is an interactive computer visualization and analysis environment designed to meet the needs of oceanographers and meteorologists analyzing large and complex gridded data sets. It can transparently access extensive remote Internet databases using OPeNDAP (formerly known as DODS); see http://www.unidata.ucar.edu/packages/dods/. Ferret was developed by the Thermal Modeling and Analysis Project (TMAP) at PMEL in Seattle to analyze the outputs of its numerical ocean models and compare them with gridded, observational data. The model data sets are generally multi-gigabyte in size with mixed 3- and 4-dimensional variables defined on staggered grids. Ferret offers a Mathematica-like approach to analysis; new variables may be defined interactively as mathematical expressions involving data set variables. Calculations may be applied over arbitrarily shaped regions. Fully documented graphics are produced with a single command. More information about Ferret can be found at http://ferret.pmel.noaa.gov/Ferret/.

To set up your environment properly to run Ferret, type:

> module load ferret

Invoke Ferret with:

> ferret

If you are new to Ferret, you may want to try the tutorial. At the “yes?” prompt, type:

yes? GO tutorial

  5. OpenDX is a full-featured software package for the visualization of scientific, engineering, and analytical data. Its open system design is built on standard interface environments, and its sophisticated data model provides users with great flexibility in creating visualizations. More information about OpenDX can be found at http://www.opendx.org.

To set up the environment properly to run OpenDX, type:

> module load dx

Invoke OpenDX with:

> dx

You may start the dx tutorial with:

> dx -tutor

Another good link for the dx guide is:

http://opendx.npaci.edu/docs/html/allguide.htm

  6. ParaView was created by Kitware in conjunction with Jim Ahrens of the Advanced Computing Laboratory at Los Alamos National Laboratory (LANL). Contributors and developers of ParaView currently include Kitware, LANL, Sandia National Laboratories, and the Army Research Laboratory. ParaView is funded by the US Department of Energy ASCI Views program as part of a three-year contract awarded to Kitware, Inc. by a consortium of three National Labs - Los Alamos, Sandia, and Livermore. The goal of the project is to develop scalable parallel processing tools with an emphasis on distributed memory implementations. The project includes parallel algorithms, infrastructure, I/O, support, and display devices. One significant feature of the contract is that all software developed is to be delivered open source; hence ParaView is available as an open-source system. ParaView runs on distributed and shared memory parallel systems as well as single processor systems, and it has been successfully tested on Windows, Linux, and various Unix workstations and clusters. Under the hood, ParaView uses the Visualization Toolkit as the data processing and rendering engine and has a user interface written using a unique blend of Tcl/Tk and C++. More information about ParaView can be found at http://www.paraview.org. A tutorial can be found on the TeraGrid site at http://www.uc.teragrid.org/community/viz/paraview.html.

To set up the environment properly to run Paraview version 1.8.5, type:

> module load paraview/1.8.5

To set up the environment properly to run Paraview version 1.6, type:

> module load paraview/1.6

To load the current version of paraview, you can simply type:

> module load paraview

Invoke ParaView with:

> paraview

  7. Vis5D is a system for interactive visualization of large 5-D gridded data sets such as those produced by numerical weather models. One can make isosurfaces, contour line slices, colored slices, volume renderings, etc. of data in a 3-D grid, then rotate and animate the images in real time. There is also a feature for wind trajectory tracing, a way to make text annotations for publications, support for interactive data analysis, and more. More information about Vis5D can be found at http://www.ssec.wisc.edu/~billh/vis5d.html.

To set up the environment properly to run Vis5D, type:

> module load vis5d

You can invoke vis5d on a data file with:

> vis5d filename.vis5d

  8. VMD is a molecular visualization program for displaying, animating, and analyzing large bio-molecular systems using 3-D graphics and built-in scripting. More information about VMD can be found at http://www.ks.uiuc.edu/Research/vmd/.

To set up the environment properly to run vmd, type:

> module load vmd

  9. The Visualization ToolKit (VTK) is an open source, freely available software system for 3D computer graphics, image processing, and visualization used by thousands of researchers and developers around the world. VTK consists of a C++ class library, and several interpreted interface layers including Tcl/Tk, Java, and Python. Professional support and products for VTK are provided by Kitware, Inc. VTK supports a wide variety of visualization algorithms including scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as implicit modelling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation. In addition, dozens of imaging algorithms have been directly integrated to allow the user to mix 2D imaging / 3D graphics algorithms and data. The design and implementation of the library has been strongly influenced by object-oriented principles. More information about VTK can be found at http://public.kitware.com/VTK/.

To set up the environment properly to use vtk, type:

> module load vtk

 


http://www.tacc.utexas.edu

created: January 25th 2005