
DARPA Image Understanding Motion Benchmark

Version 2, Release 2.0

About the Benchmark
Obtaining the Benchmark
Building the Benchmark
The Directory Structure
Running the Benchmark
Creating New Benchmarks
Publishing the Results
Makefile Options
Publications


About the Benchmark

Credits & Conditions

This Image Understanding Benchmark was created for DARPA by the
  Specialized Parallel Architectures Research Group
  Department of Computer Science
  University of Massachusetts
  Amherst, MA  01003

Copyright 1986-1998 by the Department of Computer Science of the University of Massachusetts at Amherst, Massachusetts.
Permission is hereby granted for research and educational use only.

You may not transfer this software to any other organization or individual without the expressed, written permission of the Department of Computer Science.

This software is made available on an as-is basis. No warranty of correctness is either expressed or implied by its release. Neither the University of Massachusetts nor the authors shall be held liable for any damages resulting from its use.

A paper describing the benchmark is available.
For questions about the benchmark, contact Chip Weems (weems@cs.umass.edu).
For questions about the code, contact Jim Burrill (burrill@cs.umass.edu).

The following people worked on this project:
Sunit Bhalla, Jim Burrill, Steve Dropsho, Martin Herbordt, Rohan Kumar, Mike Rudenko, Mike Scudder, and Glen Weaver.
The image format used was originally developed for the Low Level Vision System by Jim Burrill and Robert Heller.

Description

The task performed by this benchmark is the recognition and tracking of an approximately specified, 2 1/2-dimensional "mobile" sculpture moving in a cluttered environment, given a series of synthetic images from simulated intensity and range sensors.

These scenes follow the same pattern as the static version of the DARPA IU Benchmark, but in the dynamic benchmark, the mobile and chaff are blown around the scene by an idealized wind to produce predictable motion. The motion involves movement of the entire mobile as a unit, and movement of its individual components. The motions are both translational and rotational, and they are controlled by reasonably realistic physical constraint models.

The dynamic benchmark is meant to supplement, rather than replace, the static benchmark, which tests system performance at the kernel-operation level within the framework of a larger task. We recommend that developers begin by implementing the static benchmark on their machines; the motion benchmark can then be constructed more easily by reusing code modules from the static benchmark.

The goal of the dynamic benchmark is to extend the testing of system performance for a longer period of time so that, for example, caches and page tables will be filled and achieve steady-state behavior. The benchmark also explores I/O and real-time capabilities of the systems under test, and involves more high-level processing. Thus, the combination of the two benchmarks allows developers to analyze the performance and behavior of systems at both a fine level of granularity on a single burst of processing, and at a coarser granularity under a sustained load.

Unlike the static benchmark, there are no fixed data sets (except for a small test set called "sample"). Given the number of frames that must be processed in a single test, it is too unwieldy to prepare the input data for distribution. Instead, we have developed a data set generator that can be used to repeatably produce the same image sequence from a set of input parameters.

Release 2 of Version 2 of the dynamic benchmark represents a major reorganization of the system code and major changes to the tracking logic.


Obtaining the Benchmark

You may ftp the benchmark from
ftp://spa-www.cs.umass.edu/pub/IUdynamic_benchmark/V2R2_2.tar.Z.
Its size is approximately 0.5 MB compressed. Uncompress the file with the Unix uncompress utility, then extract the files with the Unix tar utility.
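For example, assuming the archive has been downloaded into the current directory (the name of the directory that tar creates may differ):

  uncompress V2R2_2.tar.Z
  tar xf V2R2_2.tar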


Building the Benchmark

Your system must be ANSI C and POSIX 1003.1 compliant.
To build the benchmark,
  1. Create a directory in this directory using whatever name you choose. (We normally use the computer architecture type name, such as "sparc".)
  2. Create a sub-directory called bin and one called output.
  3. In the bin directory, create a Makefile. You may be able to copy one from another processor's bin directory.
  4. In the bin directory, create a defines.h file. This include file defines some macros that describe your system. You will probably be able to copy one from another architecture.
  5. Then run
      make compile
    
    (You may need to use GNU make, which is available from prep.ai.mit.edu by anonymous ftp.) This builds the executable file "Benchmark" using the C compiler specified in your Makefile.
  6. In the bin directory, create a mach.txt file. Copy one from another architecture and then edit it. This file describes your installation. The benchmark will not run unless this file has been completed. (A sketch of the whole sequence appears below.)
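As a concrete sketch of steps 1 through 6, here is a hypothetical setup for a SPARC system. The names "sparc" and "otherarch" are placeholders, and each copied file must still be edited for your compiler and installation:

  mkdir sparc sparc/bin sparc/output
  cd sparc/bin
  cp ../../otherarch/bin/Makefile .     # edit for your C compiler
  cp ../../otherarch/bin/defines.h .    # edit the system-description macros
  make compile                          # builds the Benchmark and GenSeq programs
  cp ../../otherarch/bin/mach.txt .     # edit to describe your installation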


The Directory Structure

src
contains the C source code for the benchmark

include
contains the .h files for the C source code

benchmarks
contains a directory for each pre-defined benchmark
Except for the "sample" directory, these directories are initially empty; the GenSeq program creates the necessary files for each benchmark.

generators
contains the "generator" input files from which GenSeq produces a particular benchmark, including all of the images for that benchmark
The "sample" benchmark does not have a generator file.

makefiles
contains some makefile include files

html
contains this documentation

xxx
contains the files related to the xxx computer architecture, where xxx is the name of the directory you created for your machine (e.g., "sparc")


Running the Benchmark

To run the "sample" benchmark, in directory "./xxx/bin" execute

 ./Benchmark ../../benchmarks/sample/sample.setup -t 1
The images that come with this distribution work with the included setup file "sample.setup".

The benchmark goes through three phases. First, it searches for a mobile in successive images until a mobile is identified. This is essentially a static image interpretation task. The code is nearly identical to the first version of the benchmark. After a mobile is identified, the benchmark uses the next intensity and depth images to bootstrap its velocity vectors.

After the first two frames, the benchmark tracks the mobile for the remaining frames. After each frame is processed, the benchmark prints the number of rectangles found and the number of rectangles hallucinated. If you select the X window display, you will also see the found rectangles outlined in green.

Usage

  ./Benchmark [-p n] [-t trace] [-r rect] setup_file
-p n
the number of processors to be used
n == 1 is the default and the only allowed value.
-t trace
the type of run
If trace is non-zero, the benchmark uses X windows to display intermediate results. A value of 0 is used for obtaining timing information. A value greater than 1 causes additional trace information to be printed to stdout and/or the log file.

During the searching and bootstrapping phases, the benchmark image display uses orange to indicate rectangles extracted from strong cues, and blue for rectangles on the probe list. The tracking phase uses green for identified rectangles, orange for hallucinated rectangles, and blue for lost rectangles. By default, the benchmark outlines rectangles with single-pixel-wide lines, but the SLIW environment variable can be used to control line thickness.

-r rect
specify rectangle to trace if trace > 0
If trace > 0, trace is set to 3 during tracking for rectangle rect.
setup_file
file containing the parameters for this benchmark

The benchmark writes to the files specified in the setup file. The sample.setup file supplied does not cause any result images to be written. The timing information is appended to a file called "../sample.data" when trace is 0. A human-readable version is written to "../sample.log".
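For example, from the ./xxx/bin directory (the SLIW setting is illustrative; only the variable's purpose, controlling line thickness, is documented):

  ./Benchmark ../../benchmarks/sample/sample.setup -t 0    # timing run; appends to ../sample.data
  ./Benchmark ../../benchmarks/sample/sample.setup -t 2    # X display plus extra trace output
  SLIW=3 ./Benchmark ../../benchmarks/sample/sample.setup -t 1    # hypothetical: wider rectangle outlines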


Creating New Benchmarks

Five different benchmark data sets are supplied with this benchmark. If you wish to use any but the "sample" benchmark, you will first have to build the set of images and other data files for that benchmark. This is accomplished by running the GenSeq program. For example,

   ./GenSeq ../../generators/twist.gen -G
will generate the image files, model file, and setup file for the "twist" benchmark. (You may also execute make make_twist to accomplish this.)

If you leave off the -G parameter, GenSeq will bring up an X window display and allow you to select the images that should be part of the image sequence for the benchmark. It is recommended that you NOT change any of the five existing benchmarks.

To create a new benchmark, make a copy of one of the files in the "generators" directory and then edit this new file. You will want to change the pathnames for the files that the benchmark will use. Make sure that these pathnames reference existing directories.

To generate a different benchmark, change the "chaff_state" and "rect_state" values. These values control the random number generator. You may also change any of the other values such as "rigid_pendulum" or "rectangle_twist_max_degrees".

Note - changing these values may result in a mobile that cannot be found, or in mobile motion that the benchmark is not able to track.

Once you have created the "generator" file, use the GenSeq program interactively to generate and select the images. It will also create the model file and setup file for the new benchmark.
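Putting these steps together, a hypothetical new benchmark named "mytest" might be created as follows (the name and the edited values are illustrative):

  cp ../../generators/twist.gen ../../generators/mytest.gen
  # edit mytest.gen: fix the pathnames; change chaff_state, rect_state, etc.
  ./GenSeq ../../generators/mytest.gen    # interactive: select the images for the sequence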

Usage

  ./GenSeq [[-h] | [-l] [-G]] generator_file
The parameters for GenSeq are
-h
display help
-l
list initial mobile
-G
generate images and models
If -G is not specified, the program runs interactively to allow you to specify which images should be included in the image sequence.
generator_file
file of image sequence generation parameters
The generator file will be created if it doesn't exist.
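For instance, mirroring the form of the earlier twist example:

  ./GenSeq ../../generators/twist.gen -l    # list the initial mobile for the "twist" benchmark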


Publishing the Results

We would like to publish the results of this benchmark for many different architectures. We have already run it on nine different systems at the University of Massachusetts. If you would like your system included, run the long_hard benchmark with a trace value of 0 and e-mail the resulting long_hard.data file to burrill@cs.umass.edu. Your participation will be appreciated.
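Assuming the long_hard files have been generated (for example, via make make_benchmarks) and follow the same layout as the sample benchmark, the timing run would be (the setup-file path is inferred from that layout):

  ./Benchmark ../../benchmarks/long_hard/long_hard.setup -t 0

This appends the timing results to ../long_hard.data, which is the file to e-mail.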


Makefile Options

Two makefile-include files provide the various operations supported. The makefiles/compile.mak file controls compilation and linking. The makefiles/images.mak file controls benchmark generation and execution. All make operations should be performed in your ./xxx/bin directory.

make all
same as
  make compile
  make make_benchmarks
  make run_benchmarks
make compile
compile the Benchmark and GenSeq programs
make clean
remove all .o, .a, and executables
make make_benchmarks
create the files for the "simple", "twist", "pendulum", "long_easy", and "long_hard" benchmarks
make make_simple
create the files for the "simple" benchmark
(make make_twist, etc)
make run_benchmarks
execute the "simple", "twist", "pendulum", "long_easy", and "long_hard" benchmarks
make run_simple
execute the "simple" benchmark
(make run_twist, etc)
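
For example, a complete cycle on a freshly configured architecture directory, run from ./xxx/bin:

  make compile            # build the Benchmark and GenSeq programs
  make make_benchmarks    # generate images and data for all five benchmarks
  make run_benchmarks     # execute all five benchmarks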


(Last changed: June 25, 1999.)