The Spatial Media Group
Julian Villegas and Michael Cohen
University of Aizu
Immersive Representation of
Musical Scale Properties
Visitors will wear chromastereoptic eyewear and wireless headphones to
see and hear the properties of musical scales as the scales are
dynamically stretched or compressed using the Helical Keyboard, a
Java3D application developed in our laboratory.
By interacting with this rich interface,
the user can experiment with the dual nature of the
musical scale (linear and cyclical) as well as the principles
of musical scale construction.
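The scale's dual nature can be illustrated as a pitch helix, in which chroma maps to angle (cyclical) and pitch height to elevation (linear). The sketch below is a hypothetical mapping for illustration only, not the Helical Keyboard's actual geometry; the function name and parameters are assumptions:

```python
import math

def helix_position(midi_note, radius=1.0, notes_per_turn=12, rise_per_turn=1.0):
    """Map a MIDI note number onto a helix: one full turn per octave.

    Notes an octave apart share the same angle (cyclical chroma)
    but differ in height (linear pitch).
    """
    turns = midi_note / notes_per_turn      # octaves above MIDI note 0
    angle = 2 * math.pi * turns             # chroma -> angle around the helix
    x = radius * math.cos(angle)
    y = radius * math.sin(angle)
    z = rise_per_turn * turns               # pitch height -> elevation
    return (x, y, z)

# C4 (60) and C5 (72) align vertically: same (x, y), one turn apart in z.
c4 = helix_position(60)
c5 = helix_position(72)
```

Stretching or compressing the scale then corresponds to changing `notes_per_turn` or `rise_per_turn`.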
Contact: julovi@yahoo.com

Virtual Experiences
Research Group
Ben Lok
University of Florida
Virtual Patients
This project allows medical students to practice communication skills
by interviewing a virtual patient. Students interact naturally with
life-sized virtual people (projected onto a wall) using speech and
gestures. The system tracks the user's head, body pose, and gestures,
and uses speech recognition software to drive a system that presents a
virtual experience similar to working with standardized patients (paid
actors used to train medical students).

Mixed Environments for Rapid
Generation of Engineering Design
We have developed a pipeline for rapidly integrating real objects,
such as tools and parts in engineering designs, into a mixed
environment (ME). Instead of the weeks once needed to model, track,
and integrate objects, items such as pliers and circuit boards can now
be brought into a ME in hours. The system uses a combination of a
laser scanner, multiple-camera tracking, and an HMD and TabletPC for
interaction. We have partnered with engineers at NASA Langley Research
Center to obtain content and end-user feedback on our ME system.
Contact: lok@cise.ufl.edu

Augmented Reality Lab
Oliver Bimber and Anselm Grundhoefer
Bauhaus University Weimar
Compensating Indirect Scattering
for Immersive and Semi-Immersive Projection Displays
Concavely shaped projection screens, such as CAVEs, two-sided
workbenches, domes, or cylinders scatter a fraction of light to other
screen portions. The amount of indirect illumination adds to the
directly projected image and causes the displayed content to appear
partially inconsistent and washed out. We have developed a reverse
radiosity method that compensates first-level and higher-level
secondary scattering effects in real-time. The images appear more
brilliant and uniform when reducing the scattering contribution. A
numerical solution is approximated with Jacobi iteration for a
sparse-matrix linear equation system on the GPU. Efficient data
structures allow packing the required data into textures which are
processed by pixel shaders. Frame-buffer objects are used for a fast
exchange of intermediate iteration results, and enable computations
with floating point precision. Our algorithm can be optimized for
either quality or performance.
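The Jacobi step the text describes can be sketched in a few lines. This is a minimal pure-Python illustration of Jacobi iteration on a small diagonally dominant system, not the deployed solver, which packs the data into textures and runs the same per-row update in pixel shaders:

```python
def jacobi(A, b, iterations=50):
    """Solve A x = b by Jacobi iteration.

    Each pass computes every x[i] from the previous iterate only, so
    all rows can be updated independently in parallel -- the property
    that makes the method map well onto GPU pixel shaders.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x_new = [0.0] * n
        for i in range(n):
            # Sum of off-diagonal contributions using the old iterate.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

# Diagonally dominant example system; Jacobi converges quickly here.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = jacobi(A, b)  # converges toward [1.0, 1.0, 1.0]
```

In the scattering-compensation setting, `b` would hold the desired image intensities and `A` the form factors coupling screen patches.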
Contact: oliver.bimber@medien.uni-weimar.de


Virtual Environments Lab
Albert "Skip" Rizzo, Jarrell Pair and Ken Graap
Institute for Creative Technologies (U of
Southern California) and Virtually Better Inc.
A Virtual Reality Exposure Therapy
Application for Iraq War Post Traumatic Stress Disorder
The USC Institute for Creative
Technologies (ICT) has initiated a
project that is creating an immersive virtual reality system for the
treatment of Iraq War veterans diagnosed with combat-related Post
Traumatic Stress Disorder (PTSD). The treatment environment takes a
creative approach to recycling and extending the virtual assets
initially built for the combat tactical simulation and commercially
available Xbox game Full Spectrum Warrior.
The first version of the application resembles a Middle Eastern city
with outlying village and desert areas. The scenario also supports a
variety of user perspectives, including walking alone, walking within
a patrol of flocking virtual soldiers, and riding inside a vehicle
(e.g., a HUMVEE or helicopter).
Contact: arizzo@usc.edu


VRLab
Dennis Brown and Mark Livingston
Naval Research Lab
Virtual Targets for Mortar Training
Training and rehearsal are vital to maintaining warfighting
capabilities. In live fire training, the cost of destroying real
targets
is prohibitive. Substitutes are non-reactive, stay in fixed locations,
and rarely resemble real targets. This limits the quality of education;
trainees are not given live fire training for firing upon moving,
reactive targets. We have built a prototype augmented reality system
for fire support team training. Head-mounted displays and video
touchscreens allow trainees and trainers to view and control synthetic
forces that appear to exist in, and interact with, the real world.
Vertical Vergence Calibration for
Augmented Reality Displays
Stereo and bi-ocular head-mounted displays (HMDs) require the user to
fuse two images into a coherent picture of the three-dimensional world.
A vertical disparity in the graphics causes diplopia for users trying
to
fuse the real and virtual objects simultaneously. We implement three
methods to measure and correct this disparity, and assess them on a
collection of optical see-through HMDs of a single model.
Contact: dbrown@ait.nrl.navy.mil
ID-Imag
Florian Geffray, Clément Ménier, Jean-Sébastien Franco,
Jérémie Allard, Bruno Raffin, Edmond Boyer
INRIA Rhône-Alpes
MV Platform: A Real-Time
Multi-Video Environment
This platform presents a scalable architecture to compute, visualize,
and interact in real time with dynamic, textured 3D models of real
scenes. The architecture is designed for mixed reality applications
that require such dynamic models, such as tele-immersion. The system
is built from three main components: image acquisition, based on
standard FireWire cameras; model computation, based on a distribution
scheme over a PC cluster and an optimal shape-from-silhouette
algorithm; and model visualization, which can be achieved with
multiple projectors. The distribution scheme ensures the scalability
of the system, interactive frame rates, and low latency.
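The shape-from-silhouette idea behind the model-computation stage can be sketched as visual-hull carving: a voxel survives only if it projects inside every camera's silhouette. The orthographic toy example below illustrates the principle only; it is not the group's optimal algorithm, and all names and the two-camera setup are assumptions:

```python
def carve_visual_hull(silhouettes, project, grid):
    """Visual-hull carving: keep only the voxels whose projection
    falls inside every camera's silhouette mask."""
    hull = []
    for voxel in grid:
        inside_all = True
        for cam, mask in enumerate(silhouettes):
            row, col = project(cam, voxel)
            if not mask[row][col]:
                inside_all = False
                break
        if inside_all:
            hull.append(voxel)
    return hull

# Toy setup: a 3x3x3 voxel grid seen by two orthographic cameras.
grid = [(x, y, z) for x in range(3) for y in range(3) for z in range(3)]
mask_top   = [[x == 1 for _ in range(3)] for x in range(3)]  # looks along z; mask[x][y]
mask_front = [[z == 1 for z in range(3)] for _ in range(3)]  # looks along y; mask[x][z]

def project(cam, voxel):
    x, y, z = voxel
    return (x, y) if cam == 0 else (x, z)

# Only the column of voxels with x == 1 and z == 1 survives both silhouettes.
hull = carve_visual_hull([mask_top, mask_front], project, grid)
```

Because each voxel is tested independently, the grid can be split across cluster nodes, which is what makes the distribution scheme scale.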
Contact: bruno.raffin@imag.fr
3D Interaction Group
Doug A. Bowman and Denis Gracanin
Virginia Tech
Display Size and Resolution
Effects in Information Rich VEs
We
will demonstrate an Information-Rich Virtual Environment we designed as
a testbed to evaluate how display size and resolution affect task
performance. We compared VisBlocks (a large-sized high-resolution
display) with a rear-projected screen (a large-sized low-resolution
display) and an IBM T221 LCD monitor (used as either small-sized
high-resolution or small-sized low-resolution display).
Contact: bowman@vt.edu
IRVE Applications and Components
Information display components are presented in the context of a
biomedical visualization of an immune system simulation (PathSim) as
well as a cell and chemical environment (CML). These techniques are
being evaluated and improved to increase the information bandwidth
between the user and the visualization.
Contact: bowman@vt.edu
3D Cloning Techniques for the Design
of Building Structures
Traditional 3D interaction techniques are not adequate for the design
of complex scenes containing hundreds of objects. We will demonstrate
new techniques for the task of cloning, allowing users to efficiently
and precisely create complex, repetitive structures in an immersive VE.
Contact: bowman@vt.edu
SSWIM: Scaled and Scrolling World In
Miniature
The typical World-in-Miniature (WIM) technique has limited utility in
worlds of widely varying scale. The Scaled and Scrolling WIM (SSWIM)
overcomes this limitation: successive design iterations led to a
technique that allows scaling and scrolling without degrading user
performance.
Contact: bowman@vt.edu
Navigation Techniques for Multiscale
Virtual Environments
VEs that require viewing and interaction with the scene at multiple
scales require specialized navigation techniques to allow the user to
travel both between and within levels of scale. We will demonstrate
several usable techniques in the context of an immersive VE for human
anatomy education.
Contact: bowman@vt.edu
A Tangible User Interface System for
CAVE Applications
This work presents a new 3D user interface system for a CAVE
application based on Tangible User Interfaces. Card-like props tracked
via the ARToolkit are used as input devices, and no other wired
electrical devices are required. Based on this interface, users can
explore fundamental 3D interaction tasks in a CAVE system.
Contact: gracanin@vt.edu
Hirose, Hirota and Tanikawa Lab
Shin'ichiro Eitoku
University of Tokyo
Electromyogram Interface
An electromyogram (EMG) interface has been implemented in the CABIN
immersive multiscreen display in our laboratory.
A demonstration of the EMG interface will be presented, in which the
subject performs simple control of a virtual object on a PC monitor.

Wearable Olfactory Display
In this research, we constructed and evaluated a wearable olfactory
display that presents odor information in an outdoor environment. We
will show two types of wearable olfactory display. The first
prototype presents odors by using an air pump to convey scented air
to the user's nose through tubes. The second prototype uses an inkjet
head device to eject minute odor droplets; it also detects the user's
breathing pattern to find the best timing for droplet ejection.

Real World Video Avatar:
Rotational Holographic Display
To realize a photo-realistic avatar in the real world, we propose a
new approach that extends the concept of the video avatar into the
real world. By changing the images on the display panel according to
the direction the display is facing, the system can present a video
avatar that can be seen from any direction. Based on this concept, we
developed a prototype system consisting of a tablet PC with a privacy
filter mounted on a rotating mechanism.

Controllable Water Particle Display
To build a system in which virtual and real space coexist naturally,
realizing the spatial nature of the world, we developed a prototype
that uses water drops as particles. In this system, a cluster of
water drops falling from a tank is designed to form a planar surface.
Patterns of images are then projected from below onto the falling
water drops. By projecting a set of tomographic images according to
the positions of the water drops, three-dimensional objects can be
observed without wearing any special apparatus.
Contact: eitoku@cyber.rcast.u-tokyo.ac.jp

Future Computing Lab
Larry F. Hodges
University of North Carolina Charlotte
Let's Talk About UNC Charlotte: A
Conversation with a Virtual Human
The Future Computing Lab at UNC Charlotte has been working on a variety
of
research projects in the areas of human-virtual human interaction, 3D
user
interfaces, VR travel techniques, computer vision, and computer game
design.
Visitors can learn more about these and other projects by chatting with
one
of our interactive virtual characters.
Contact: lfhodges@uncc.edu

Laboratory of Integrated
Systems (LSI)
Marcio Cabral, Celso Kurashima, Victor Gomes, Olavo Belloc, Leonardo
Nomura, Fernanda Andrade, Daniel Balciunas, Lucas Dulley, Breno
Santos, Mario Nagamura, Guido Lemos, Luiz Gonçalves, Roseli Lopes,
Marcelo K. Zuffo
Escola Politécnica da Universidade de São Paulo
Panoramic Image Capture and Display
Virtual exploration of unknown or dangerous places can be of
significant advantage. Instead of sending human beings, one can use
robots with cameras to explore such places. In our system, a ring of
cameras mounted on a robot platform captures a 360-degree field of
view.
Using computer vision algorithms, we are able to project these images
in a virtual reality environment such as a 5-sided CAVE.
This demonstration will show a simulation of how the images are
projected in a CAVE environment. In the simulation, the user explores
the panoramic image on a computer monitor and, with the help of a
joystick, controls the robot carrying the cameras.
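How a camera ring covers the full circle can be sketched as a lookup from viewing azimuth to a camera and pixel column. The sketch below assumes hypothetical evenly spaced cameras with abutting fields of view; the actual system's computer-vision stitching must also handle overlap and lens distortion:

```python
def ring_lookup(azimuth_deg, num_cameras=8, image_width=640):
    """For a ring of num_cameras evenly spaced cameras jointly covering
    360 degrees, return (camera_index, pixel_column) for an azimuth.

    Assumes each camera covers an equal angular slice with no overlap.
    """
    az = azimuth_deg % 360.0
    fov = 360.0 / num_cameras            # per-camera horizontal field of view
    cam = int(az // fov)                 # which camera sees this azimuth
    within = (az - cam * fov) / fov      # fraction 0..1 across that image
    col = min(int(within * image_width), image_width - 1)
    return cam, col
```

A panorama renderer would call this per output column to pick the source image and column to sample.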
Contact: mcabral@lsi.usp.br