Thursday, July 8, 2010

3rd Day of ISUVR 2010 - Panel talk


The invited speakers and Sebastien Duval held a panel discussion on the definitions and applications of digital ecosystems.

3rd Day of ISUVR 2010 - Paper Session 2

Andreas Duenser et al. presented the paper 'Evaluation of Tangible User Interfaces for Desktop AR'.


In the paper, they discussed how to evaluate tangible user interfaces in a desktop AR environment. Andreas introduced a TUI called the Slider interface, designed for AR.
The user studies revealed:
  • The Paddle interface suffers from tracking jitter, and selection with it is difficult.
  • The Mouse interface is the fastest in task completion time, but it suffers from confusion between forward and backward movement in the AR environment.
  • The Slider interface makes it hard to select a specific value; it is better suited to selecting relative values.
  • All interfaces showed the same accuracy.

Muhammad Rusdi Syamsuddin presented the paper 'Research on Virtual World and Real World Integration for Batting Practice'.



The paper is about a system relating the real and the virtual world through pitching and batting actions in a baseball game. The scenario is as follows: the pitching data of a professional baseball player, available on the MLB website, is simulated in the virtual space, and the user in the real world becomes the batter, swinging with a Wiimote. The idea of the system is very interesting.
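Just to make the scenario concrete for myself, here is a minimal, hypothetical sketch (not the authors' system) of replaying a pitch as a ballistic trajectory and judging the timing of a swing against it. The pitch parameters and the swing timestamp are made-up placeholders; a real implementation would read the MLB pitching data and the Wiimote acceleration instead.

```python
import numpy as np

# Hypothetical pitch parameters; a real system would take these from the
# recorded MLB pitching data rather than hard-coded values.
RELEASE_POINT = np.array([0.0, 1.8, 18.4])       # x, y, z in meters (z = distance to the plate)
RELEASE_VELOCITY = np.array([0.0, -1.0, -38.0])  # roughly a 137 km/h fastball, slightly downward
GRAVITY = np.array([0.0, -9.81, 0.0])

def pitch_position(t):
    """Ballistic position of the ball t seconds after release (drag ignored)."""
    return RELEASE_POINT + RELEASE_VELOCITY * t + 0.5 * GRAVITY * t * t

def time_to_plate():
    """Time at which the ball crosses the plate (z = 0)."""
    return RELEASE_POINT[2] / -RELEASE_VELOCITY[2]

def judge_swing(swing_time, tolerance=0.05):
    """Compare a swing timestamp (e.g. the Wiimote acceleration peak)
    with the moment the ball reaches the plate."""
    arrival = time_to_plate()
    if abs(swing_time - arrival) <= tolerance:
        return "hit"
    return "early swing" if swing_time < arrival else "late swing"

if __name__ == "__main__":
    print("ball halfway:", pitch_position(time_to_plate() / 2).round(2))
    print("ball at plate after %.3f s" % time_to_plate())
    print(judge_swing(swing_time=0.47))  # placeholder swing timestamp
```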

3rd Day of ISUVR 2010 - Invited Talk 3



Joshua Harlan Lifton gave his talk 'Consumer Adoption of Cross Reality Systems'.

He presented his work on connecting the real and the virtual world through sensor networks. Real-world events are reflected in the virtual world using different types of sensors (sound, network flow, electric power usage, etc.). Much of his work is based on the 3D virtual space Second Life, which is good for visualizing an event.

However, 3D virtual space is not everything. Building services on existing consumer technologies is another future extension.

2nd Day of ISUVR 2010 - Invited Talk 2


Seunghee Lee gave us a presentation about 3D modeling and animation for AR.

There are two major methods for motion creation: keyframe-based and data-driven. His interest is in physics-based animation, which adopts the physical properties of an object. The physics-based method can create complex motions that are very difficult to produce with keyframe-based methods.

But physics alone is not enough for complex motion; we also need motor control principles for realistic motion.

The questions in this area are:
  • What is a realistic and robust motor controller?
  • How can complex motions be computed quickly?
To create realistic motion/animation, a keyframe-based animation made by hand is corrected using physical analysis. Visually, it is not easy to tell the difference between the hand-made and the corrected animation. But the point is that hand-made animations take a lot of time to create, whereas the physics-based approach lets us produce them almost instantly.
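To get a feel for what 'correcting a hand-made animation with physical analysis' can mean in the simplest possible case, here is a toy sketch of my own (not from the talk): hand-keyframed heights of a falling ball are projected onto the ballistic free-fall trajectory that best fits them, so the corrected keyframes obey physics by construction.

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def correct_free_fall(times, heights):
    """Replace hand-made keyframe heights with a physically consistent
    free-fall trajectory: fit y(t) = y0 + v0*t - 0.5*G*t^2 for y0 and v0
    (G is fixed by physics) and resample the keyframes from the fit."""
    t = np.asarray(times, dtype=float)
    y = np.asarray(heights, dtype=float)
    A = np.stack([np.ones_like(t), t], axis=1)
    b = y + 0.5 * G * t ** 2
    (y0, v0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return y0 + v0 * t - 0.5 * G * t ** 2

if __name__ == "__main__":
    keyframe_times = [0.0, 0.2, 0.4, 0.6]
    hand_made_heights = [2.0, 1.9, 1.4, 0.3]  # roughly falling, but not physically consistent
    print(correct_free_fall(keyframe_times, hand_made_heights).round(3))
```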


Wednesday, July 7, 2010

2nd Day of ISUVR 2010 - Invited Talk 1

Gerhard Reitmayr gave a talk entitled 'Panoramic Mapping and Tracking on Mobile Phones'.


He mentioned the approach of connecting the model generated by SLAM to the real space (using sensors, recognition techniques, etc.).

PanoMT is a panorama tracking method that runs on mobile phones at 30 fps. It uses sensor data for the camera's roll and pitch rotations, while also using feature tracking on images at multiple scales.
Interesting features of PanoMT are:
  • Panorama correction: aligning a false panorama estimate by correcting the cylindrical projection.
  • Loop closing: RANSAC matching is used to register the images over 360 degrees (see the sketch after this list).
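For the loop-closing step I assume something like the standard feature-matching plus RANSAC pipeline; the sketch below is my own guess at it, not PanoMT's actual code. It uses OpenCV's ORB features and cv2.findHomography with RANSAC to register the two ends of an almost-closed panorama; the image file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder file names: the two ends of an almost-closed panorama.
img_a = cv2.imread("pano_end_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("pano_end_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_a, des_b)

src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches and yields the transform that closes the loop.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("inliers:", int(inlier_mask.sum()), "of", len(matches))
print(H)
```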
Sensors are noisy and calibration is required. For example, accelerometers and compasses have transient and local disturbances, so sensor calibration is inevitable, whereas visual tracking provides an accurate pose relative to a model. Hybrid tracking using both reduces the errors in the sensor values.
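A common way to realize such hybrid tracking is a simple complementary filter: the orientation sensors give a fast but noisy and drifting estimate, and the visual tracker pulls it back toward the model whenever it succeeds. The following is only a generic sketch under my own assumptions, not the filter actually used in PanoMT.

```python
def fuse_heading(sensor_heading, visual_heading, blend=0.1):
    """Blend a noisy sensor heading (degrees) with a visual estimate.
    When visual tracking fails, visual_heading is None and the sensor
    value is used as-is."""
    if visual_heading is None:
        return sensor_heading
    # Move a small fraction of the way toward the visual estimate each frame,
    # handling wrap-around at 360 degrees.
    error = (visual_heading - sensor_heading + 180.0) % 360.0 - 180.0
    return (sensor_heading + blend * error) % 360.0

for sensor, visual in [(2.0, 1.0), (5.0, None), (9.0, 7.5), (358.0, 356.0)]:
    print(round(fuse_heading(sensor, visual), 2))
```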


Gerhard discussed why using maps is better than frame-to-frame detection/tracking, which is computationally expensive: it reduces redundancy, needs less data, and the map changes slowly. Template matching with the Walsh transform and NCC is used to map the selected template to the panorama.
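Normalized cross-correlation itself is simple; below is a minimal brute-force NumPy version (the Walsh-transform pre-filtering mentioned in the talk is left out) that scores a template against every position in a panorama strip.

```python
import numpy as np

def ncc_match(panorama, template):
    """Slide the template over the panorama and return the (row, col) of the
    best normalized cross-correlation score, plus the score itself."""
    ph, pw = panorama.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -1.0, (0, 0)
    for r in range(ph - th + 1):
        for c in range(pw - tw + 1):
            window = panorama[r:r + th, c:c + tw]
            w = window - window.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue
            score = (w * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

if __name__ == "__main__":
    pano = np.random.rand(40, 200)
    templ = pano[10:18, 120:136].copy()  # cut a patch out of the panorama
    print(ncc_match(pano, templ))        # reports (10, 120) with a score of ~1.0
```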


As a last topic, he mentioned visualization issues, namely image-based ghostings. Simply overlaying virtual objects occludes the real scene, which provides many visual cues. So the question 'which information has to be presented?' arises here.
His solution to the problem is finding clues in the image. Through image analysis and user interaction, the ghosting becomes possible. For better performance, panoramic remapping is used again.



He concluded the talk with future directions for the current work: extending to 6DOF tracking, and object detection to link with applications.

2nd Day of ISUVR 2010 - Paper Session 1

Changhyeon Lee et al. presented the paper 'Networked Collaborative Group Cheerleading Technology: Virtual Cheerleader Experience', which is about a prototype implementation of collaboration among remote users in a VR space. They used Second Life as the virtual interaction space, where live video from a baseball stadium is displayed. The users at remote sites interact through Wiimotes.

2nd Day of ISUVR 2010 - Keynote

The 2nd day of ISUVR started with an invited talk by Anton van den Hengel (The Australian Centre for Visual Technologies). The title was 'Image-based modelling for augmenting reality'.


He said that the current mobile AR browsers show information overlaid on the real world, but the information is not registered to the geometry.
Anton claims that User Created Content (UCC) is required for ubiquitous AR. For AR, 3D content is better, but there have not been good UCC tools; with current tools, epic effort is required to create 3D content.

Image-based modeling is a good way to create 3D content because images contain many cues for 3D modeling. There are two ways to do image-based 3D modeling. Automatic methods generate 3D models of everything, like a 3D laser scanner. Interactive methods allow the user to choose the object to interact with.

He introduced VideoTrace, which is an interactive image-based 3D modeling system. It is a very nice interactive tool for 3D modeling (however, from my viewpoint, it still requires a bit of labor to finish a model: moving among video frames, modifying 3D geometry, and so on).

To help 3D modeling, several features and techniques can be exploited:
  • Lines and curves for geometry modeling.
  • Mirroring, extrusion, and dense meshing make modeling an object easier.
  • Surface fitting helps to align planar surfaces that lie on the same infinite plane (see the small fitting sketch after this list).
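As a tiny illustration of the surface-fitting step, the sketch below (my own toy example, not VideoTrace code) fits a least-squares plane to a set of 3D points via SVD; aligning two nearly coplanar surfaces then amounts to snapping them onto the fitted plane.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3D points.
    Returns (centroid, unit normal); the plane is n . (x - c) = 0."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The normal is the direction of least variance: the last right-singular vector.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

if __name__ == "__main__":
    # Noisy samples from the plane z = 0.5x + 0.2y + 1
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1, 1, size=(100, 2))
    z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 1 + rng.normal(0, 0.01, 100)
    c, n = fit_plane(np.column_stack([xy, z]))
    print("centroid:", c.round(3), "normal:", (n * np.sign(n[2])).round(3))
```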

VideoTrace works on a recorded video sequence. The same idea is extended to live AR modeling, so-called in-situ modeling. The video below explains the concept and implementation well.




Anton also discussed the misalignment between the modeled real objects and the synthesized virtual ones, which causes visual defects on the occlusion boundary. This problem can be addressed by a graph-cut-based segmentation method exploiting color distributions.
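As a concrete example of graph-cut segmentation driven by color distributions, OpenCV's GrabCut follows exactly that recipe (color GMMs for foreground and background plus a graph cut). The sketch below only illustrates the idea and is not the method from the talk; the file name and rectangle are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")              # placeholder frame from the video
mask = np.zeros(img.shape[:2], np.uint8)
rect = (50, 50, 200, 150)                  # placeholder box around the modeled object

bgd_model = np.zeros((1, 65), np.float64)  # internal GMM state for the background
fgd_model = np.zeros((1, 65), np.float64)  # internal GMM state for the foreground

# GrabCut: color GMMs plus a graph cut, initialized from the rectangle.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked (probably) foreground form the object's silhouette,
# which gives a cleaner occlusion boundary for compositing.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite("object_mask.png", fg)
```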

VideoTrace requires a lot of user interaction, which is sometimes painful when working with complex 3D objects. To reduce the interaction, a silhouette-based modeling method is adopted: silhouette-based modeling followed by segmentation generates a 3D model from the video without interaction.


Very nice talk, thank you Anton!

Tuesday, July 6, 2010

1st Day of ISUVR 2010

Today, the International Symposium on Ubiquitous VR 2010 started.

Andreas Duenser from HITLab NZ gave a good tutorial about user evaluation to the attendees. In his tutorial, he explained:

  • Why user evaluation is required
  • How to do user evaluations
  • Different evaluation methods
  • How to interpret the evaluation results
  • Papers related to user evaluation


The tutorial was very useful, because as engineers we always have questions like "Is this really useful?" or "Is this better than others?" when we build something new.