
Activity / Event Detection

Detecting complex events in volumetric, time-varying scientific datasets is an increasingly common problem in the visualization and quantification process, since the existence (or non-existence) of such events can help scientists understand the underlying physics of the phenomenon they study. As data sizes grow, it becomes infeasible for scientists to search for such events manually over time, so automated tools are necessary. The main problem here is how to visualize complex events in time-varying scientific (volumetric) data, and this project aims to provide a solution to that problem from a broader perspective. Individual objects (features) can form certain shapes or can interact in certain ways. Visualizing such interactions requires first defining the interaction and then identifying it within the dataset. As an example, consider the illustrated image shown below.

As shown in the image, the yellow objects (features) form individual groups, each with a certain shape around the blue (or grey) objects. The formation of a group from single features over time can be considered an example event. Visualizing the features that belong to the same group and then identifying their events is an open problem: it is hard to present features as part of the same group when only the feature information (the yellow objects) is available. This project aims to solve this problem. By deriving and then incorporating prior information, the feature tracking algorithm can group the yellow objects properly and detect their interactions, as sketched below.
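To make the idea concrete, below is a minimal sketch (in Python, with hypothetical names; it is not part of the FeatureTracking software) of one possible grouping step. It assumes each extracted feature is summarized by a 3D centroid and uses a simple distance threshold with single-link merging as a stand-in for the prior information that would actually define a group.

import numpy as np

def group_features(centroids, max_dist=5.0):
    # Single-link grouping of feature centroids: two features belong to the
    # same group if a chain of pairwise distances below max_dist connects them.
    # The distance threshold is only a stand-in for the real prior information.
    n = len(centroids)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centroids[i] - centroids[j]) < max_dist:
                parent[find(i)] = find(j)      # merge the two groups

    roots = [find(i) for i in range(n)]
    relabel = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [relabel[r] for r in roots]         # group label per feature

# Three nearby features form one group; the fourth is isolated.
centroids = np.array([[0, 0, 0], [1, 1, 0], [2, 0, 1], [20, 20, 20]], float)
print(group_features(centroids))               # -> [0, 0, 0, 1]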

 

About the Feature Tracking Project

  • Visualizing and analyzing 3D time-varying datasets (4D scalar/vector datasets) can be very difficult because of the immense amount of data to be processed and understood. These datasets contain many evolving amorphous regions, and it is difficult to observe patterns and visually follow regions of interest. An essential part of the scientific process is to identify, quantify, and track important regions and structures (objects of interest). This is true for almost all disciplines, since the crux of understanding the original simulation, experiment, or observation is the study of the evolution of the "objects" present.
  • Some well-known examples include tracking the progression of a storm, the motion and change of the "ozone hole", or the movement of vortices shed by the meandering Gulf Stream. What is needed are visualization, quantification, and querying techniques that help filter and reduce the data to a form more conducive to analysis. This is complementary to standard visualization and helps explain in more mathematical and quantitative detail what is being seen.
  • FeatureTracking is an evolving project currently being improved by a group of graduate students under the supervision of Prof. Deborah Silver. You can find the most recent version of the software and related guides on this website, which is designed as a complement to the software and includes only the recent improvements and information about the most recent version of the code.

    Fundamentally, FeatureTracking consists of the following steps:

    1) Segmentation (Feature Extraction): This is the first main step in the FeatureTracking algorithm. The entire dataset is searched, all objects are "extracted" from the 3D data for each time frame, and the information extracted from the features is saved in the .poly, .attr, .uocd, and .track files.
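    For illustration only, the sketch below shows the generic idea of this step in Python: threshold the scalar field of one time frame and label the connected regions. The threshold criterion, function names, and returned attributes are assumptions for this example; the actual extraction logic and the .poly/.attr/.uocd/.track output formats are defined by the FeatureTracking software.

import numpy as np
from scipy import ndimage

def extract_features(volume, threshold):
    # One time frame: voxels above the threshold form the features,
    # and connected-component labeling gives one label per feature.
    mask = volume > threshold
    labels, count = ndimage.label(mask)
    features = {}
    for fid in range(1, count + 1):
        voxels = np.argwhere(labels == fid)
        features[fid] = {
            "num_voxels": len(voxels),         # feature size in voxels
            "centroid": voxels.mean(axis=0),   # unweighted center of mass
        }
    # The real code would also write the per-frame .poly, .attr, .uocd,
    # and .track files here; those formats are omitted in this sketch.
    return labels, features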

    2) Tracking: In this step, two consecutive time frames are compared to correlate the objects (features), and the tracking information of the objects is created and saved in the .trakTable file.
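    Again for illustration only, here is a minimal sketch of the idea behind this step, assuming the labeled volumes from the extraction sketch above: features in consecutive frames are correlated by counting overlapping voxels. The matching rule and names are assumptions; the software's actual correspondence tests and the .trakTable format are described in the guides linked below.

import numpy as np

def correlate_frames(labels_t, labels_t1, min_overlap=1):
    # For every pair of feature ids, count how many voxels they share
    # between the two consecutive frames; pairs with enough overlap are
    # treated as the same (continuing) feature.
    both = (labels_t > 0) & (labels_t1 > 0)
    pairs, counts = np.unique(
        np.stack([labels_t[both], labels_t1[both]], axis=1),
        axis=0, return_counts=True)
    return [(int(a), int(b), int(c))           # (id in t, id in t+1, overlap)
            for (a, b), c in zip(pairs, counts) if c >= min_overlap]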

    3) Visualization: Our code currently allows users to visualize their data in two ways:

    • by rendering all the segmented objects in the dataset, or
    • by letting users choose the objects they want to focus on and visualizing only those over time (a minimal selection sketch follows the figures below).

    Figure 1: Focusing on only one feature in the entire dataset.
    Figure 2: Showing all the features in the dataset.
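    The snippet below is a small illustration of these two viewing modes, using matplotlib voxel rendering and the labeled volume from the extraction sketch above. The function name and rendering choices are assumptions; the actual software uses its own renderer.

import numpy as np
import matplotlib.pyplot as plt

def show_features(labels, selected=None):
    # Mode 1 (selected=None): render every segmented feature.
    # Mode 2: render only the feature ids the user wants to focus on.
    mask = labels > 0 if selected is None else np.isin(labels, list(selected))
    ax = plt.figure().add_subplot(projection="3d")
    ax.voxels(mask, facecolors="gold", edgecolor="k")
    plt.show()

# show_features(labels)                -> all features (cf. Figure 2)
# show_features(labels, selected=[3])  -> focus on one feature (cf. Figure 1)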

     

    Detailed information on how our algorithm works can be found on the main VizLab website as well as in publications [1], [2], [3], and [4].

    [1] D. Silver and X. Wang, "Volume Tracking," Proceedings of IEEE Visualization '96, October 1996.

    [2] D. Silver, "Object-Oriented Visualization," IEEE Computer Graphics and Applications, Vol. 15, No. 3, May 1995.

    [3] R. Samtaney, D. Silver, N. Zabusky, and J. Cao, "Visualizing Features and Tracking Their Evolution," IEEE Computer, Vol. 27, No. 7, pp. 20-27, July 1994.

    [4] T. van Walsum, F. Post, D. Silver, and F. Post, "Feature Extraction and Iconic Visualization," IEEE Transactions on Visualization and Computer Graphics, July 1996.

    [5] J. Chen, D. Silver, and L. Jiang, "The Feature Tree: Visualizing Feature Tracking in Distributed AMR Datasets," Proceedings of the IEEE Symposium on Parallel and Large-Data Visualization and Graphics, 2003.

     

    More on Feature Tracking: 

    How Feature Tracking works

    Download the software

    Installation Guide

    User Guide

    Results Gallery

    Packet Identification


    We gratefully acknowledge the support of the SciDAC Institute for Ultra-Scale Visualization, http://vis.cs.ucdavis.edu/Ultravis/, DOE grant #DE-FG02-09ER25977.
