
Activity Detection for Scientific Visualization




About the Project

Detecting complex events in volumetric, time-varying scientific datasets is a common problem during visualization and quantification, since the existence (or absence) of such events helps scientists understand the underlying physics of the phenomenon they study. As data sizes grow, manually searching for these events over time becomes infeasible, so automated tools are necessary. The central problem is how to visualize complex events in time-varying scientific (volumetric) data (the video below presents an example of time-varying 3D data). This project aims to address this problem from a broader perspective: individual objects (features) can form certain shapes or interact in certain ways, and visualizing such interactions requires first defining the interaction and then identifying its instances within the dataset.

 

 

In this project, we use Petri Nets (PN) to detect activities in 3D scientific data sets and extend them for feature-based scientific data processing and visualization. As a graphical, state-based approach, a Petri Net can model and search for events in scientific simulations, converting the semantics of an event into a graph-based model. Based on the proposed activity detection algorithm, we develop a framework that a scientist can use to first model a spatio-temporal pattern and then search through massive data sets for instances of that pattern. An overview of the proposed framework is shown in Figure 1.

 

Figure 1: An Activity Detection Framework with Petri Nets (image source: [6])

 

Activity detection framework with Token-Tracking Petri Net (TTPN)

We call our proposed activity detection algorithm the Token-Tracking Petri Net (TTPN). It captures feature dynamics by automatically updating the tokens and their places as time advances; this is done by making the tokens time-variant and coupling the Petri Net with tracking information. The state of a TTPN is therefore a function of time.
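As a rough illustration of this coupling, a TTPN token can be thought of as a record that carries a feature's metadata for one time step together with the identifiers needed to correlate it with the previous step. The C++ structure below is a minimal sketch; the field names are assumptions made for illustration, not the project's actual data structures.

#include <array>
#include <string>

// Minimal sketch of a time-variant TTPN token (illustrative only).
struct Token {
    int featureId;            // id of the feature at the current time step
    int previousFeatureId;    // corresponding feature id at the previous step (-1 if newly born)
    int timeStep;             // k, the index of the current time step
    double volume;            // feature metadata computed during token formation
    double mass;
    std::array<double, 3> centroid;
    std::string place;        // name of the Petri Net place currently holding the token
};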

As a framework, our activity detection model also involves data processing, which may include feature extraction and tracking, before the Petri Net algorithm is executed. Figure 1 gives an overview of the proposed framework and Figure 2 shows the process in each time step. The inputs to the system are the time-varying data set and the Petri Net model defined by the scientist.

Figure 2: Flow diagram of the proposed activity detection framework (image source: [6])

 

In each time step, the activity detection process can be summarized as the following steps:

1). Processing the data.

Features, groups, variable changes, or other entities of interest to the user are computed. Different types of features can be extracted using tools appropriate to the respective domain. Detailed information on how to process the data can be found in publications [1], [2], [3], [4], [5], [7] and on our Feature Tracking webpage.

2). Token formation.

Metadata, including volume, mass, centroid, maximum and minimum positions, orientation, shape information, etc., is computed from the processed data and transformed into tokens.

3). Computing tracking information.

The features and groups extracted in the current time step are correlated with those extracted in the previous time step. The tracking algorithm computes various attributes, including the tracking history of the features (the correspondence list), position changes, and any other value or attribute that is a function of two consecutive time steps.

4). Updating and executing the Petri Net algorithm.

Both the newly formed (extracted) tokens and the computed tracking information are fed into the Petri Net for activity detection. The existing tokens in the Petri Net (from the previous time step) are correlated with the tokens extracted in the current time step. Once the Petri Net has been updated using the tracking information, the marking for the current time step k is obtained. The Petri Net is then executed to obtain the final marking for that step. Finally, both the metadata computed at the current time step and the final marking are passed on to the next time step.

This "4-step" process recursively continues in each new time step. The tokens that fall into the final place are the ones that perform the complete activity.

 

Implementation

As shown in Figure 3, the implementation of our proposed activity detection framework can be divided into three parts: activity modeling, activity detection, and activity visualization; it takes the output of data processing as its input. The scientist provides the activity model along with all of the place and transition conditions. The model is saved as a text-based Config file and is then used for activity detection.
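The actual syntax of the Config file is not documented here, so the fragment below is only a hypothetical illustration of the kind of information such a model might encode (places, transitions, and their conditions), sketched for a simple two-feature merge activity. None of the keywords or condition names come from the real file format.

# Hypothetical activity model file (illustrative format, not the framework's own)
place  P1  condition: volume > 100              # a candidate feature exists
place  P2  condition: distanceToNearest < 5.0   # two candidate features are close
place  P3  condition: mergeCount >= 2           # final place: the features have merged

transition T1  input: P1  output: P2  condition: distanceToNearest is decreasing
transition T2  input: P2  output: P3  condition: correspondence list reports a merge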

In the implementation of TTPN, we use logical or mathematical expressions formed of object attributes to describe a feature's state or action. A place condition is evaluated at each time step to determine whether a token still remains in that place. Tokens that change their state are put into a vector for further evaluation, to check whether they change places via the firing process. A transition condition is used to determine whether that transition can be enabled for a given token. If a token satisfies the transition condition, a second step checks whether the same token exists in all of the incoming places. A third step then checks whether the token satisfies at least one of the output places' conditions. After passing the third step, the transition is enabled and ready to fire. Firing a transition for a token removes the token from all of the incoming places and puts it into those output places whose conditions the token satisfies. A text-based event list summarizing all detected activities is generated as the output of the activity detection algorithm and is used for activity visualization [6], [9].
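The C++ sketch below restates the three enabling checks and the firing rule described above. It is an illustrative reading of the algorithm in [6], [9]; the classes, members, and the use of featureId for token identity are assumptions, not the actual implementation.

#include <algorithm>
#include <functional>
#include <vector>

// Illustrative types only; conditions are expressions over token attributes.
struct Token { int featureId; double volume; /* ...other metadata... */ };

struct Place {
    std::function<bool(const Token&)> condition;   // place condition
    std::vector<Token> tokens;
    bool contains(const Token& t) const {
        return std::any_of(tokens.begin(), tokens.end(),
                           [&](const Token& x) { return x.featureId == t.featureId; });
    }
    void remove(const Token& t) {
        tokens.erase(std::remove_if(tokens.begin(), tokens.end(),
                                    [&](const Token& x) { return x.featureId == t.featureId; }),
                     tokens.end());
    }
    void add(const Token& t) { tokens.push_back(t); }
};

struct Transition {
    std::function<bool(const Token&)> condition;   // transition condition
    std::vector<Place*> inputPlaces, outputPlaces;
};

// Three-step enabling test described in the text above.
bool canFire(const Transition& tr, const Token& token) {
    if (!tr.condition(token)) return false;                       // 1) transition condition holds
    for (const Place* p : tr.inputPlaces)
        if (!p->contains(token)) return false;                    // 2) token is in all input places
    for (const Place* p : tr.outputPlaces)
        if (p->condition(token)) return true;                     // 3) at least one output place accepts it
    return false;
}

// Firing: remove the token from every input place and deposit it into
// each output place whose condition the token satisfies.
void fire(const Transition& tr, const Token& token) {
    for (Place* p : tr.inputPlaces)  p->remove(token);
    for (Place* p : tr.outputPlaces)
        if (p->condition(token)) p->add(token);
}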

Figure 3: The modules of the activity detection framework implementation

 

We are currently developing a graphical user interface (GUI) that helps scientists model an activity graphically, incorporates our activity detection program as a module, and visualizes the detected activity in conjunction with the designed activity model. The GUI is written in standard C++ using Qt and VTK. Figure 4 shows the GUI with its toolkit, model, and data panels. More details on the GUI can be found in publication [8].

Figure 4: The GUI visualizes activity detection of a wall bounded turbulence simulation (image source: [8])

 

References

[1] Silver, Deborah, and Xin Wang. "Volume tracking." In Visualization'96. Proceedings., pp. 157-164. IEEE, 1996.

[2] Silver, Deborah. "Object-oriented visualization." Computer Graphics and Applications, IEEE 15, no. 3 (1995): 54-62.

[3] Samtaney, Ravi, Deborah Silver, Norman Zabusky, and Jim Cao. "Visualizing features and tracking their evolution." Computer 27, no. 7 (1994): 20-27.

[4] Van Walsum, Theo, Frits H. Post, Deborah Silver, and Frank J. Post. "Feature extraction and iconic visualization." Visualization and Computer Graphics, IEEE Transactions on 2, no. 2 (1996): 111-119.

[5] Chen, Jian, Deborah Silver, and Lian Jiang. "The feature tree: Visualizing feature tracking in distributed amr datasets." In Proceedings of the 2003 IEEE Symposium on Parallel and Large-Data Visualization and Graphics, p. 14. IEEE Computer Society, 2003.

[6] Ozer, Sedat, Deborah Silver, Karen Bemis, Pino Martin, and Jay Takle. "Activity Detection for scientific visualization." In Large Data Analysis and Visualization (LDAV), 2011 IEEE Symposium on, pp. 117-118. IEEE, 2011.

[7] Ozer, Sedat, Jishang Wei, Deborah Silver, Kwan-Liu Ma, and Pino Martin. "Group dynamics in scientific visualization." In Large Data Analysis and Visualization (LDAV), 2012 IEEE Symposium on, pp. 97-104. IEEE, 2012.

[8] Liu, Li, Sedat Ozer, Karen Bemis, Jay Takle, and Deborah Silver. "An Interactive Method for Activity Detection Visualization." (Poster) In Large Data Analysis and Visualization (LDAV), 2013 IEEE Symposium on.

[9] Ozer, Sedat. "Activity Detection in Scientific Visualization." Ph.D. Thesis, Rutgers University, 2013.

 


