Title: Uncertainty in Scientific & Volume Visualization
Speaker: Joe Kniss, University of New Mexico
Date/Time: Monday, February 1, 2010, 1:00 pm
Location: CSRI/90

Brief Abstract: My previous work in visualization involved direct volume rendering methods, which render images of spatiotemporal data without explicit surface extraction. This research focused primarily on rendering, including light transport problems, high-performance interactive algorithms, and perceptual issues. Working closely with domain scientists, it became clear that the crucial unresolved issues in visual data analysis involved the trustworthiness of analysis results and visualization's role in the decision-making process. When visual analysis is used to make potentially critical decisions, the evidence and trustworthiness of the data and analysis must be interrogated if we are to have confidence in the actions to be taken. The difficulty in building confidence in decisions based on computational data analysis stems from the fact that *all data is uncertain*. Finite spatiotemporal resolution and limited precision prevent us from resolving features or phenomena below some scale. Stochastic processes, such as thermal or mechanical vibrations, introduce noise. Arbitrary or erroneous parameter settings can produce erratic algorithmic analysis results. Many other sources of uncertainty exist; some are well understood, others are unknown. These sources can combine linearly or nonlinearly and tend to increase with each algorithmic transformation in the analysis pipeline. It is clear, therefore, that any decision based on computational analysis, visual or otherwise, is subject to the uncertainty inherent in the data or introduced by algorithms and users. If we are to directly quantify uncertainty in data analysis, then each stage of the pipeline must deal with uncertain inputs and produce results that report their uncertainty.
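The idea that each pipeline stage should accept uncertain inputs and report uncertain outputs can be sketched with a simple Monte Carlo ensemble. The noise model and the two pipeline stages below are illustrative assumptions for this sketch, not the speaker's actual methods:

```python
import random
import statistics

def propagate(samples, stage):
    """Apply one pipeline stage to every sample, preserving the ensemble
    so that uncertainty is carried forward rather than discarded."""
    return [stage(x) for x in samples]

# Hypothetical uncertain input: a measurement with mean 2.0, stddev 0.1.
random.seed(0)
samples = [random.gauss(2.0, 0.1) for _ in range(10000)]

# Two illustrative transformations: a nonlinear stage (which tends to
# grow relative uncertainty) followed by a linear rescaling.
samples = propagate(samples, lambda x: x * x)
samples = propagate(samples, lambda x: 0.5 * x)

# The final result reports its own uncertainty instead of a bare number.
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print(f"result: {mean:.3f} +/- {stdev:.3f}")
```

The key design point is that every stage operates on the whole ensemble, so nonlinear interactions between stages are captured automatically; collapsing to a single value at any intermediate step would lose exactly the information the abstract argues must be preserved.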
Because the process of scientific visualization necessarily transforms data, eliminating uninteresting or unimportant features to expose those of interest, these methods are direct participants in the exploration and decision-making processes. The question that researchers in scientific visualization need to address is, "Do the techniques we provide for visual data analysis expose and quantify knowledge of uncertainty?" The question can be posed more dramatically: "Am I ethically responsible for the consequences of actions taken based on my visual data analysis methods when uncertainty is ignored?" My current work addresses these issues by focusing on techniques for tracking uncertainty in the visualization pipeline. This work falls into three categories: segmentation algorithms that capture uncertainty, data representation schemes for probabilistic segmentations, and visualization techniques for communicating the semantic consequences of uncertain data and algorithms.

CSRI POC: Patricia Crossno, (505) 845-7506