
Publications preceding 2005, the year in which the Institute was founded.

Access to PDF files is reserved for authorized persons only.

CGV Publications




2015

Grabner, H., Ullrich, T. & Fellner, D.W., (2015), "Generative Training for 3D-Retrieval", GRAPP 2015, pp.97-105, SciTePress.
Abstract: A digital library for non-textual, multimedia documents can be defined by its functionality: markup, indexing, and retrieval. For textual documents, the techniques and algorithms to perform these tasks are well studied. For non-textual documents, these tasks are open research questions: How to markup a position on a digitized statue? What is the index of a building? How to search and query for a CAD model? If no additional, textual information is available, current approaches cluster, sort and classify non-textual documents using machine learning techniques, which have a cold start problem: they either need a manually labeled, sufficiently large training set or the (automatic) clustering / classification result may not respect semantic similarity. We solve this problem using procedural modeling techniques, which can generate arbitrary training sets without the need for any "real" data. The retrieval process itself can be performed with any method. In this article we describe the histogram of inverted distances in detail and compare it to the salient local visual features method. Both techniques are evaluated using the Princeton Shape Benchmark (Shilane et al., 2004). Furthermore, we improve the retrieval results by diffusion processes.
BibTeX:
@inproceedings{Grabner*15GRAPP,
  author = {Grabner, Harald and Ullrich, Torsten and Fellner, Dieter W.},
  title = {Generative Training for 3D-Retrieval},
  booktitle = {GRAPP 2015},
  publisher = {SciTePress},
  year = {2015},
  pages = {97-105},
  doi = {http://dx.doi.org/10.5220/0005248300970105}
}
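
The core idea above, training a retrieval classifier purely on procedurally generated samples, can be sketched in a few lines of Python; the parameter ranges, the placeholder descriptor, and the random-forest back end below are illustrative assumptions of this sketch, not the method of the paper:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical parameter ranges of two generative models, one per class.
CLASSES = {0: (0.3, 0.6),   # e.g. a "chair" generator's parameter range
           1: (0.8, 1.4)}   # e.g. a "table" generator's parameter range

def shape_descriptor(params):
    # Placeholder: in practice, generate the mesh from `params` and
    # compute a descriptor (e.g. the histogram of inverted distances).
    p = np.atleast_1d(params)
    return np.concatenate([p, p ** 2])

# Sampling the shape space yields an arbitrarily large labeled set,
# so no manually labeled "real" data is needed (no cold start).
X, y = [], []
for label, (lo, hi) in CLASSES.items():
    for _ in range(500):
        params = rng.uniform(lo, hi, size=4)
        X.append(shape_descriptor(params))
        y.append(label)

# The retrieval/classification back end is interchangeable.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(np.array(X), np.array(y))
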
Hecher, M., Traxler, C., Hesina, G., Fuhrmann, A. & Fellner, D.W., (2015), "Web-based Visualization Platform for Geospatial Data", IVAPP 2015. Proceedings, pp.311-316, SciTePress.
Abstract: This paper describes a new platform for geospatial data analysis. The main purpose is to explore new ways to visualize and interact with multidimensional satellite data and computed models from various Earth Observation missions. The new V-MANIP platform facilitates a multidimensional exploration approach that allows viewing the same dataset in multiple viewers at the same time to efficiently find and explore interesting features within the shown data. The platform provides visual analytics capabilities including viewers for displaying 2D or 3D data representations, as well as for volumetric input data. Via a simple configuration file the system can be configured for different stakeholder use cases by defining desired data sources and available viewer modules. The system architecture, which will be discussed in this paper in detail, uses Open Geospatial Consortium web service interfaces to allow easy integration of new visualization modules. The implemented software is based on open source libraries and uses modern web technologies to provide a platform-independent, plugin-free user experience.
BibTeX:
@inproceedings{Hecher*15IVAPP,
  author = {Hecher, Martin and Traxler, Christoph and Hesina, Gerd and Fuhrmann, Anton and Fellner, Dieter W.},
  title = {Web-based Visualization Platform for Geospatial Data},
  booktitle = {IVAPP 2015. Proceedings},
  publisher = {SciTePress},
  year = {2015},
  pages = {311-316},
  doi = {http://dx.doi.org/10.5220/0005359503110316}
}
Bernard, J., Daberkow, D., Fellner, D.W., Fischer, K., Koepler, O., Kohlhammer, J., Runnwerth, M., Ruppert, T., Schreck, T. & Sens, I., (2015), "VisInfo: A Digital Library System for Time Series Research Data Based on Exploratory Search -- a User-centered Design Approach", International Journal on Digital Libraries, Vol.16(1), pp.37-59.
Abstract: To this day, data-driven science is a widely accepted concept in the digital library (DL) context (Hey et al. in The fourth paradigm: data-intensive scientific discovery. Microsoft Research, 2009). In the same way, domain knowledge from information visualization, visual analytics, and exploratory search has found its way into the DL workflow. This trend is expected to continue, considering future DL challenges such as content-based access to new document types, visual search, and exploration for information landscapes, or big data in general. To cope with these challenges, DL actors need to collaborate with external specialists from different domains to complement each other and succeed in given tasks such as making research data publicly available. Through these interdisciplinary approaches, the DL ecosystem may contribute to applications focused on data-driven science and digital scholarship. In this work, we present VisInfo (2014), a web-based digital library system (DLS) with the goal to provide visual access to time series research data. Based on an exploratory search (ES) concept (White and Roth in Synth Lect Inf Concepts Retr Serv 1(1):1-98, 2009), VisInfo at first provides a content-based overview visualization of large amounts of time series research data. Further, the system enables the user to define visual queries by example or by sketch. Finally, VisInfo presents visual-interactive capability for the exploration of search results. The development process of VisInfo was based on the user-centered design principle. Experts from computer science, a scientific digital library, usability engineering, and scientists from the earth and environmental sciences were involved in an interdisciplinary approach. We report on comprehensive user studies in the requirement analysis phase based on paper prototyping, user interviews, screen casts, and user questionnaires. Heuristic evaluations and two usability testing rounds were applied during the system implementation and the deployment phase and certify measurable improvements for our DLS. Based on the lessons learned in VisInfo, we suggest a generalized project workflow that may be applied in related, prospective approaches.
BibTeX:
@article{Juergen*15IJODLS,
  author = {Bernard, Jürgen and Daberkow, Debora and Fellner, Dieter W. and Fischer, Katrin and Koepler, Oliver and Kohlhammer, Jörn and Runnwerth, Mila and Ruppert, Tobias and Schreck, Tobias and Sens, Irina},
  title = {VisInfo: A Digital Library System for Time Series Research Data Based on Exploratory Search -- a User-centered Design Approach},
  journal = {International Journal on Digital Libraries},
  year = {2015},
  volume = {16},
  number = {1},
  pages = {37-59},
  doi = {http://dx.doi.org/10.1007/s00799-014-0134-y}
}
Keim, D. & Schreck, T., (2015), "Preface to Special Issue on Visual Analytics", De Gruyter Oldenbourg Information Technology, Vol.57(1), pp.1-2.
BibTeX:
@article{Keim-Schreck15,
  author = {D. Keim and T. Schreck},
  title = {Preface to Special Issue on Visual Analytics},
  journal = {De Gruyter Oldenbourg Information Technology},
  year = {2015},
  volume = {57},
  number = {1},
  pages = {1--2},
  doi = {http://dx.doi.org/10.1515/itit-2014-1084}
}
Krispel, U., Evers, H.L., Tamke, M., Viehauser, R. & Fellner, D.W., (2015), "Automatic Texture and Orthophoto Generation from Registered Panoramic Views", International Workshop 3D-ARCH, pp.131-137.
Abstract: Recent trends in 3D scanning are aimed at the fusion of range data and color information from images. The combination of these two outputs allows the extraction of novel semantic information. The workflow presented in this paper makes it possible to detect objects, such as light switches, that are hard to identify from range data only. In order to detect these elements, we developed a method that utilizes range data and color information from high-resolution panoramic images of indoor scenes, taken at the scanner's position. A proxy geometry is derived from the point clouds; orthographic views of the scene are automatically identified from the geometry and an image per view is created via projection. We combine methods of computer vision to train a classifier to detect the objects of interest from these orthographic views. Furthermore, these views can be used for automatic texturing of the proxy geometry.
BibTeX:
@inproceedings{Krispel*153DArch,
  author = {Krispel, Ulrich and Evers, Henrik L. and Tamke, Martin and Viehauser, Robert and Fellner, Dieter W.},
  title = {Automatic Texture and Orthophoto Generation from Registered Panoramic Views},
  booktitle = {International Workshop 3D-ARCH},
  year = {2015},
  pages = {131-137},
  doi = {http://dx.doi.org/10.5194/isprsarchives-XL-5-W4-131-2015}
}
Landesberger, T. v., Diel, S., Bremm, S. & Fellner, D.W., (2015), "Visual Analysis of Contagion in Networks", Information Visualization, Vol.14(2), pp.93-110.
Abstract: Contagion is a process whereby the collapse of a node in a network leads to the collapse of neighboring nodes and thereby sets off a chain reaction in the network. It thus creates a special type of time-dependent network. Such processes are studied in various applications, for example, in financial network analysis, infection diffusion prediction, supply-chain management, or gene regulation. Visual analytics methods can help analysts examine contagion effects. For this purpose, network visualizations need to be complemented with specific features to illustrate the contagion process. Moreover, new visual analysis techniques for comparison of contagion need to be developed. In this paper, we propose a system geared to the visual analysis of contagion. It includes the simulation of contagion effects as well as their visual exploration. We present new tools able to compare the evolution of the different contagion processes. In this way, propagation of disturbances can be effectively analyzed. We focus on financial networks; however, our system can be applied to other use cases as well.
BibTeX:
@article{Landesberger*13IV-2,
  author = {Landesberger, Tatiana von and Diel, Simon and Bremm, Sebastian and Fellner, Dieter W.},
  title = {Visual Analysis of Contagion in Networks},
  journal = {Information Visualization},
  year = {2015},
  volume = {14},
  number = {2},
  pages = {93-110},
  doi = {http://dx.doi.org/10.1177/1473871613487087}
}
Pérez, D., Zhang, L., Schäfer, M., Schreck, T., Keim, D. & Díaz, I., (2015), "Interactive Feature Space Extension for Multidimensional Data Projection", Neurocomputing, Vol.150, Part B, pp.611-628.
Abstract: Projecting multi-dimensional data to a lower-dimensional visual display is a commonly used approach for identifying and analyzing patterns in data. Many dimensionality reduction techniques exist for generating visual embeddings, but it is often hard to avoid cluttered projections when the data is large in size and noisy. For many application users who are not machine learning experts, it is difficult to control the process in order to improve the 'readability' of the projection and at the same time to understand its quality. In this paper, we propose a simple interactive feature transformation approach that allows the analyst to de-clutter the visualization by gradually transforming the original feature space based on existing class knowledge. By changing a single parameter, the user can easily decide the desired trade-off between structural preservation and visual quality during the transformation process. The proposed approach integrates semi-interactive feature transformation techniques as well as a variety of quality measures to help analysts generate uncluttered projections and understand their quality.
BibTeX:
@article{Perez*15nc,
  author = {D. Pérez and L. Zhang and M. Schäfer and T. Schreck and D. Keim and I. Díaz},
  title = {Interactive Feature Space Extension for Multidimensional Data Projection},
  journal = {Neurocomputing},
  year = {2015},
  volume = {150, Part B},
  pages = {611--628},
  doi = {http://dx.doi.org/10.1016/j.neucom.2014.09.061}
}
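
A minimal sketch of a single-parameter, class-knowledge-driven feature transformation of the kind described above; the paper's actual transform is not reproduced here. In this sketch, alpha = 0 keeps the original feature space and alpha = 1 weights features purely by their class discriminativeness before projecting:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import f_classif
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Class knowledge: per-feature discriminativeness (ANOVA F-score),
# normalized to [0, 1].
F, _ = f_classif(X, y)
w_class = F / F.max()

def project(X, alpha):
    # alpha = 0: original space (structure preserved);
    # alpha = 1: fully class-driven feature weighting (decluttered).
    w = (1.0 - alpha) + alpha * w_class   # the single trade-off parameter
    return PCA(n_components=2).fit_transform(X * w)

P_faithful = project(X, 0.0)      # may be cluttered
P_decluttered = project(X, 0.8)   # class structure emphasized
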
Riffnaller-Schiefer, A., Augsdörfer, U.H. & Fellner, D.W., Bickel, B. & Ritschel, T. (ed.) (2015), "Isogeometric Analysis for Modelling and Design", EG 2015 -- Short Papers, pp.17-20, The Eurographics Association.
Abstract: We present an isogeometric design and analysis approach based on NURBS-compatible subdivision surfaces. The approach enables the description of watertight free-form surfaces of arbitrary degree, including conic sections, and an accurate simulation and analysis based directly on the designed surface. To explore the seamless integration of design and analysis provided by the isogeometric approach, we built prototype software which combines free-form modelling tools with thin shell simulation tools to offer the designer a wide range of design and analysis instruments.
BibTeX:
@inproceedings{Riffnaller-Schiefer*15EG,
  author = {Riffnaller-Schiefer, Andreas and Augsdörfer, Ursula H. and Fellner, Dieter W.},
  editor = {B. Bickel and T. Ritschel},
  title = {Isogeometric Analysis for Modelling and Design},
  booktitle = {EG 2015 -- Short Papers},
  publisher = {The Eurographics Association},
  year = {2015},
  pages = {17-20},
  doi = {http://dx.doi.org/10.2312/egsh.20151004}
}
Schinko, C., Krispel, U. & Ullrich, T., (2015), "Built by Algorithms -- State of the Art Report on Procedural Modeling", International Workshop 3D-ARCH, pp.469-479.
Abstract: The idea of generative modeling is to allow the generation of highly complex objects based on a set of formal construction rules. Using these construction rules, a shape is described by a sequence of processing steps, rather than just by the result of all applied operations: Shape design becomes rule design. Due to its very general nature, this approach can be applied to any domain and to any shape representation that provides a set of generating functions. The aim of this report is to give an overview of the concepts and techniques of procedural and generative modeling as well as their applications with a special focus on Archaeology and Architecture.
BibTeX:
@inproceedings{Schinko*153DArch,
  author = {Schinko, Christoph and Krispel, Ulrich and Ullrich, Torsten},
  title = {Built by Algorithms -- State of the Art Report on Procedural Modeling},
  booktitle = {International Workshop 3D-ARCH},
  year = {2015},
  pages = {469-479},
  doi = {http://dx.doi.org/10.5194/isprsarchives-XL-5-W4-469-2015}
}
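
The central point of the report, that a shape is described by construction rules rather than by the result of the applied operations, can be illustrated with a toy rule set in Python; the rules and primitive names are invented for illustration:

def box(x, y, z, w, h, d):
    # Primitive generating function: an axis-aligned box.
    return [("box", x, y, z, w, h, d)]

def row(n, spacing, part):
    # Rule: repeat a sub-shape n times along the x axis.
    shape = []
    for i in range(n):
        shape += [(kind, x + i * spacing, *rest) for kind, x, *rest in part]
    return shape

def temple(columns=6, col_w=0.4, col_h=3.0, spacing=1.0):
    # Rule design instead of shape design: the "model" is this program.
    base = box(0, 0, 0, columns * spacing, 0.3, 2.0)
    cols = row(columns, spacing, box(0.3, 0.3, 0.8, col_w, col_h, col_w))
    roof = box(0, 0.3 + col_h, 0, columns * spacing, 0.4, 2.0)
    return base + cols + roof

print(len(temple()), "primitives")   # 8 primitives for the default rules
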
Wanner, F., Jentner, W., Schreck, T., Stoffel, A., Sharalieva, L. & Keim, D., (2015), "Integrated visual analysis of patterns in time series and text data - Workflow and application to financial data analysis", Information Visualization.
Abstract: In this article, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which are significantly connected in time to quantitative phenomena observed in stock data. To support such an analysis, we combine the analysis steps of frequent quantitative and text-oriented data using an existing a priori method. First, based on heuristics, we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a priori method supports the discovery of such sequential temporal patterns. Then, various text features such as the degree of sentence nesting, noun phrase complexity, and the vocabulary richness, are extracted from the news items to obtain meta-patterns. Meta-patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time, cluster, and sequence visualization and analysis functionality. We provide a case study and an evaluation on financial data where we identify important future work. The workflow could be generalized to other application domains such as data analysis of smart grids, cyber physical systems, or the security of critical infrastructure, where the data consist of a combination of quantitative and textual time series data.
BibTeX:
@article{Wanner*15ivs,
  author = {F. Wanner and W. Jentner and T. Schreck and A. Stoffel and L. Sharalieva and D. Keim},
  title = {Integrated visual analysis of patterns in time series and text data - Workflow and application to financial data analysis},
  journal = {Information Visualization},
  year = {2015},
  note = {Published online first},
  doi = {http://dx.doi.org/10.1177/1473871615576925}
}

2014

Behrisch, M., Davey, J., Fischer, F., Thonnard, O., Schreck, T., Keim, D. & Kohlhammer, J., (2014), "Visual Analysis of Sets of Heterogeneous Matrices Using Projection-Based Distance Functions and Semantic Zoom", Wiley-Blackwell Computer Graphics Forum (Proc. EuroVis 2014), Vol.33(3), pp.411-420.
Abstract: Matrix visualization is an established technique in the analysis of relational data. It is applicable to large, dense networks, where node-link representations may not be effective. Recently, domains have emerged in which the comparative analysis of sets of matrices of potentially varying size is relevant. For example, to monitor computer network traffic a dynamic set of hosts and their peer-to-peer connections on different ports must be analysed. A matrix visualization focused on the display of one matrix at a time cannot cope with this task. We address the research problem of the visual analysis of sets of matrices. We present a technique for comparing matrices of potentially varying size. Our approach considers the rows and/or columns of a matrix as the basic elements of the analysis. We project these vectors for pairs of matrices into a low-dimensional space which is used as the reference to compare matrices and identify relationships among them. Bipartite graph matching is applied on the projected elements to compute a measure of distance. A key advantage of this measure is that it can be interpreted and manipulated as a visual distance function, and serves as a comprehensible basis for ranking, clustering and comparison in sets of matrices. We present an interactive system in which users may explore the matrix distances and understand potential differences in a set of matrices. A flexible semantic zoom mechanism enables users to navigate through sets of matrices and identify patterns at different levels of detail. We demonstrate the effectiveness of our approach through a case study and provide a technical evaluation to illustrate its strengths.
BibTeX:
@article{Behrisch*14eurovis,
  author = {M. Behrisch and J. Davey and F. Fischer and O. Thonnard and T. Schreck and D. Keim and J. Kohlhammer},
  title = {Visual Analysis of Sets of Heterogeneous Matrices Using Projection-Based Distance Functions and Semantic Zoom},
  journal = {Wiley-Blackwell Computer Graphics Forum (Proc. EuroVis 2014)},
  year = {2014},
  volume = {33},
  number = {3},
  pages = {411--420},
  doi = {http://dx.doi.org/10.1111/cgf.12397}
}
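
A minimal sketch of the projection-based distance described above, assuming both matrices share the same column space; the joint PCA projection and the mean over matched pairs are simplifying assumptions of this sketch:

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist
from sklearn.decomposition import PCA

def matrix_distance(A, B, dims=2):
    # Project the rows of both matrices into one low-dimensional space,
    # then match them bipartitely and average the matched distances.
    joint = PCA(n_components=dims).fit_transform(np.vstack([A, B]))
    PA, PB = joint[:len(A)], joint[len(A):]
    cost = cdist(PA, PB)                  # pairwise projected distances
    r, c = linear_sum_assignment(cost)    # optimal bipartite matching
    # Rows of the larger matrix left unmatched could be penalized
    # separately; this sketch simply ignores them.
    return cost[r, c].mean()

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 8))              # e.g. 20 hosts x 8 ports
B = rng.normal(size=(30, 8))              # a later, larger snapshot
print(matrix_distance(A, B))
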
Behrisch, M., Korkmaz, F., Shao, L. & Schreck, T., (2014), "Feedback-Driven Interactive Exploration of Large Multidimensional Data Supported by Visual Classifier", Proc. IEEE Conference on Visual Analytics Science and Technology, pp.43-52.
Abstract: The extraction of relevant and meaningful information from multivariate or high-dimensional data is a challenging problem. One reason for this is that the number of possible representations, which might contain relevant information, grows exponentially with the amount of data dimensions. Also, not all views from a possibly large view space are relevant to a given analysis task or user. Focus+Context or Semantic Zoom Interfaces can help to some extent to efficiently search for interesting views or data segments, yet they show scalability problems for very large data sets. Accordingly, users are confronted with the problem of identifying interesting views, as the manual exploration of the entire view space becomes ineffective or even infeasible. While certain quality metrics have been proposed recently to identify potentially interesting views, these often are defined in a heuristic way and do not take into account the application or user context. We introduce a framework for feedback-driven view exploration, inspired by relevance feedback approaches used in Information Retrieval. Our basic idea is that users iteratively express their notion of interestingness when presented with candidate views. From that expression, a model representing the user's preferences is trained and used to recommend further interesting view candidates. A decision support system monitors the exploration process and assesses the relevance-driven search process for convergence and stability. We present an instantiation of our framework for the exploration of Scatter Plot Spaces based on visual features. We demonstrate the effectiveness of this implementation by a case study on two real-world datasets. We also discuss our framework in light of design alternatives and point out its usefulness for the development of user- and context-dependent visual exploration systems.
BibTeX:
@inproceedings{Behrisch*14VAST,
  author = {M. Behrisch and F. Korkmaz and L. Shao and T. Schreck},
  title = {Feedback-Driven Interactive Exploration of Large Multidimensional Data Supported by Visual Classifier},
  booktitle = {Proc. IEEE Conference on Visual Analytics Science and Technology},
  year = {2014},
  pages = {43-52},
  doi = {http://dx.doi.org/10.1109/VAST.2014.7042480}
}
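
The relevance-feedback loop described above can be sketched as follows; the visual features of the candidate scatter plots, the random-forest preference model, and the simulated user are assumptions of this sketch, not the paper's implementation:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical: each candidate scatter plot is described by a vector of
# visual features (e.g. clumpiness, outlyingness, correlation).
view_features = rng.random((500, 10))     # 500 candidate views
labels = {}                               # view index -> 0/1 user feedback

def next_candidates(k=5):
    # Rank unlabeled views by predicted interestingness.
    if len(set(labels.values())) < 2:     # cold start: show random views
        pool = [i for i in range(len(view_features)) if i not in labels]
        return list(rng.choice(pool, size=k, replace=False))
    idx = list(labels)
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(view_features[idx], [labels[i] for i in idx])
    scores = clf.predict_proba(view_features)[:, 1]
    return [i for i in np.argsort(-scores) if i not in labels][:k]

# One feedback round: the user marks the shown views (simulated here).
for i in next_candidates():
    labels[i] = int(view_features[i, 0] > 0.5)
print(next_candidates())   # recommendations now reflect the feedback model
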
Bender, J., Kuijper, A., Landesberger, T. v., Theisel, H., Urban, P., Fellner, D.W., Goesele, M. & Roth, S. (ed.) (2014), "VMV 2014: Vision, Modeling, and Visualization", Eurographics Association, Goslar.
Abstract: VMV is a unique event that brings together scientists and practitioners interested in the interdisciplinary fields of computer vision and computer graphics, with special emphasis on the link between the disciplines. It offers researchers the opportunity to discuss a wide range of different topics within an open, international and interdisciplinary environment, and has done so successfully for many years.
BibTeX:
@proceedings{Bender*14VMV,
  editor = {Bender, Jan and Kuijper, Arjan and Landesberger, Tatiana von and Theisel, Holger and Urban, Philipp and Fellner, Dieter W. and Goesele, Michael and Roth, Stefan},
  title = {VMV 2014: Vision, Modeling, and Visualization},
  publisher = {Eurographics Association, Goslar},
  year = {2014}
}
Bernard, J., Daberkow, D., Fellner, D., Fischer, K., Koepler, O., Kohlhammer, J., Runnwerth, M., Ruppert, T., Schreck, T. & Sens, I., (2014), "VisInfo: a digital library system for time series research data based on exploratory search -- a user-centered design approach", Springer International Journal on Digital Libraries.
Abstract: To this day, data-driven science is a widely accepted concept in the digital library (DL) context (Hey et al. in The fourth paradigm: data-intensive scientific discovery. Microsoft Research, 2009). In the same way, domain knowledge from information visualization, visual analytics, and exploratory search has found its way into the DL workflow. This trend is expected to continue, considering future DL challenges such as content-based access to new document types, visual search, and exploration for information landscapes, or big data in general. To cope with these challenges, DL actors need to collaborate with external specialists from different domains to complement each other and succeed in given tasks such as making research data publicly available. Through these interdisciplinary approaches, the DL ecosystem may contribute to applications focused on data-driven science and digital scholarship. In this work, we present VisInfo (2014), a web-based digital library system (DLS) with the goal to provide visual access to time series research data. Based on an exploratory search (ES) concept (White and Roth in Synth Lect Inf Concepts Retr Serv 1(1):1-98, 2009), VisInfo at first provides a content-based overview visualization of large amounts of time series research data. Further, the system enables the user to define visual queries by example or by sketch. Finally, VisInfo presents visual-interactive capability for the exploration of search results. The development process of VisInfo was based on the user-centered design principle. Experts from computer science, a scientific digital library, usability engineering, and scientists from the earth, and environmental sciences were involved in an interdisciplinary approach. We report on comprehensive user studies in the requirement analysis phase based on paper prototyping, user interviews, screen casts, and user questionnaires. Heuristic evaluations and two usability testing rounds were applied during the system implementation and the deployment phase and certify measurable improvements for our DLS. Based on the lessons learned in VisInfo, we suggest a generalized project workflow that may be applied in related, prospective approaches.
BibTeX:
@article{Bernard*14ijdl,
  author = {J. Bernard and D. Daberkow and D. Fellner and K. Fischer and O. Koepler and J. Kohlhammer and M. Runnwerth and T. Ruppert and T. Schreck and I. Sens},
  title = {VisInfo: a digital library system for time series research data based on exploratory search -- a user-centered design approach},
  journal = {Springer International Journal on Digital Libraries},
  year = {2014},
  note = {Published online first},
  doi = {http://dx.doi.org/10.1007/s00799-014-0134-y}
}
Biasotti, S., Pratikakis, I., Castellani, U. & Schreck, T., (2014), "Preface to Special Issue on EG Workshop on 3D Object Retrieval 2013", Springer Visual Computer, Vol.30, pp.1195.
BibTeX:
@article{Biasotti*143DOR,
  author = {S. Biasotti and I. Pratikakis and U. Castellani and T. Schreck},
  title = {Preface to Special Issue on EG Workshop on 3D Object Retrieval 2013},
  journal = {Springer Visual Computer},
  year = {2014},
  volume = {30},
  pages = {1195},
  doi = {http://dx.doi.org/10.1007/s00371-014-1017-3}
}
Braun, A., Wichert, R., Kuijper, A. & Fellner, D.W., (2014), "A Benchmarking Model for Sensors in Smart Environments", Ambient Intelligence, pp.242-257, Springer, Berlin, Heidelberg, New York.
Abstract: In smart environments, developers can choose from a large variety of sensors supporting their use case that have specific advantages or disadvantages. In this work we present a benchmarking model that allows estimating the utility of a sensor technology for a use case by calculating a single score, based on a weighting factor for applications and a set of sensor features. This set takes into account the complexity of smart environment systems that are comprised of multiple subsystems and applied in non-static environments. We show how the model can be used to find a suitable sensor for a use case and the inverse option to find suitable use cases for a given set of sensors. Additionally, extensions are presented that normalize differently rated systems and compensate for central tendency bias. The model is verified by estimating technology popularity using a frequency analysis of associated search terms in two scientific databases.
BibTeX:
@inproceedings{Braun*14LNCS,
  author = {Braun, Andreas and Wichert, Reiner and Kuijper, Arjan and Fellner, Dieter W.},
  title = {A Benchmarking Model for Sensors in Smart Environments},
  booktitle = {Ambient Intelligence},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2014},
  pages = {242-257},
  series = {Lecture Notes in Computer Science (LNCS); 8850},
  doi = {http://dx.doi.org/10.1007/978-3-319-14112-1_20}
}
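
A minimal sketch of such a single-score benchmark: a use-case weight vector times per-sensor feature ratings, usable in both directions (best sensor for a use case, best use case for a sensor). All feature names, ratings, and weights below are invented for illustration:

# Sketch of a single-score sensor benchmark for smart environments.
FEATURES = ["accuracy", "range", "privacy", "cost", "robustness"]

sensors = {
    "camera":     [5, 5, 1, 3, 3],
    "capacitive": [3, 2, 5, 4, 4],
    "pir":        [2, 3, 4, 5, 4],
}

use_case_weights = {            # e.g. a "fall detection at home" use case
    "accuracy": 0.35, "range": 0.15, "privacy": 0.25,
    "cost": 0.10, "robustness": 0.15,
}

def score(ratings):
    return sum(use_case_weights[f] * r for f, r in zip(FEATURES, ratings))

# Forward direction: find the best-suited sensor for the use case ...
best = max(sensors, key=lambda s: score(sensors[s]))
print(best, {s: round(score(r), 2) for s, r in sensors.items()})
# ... the inverse direction (use cases for a given sensor) swaps the roles.
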
Braun, A., Cieslik, S., Zmugg, R., Klein, P., Havemann, S. & Wagner, T., (2014), "V2me - Virtual Coaching for Seniors", Wohnen -- Pflege -- Teilhabe. Besser leben durch Technik. 7. Deutscher AAL-Kongress, pp.5, VDE-Verlag GmbH, Berlin, Offenbach.
Abstract: One challenge of ageing societies is the increasing loneliness of older people. As a consequence of deaths in their social circle, increasingly mobile families, or the move into an assisted-living facility, the opportunities for social interaction change and perceived loneliness increases. This can place a considerable strain on both mind and body. Within the V2me project, a training platform was developed that can play back immersive, virtual training lessons on a range of technical devices and can access various web services. In a first step, this platform was tailored to provide an individual, digital variant of the Friendship Enrichment Program. This intervention programme for seniors teaches skills in group sessions that enable participants to gain new social contacts and to maintain existing contacts better. In this contribution we present the technical infrastructure and describe in detail how it contributes to the stated goals.
BibTeX:
@inproceedings{Braun*14v2me,
  author = {Andreas Braun and Silvana Cieslik and René Zmugg and Peter Klein and Sven Havemann and Tobias Wagner},
  title = {V2me - Virtual Coaching for Seniors},
  booktitle = {Wohnen -- Pflege -- Teilhabe. Besser leben durch Technik. 7. Deutscher AAL-Kongress},
  publisher = {VDE-Verlag GmbH, Berlin, Offenbach},
  year = {2014},
  pages = {5}
}
Caldera, C., Berndt, R., Schröttner, M., Eggeling, E. & Fellner, D., (2014), "PRIMA -- Towards an Automatic Review/Paper Matching Score Calculation", Proceedings of The Sixth International Conference on Creative Content Technologies (CONTENT 2014), pp.70-75.
Abstract: Programme chairs of scientific conferences face tremendous time pressure. One of the most time-consuming steps during the conference workflow is assigning members of the international programme committee (IPC) to the received submissions. Finding the best-suited persons for reviewing strongly depends on how the paper matches the expertise of each IPC member. While various approaches like 'bidding' or 'topic matching' exist to make this expertise explicit, they allocate a considerable amount of resources on the IPC member side. This paper introduces the Paper Rating and IPC Matching Tool (PRIMA), which reduces the workload for both IPC members and chairs to support and improve the assignment process.
BibTeX:
@inproceedings{Caldera*14Content,
  author = {Christian Caldera and René Berndt and Martin Schröttner and Eva Eggeling and Dieter Fellner},
  title = {PRIMA -- Towards an Automatic Review/Paper Matching Score Calculation},
  booktitle = {Proceedings of The Sixth International Conference on Creative Content Technologies (CONTENT 2014)},
  year = {2014},
  pages = {70-75}
}
Caldera, C., Berndt, R., Schröttner, M., Eggeling, E. & Fellner, D.W., (2014), "Mining Bibliographic Data -- Using Author's Publication History for a Brighter Reviewing Future within Conference Management Systems", International Journal On Advances in Intelligent Systems, Vol.7(3-4), pp.609-619.
Abstract: Organizing and managing a conference is a cumbersome and time-consuming task. Electronic conference management systems support reviewers, conference chairs and the International Programme Committee (IPC) members in managing the huge number of submissions. These systems implement the complete workflow of scientific conferences. One of the most time-consuming tasks within a conference is the assignment of IPC members to the submissions. Finding the best-suited person for reviewing a paper strongly depends on the expertise of the IPC member. Various approaches such as bidding or topic matching already exist; however, they allocate a considerable amount of resources on the IPC member side. This article describes what the workflow of a conference looks like and what the challenges for an electronic conference management system are. It takes a close look at the latest version of the Eurographics Submission and Review Management system (SRMv2). Finally, it introduces an extension of SRMv2 called the Paper Rating and IPC Matching Tool (PRIMA), which reduces the workload for both IPC members and chairs to support and improve the assignment process.
BibTeX:
@article{Caldera*14iaria,
  author = {Christian Caldera and René Berndt and Martin Schröttner and Eva Eggeling and Dieter W. Fellner},
  title = {Mining Bibliographic Data -- Using Author's Publication History for a Brighter Reviewing Future within Conference Management Systems},
  journal = {International Journal On Advances in Intelligent Systems},
  year = {2014},
  volume = {7},
  number = {3-4},
  pages = {609-619}
}
Li, B., Lu, Y., Li, C., Godil, A., Schreck, T., Aono, M., Burtscher, M., Chen, Q., Chowdhury, N., Fang, B., Fu, H., Furuya, T., Li, H., Liu, J., Johan, H., Kosaka, R., Koyanagi, H., Ohbuchi, R., Tatsuma, A., Wan, Y., Zhang, C. & Zou, C., (2014), "A comparison of 3D shape retrieval methods based on a large-scale benchmark supporting multimodal queries", Elsevier Computer Vision and Image Understanding, Vol.131, pp.1-27.
Abstract: Large-scale 3D shape retrieval has become an important research direction in content-based 3D shape retrieval. To promote this research area, two Shape Retrieval Contest (SHREC) tracks on large scale comprehensive and sketch-based 3D model retrieval have been organized by us in 2014. Both tracks were based on a unified large-scale benchmark that supports multimodal queries (3D models and sketches). This benchmark contains 13680 sketches and 8987 3D models, divided into 171 distinct classes. It was compiled to be a superset of existing benchmarks and presents a new challenge to retrieval methods as it comprises generic models as well as domain-specific model types. Twelve and six distinct 3D shape retrieval methods have competed with each other in these two contests, respectively. To measure and compare the performance of the participating and other promising Query-by-Model or Query-by-Sketch 3D shape retrieval methods and to solicit state-of-the-art approaches, we perform a more comprehensive comparison of twenty-six (eighteen originally participating algorithms and eight additional state-of-the-art or new) retrieval methods by evaluating them on the common benchmark. The benchmark, results, and evaluation tools are publicly available at our websites.
BibTeX:
@article{cviu14a,
  author = {B. Li and Y. Lu and C. Li and A. Godil and T. Schreck and M. Aono and M. Burtscher and Q. Chen and N. Chowdhury and B. Fang and H. Fu and T. Furuya and H. Li and J. Liu and H. Johan and R. Kosaka and H. Koyanagi and R. Ohbuchi and A. Tatsuma and Y. Wan and C. Zhang and C. Zou},
  title = {A comparison of 3D shape retrieval methods based on a large-scale benchmark supporting multimodal queries},
  journal = {Elsevier Computer Vision and Image Understanding},
  year = {2014},
  volume = {131},
  pages = {1--27},
  doi = {http://dx.doi.org/10.1016/j.cviu.2014.10.006}
}
Edelsbrunner, J., Krispel, U., Havemann, S., Sourin, A. & Fellner, D.W., (2014), "Constructive Roof Geometry", 2014 International Conference on Cyberworlds, pp.63-70, IEEE.
BibTeX:
@inproceedings{Edelsbrunner*14cw,
  author = {Edelsbrunner, Johannes and Krispel, Ulrich and Havemann, Sven and Sourin, Alexei and Fellner, Dieter W.},
  title = {Constructive Roof Geometry},
  booktitle = {2014 International Conference on Cyberworlds},
  publisher = {IEEE},
  year = {2014},
  pages = {63-70},
  doi = {http://dx.doi.org/10.1109/CW.2014.17}
}
Fuhrmann, C., Santos, P. & Fellner, D.W., (2014), "CultLab3D: Ein mobiles 3D-Scanning Szenario für Museen und Galerien", EVA 2014 Berlin. Proceedings, pp.106-109, Gesellschaft zur Förderung angewandter Informatik e.V., Berlin.
Abstract: In the CultLab3D project, cultural heritage artefacts are digitized three-dimensionally and in very high quality. The goal is the development of a novel scanning technology in the form of a mobile digitization laboratory, consisting of flexibly deployable modules for the fast and economical capture of 3D geometry, texture and material properties. In the long term, the quality of the data is intended to meet even scientific demands that so far have required the original artefacts. With regard to effort (including scanning speed), achievable quality and cost, the system is expected to revolutionize the market. Market readiness is expected for 2015.
BibTeX:
@inproceedings{Fuhrmann*14EVA,
  author = {Fuhrmann, Constanze and Santos, Pedro and Fellner, Dieter W.},
  title = {CultLab3D: Ein mobiles 3D-Scanning Szenario für Museen und Galerien},
  booktitle = {EVA 2014 Berlin. Proceedings},
  publisher = {Gesellschaft zur Förderung angewandter Informatik e.V., Berlin},
  year = {2014},
  pages = {106-109}
}
von Landesberger, T., Bremm, S., Schreck, T. & Fellner, D., (2014), "Feature-based Automatic Identification of Interesting Data Segments in Group Movement Data", Sage Information Visualization, Vol.13(3), pp.190-212.
Abstract: The study of movement data is an important task in a variety of domains such as transportation, biology, or finance. Often, the data objects are grouped (e.g. countries by continents). We distinguish three main categories of movement data analysis, based on the focus of the analysis: (a) movement characteristics of an individual in the context of its group, (b) the dynamics of a given group, and (c) the comparison of the behavior of multiple groups. Examination of group movement data can be effectively supported by data analysis and visualization. In this respect, approaches based on analysis of derived movement characteristics (called features in this article) can be useful. However, current approaches are limited as they do not cover a broad range of situations and typically require manual feature monitoring. We present an enhanced set of movement analysis features and add automatic analysis of the features for filtering the interesting parts in large movement data sets. Using this approach, users can easily detect new interesting characteristics such as outliers, trends, and task-dependent data patterns even in large sets of data points over long time horizons. We demonstrate the usefulness with two real-world data sets from the socioeconomic and the financial domains.
BibTeX:
@article{geovativs14,
  author = {T. von Landesberger and S. Bremm and T. Schreck and D. Fellner},
  title = {Feature-based Automatic Identification of Interesting Data Segments in Group Movement Data},
  journal = {Sage Information Visualization},
  year = {2014},
  volume = {13},
  number = {3},
  pages = {190--212},
  doi = {http://dx.doi.org/10.1177/1473871613477851}
}
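
A minimal sketch of the feature-based filtering idea, category (a) above (an individual in the context of its group): derive a movement feature per member, compare it against the group, and flag segments automatically by a z-score threshold. The trajectories, the feature, and the threshold are assumptions of this sketch:

import numpy as np

rng = np.random.default_rng(0)
# 10 group members, 100 time steps, 2D positions (synthetic random walks).
traj = rng.normal(size=(10, 100, 2)).cumsum(axis=1)

speed = np.linalg.norm(np.diff(traj, axis=1), axis=2)   # shape (10, 99)
group_mean = speed.mean(axis=0)                         # group dynamics

# Feature: individual vs. group -- deviation from the group mean speed.
dev = speed - group_mean
z = (dev - dev.mean()) / dev.std()
outliers = np.argwhere(np.abs(z) > 3.0)   # interesting (member, time) pairs

print(len(outliers), "interesting (member, time) segments flagged")
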
Grabner, H., Ullrich, T. & Fellner, D.W., (2014), "Content-based Retrieval of 3D Models using Generative Modeling Techniques", GCH 2014. Short Papers - Posters, pp.10-12, Eurographics Association.
Abstract: In this paper we present a novel 3D model retrieval approach based on generative modeling techniques. In our approach, generative models are created by domain experts in order to describe 3D model classes. These generative models span a shape space, of which a number of training samples is taken at random. The samples are used to train content-based retrieval methods. With a trained classifier, techniques based on semantic enrichment can be used to index a repository. Furthermore, as our method uses solely generative 3D models in the training phase, it eliminates the cold start problem. We demonstrate the effectiveness of our method by testing it against the Princeton Shape Benchmark.
BibTeX:
@inproceedings{Grabner*14LNCS,
  author = {Grabner, Harald and Ullrich, Torsten and Fellner, Dieter W.},
  title = {Content-based Retrieval of 3D Models using Generative Modeling Techniques},
  booktitle = {GCH 2014. Short Papers - Posters},
  publisher = {Eurographics Association},
  year = {2014},
  pages = {10-12},
  doi = {http://dx.doi.org/10.2312/gch.20141317}
}
Hadjiprocopis, A., Wenzel, K., Rothermel, M., Ioannides, M., Fritsch, D., Klein, M., Johnsons, P.S., Weinlinger, G., Doulamis, A., Protopapadakis, E., Kyriakaki, G., Makantasis, K., Fellner, D.W., Stork, A. & Santos, P., (2014), "Cloud-based 3D Reconstruction of Cultural Heritage Monuments using Open Access Image Repositories", Eurographics Workshop on Graphics and Cultural Heritage, GCH 2014. Short Papers - Posters, pp.5-8, Eurographics Association, Goslar.
Abstract: A large number of photographs of cultural heritage items and monuments is publicly available in various Open Access Image Repositories (OAIR) and social media sites. Metadata inserted by camera, user and host site may help to determine the photograph content, geo-location and date of capture, thus allowing us, with relative success, to localise photos in space and time. Additionally, developments in Photogrammetry and Computer Vision, such as Structure from Motion (SfM), provide a simple and cost-effective method of generating relatively accurate camera orientations and sparse and dense 3D point clouds from 2D images. Our main goal is to provide a software tool able to run on desktop or cluster computers or as a back end of a cloud-based service, enabling historians, architects, archaeologists and the general public to search, download and reconstruct 3D point clouds of historical monuments from hundreds of images from the web in a cost-effective manner. The end products can be further enriched with metadata and published. This paper describes a workflow for searching and retrieving photographs of historical monuments from OAIR, such as Flickr and Picasa, and using them to build dense point clouds using SfM and dense image matching techniques. Computational efficiency is improved by a technique which reduces image matching time by using an image connectivity prior derived from low-resolution versions of the original images. Benchmarks for two large datasets showing the respective efficiency gains are presented.
BibTeX:
@inproceedings{Hadjiprocopis*14GCH,
  author = {Hadjiprocopis, Andreas and Wenzel, Konrad and Rothermel, Mathias and Ioannides, Marinos and Fritsch, Dieter and Klein, Michael and Johnsons, Paul S. and Weinlinger, Guenther and Doulamis, Anastasios and Protopapadakis, Eftychios and Kyriakaki, Georgia and Makantasis, Kostas and Fellner, Dieter W. and Stork, André and Santos, Pedro},
  title = {Cloud-based 3D Reconstruction of Cultural Heritage Monuments using Open Access Image Repositories},
  booktitle = {Eurographics Workshop on Graphics and Cultural Heritage, GCH 2014. Short Papers - Posters},
  publisher = {Eurographics Association, Goslar},
  year = {2014},
  pages = {5-8},
  doi = {http://dx.doi.org/10.2312/gch.20141317}
}
Janetzko, H., Jäckle, D. & Schreck, T., (2014), "Geo-Temporal Visual Analysis of Customer Feedback Data Based on Self-Organizing Sentiment Maps", International Journal On Advances in Intelligent Systems, Vol.7(1 and 2), pp.237-246, International Academy, Research, and Industry Association (IARIA).
Abstract: The success of a company often depends on the quality of its Customer Relationship Management (CRM). Knowledge about customers' concerns and needs can be a huge advantage over competitors but is hard to gain. Large amounts of textual feedback from customers via surveys or emails have to be manually processed, condensed, and forwarded to decision makers. As this process is quite expensive and error-prone, CRM data is in practice often neglected. We therefore propose an automatic analysis and visualization approach helping analysts find interesting patterns. We combine opinion mining with the geospatial location of a review to enable a context-aware analysis of the CRM data. Instead of overwhelming the user by showing the details first, we visually group similar patterns together and aggregate them by applying Self-Organizing Maps in an interactive analysis application. We extend this approach by integrating temporal and seasonal analyses showing these influences on the CRM data. Our technique is able to cope with unstructured customer feedback data and shows location dependencies of significant terms and sentiments. The capabilities of our approach are shown in a case study using real-world customer feedback data, exploring and describing interesting findings.
BibTeX:
@article{Janetzko*14imm,
  author = {H. Janetzko and D. Jäckle and T. Schreck},
  title = {Geo-Temporal Visual Analysis of Customer Feedback Data Based on Self-Organizing Sentiment Maps},
  journal = {International Journal On Advances in Intelligent Systems},
  publisher = {International Academy, Research, and Industry Association (IARIA)},
  year = {2014},
  volume = {7},
  number = {1 and 2},
  pages = {237--246}
}
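
A minimal sketch of the Self-Organizing Map step described above, using the third-party minisom package; the 20-dimensional review feature vectors and the 8x8 grid size are assumptions of this sketch:

import numpy as np
from minisom import MiniSom   # third-party package: pip install minisom

# Cluster per-review term/sentiment vectors on a 2D grid so that
# similar feedback lands in nearby cells. The vectors below stand in
# for real opinion-mining features.
rng = np.random.default_rng(0)
reviews = rng.random((1000, 20))          # hypothetical feature vectors

som = MiniSom(8, 8, 20, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(reviews, num_iteration=5000)

# Which grid cell does each review map to?
cells = np.array([som.winner(r) for r in reviews])
# A heatmap of cell counts gives the map overview; drilling into a cell
# reveals the underlying reviews (and, in the paper, their geo-locations).
counts = np.zeros((8, 8), dtype=int)
for x, y in cells:
    counts[x, y] += 1
print(counts)
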
Janetzko, H., Sacha, D., Stein, M., Schreck, T., Keim, D. & Deussen, O., (2014), "Feature-Driven Visual Analytics of Soccer Data", Proc. IEEE Conference on Visual Analytics Science and Technology, pp.13-22.
Abstract: Soccer is one of the most popular sports today and also very interesting from a scientific point of view. We present a system for analyzing high-frequency position-based soccer data at various levels of detail, allowing analysts to interactively explore and analyze movement features and game events. Our Visual Analytics method covers single-player, multi-player and event-based analytical views. Depending on the task, the most promising features are semi-automatically selected, processed, and visualized. Our aim is to help soccer analysts find the most important and interesting events in a match. We present a flexible, modular, and expandable layer-based system allowing in-depth analysis. The integration of Visual Analytics techniques into the analysis process enables the analyst to find interesting events based on classification and allows, through a set of custom views, communicating the found results. The feedback loop in the Visual Analytics pipeline helps to further improve the classification results. We evaluate our approach by investigating real-world soccer matches and collecting additional expert feedback. Several use cases and findings illustrate the capabilities of our approach.
BibTeX:
@inproceedings{Janetzko*14VAST,
  author = {H. Janetzko and D. Sacha and M. Stein and T. Schreck and D. Keim and O. Deussen},
  title = {Feature-Driven Visual Analytics of Soccer Data},
  booktitle = {Proc. IEEE Conference on Visual Analytics Science and Technology},
  year = {2014},
  pages = {13-22},
  doi = {http://dx.doi.org/10.1109/VAST.2014.7042477}
}
Klein, R., Santos, P., Fellner, D.W. & Scopigno, R. (ed.) (2014), "GCH 2014: Eurographics Workshop on Graphics and Cultural Heritage", Eurographics Association, Goslar.
Abstract: The focus of this year's forum is to present and showcase new developments within the overall process chain, from data acquisition, 3D documentation, analysis and synthesis, semantic modelling, data management, to the point of virtual museums or new forms of interactive presentations and 3D printing solutions. GCH 2014 therefore provides scientists, engineers and CH managers with an opportunity to discuss new ICT technologies applied to data modelling, reconstruction and processing, digital libraries, virtual museums, interactive environments and applications for CH, ontologies and semantic processing, management and archiving, standards and documentation, as well as their transfer into practice.
BibTeX:
@proceedings{Klein*14GCH,
  editor = {Klein, Reinhard and Santos, Pedro and Fellner, Dieter W. and Scopigno, Roberto},
  title = {GCH 2014: Eurographics Workshop on Graphics and Cultural Heritage},
  publisher = {Eurographics Association, Goslar},
  year = {2014},
  note = {ISBN 978-3-905674-63-7, ISSN 2312-6124; available from //diglib.eg.org/handle/10.2312/7755/}
}
Knöbelreiter, P., Berndt, R., Ullrich, T. & Fellner, D.W., (2014), "Automatic Fly-through Camera Animations for 3D Architectural Repositories", GRAPP 2014, pp.335-341, SciTePress.
Abstract: Virtual fly-through animations through computer-generated models are a strong tool for conveying the properties and appearance of these models. In architectural models, for example, the big advantage of such a fly-through animation is that it conveys the structure of the model easily. However, generating a path that yields a good-looking animation is not always trivial. The approach proposed in this paper can handle arbitrary 3D models and extract a meaningful, good-looking camera path from them. To visualize the path, HTML/X3DOM is used, so the final result can be viewed in any browser with X3DOM support.
BibTeX:
@inproceedings{Knoebelreiter*14GRAPP,
  author = {Knöbelreiter, Patrick and Berndt, Rene and Ullrich, Torsten and Fellner, Dieter W.},
  title = {Automatic Fly-through Camera Animations for 3D Architectural Repositories},
  booktitle = {GRAPP 2014},
  publisher = {SciTePress},
  year = {2014},
  pages = {335-341},
  doi = {http://dx.doi.org/10.5220/0004703502090217}
}
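
The final smoothing step of such a pipeline can be sketched as follows, assuming waypoints have already been extracted (the paper's extraction step is not reproduced here); the spline interpolation and the look-ahead look-at target are choices of this sketch:

import numpy as np
from scipy.interpolate import CubicSpline

# Invented waypoints through a scene; in practice these come from the
# path-extraction stage.
waypoints = np.array([[0, 0, 2], [4, 1, 2.5], [8, -1, 3], [12, 0, 2]])
t = np.linspace(0, 1, len(waypoints))
path = CubicSpline(t, waypoints, axis=0)     # C2-continuous camera curve

ts = np.linspace(0, 1, 200)                  # animation samples
positions = path(ts)                         # (200, 3) camera positions
# Aim the camera slightly ahead along the path for a natural look-at.
look_at = path(np.clip(ts + 0.02, 0, 1))
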
Krispel, U., Ullrich, T. & Fellner, D., (2014), "Fast and Exact Plane Based Representation for Polygonal Meshes", Proceedings of the International Conferences of CGVCVIP, pp.189-196.
Abstract: Boolean operations on meshes tend to be non-robust, due to the rounding of newly constructed vertex coordinates. Plane-based mesh representations are known to circumvent the problem for meshes with planar faces: geometric information is stored by face equations, and vertices (as well as newly constructed vertices) are expressed as plane triplets. We first review the properties of plane-based mesh representations, discuss a variant that is optimized for fast evaluation using fixed integer precision, and give some practical insights on implementing search structures for indexing of planes and vertices in this representation.
BibTeX:
@inproceedings{Krispel*14Iadis,
  author = {Ulrich Krispel and Torsten Ullrich and Dieter Fellner},
  title = {Fast and Exact Plane Based Representation for Polygonal Meshes},
  booktitle = {Proceedings of the International Conferences of CGVCVIP},
  year = {2014},
  pages = {189-196}
}
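
The exactness argument above can be made concrete: a vertex stored as a plane triplet is recovered by solving a 3x3 system, and with rational arithmetic no rounding ever occurs. A minimal sketch in Python (the paper's fixed-integer-precision variant is not reproduced here):

from fractions import Fraction

def intersect(p1, p2, p3):
    # Each plane is ((a, b, c), d) with integer coefficients, meaning
    # a*x + b*y + c*z = d. Cramer's rule with exact 3x3 determinants.
    A = [list(map(Fraction, p[0])) for p in (p1, p2, p3)]
    d = [Fraction(p[1]) for p in (p1, p2, p3)]
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(A)
    if D == 0:
        raise ValueError("planes do not meet in a single point")
    coords = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = d[i]
        coords.append(det(M) / D)
    return tuple(coords)   # exact rational vertex coordinates

# Unit-cube corner: the planes x = 1, y = 1, z = 1 meet at (1, 1, 1).
print(intersect(((1, 0, 0), 1), ((0, 1, 0), 1), ((0, 0, 1), 1)))
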
Krispel, U., Schinko, C. & Ullrich, T., (2014), "The Rules Behind -- Tutorial on Generative Modeling", Proceedings of Symposium on Geometry Processing / Graduate School, Symposium on Geometry Processing, SGP 2014, Vol.12, pp.2:1-2:49.
Abstract: This tutorial introduces the concepts and techniques of generative modeling. It starts with some introductory examples in the first learning unit to motivate the main idea: to describe a shape using an algorithm. After the explanation of technical terms, the second unit focuses on technical details of algorithm descriptions, programming languages, grammars and compiler construction, which play an important role in generative modeling. The purely geometric aspects are covered by the third learning unit. It covers the concepts of geometric building blocks and advanced modeling operations. Notes on semantic modeling aspects -- i.e. the meaning of a shape -- complete this unit and introduce the inverse problem: What is the perfect generative description for a real object? The answer to this question is discussed in the fourth learning unit, while its application is shown (among other applications of generative and inverse-generative modeling) in the fifth unit. The discussion of open research questions concludes this tutorial. The assumed background knowledge of the audience comprises basics of computer science (including algorithm design and the principles of programming languages) as well as a general knowledge of computer graphics. The tutorial takes approximately 120 minutes and enables the attendees to take an active part in future research on generative modeling.
BibTeX:
@inproceedings{Krispel*14SGP,
  author = {Krispel, Ulrich and Schinko, Christoph and Ullrich, Torsten},
  title = {The Rules Behind -- Tutorial on Generative Modeling},
  booktitle = {Symposium on Geometry Processing, SGP 2014},
  journal = {Proceedings of Symposium on Geometry Processing / Graduate School},
  year = {2014},
  volume = {12},
  pages = {2:1-2:49}
}
Ladenhauf, D., Berndt, R., Eggeling, E., Ullrich, T., Battisti, K. & Gratzl-Michlmair, M., (2014), "From Building Information Models to Simplified Geometries for Energy Performance Simulation", Proceeding of the International Academic Conference on Places and Technologies, pp.669-676.
Abstract: A major future challenge in the building industry is to reduce primary energy use of buildings. EU law now requires energy performance certificates to be issued for all buildings. Hence, energy performance simulation becomes an increasingly important topic. Accurate, yet efficient simulation depends on simple building models. Most of the required data can be found in Building Information Models (BIM), following the buildingSMART alliance's Industry Foundation Classes (IFC) schema. IFC has become an ISO standard and enjoys increasing support by CAD software. However, typical IFC models contain a lot of irrelevant data, in particular geometric representations, which are too detailed for energy performance simulation. Therefore, an algorithm is proposed for extracting input models for simulations directly from IFC models in a semi-automatic process, to overcome the current situation where simple models are manually built from scratch. The key aspect of the algorithm is geometry simplification subject to semantic and functional groups; more specifically, the 3D representations of walls, slabs, windows, doors, etc. are reduced to a collection of surfaces describing the building's thermal shell on one hand, and the material layers associated with it on the other hand. Test models from simple fictitious houses to complex models of real-world buildings have been provided to guide the development of the algorithm in an incremental manner. This paper presents the resulting algorithm and the current status of prototype software implementing it.
BibTeX:
@inproceedings{Ladenhauf*14PT,
  author = {Daniel Ladenhauf and René Berndt and Eva Eggeling and Torsten Ullrich and Kurt Battisti and Markus Gratzl-Michlmair},
  title = {From Building Information Models to Simplified Geometries for Energy Performance Simulation},
  booktitle = {Proceeding of the International Academic Conference on Places and Technologies},
  year = {2014},
  pages = {669-676}
}
Landesberger, T. v., Fiebig, S., Bremm, S., Kuijper, A. & Fellner, D.W., Huang, W. (ed.) (2014), "Interaction Taxonomy for Tracking of User Actions in Visual Analytics Applications", Handbook of Human Centric Visualization, pp.653-670, Springer, Berlin, Heidelberg, New York.
Abstract: In various application areas (social science, transportation, or medicine) analysts need to gain knowledge from large amounts of data. This analysis is often supported by interactive Visual Analytics tools that combine automatic analysis with interactive visualization. Such a data analysis process is not streamlined, but consists of several steps and feedback loops. In order to be able to optimize the process, and to identify problems or common problem-solving strategies, recording and reproducibility of this process is needed. This is facilitated by tracking user actions categorized according to a taxonomy of interactions. Visual Analytics includes several means of interaction that are differentiated according to three fields: information visualization, reasoning, and data processing. At present, however, only separate taxonomies for interaction techniques exist in these three fields. Each taxonomy covers only a part of the actions undertaken in Visual Analytics. Moreover, as they use different foundations (user intentions vs. user actions) and employ different terminology, it is not clear to what extent they overlap and cover the whole Visual Analytics interaction space. We therefore first compare them and then elaborate a new integrated taxonomy in the context of Visual Analytics. In order to show the usability of the new taxonomy, we instantiate it for visual graph analysis and apply it to the tracking of user interactions in this area.
BibTeX:
@incollection{Landesberger*14HHCV,
  author = {Landesberger, Tatiana von and Fiebig, Sebastian and Bremm, Sebastian and Kuijper, Arjan and Fellner, Dieter W.},
  editor = {Huang, Weidong},
  title = {Interaction Taxonomy for Tracking of User Actions in Visual Analytics Applications},
  booktitle = {Handbook of Human Centric Visualization},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2014},
  pages = {653-670},
  doi = {http://dx.doi.org/10.1007/978-1-4614-7485-2_26}
}
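A taxonomy of this kind becomes useful in practice when user actions are recorded against its categories. A minimal sketch of such tracking, with the three fields named in the abstract as placeholder categories (the chapter's actual taxonomy is more fine-grained):

from dataclasses import dataclass, field
from enum import Enum
from time import time

class Category(Enum):
    VISUALIZATION = "information visualization"
    REASONING = "reasoning"
    DATA_PROCESSING = "data processing"

@dataclass
class Action:
    category: Category
    name: str
    timestamp: float = field(default_factory=time)

log: list[Action] = []

def track(category: Category, name: str) -> None:
    """Record a user action so the analysis process can be reproduced."""
    log.append(Action(category, name))

track(Category.VISUALIZATION, "zoom")
track(Category.DATA_PROCESSING, "cluster")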
Lex, C., Eichberger, A., Koglbauer, I., Schinko, C., Holzinger, Jü., Battel, M., Bliem, N. & Sternat, A., (2014), "Bewertung von Fahrerassistenzsystemen von Normalfahrerinnen und Normalfahrern im Realversuch", 4. Jahrestagung der GMTTB (Gesellschaft für Medizinisch Technische Traumabiomechanik), pp.to appear.
BibTeX:
@inproceedings{Lex*14GMTTB,
  author = {Cornelia Lex and Arno Eichberger and Ioana Koglbauer and Christoph Schinko and Jürgen Holzinger and Mario Battel and Norbert Bliem and Anton Sternat},
  title = {Bewertung von Fahrerassistenzsystemen von Normalfahrerinnen und Normalfahrern im Realversuch},
  booktitle = {4. Jahrestagung der GMTTB (Gesellschaft für Medizinisch Technische Traumabiomechanik)},
  year = {2014},
  pages = {to appear}
}
Li, B., Lu, Y., Godil, A., Schreck, T., Bustos, B., Ferreira, A., Furuya, T., Fonseca, M., Johan, H., Matsuda, T., Ohbuchi, R., Pascoal, P. & Saavedra, J., (2014), "A comparison of methods for sketch-based 3D shape retrieval", Elsevier Computer Vision and Image Understanding, Vol.119, pp.57-80.
Abstract: Sketch-based 3D shape retrieval has become an important research topic in content-based 3D object retrieval. To foster this research area, two Shape Retrieval Contest (SHREC) tracks on this topic were organized by us in 2012 and 2013, based on a small-scale and a large-scale benchmark, respectively. Six and five (nine in total) distinct sketch-based 3D shape retrieval methods competed against each other in these two contests. To measure and compare the performance of the top participating and other existing promising sketch-based 3D shape retrieval methods, and to solicit state-of-the-art approaches, we perform a more comprehensive comparison of the fifteen best retrieval methods (four top participating algorithms and eleven additional state-of-the-art methods) by completing the evaluation of each method on both benchmarks. The benchmarks, results, and evaluation tools for the two tracks are publicly available on our websites.
BibTeX:
@article{Li*14cviu,
  author = {B. Li and Y. Lu and A. Godil and T. Schreck and B. Bustos and A. Ferreira and T. Furuya and M. Fonseca and H. Johan and T. Matsuda and R. Ohbuchi and P. Pascoal and J. Saavedra},
  title = {A comparison of methods for sketch-based 3D shape retrieval},
  journal = {Elsevier Computer Vision and Image Understanding},
  year = {2014},
  volume = {119},
  pages = {57--80},
  doi = {http://dx.doi.org/10.1016/j.cviu.2013.11.008}
}
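Comparisons of this kind boil down to computing effectiveness measures from a query-to-target distance matrix. A hedged sketch of one standard measure, nearest-neighbor accuracy, on synthetic data (the SHREC tracks use a larger suite of measures):

import numpy as np

def nearest_neighbor_accuracy(dist, query_labels, target_labels):
    """dist: (n_queries, n_targets) distance matrix; labels are class ids."""
    nn = np.argmin(dist, axis=1)           # closest target per query
    return float(np.mean(target_labels[nn] == query_labels))

rng = np.random.default_rng(0)
dist = rng.random((5, 20))                  # stand-in distances
q = rng.integers(0, 4, 5)                   # stand-in query classes
t = rng.integers(0, 4, 20)                  # stand-in target classes
print(nearest_neighbor_accuracy(dist, q, t))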
Limper, M., Thöner, M., Behr, J. & Fellner, D.W., (2014), "SRC -- A Streamable Format for Generalized Web-based 3D Data Transmission", Proceedings of the Nineteenth International ACM Conference on 3D Web Technologies - Web3D 2014, pp.35-43, ACM.
Abstract: A problem that still remains with today's technologies for 3D asset transmission is the lack of progressive streaming of all relevant mesh and texture data with a minimal number of HTTP requests. Existing solutions, like glTF or X3DOM's geometry formats, either send all data within a single batch, or they introduce an unnecessarily large number of requests. Furthermore, there is still no established format for a joined, interleaved transmission of geometry data and texture data. Within this paper, we propose a new container file format, entitled Shape Resource Container (SRC). Our format is optimized for progressive, Web-based transmission of 3D mesh data with a minimum number of HTTP requests. It is highly configurable, and more powerful and flexible than previous formats, as it enables a truly progressive transmission of geometry data, partial sharing of geometry between meshes, direct GPU uploads, and an interleaved transmission of geometry and texture data. We also demonstrate how our new mesh format, as well as a wide range of other mesh formats, can be conveniently embedded in X3D scenes, using a new, minimalistic X3D ExternalGeometry node.
BibTeX:
@inproceedings{Limper*14Web3D,
  author = {Limper, Max and Thöner, Maik and Behr, Johannes and Fellner, Dieter W.},
  title = {SRC -- A Streamable Format for Generalized Web-based 3D Data Transmission},
  booktitle = {Proceedings of the Nineteenth International ACM Conference on 3D Web Technologies - Web3D 2014},
  publisher = {ACM},
  year = {2014},
  pages = {35--43},
  doi = {http://dx.doi.org/10.1145/2628588.2628589}
}
Nazemi, K., Kuijper, A., Hutter, M., Kohlhammer, Jö. & Fellner, D.W., (2014), "Measuring Context Relevance for Adaptive Semantics Visualizations", i-KNOW 2014. Proceedings, pp.8, ACM, New York.
Abstract: Semantics visualizations enable the acquisition of information to amplify the acquisition of knowledge. The dramatic increase of semantics in the form of Linked Data and Linked Open Data yields search databases that allow visualizing the entire context of search results. The visualization of this semantic context enables one to gather more information at once, but the complex structures may also confuse and frustrate users. To overcome these problems, adaptive visualizations already provide some useful methods to adapt the visualization to users' demands and skills. Although these methods are very promising, these systems do not investigate the relevance of semantic neighboring entities, which commonly carry most of the information value. We introduce two new measurements for the relevance of neighboring entities: the Inverse Instance Frequency allows weighting the relevance of semantic concepts based on the number of their instances. The Direct Relation Frequency inverse Relations Frequency measures the relevance of neighboring instances by the type of semantic relations. Both measurements provide a weighting of the neighboring entities of a selected semantic instance and enable an adaptation of retinal variables for the visualized graph. The algorithms can easily be integrated into adaptive visualizations and enhance them with a relevance measurement for neighboring semantic entities. We give a detailed description of the algorithms to enable replication by the adaptive and semantics visualization community. With our method, one can now easily derive the relevance of neighboring semantic entities of selected instances, and thus gain more information at once, without confusing and frustrating users.
BibTeX:
@inproceedings{Nazemi*14iKnow,
  author = {Nazemi, Kawa and Kuijper, Arjan and Hutter, Marco and Kohlhammer, Jörn and Fellner, Dieter W.},
  title = {Measuring Context Relevance for Adaptive Semantics Visualizations},
  booktitle = {i-KNOW 2014. Proceedings},
  publisher = {ACM, New York},
  year = {2014},
  pages = {8},
  series = {ACM International Conference Proceedings Series; 889},
  doi = {http://dx.doi.org/10.1145/2637748.2638416}
}
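The Inverse Instance Frequency is described by analogy with inverse document frequency from text retrieval. A minimal sketch under that reading; the authors' exact formula may differ:

import math

def inverse_instance_frequency(n_total_instances, n_concept_instances):
    """Concepts with few instances receive a higher relevance weight."""
    return math.log(n_total_instances / max(1, n_concept_instances))

print(inverse_instance_frequency(10000, 25))    # rare concept -> high weight
print(inverse_instance_frequency(10000, 5000))  # common concept -> low weight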
Santos, P., Ritz, M., Tausch, R., Schmedt, H., Monroy Rodriguez, R., Stefano, A., Posniak, O., Fuhrmann, C. & Fellner, D.W., (2014), "CultLab3D - On the Verge of 3D Mass Digitization", Eurographics Symposium on Graphics and Cultural Heritage (GCH) 2014, pp.65-73, Eurographics Association, Goslar.
Abstract: Acquisition of 3D geometry, texture and optical material properties of real objects still consumes a considerable amount of time, and forces humans to dedicate their full attention to this process. We propose CultLab3D, an automatic modular 3D digitization pipeline, aiming for efficient mass digitization of 3D geometry, texture, and optical material properties. CultLab3D requires minimal human intervention and reduces processing time to a fraction of today's efforts for manual digitization. The final step in our digitization workflow involves the integration of the digital object into enduring 3D Cultural Heritage Collections together with the available semantic information related to the object. In addition, a software tool facilitates virtual, location-independent analysis and publication of the virtual surrogates of the objects, and encourages collaboration between scientists all around the world. The pipeline is designed in a modular fashion and allows for further extensions to incorporate newer technologies. For instance, by switching scanning heads, it is possible to acquire coarser or more refined 3D geometry.
BibTeX:
@inproceedings{Santos*14GCH,
  author = {Santos, Pedro and Ritz, Martin and Tausch, Reimar and Schmedt, Hendrik and Monroy Rodriguez, Rafael and Stefano, Antonio and Posniak, Oliver and Fuhrmann, Constanze and Fellner, Dieter W.},
  title = {CultLab3D - On the Verge of 3D Mass Digitization},
  booktitle = {Eurographics Symposium on Graphics and Cultural Heritage (GCH) 2014},
  publisher = {Eurographics Association, Goslar},
  year = {2014},
  pages = {65-73},
  doi = {http://dx.doi.org/10.2312/gch.20141305}
}
Santos, P., Peña Serna, S., Stork, A. & Fellner, D.W., Ioannides, M. & Quak, E. (ed.) (2014), "The Potential of 3D Internet in the Cultural Heritage Domain", 3D Research Challenges in Cultural Heritage, pp.1-17, Springer, Berlin, Heidelberg, New York.
Abstract: Europe is rich in cultural heritage, but unfortunately many of the tens of millions of artifacts remain in archives. Many of these resources have been collected to preserve our history and to understand their historical context. Nevertheless, CH institutions are neither able to document all the collected resources nor to exhibit them. Additionally, many of these CH resources are unique and will be on public display only occasionally. Hence, access to and engagement with this kind of cultural resource is important for European culture and the legacy of future generations. However, the technology needed to economically mass digitize and annotate 3D artifacts, in analogy to the digitization and annotation of books and paintings, has yet to be developed. Likewise, approaches to semantic enrichment and storage of 3D models along with metadata are just emerging. This paper presents challenges and trends to overcome the latter issues and demonstrates the latest developments for the annotation of 3D artifacts and their subsequent export to Europeana, the European digital library, for integrated, interactive 3D visualization within regular web browsers, taking advantage of technologies such as WebGL and X3D.
BibTeX:
@incollection{Santos*14LNCS,
  author = {Santos, Pedro and Peña Serna, Sebastian and Stork, André and Fellner, Dieter W.},
  editor = { Marinos Ioannides and Ewald Quak},
  title = {The Potential of 3D Internet in the Cultural Heritage Domain},
  booktitle = {3D Research Challenges in Cultural Heritage},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2014},
  pages = {1-17},
  series = {Lecture Notes in Computer Science (LNCS); 8355},
  doi = {http://dx.doi.org/10.1007/978-3-662-44630-0_5}
}
Schiffer, T. & Fellner, D.W., (2014), "Efficient Multi-kernel Ray Tracing for GPUs", GRAPP 2014, pp.209-218, SciTePress.
Abstract: Images with high visual quality are often generated by a ray tracing algorithm. Despite its conceptual simplicity, designing an efficient mapping of ray tracing computations to massively parallel hardware architectures is a challenging task. In this paper we investigate the performance of state-of-the-art ray traversal algorithms for bounding volume hierarchies on GPUs and discuss their potentials and limitations. Based on this analysis, a novel ray traversal scheme called batch tracing is proposed. It decomposes the task into multiple kernels, each of which is designed for efficient parallel execution. Our algorithm achieves comparable performance to currently prevailing approaches and represents a promising avenue for future research.
BibTeX:
@inproceedings{Schiffer-Fellner14GRAPP,
  author = {Schiffer, Thomas and Fellner, Dieter W.},
  title = {Efficient Multi-kernel Ray Tracing for GPUs},
  booktitle = {GRAPP 2014},
  publisher = {SciTePress},
  year = {2014},
  pages = {209-218},
  doi = {http://dx.doi.org/10.5220/0004703502090217}
}
Schinko, C., Ullrich, T. & Fellner, D.W., (2014), "Modeling with High-Level Descriptions and Low-Level Details", Proceedings of the International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing, pp.328-332.
BibTeX:
@inproceedings{Schinko*14mccsis,
  author = {Schinko, Christoph and Ullrich, Torsten and Fellner, Dieter W.},
  title = {Modeling with High-Level Descriptions and Low-Level Details},
  booktitle = {Proceedings of the International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing},
  year = {2014},
  pages = {328-332},
  series = {8}
}
Schinko, C., Berndt, R., Eggeling, E. & Fellner, D., (2014), "A Scalable Rendering Framework for Generative 3D Content", Proceedings of the Nineteenth International ACM Conference on 3D Web Technologies, pp.81-87, ACM.
Abstract: Delivering high quality 3D content through a web browser is still a challenge, especially when intellectual property (IP) protection is necessary. Thus, the transfer of 3D modeling information to a client should be avoided. In our work we present a solution to this problem by introducing a server-side rendering framework. Only images are transferred to the client, the actual 3D content is not delivered. By providing simple proxy geometry it is still possible to provide direct interaction on the client. Our framework incorporates the Generative Modeling Language (GML) for the description and rendering of generative content. It is then possible to not only interact with the 3D content, but to modify the actual shape within the possibilities of the generative content. By introducing a control layer and encapsulating processing and rendering of the generative content in a so called GML Rendering Unit (GRU) it is possible to provide a scalable rendering framework.
BibTeX:
@inproceedings{Schinko*14Web3D,
  author = {Christoph Schinko and René Berndt and Eva Eggeling and Dieter Fellner},
  title = {A Scalable Rendering Framework for Generative 3D Content},
  booktitle = {Proceedings of the Nineteenth International ACM Conference on 3D Web Technologies},
  publisher = {ACM},
  year = {2014},
  pages = {81-87},
  doi = {http://dx.doi.org/10.1145/2628588.2628601}
}
Schrom-Feiertag, H., Schinko, C., Settgast, V. & Seer, S., (2014), "Evaluation of Guidance Systems in Public Infrastructures Using Eye Tracking in an Immersive Virtual Environment", Proceedings of the 2nd International Workshop on Eye Tracking for Spatial Research (ET4S2014), pp.62-66, CEUR Workshop Proceedings.
Abstract: In this paper we present a novel method to evaluate guidance systems and navigation solutions for public infrastructures, based on an immersive virtual environment in combination with a mobile eye tracking system. It opens new opportunities for wayfinding studies in public infrastructures already during the planning phase. Our approach embeds an interface for natural locomotion in the virtual environment, offering significant advantages related to the user's spatial perception as well as physical and cognitive demands. Accurate measurements of position, locomotion and gaze within the virtual environment greatly simplify the analysis of eye tracking data. We conducted a study in which participants completed virtual and real-world scenarios within a train station. First results exhibit similar behaviour of participants in the real and virtual environments, confirming the comparability and applicability of our method.
BibTeX:
@inproceedings{Schrom-Feiertag*14ET4S,
  author = {Helmut Schrom-Feiertag and Christoph Schinko and Volker Settgast and Stefan Seer},
  title = {Evaluation of Guidance Systems in Public Infrastructures Using Eye Tracking in an Immersive Virtual Environment},
  booktitle = {Proceedings of the 2nd International Workshop on Eye Tracking for Spatial Research (ET4S2014)},
  publisher = {CEUR Workshop Proceedings},
  year = {2014},
  pages = {62-66}
}
Silva, N., Settgast, V., Eggeling, E., Grill, F., Zeh, T. & Fellner, D.W., (2014), "Sixth Sense -- Air Traffic Control Prediction Scenario Augmented by Sensors", International Conference on Knowledge Management and Knowledge Technologies (I-KNOW) , pp.4, ACM, New York.
Abstract: This paper focuses on the fault tolerance of Human Machine Interfaces in the field of air traffic control (ATC), achieved by accepting the user's overall body language as input. We describe work in progress in the Sixth Sense project. Interaction patterns are inferred from the combination of a recommendation and inference engine, the analysis of several graph database relationships, and aggregations of raw data from multiple sensors. Altogether, these techniques allow us to reason about the different possible meanings of the user's current interaction and cognitive state. The results obtained from applying different machine learning techniques will be used to make recommendations and predictions on the user's actions. They are currently monitored and rated by a human supervisor.
BibTeX:
@inproceedings{Silva*14iKnow,
  author = {Silva, Nelson and Settgast, Volker and Eggeling, Eva and Grill, Florian and Zeh, Theodor and Fellner, Dieter W.},
  title = {Sixth Sense -- Air Traffic Control Prediction Scenario Augmented by Sensors},
  booktitle = {International Conference on Knowledge Management and Knowledge Technologies (I-KNOW) },
  publisher = {ACM, New York},
  year = {2014},
  pages = {4},
  series = {ACM International Conference Proceedings Series; 889},
  doi = {http://dx.doi.org/10.1145/2637748.2638441}
}
Sipiran, I., Gregor, R. & Schreck, T., (2014), "Approximate Symmetry Detection in Partial 3D Meshes", Wiley Computer Graphics Forum, Vol.33(7), pp.131-140.
Abstract: Symmetry is a common characteristic of natural and man-made objects. Its ubiquitous nature can be exploited to facilitate the analysis and processing of computational representations of real objects. In particular, in computer graphics, the detection of symmetries in 3D geometry has enabled a number of applications in modeling and reconstruction. However, the problem of symmetry detection in incomplete geometry remains a challenging task. In this paper, we propose a vote-based approach to detect symmetry in 3D shapes, with special interest in models with large missing parts. Our algorithm generates a set of candidate symmetries by matching local maxima of a surface function based on heat diffusion in local domains, which guarantees robustness to missing data. In order to deal with local perturbations, we propose a multi-scale surface function that is useful for selecting a set of distinctive points over which the approximate symmetries are defined. In addition, we introduce a vote-based scheme that is aware of the partiality and therefore reduces the number of false positive votes for the candidate symmetries. We show the effectiveness of our method on a varied set of 3D shapes and different levels of partiality. Furthermore, we show the applicability of our algorithm to the repair and completion of challenging reassembled objects in the context of cultural heritage.
BibTeX:
@article{Sipiran*14cgf,
  author = {I. Sipiran and R. Gregor and T. Schreck},
  title = {Approximate Symmetry Detection in Partial 3D Meshes},
  journal = {Wiley Computer Graphics Forum},
  year = {2014},
  volume = {33},
  number = {7},
  pages = {131--140},
  doi = {http://dx.doi.org/10.1111/cgf.12481}
}
Sipiran, I., Meruane, R., Bustos, B., Schreck, T., Li, B., Lu, Y. & Johan, H., (2014), "A benchmark of simulated range images for partial shape retrieval", The Visual Computer, Vol.30, pp.1293 - 1308.
Abstract: In this paper, we address the evaluation of algorithms for partial shape retrieval using a large-scale simulated benchmark of partial views, which are used as queries. Since the scanning of real objects is a time-consuming task, we create a simulation that generates a set of views from a target model at different levels of complexity (amount of missing data). In total, our benchmark contains 7,200 partial views. Furthermore, we propose the use of weighted effectiveness measures based on the complexity of a query. With these characteristics, we aim at jointly evaluating the effectiveness, efficiency and robustness of existing algorithms. As a result of our evaluation, we found that a combination of methods provides the best effectiveness, mainly due to the complementary information that they deliver. The obtained results open new questions regarding the difficulty of the partial shape retrieval problem. As a consequence, potential future directions are also identified.
BibTeX:
@article{Sipiran*14tvc,
  author = {I. Sipiran and R. Meruane and B. Bustos and T. Schreck and B. Li and Y. Lu and H. Johan},
  title = {A benchmark of simulated range images for partial shape retrieval},
  journal = {The Visual Computer},
  year = {2014},
  volume = {30},
  pages = {1293 -- 1308},
  note = {Peer-reviewed article, published online May 08, 2014},
  doi = {http://dx.doi.org/10.1007/s00371-014-0937-2}
}
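The proposed weighted effectiveness measures weight each query by its complexity, i.e., the amount of missing data. A toy sketch of this idea; the paper's concrete weighting scheme may differ:

def weighted_effectiveness(scores, complexities):
    """scores: per-query effectiveness in [0, 1];
    complexities: per-query fraction of missing data in [0, 1]."""
    total = sum(complexities)
    return sum(s * c for s, c in zip(scores, complexities)) / total

# Harder queries (more missing data) contribute more to the measure.
print(weighted_effectiveness([0.9, 0.6, 0.4], [0.2, 0.5, 0.8]))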
Stenin, I., Hansen, S., Becker, M., Sakas, G., Fellner, D.W., Klenzner, T. & Schipper, Jö., (2014), "Minimally Invasive Multiport Surgery of the Lateral Skull Base", BioMed Research International, pp.7.
Abstract: Objective: Minimally invasive procedures minimize iatrogenic tissue damage and lead to a lower complication rate and high patient satisfaction. To date, only experimental minimally invasive single-port approaches to the lateral skull base have been attempted. The aim of this study was to verify the feasibility of a minimally invasive multiport approach for advanced manipulation capability and visual control, and to develop a software tool for preoperative planning. Methods: Anatomical 3D models were extracted from twenty regular temporal bone CT scans. Collision-free trajectories, targeting the internal auditory canal, round window, and petrous apex, were simulated with a specially designed planning software tool. A set of three collision-free trajectories was selected by skull base surgeons, maximizing the distance to critical structures and the angles between the trajectories. Results: A set of three collision-free trajectories could be successfully simulated to the three targets in each temporal bone model without violating critical anatomical structures. Conclusion: A minimally invasive multiport approach to the lateral skull base is feasible. The developed software is the first step towards preoperative planning. Further studies will focus on cadaveric and clinical translation.
BibTeX:
@article{Stenin*14BioMed,
  author = {Stenin, Igor and Hansen, Stefan and Becker, Meike and Sakas, Georgios and Fellner, Dieter W. and Klenzner, Thomas and Schipper, Jörg},
  title = {Minimally Invasive Multiport Surgery of the Lateral Skull Base},
  journal = {BioMed Research International},
  year = {2014},
  pages = {7},
  doi = {http://dx.doi.org/10.1155/2014/379295}
}
Sturm, W., Berndt, R., Halm, A., Ullrich, T., Eggeling, E. & Fellner, D.W., (2014), "Time-based Visualization of Large Data-Sets: An Example in the Context of Automotive Engineering", ThinkMind - International Journal On Advances in Software, Vol.7(1-2), pp.139-149.
BibTeX:
@article{Sturm*14TM,
  author = {Werner Sturm and René Berndt and Andreas Halm and Torsten Ullrich and Eva Eggeling and Dieter W. Fellner},
  title = {Time-based Visualization of Large Data-Sets: An Example in the Context of Automotive Engineering},
  journal = {ThinkMind - International Journal On Advances in Software},
  year = {2014},
  volume = {7},
  number = {1-2},
  pages = {139-149}
}
Ullrich, T. & Fellner, D.W., (2014), "Statistical Analysis on Global Optimization", MCSI 2014, pp.99-106, IEEE Computer Society Conference Publishing Services (CPS), Los Alamitos, Calif..
Abstract: The global optimization of a mathematical model determines the best parameters such that a target or cost function is minimized. Optimization problems arise in almost all scientific disciplines (operations research, life sciences, etc.). Only in a few exceptional cases can these problems be solved analytically and exactly, so in practice numerical routines based on approximations have to be used. The routines return a result - a so-called candidate for a global minimum. Unfortunately, the question of whether the candidate represents the optimal solution often remains unanswered. This article presents a simple-to-use statistical analysis that determines and assesses the quality of such a result. This information is valuable and important - especially for practical applications.
BibTeX:
@inproceedings{Ullrich-Fellner*14MCSI,
  author = {Ullrich, Torsten and Fellner, Dieter W.},
  title = {Statistical Analysis on Global Optimization},
  booktitle = {MCSI 2014},
  publisher = {IEEE Computer Society Conference Publishing Services (CPS), Los Alamitos, Calif.},
  year = {2014},
  pages = {99-106},
  doi = {http://dx.doi.org/10.1109/MCSI.2014.15}
}
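A generic way to approach the question raised in the abstract is to collect statistics over many random restarts of a local optimizer and examine how often the best candidate value recurs. The following is a multistart sketch in that spirit, not the specific statistical test derived in the article:

import numpy as np
from scipy.optimize import minimize

def f(x):  # Rastrigin-like test function with many local minima
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(1)
results = [minimize(f, rng.uniform(-5, 5, 2)).fun for _ in range(200)]
best = min(results)
share = np.mean(np.isclose(results, best, atol=1e-6))
print(f"best value {best:.6f} reached in {share:.1%} of restarts")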
Wanner, F., Schreck, T., Jentner, W., Sharalieva, L. & Keim, D., (2014), "Relating Interesting Quantitative Time Series Patterns with Text Events and Text Features", Proc. IS&T/SPIE Conference on Visualization and Data Analysis, Vol.90170G.
Abstract: In many application areas, the key to successful data analysis is the integrated analysis of heterogeneous data. One example is the financial domain, where time-dependent and highly frequent quantitative data (e.g., trading volume and price information) and textual data (e.g., economic and political news reports) need to be considered jointly. Data analysis tools need to support an integrated analysis, which allows studying the relationships between textual news documents and quantitative properties of stock market price series. In this paper, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which reflect quantitative phenomena observed in stock data. To support such an analysis, we combine the analysis steps for frequent quantitative and text-oriented data using an existing a-priori method. First, based on heuristics, we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a-priori method supports the discovery of such sequential temporal patterns. Then, various text features, like the degree of sentence nesting, noun phrase complexity, vocabulary richness, etc., are extracted from the news to obtain meta patterns. Meta patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time-, cluster- and sequence visualization and analysis functionality. We provide two case studies showing the effectiveness of our combined quantitative and textual analysis workflow. The workflow can also be generalized to other application domains, such as data analysis of smart grids, cyber-physical systems or the security of critical infrastructure, where the data consists of a combination of quantitative and textual time series data.
BibTeX:
@inproceedings{Wanner*14spie,
  author = {F. Wanner and T. Schreck and W. Jentner and L. Sharalieva and D. Keim},
  title = {Relating Interesting Quantitative Time Series Patterns with Text Events and Text Features},
  booktitle = {Proc. IS&T/SPIE Conference on Visualization and Data Analysis},
  year = {2014},
  volume = {90170G},
  doi = {http://dx.doi.org/10.1117/12.2039639}
}
Weber, D., Mueller-Roemer, J., Altenhofen, C., Stork, A. & Fellner, D.W., (2014), "A p-Multigrid Algorithm using Cubic Finite Elements for Efficient Deformation Simulation", VRIPHYS 14: 11th Workshop in Virtual Reality Interactions and Physical Simulations, pp.49-58, Eurographics Association, Goslar.
Abstract: We present a novel p-multigrid method for the efficient simulation of co-rotational elasticity with higher-order finite elements. In contrast to other multigrid methods proposed for volumetric deformation, the resolution hierarchy is realized by varying polynomial degrees on a tetrahedral mesh. We demonstrate the efficiency of our approach and compare it to commonly used direct sparse solvers and preconditioned conjugate gradient methods. As the polynomial representation is defined w.r.t. the same mesh, the update of the matrix hierarchy necessary for co-rotational elasticity can be computed efficiently. We introduce the use of cubic finite elements for volumetric deformation and investigate different combinations of polynomial degrees for the hierarchy. We analyze the applicability of cubic finite elements to deformation simulation by comparing against analytical results in a static scenario, and demonstrate our algorithm in dynamic simulations with quadratic and cubic elements. Applying our method to quadratic and cubic finite elements results in a speedup of up to a factor of 7 for solving the linear system.
BibTeX:
@inproceedings{Weber*14VRIPHYS,
  author = {Weber, Daniel and Mueller-Roemer, Johannes and Altenhofen, Christian and Stork, André and Fellner, Dieter W.},
  title = {A p-Multigrid Algorithm using Cubic Finite Elements for Efficient Deformation Simulation},
  booktitle = {VRIPHYS 14: 11th Workshop in Virtual Reality Interactions and Physical Simulations},
  publisher = {Eurographics Association, Goslar},
  year = {2014},
  pages = {49-58},
  doi = {http://dx.doi.org/10.2312/vriphys.20141223}
}
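The core of any multigrid cycle, including a p-multigrid one, is smoothing on the fine level, a coarse-level correction, and prolongation back. A minimal two-grid sketch with damped Jacobi smoothing, assuming the prolongation matrix P between polynomial levels is given (e.g., with A_coarse = P.T @ A_fine @ P); this illustrates the cycle structure only, not the paper's co-rotational update:

import numpy as np

def two_grid_step(A_fine, A_coarse, P, b, x, n_smooth=3, omega=0.5):
    D = np.diag(A_fine)                            # Jacobi diagonal
    for _ in range(n_smooth):                      # damped Jacobi smoothing
        x = x + omega * (b - A_fine @ x) / D
    r_coarse = P.T @ (b - A_fine @ x)              # restrict the residual
    e_coarse = np.linalg.solve(A_coarse, r_coarse) # coarse-level solve
    return x + P @ e_coarse                        # prolongate and correct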
Yoon, S.-M., Yoon, G.-J. & Schreck, T., (2014), "User-Drawn Sketch-based 3D Object Retrieval Using Sparse Coding", Springer Multimedia Tools and Applications.
Abstract: 3D object retrieval from user-drawn (sketch) queries is one of the important research issues in the areas of pattern recognition and computer graphics for simulation, visualization, and Computer Aided Design. The performance of any content-based 3D object retrieval system crucially depends on the availability of effective descriptors and similarity measures for this kind of data. We present a sketch-based approach for improving 3D object retrieval effectiveness by optimizing the representation of one particular type of features (oriented gradients) using a sparse coding approach. We perform experiments, the results of which show that the retrieval quality improves over alternative features and codings. Based on our findings, the coding can be proposed for sketch-based 3D object retrieval systems relying on oriented gradient features.
BibTeX:
@article{Yoon*14mtap,
  author = {S.-M. Yoon and G.-J. Yoon and T. Schreck},
  title = {User-Drawn Sketch-based 3D Object Retrieval Using Sparse Coding},
  journal = {Springer Multimedia Tools and Applications},
  year = {2014},
  note = {Published online: 11 January 2014. Peer-reviewed article},
  doi = {http://dx.doi.org/10.1007/s11042-013-1831-z}
}
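Sparse coding of descriptor vectors, as used above for oriented-gradient features, can be sketched with scikit-learn's SparseCoder; here the dictionary is random and the descriptors are stand-ins, whereas the paper optimizes the representation for real sketch features:

import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
dictionary = rng.normal(size=(64, 128))   # 64 atoms, 128-dim features
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
descriptors = rng.normal(size=(10, 128))  # stand-in gradient descriptors

coder = SparseCoder(dictionary=dictionary,
                    transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)
codes = coder.transform(descriptors)      # sparse codes used for retrieval
print(codes.shape)                        # (10, 64)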
Zmugg, R., Krispel, U., Thaller, W., Havemann, S., Pszeida, M. & Fellner, D.W., (2014), "A New Approach for Interactive Procedural Modelling in Cultural Heritage", Archaeology in the Digital Era: Papers from the 40th Annual Conference of Computer Applications and Quantitative Methods in Archaeology (CAA 2012) , pp.190-204.
BibTeX:
@inproceedings{zmugg*12CAA,
  author = {René Zmugg and Ulrich Krispel and Wolfgang Thaller and Sven Havemann and Martin Pszeida and Dieter W. Fellner},
  title = {A New Approach for Interactive Procedural Modelling in Cultural Heritage},
  booktitle = {Archaeology in the Digital Era: Papers from the 40th Annual Conference of Computer Applications and Quantitative Methods in Archaeology (CAA 2012) },
  year = {2014},
  pages = {190-204}
}
Zmugg, R., Thaller, W., Krispel, U., Edelsbrunner, J., Havemann, S. & Fellner, D.W., (2014), "Procedural Architecture Using Deformation-aware Split Grammars", The Visual Computer, Vol.30(9), pp.1009-1019.
Abstract: With video games growing in scale, manual content creation may no longer be feasible in the future. Split grammars are a promising technology for the large-scale procedural generation of urban structures, which are very common in video games. Buildings with curved parts, however, can currently only be approximated by static pre-modelled assets, and rules apply only to planar surface parts. We present an extension to split grammar systems that allows the creation of curved architecture through the integration of free-form deformations at any level in a grammar. Subsequent split rules can then proceed in two different ways: they can either adapt to these deformations, so that repetitions can adjust to more or less space while maintaining length constraints, or they can split the deformed geometry with straight planes to introduce straight structures on deformed geometry.
BibTeX:
@article{Zmugg*14VisComp,
  author = {Zmugg, René and Thaller, Wolfgang and Krispel, Ulrich and Edelsbrunner, Johannes and Havemann, Sven and Fellner, Dieter W.},
  title = {Procedural Architecture Using Deformation-aware Split Grammars},
  journal = {The Visual Computer},
  year = {2014},
  volume = {30},
  number = {9},
  pages = {1009-1019},
  doi = {http://dx.doi.org/10.1007/s00371-013-0912-3}
}
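The deformation-aware behaviour of repeat rules can be illustrated with a toy computation: the repetition count adapts to the arc length of the deformed segment while a width constraint is maintained. This is a sketch of the principle only, not the grammar system itself:

def adaptive_repeat_count(deformed_length, preferred_width,
                          min_width, max_width):
    """Choose how many repetitions fit the (possibly stretched) segment."""
    n = max(1, round(deformed_length / preferred_width))
    width = deformed_length / n
    if width < min_width:      # elements would become too narrow
        n -= 1
    elif width > max_width:    # elements would become too wide
        n += 1
    return n

print(adaptive_repeat_count(12.7, 2.0, 1.5, 2.5))  # -> 6 repetitions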

2013

Aderhold, A., Jung, Y., Wilkosinska, K. & Fellner, D.W., (2013), "Distributed 3D Model Optimization for the Web with the Common Implementation Framework for Online Virtual Museums", 2013 Digital Heritage International Congress. Vol II, pp.719-726, The Institute of Electrical and Electronics Engineers (IEEE), New York.
Abstract: Internet services are becoming more ubiquitous and 3D graphics is increasingly gaining a strong foothold in the Web technology domain. Recently, with WebGL, real-time 3D graphics in the browser became a reality, and most major browsers support WebGL natively today. This makes it possible to create applications like 3D catalogs of artifacts, or to interactively explore Cultural Heritage objects in a Virtual Museum on mobile devices. Frameworks like the open-source system X3DOM provide declarative access to low-level GPU routines along with seamless integration of 3D graphics into HTML5 applications through standardized Web technologies. Most 3D models also need to be optimized to address concerns like limited network bandwidth or reduced GPU power on mobile devices. Therefore, an online platform for the development of Virtual Museums, with particular attention to the presentation and visualization of Cultural Heritage assets in online virtual museums, was recently proposed. This Common Implementation Framework (CIF) allows the user to upload large 3D models, which are subsequently converted and optimized for web display and embedded in an HTML5 application that can range from a simple interactive display of the model to an entire virtual environment like a virtual walk-through. Generating these various types of applications is done via a templating mechanism, which will be further elaborated within this paper. Moreover, to efficiently convert many large models into an optimized form, a substantial amount of computing power is required, which a single system cannot yet provide in a timely fashion. Therefore, we also describe how the CIF can utilize a dynamically allocated cloud-based or physical cluster of commodity hardware to distribute the workload of model optimization for the Web.
BibTeX:
@inproceedings{Aderhold*13DH,
  author = {Aderhold, Andreas and Jung, Yvonne and Wilkosinska, Katarzyna and Fellner, Dieter W.},
  title = {Distributed 3D Model Optimization for the Web with the Common Implementation Framework for Online Virtual Museums},
  booktitle = {2013 Digital Heritage International Congress. Vol II},
  publisher = {The Institute of Electrical and Electronics Engineers (IEEE), New York},
  year = {2013},
  pages = {719-726}
}
Barmak, K., Eggeling, E., Kinderlehrer, D., Sharp, R., Ta'asan, S., Rollet, A.D. & Coffey, K., (2013), "Grain Growth and the Puzzle of its Stagnation in Thin Films: The Curious Tale of a Tail and an Ear", Progress in Materials Science, pp.195.
Abstract: The underlying cause of stagnation of grain growth in thin metallic films remains a puzzle. Here it is re-visited by means of detailed comparison of experiments and simulations, using a broad range of metrics that, in addition to grain size, includes the number of sides and the average side class of nearest neighbors. The experimental grain size data reported is large and comprises nearly 35,000 grains from 27 thin film samples of Al and Cu with thicknesses in the range of 25 to 158 nm. The size distributions for the Al and Cu films are remarkably similar to each other despite the many and significant differences in experimental conditions, which include sputtering target purity, substrate type, film thickness, deposition temperature, actual as well as homologous annealing temperatures, annealing time, absolute grain size, and the twin density within the grains. This similarity argues for a universal experimental grain size distribution, which for grain diameters is lognormal as found previously for thin films at stagnation. Comparison of the experimental grain size distribution with that for two dimensional grain growth simulations with isotropic boundary energy shows the distributions to differ in two regions, termed the "ear" and the "tail". It is shown that the excess small grains in the region of the "ear" are primarily the 3 and 4-sided grains, whereas the excess of large grains in the "tail" region are grains with more than 9 sides. The excesses in the ear and tail regions of the experimental distributions are necessarily balanced by a deficiency in the mid-sized grains with 6-8 sides. Five causes are examined to identify the puzzling difference between simulations with isotropic boundary energy and experiments. These are (i) driving forces other than grain boundary energy reduction, (ii) anisotropy of grain boundary energy, (iii) grain boundary grooving, (iv) solute drag and (v) triple junction drag. No single cause is seen to provide an explanation for the observed experimental behavior. However, it is speculated that a combination of causes that include the anisotropy of grain boundary energy will be needed to explain the experimental behavior.
BibTeX:
@article{Barmak*13PMS,
  author = {Barmak, Katayun and Eggeling, Eva and Kinderlehrer, David and Sharp, Richard and Ta'asan, S. and Rollet, Anthony D. and Coffey, Kevin},
  title = {Grain Growth and the Puzzle of its Stagnation in Thin Films: The Curious Tale of a Tail and an Ear},
  journal = {Progress in Materials Science},
  year = {2013},
  pages = {195},
  doi = {http://dx.doi.org/10.1016/j.pmatsci.2013.03.004}
}
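For reference, the lognormal form found for the grain-diameter distribution is the standard density below, where mu and sigma are the usual log-space parameters (not values fitted in the paper):

p(d) = \frac{1}{d\,\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln d - \mu)^2}{2\sigma^2}\right), \qquad d > 0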
Bauer, D., Schneckenburger, J., Settgast, V., Millonig, A. & Gartner, G., (2013), "Hands free steering in a virtual world for the evaluation of guidance systems in pedestrian infrastructures: design and validation", Proceedings of the Transportation Research Board 92nd Annual Meeting, pp.13.
Abstract: This paper presents the development and validation of hands free steering in a cave automatic virtual environment (CAVE) designed to make the reactions of pedestrians to guidance information measurable. The navigation uses the Microsoft Kinect to obtain information on the movement of the user. The user walks in place to move forward in the virtual world and turns her shoulders to invoke rotations in the virtual world in order to make turns. After the implementation of the hands free steering, the validity of the model was explored in a case study involving parallel test groups, exposing individuals to wayfinding exercises in the real world and the corresponding virtual world. The results show that the objective distances and times in the real and virtual worlds, as well as the perceptions of distances, times and directions, do not differ statistically significantly, validating the steering model.
BibTeX:
@inproceedings{Bauer*2013TRB,
  author = {Dietmar Bauer and Jasmin Schneckenburger and Volker Settgast and Alex Millonig and Georg Gartner},
  title = {Hands free steering in a virtual world for the evaluation of guidance systems in pedestrian infrastructures: design and validation},
  booktitle = {Proceedings of the Transportation Research Board 92nd Annual Meeting},
  year = {2013},
  pages = {13}
}
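The shoulder-based turning described above amounts to mapping tracked shoulder positions to a yaw angle in the virtual world. A hypothetical sketch of such a mapping (joint coordinates, axes and dead zone are assumptions, not the study's calibration):

import math

def shoulder_yaw(left_shoulder, right_shoulder, dead_zone_deg=5.0):
    """Each shoulder is an (x, z) position in the horizontal tracking plane."""
    dx = right_shoulder[0] - left_shoulder[0]
    dz = right_shoulder[1] - left_shoulder[1]
    yaw = math.degrees(math.atan2(dz, dx))
    # Ignore small shoulder rotations so normal walking does not turn the view.
    return 0.0 if abs(yaw) < dead_zone_deg else yaw

print(shoulder_yaw((-0.2, 0.00), (0.2, 0.07)))  # slight turn -> ~9.9 degrees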
Beetz, J., Dietze, S., Berndt, R. & Tamke, M., (2013), "Towards the Long-Term Preservation of Building Information Models", 30th International Conference on Applications of IT in the AEC Industry (CIB W78 2013), pp.209-217.
Abstract: Long-term preservation of information about artifacts of the built environment is crucial to provide the ability to retrofit legacy buildings, to preserve cultural heritage, to ensure security precautions, to enable the knowledge-reuse of design and engineering solutions, and to guarantee the legal liabilities of all stakeholders (e.g., designers and engineers). Efforts for the digital preservation of information have come a long way, and a number of mature methods, frameworks, guidelines and software systems are at the disposal of librarians and archivists. However, the focus of these developments has primarily been on textual and audio-visual media types. With the recent paradigm shift in architecture and construction from analog 2D plans and scale models to digital 3D information models of buildings, long-term preservation efforts must turn their attention to this new type of data. Currently, no existing approach is able to provide a secure and efficient long-term preservation solution covering the broad spectrum of 3D architectural data, while at the same time taking into account the demands of institutional collectors like architecture libraries and archives as well as those of the private sector, including building industry SMEs, owners, operators and public stakeholders. In this paper, an overview of the challenges of the multi-faceted domain of digital preservation in the built environment is provided. As a contribution to possible solutions to these challenges, the roadmap of the FP7 ICT-2011-9 project 'DURAARK - Durable Architectural Knowledge' is presented. Initial preliminary results of the interdisciplinary working groups within this project, ranging from the ingest and storage of voluminous sets of low-level point-cloud data from laser scans to semantically consistent descriptions of heterogeneous building products and their long-term preservation, are introduced and discussed.
BibTeX:
@inproceedings{Beetz*13CIB,
  author = {Beetz, Jakob and Dietze, Stefan and Berndt, René and Tamke, Martin},
  title = {Towards the Long-Term Preservation of Building Information Models},
  booktitle = {30th International Conference on Applications of IT in the AEC Industry (CIB W78 2013)},
  year = {2013},
  pages = {209-217}
}
Bernard, J., Wilhelm, N., Krüger, B., May, T., Schreck, T. & Kohlhammer, J., (2013), "MotionExplorer: Exploratory Search in Human Motion Capture Data Based on Hierarchical Aggregation", IEEE Transactions on Visualization and Computer Graphics (Proc. VAST 2013), Vol.19(12), pp.2257-2266.
Abstract: We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users who are researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.
BibTeX:
@article{Bernard*13vast,
  author = {J. Bernard and N. Wilhelm and B. Krüger and T. May and T. Schreck and J. Kohlhammer},
  title = {MotionExplorer: Exploratory Search in Human Motion Capture Data Based on Hierarchical Aggregation},
  journal = {IEEE Transactions on Visualization and Computer Graphics (Proc. VAST 2013)},
  year = {2013},
  volume = {19},
  number = {12},
  pages = {2257--2266},
  doi = {http://dx.doi.org/10.1109/TVCG.2013.178}
}
Bockholt, U., Wientapper, F., Wuest, H. & Fellner, D.W., (2013), "Augmented-Reality-basierte Interaktion mit Smartphone-Systemen zur Unterstützung von Servicetechnikern", at - Automatisierungstechnik(11), pp.793-799.
Abstract: The use of smartphone systems requires new interaction paradigms that can process the integrated sensors (GPS, inertial, compass) and that in particular benefit from the integrated video camera used to capture the environment. In this context, Augmented Reality offers high potential to support industrial maintenance and service procedures.
BibTeX:
@article{Bockholt*13AT,
  author = {Bockholt, Ulrich and Wientapper, Folker and Wuest, Harald and Fellner, Dieter W.},
  title = {Augmented-Reality-basierte Interaktion mit Smartphone-Systemen zur Unterstützung von Servicetechnikern},
  journal = {at - Automatisierungstechnik},
  year = {2013},
  number = {11},
  pages = {793-799},
  doi = {http://dx.doi.org/10.1524/auto.2013.1056}
}
Caldera, C., Berndt, R. & Fellner, D.W., (2013), "COMFy -- A Conference Management Framework", International Conference on Electronic Publishing, pp.45-54.
Abstract: Organizing the peer review process for a scientific conference can be a cumbersome task. Electronic conference management systems support chairs and reviewers in managing the huge amount of submissions. These systems implement the complete workflow of a scientific conference. We present a new approach to such systems: by providing an open API framework instead of a closed system, it enables external programs to harvest and utilize open information sources available on the Internet today.
BibTeX:
@inproceedings{Caldera*13ELPUB,
  author = {Caldera, Christian and Berndt, René and Fellner, Dieter W.},
  title = {COMFy -- A Conference Management Framework},
  booktitle = {International Conference on Electronic Publishing},
  year = {2013},
  pages = {45-54},
  doi = {http://dx.doi.org/10.3233/978-1-61499-270-7-45}
}
Chen, S., Amid, D., Shir, O., Boaz, D., Schreck, T., Limonad, L. & Anaby-Tavor, A., (2013), "SOMMOS - Self-Organizing Maps for Multi-Objective Pareto Frontiers", Proc. IEEE Pacific Visualization, pp.8.
Abstract: Decision makers often need to take into account multiple conflicting objectives when selecting a solution for their problem. This can result in a potentially large number of candidate solutions to be considered. Visualizing a Pareto Frontier, the optimal set of solutions to a multi-objective problem, is considered a difficult task when the problem at hand spans more than three objective functions. We introduce a novel visual-interactive approach to facilitate coping with multi-objective problems. We propose a characterization of the Pareto Frontier data and the tasks decision makers face as they reach their decisions. Following a comprehensive analysis of the design alternatives, we show how a semantically-enhanced Self-Organizing Map can be utilized to meet the identified tasks. We argue that our newly proposed design provides both a consistent orientation of the 2D mapping as well as an appropriate visual representation of individual solutions. We then demonstrate its applicability with two real-world multi-objective case studies. We conclude with a preliminary empirical evaluation and a qualitative usefulness assessment.
BibTeX:
@inproceedings{Chen*13pacificvis,
  author = {S. Chen and D. Amid and O. Shir and D. Boaz and T. Schreck and L. Limonad and A. Anaby-Tavor},
  title = {SOMMOS - Self-Organizing Maps for Multi-Objective Pareto Frontiers},
  booktitle = {Proc. IEEE Pacific Visualization},
  year = {2013},
  pages = {8},
  doi = {http://dx.doi.org/10.1109/PacificVis.2013.6596140}
}
Dietze, S., Beetz, J., Gadiraju, U., Katsimpras, G., Wessel, R. & Berndt, R., (2013), "Towards Preservation of semantically enriched Architectural Knowledge", 3rd International Workshop on Semantic Digital Archives (SDA 2013), pp.4-15.
Abstract: Preservation of architectural knowledge faces substantial challenges, most notably due to the high level of data heterogeneity. On the one hand, low-level architectural models range from 3D models and point cloud data up to richer building information models (BIM), often residing in isolated data stores with insufficient support for ensuring consistency and managing change. On the other hand, the Web contains vast amounts of information of potential relevance for stakeholders in the architectural field, such as urban planners, architects or building operators. This includes in particular Linked Data, offering structured data about, for instance, energy-efficiency policies, geodata or traffic and environmental information, but also valuable knowledge which can be extracted from social media, for instance, about peoples' movements in and around buildings or their perception of certain structures. In this paper we provide an overview of our early work towards building a sustainable, semantic long-term archive in the architectural domain. In particular, we highlight ongoing activities on the semantic enrichment of low-level architectural models towards the curation of a semantic archive of architectural knowledge.
BibTeX:
@inproceedings{Dietze*2013SDA,
  author = {Dietze, Stefan and Beetz, Jakob and Gadiraju, Ujwal and Katsimpras, Georgios and Wessel, Raoul and Berndt, René},
  title = {Towards Preservation of semantically enriched Architectural Knowledge},
  booktitle = {3rd International Workshop on Semantic Digital Archives (SDA 2013)},
  year = {2013},
  pages = {4-15}
}
Doulamis, A., Ioannides, M., Doulamis, N., Hadjiprocopis, A., Fritsch, D., Balet, O., Julien, M., Protopapadakis, E., Makantasis, K., Weinlinger, G., Johnsons, P.S., Klein, M., Fellner, D.W., Stork, A. & Santos, P., (2013), "4D Reconstruction of the Past", First International Conference on Remote Sensing and Geoinformation of the Environment, pp.87950J-1-87950J-11, SPIE Press, Bellingham.
Abstract: One of the main characteristics of the Internet era we are living in, is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Search engines can search text for keywords using algorithms of varied intelligence and with limited success. Searching images is a much more complex and computationally intensive task but some initial steps have already been made in this direction, mainly in face recognition. This paper aims to describe our proposed pipeline for integrating data available on Internet repositories and social media, such as photographs, animation and text to produce 3D models of archaeological monuments as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web.
BibTeX:
@inproceedings{Doulamis*13spie,
  author = {Doulamis, Anastasios and Ioannides, Marinos and Doulamis, Nikolaos and Hadjiprocopis, Andreas and Fritsch, Dieter and Balet, Olivier and Julien, Martine and Protopapadakis, Eftychios and Makantasis, Kostas and Weinlinger, Guenther and Johnsons, Paul S. and Klein, Michael and Fellner, Dieter W. and Stork, André and Santos, Pedro},
  title = {4D Reconstruction of the Past},
  booktitle = {First International Conference on Remote Sensing and Geoinformation of the Environment},
  publisher = {SPIE Press, Bellingham},
  year = {2013},
  pages = {87950J-1-87950J-11},
  series = {Proceedings of SPIE; 8795},
  doi = {http://dx.doi.org/10.1117/12.2029010}
}
Eggeling, E., Fellner, D.W., Halm, A. & Ullrich, T., (2013), "Optimization of an Autostereoscopic Display for a Driving Simulator", GRAPP 2013 - IVAPP 2013, pp.318-326, SciTePress, Portugal.
Abstract: In this paper, we present an algorithm to optimize a 3D stereoscopic display based on parallax barriers for a driving simulator. The main purpose of the simulator is to enable user studies in reproducible laboratory conditions to test and evaluate driving assistance systems. The main idea of our optimization approach is to determine, by numerical analysis, the barrier pattern with the best image separation for each eye, integrated into a virtual reality environment. Our implementation uses a differential evolution algorithm, a direct search method based on evolution strategies, because it converges fast and is inherently parallel. This allows execution on a network of computers. The resulting algorithm optimizes the display and its corresponding pattern such that a single user in the simulator environment sees a stereoscopic image without needing special eye-wear.
BibTeX:
@inproceedings{Eggeling*13GrappIvapp,
  author = {Eggeling, Eva and Fellner, Dieter W. and Halm, Andreas and Ullrich, Torsten},
  title = {Optimization of an Autostereoscopic Display for a Driving Simulator},
  booktitle = {GRAPP 2013 - IVAPP 2013},
  publisher = {SciTePress, Portugal},
  year = {2013},
  pages = {318-326}
}
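Differential evolution itself is available off the shelf; a minimal example of the same algorithm family via SciPy, with a stand-in objective (the paper's actual objective, the image-separation quality of the barrier pattern, is far more involved):

import numpy as np
from scipy.optimize import differential_evolution

def objective(x):  # stand-in for the image-separation cost
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(20 * x).sum()

bounds = [(0, 1)] * 4                 # box-constrained parameter domain
result = differential_evolution(objective, bounds, seed=0)
print(result.x, result.fun)           # best pattern parameters and cost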
Eggeling, E., Fellner, D.W. & Ullrich, T., (2013), "Probability of Globality", World Academy of Science, Engineering and Technology, Vol.73, pp.483-487, WASET.
Abstract: The objective of global optimization is to find the globally best solution of a model. Nonlinear models are ubiquitous in many applications and their solution often requires a global search approach; i.e., for a function $f$ from a set $A \subset \mathbb{R}^n$ to the real numbers, an element $x_0 \in A$ is sought such that $\forall x \in A : f(x_0) \leq f(x)$. Depending on the field of application, the question whether a found solution $x_0$ is not only a local minimum but a global one is very important. This article presents a probabilistic approach to determine the probability of a solution being a global minimum. The approach is independent of the global search method used and only requires a limited, convex parameter domain $A$ as well as a Lipschitz continuous function $f$ whose Lipschitz constant does not need to be known.
BibTeX:
@inproceedings{Eggeling13*WASET,
  author = {Eggeling, Eva and Fellner, Dieter W. and Ullrich, Torsten},
  title = {Probability of Globality},
  booktitle = {World Academy of Science, Engineering and Technology},
  publisher = {WASET},
  year = {2013},
  volume = {73},
  pages = {483-487}
}
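To make the setting concrete, one can sample the bounded, convex domain A and measure how often a candidate value is undercut. This sketch only illustrates the problem setting; the article derives an actual probability statement from the Lipschitz assumption:

import numpy as np

def undercut_rate(f, candidate_value, low, high, n=100000, seed=0):
    """Fraction of uniform samples from the box [low, high] beating the candidate."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(low, high, size=(n, len(low)))
    values = np.apply_along_axis(f, 1, samples)
    return float(np.mean(values < candidate_value))

f = lambda x: (x[0] - 0.5) ** 2 + (x[1] + 0.2) ** 2
print(undercut_rate(f, 0.01, [-1, -1], [1, 1]))  # near zero for a good candidate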
Fellner, D.W., (2013), "Nachhaltige Digitalisierung von Kulturgut -- Warum wir trotz aller Anstrengungen noch am Anfang stehen", Informatik Spektrum, Vol.36(5), pp.429-430.
BibTeX:
@article{Fellner13infspect,
  author = {Dieter W. Fellner},
  title = {Nachhaltige Digitalisierung von Kulturgut -- Warum wir trotz aller Anstrengungen noch am Anfang stehen},
  journal = {Informatik Spektrum},
  year = {2013},
  volume = {36},
  number = {5},
  pages = {429-430},
  doi = {http://dx.doi.org/10.1007/s00287-013-0732-x}
}
Franke, T., Settgast, V., Behr, J. & Raffin, B., Posada, J., Brutzman, D.P., Gracanin, D., Yoo, B. & Oyarzun, D. (ed.) (2013), "VCoRE: a web resource oriented architecture for efficient data exchange", The 18th International Conference on Web3D Technology, Web3D, pp.71-78.
BibTeX:
@inproceedings{Franke*13Web3D,
  author = {Tobias Franke and Volker Settgast and Johannes Behr and Bruno Raffin},
  editor = {Jorge Posada and Donald P. Brutzman and Denis Gracanin and Byounghyun Yoo and David Oyarzun},
  title = {VCoRE: a web resource oriented architecture for efficient data exchange},
  booktitle = {The 18th International Conference on Web3D Technology, Web3D},
  year = {2013},
  pages = {71-78},
  doi = {http://dx.doi.org/10.1145/2466533.2466545}
}
Havemann, S., Edelsbrunner, J., Wagner, P. & Fellner, D.W., (2013), "Curvature-Controlled Curve Editing Using Piecewise Clothoid Curves", Computers & Graphics, Vol.37(6), pp.764-773.
Abstract: Two-dimensional curves are conventionally designed using splines or Bézier curves. Although formally they are $C^2$ or higher, the variation of the curvature of (piecewise) polynomial curves is difficult to control; in some cases it is practically impossible to obtain the desired curvature. As an alternative we propose piecewise clothoid curves (PCCs). We show that from the design point of view they have many advantages: control points are interpolated, curvature extrema lie in the control points, and adding control points does not change the curve. We present a fast, localized clothoid interpolation algorithm that can also be used for curvature smoothing, curve fitting, curvature blending, and even for directly editing the curvature. We give a physical interpretation of variational curvature minimization, from which we derive our scheme. Finally, we demonstrate the achievable quality with a range of examples.
BibTeX:
@article{Havemann*13CG,
  author = {Havemann, Sven and Edelsbrunner, Johannes and Wagner, Philipp and Fellner, Dieter W.},
  title = {Curvature-Controlled Curve Editing Using Piecewise Clothoid Curves},
  journal = {Computers & Graphics},
  year = {2013},
  volume = {37},
  number = {6},
  pages = {764-773},
  doi = {http://dx.doi.org/10.1016/j.cag.2013.05.017}
}
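For readers unfamiliar with clothoids: a clothoid (Euler spiral) segment is the curve whose curvature varies linearly with arc length. In standard notation (the textbook definition, not the paper's PCC interpolation scheme):

\kappa(s) = \kappa_0 + c\,s, \qquad
\theta(s) = \theta_0 + \kappa_0\,s + \tfrac{c}{2}\,s^2,

\begin{pmatrix} x(s) \\ y(s) \end{pmatrix}
= \begin{pmatrix} x_0 \\ y_0 \end{pmatrix}
+ \int_0^s \begin{pmatrix} \cos\theta(t) \\ \sin\theta(t) \end{pmatrix} \mathrm{d}t.

Curvature is therefore trivially controlled, while evaluating positions requires Fresnel-type integrals, which is why fast localized interpolation schemes such as the paper's are of interest.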
Havemann, S., Wagener, O. & Fellner, D.W., Ioannides, M. & Quak, E. (ed.) (2013), "Procedural shape modeling in digital humanities: Potentials and issues", 3D Challenges in Cultural Heritage, Vol.8355, pp.64-77, Springer.
Abstract: Procedural modeling is a technology that has great potential to make the abundant variety of shapes that have to be dealt with in Digital Humanities accessible and understandable. There is a gap, however, between technology on the one hand and the needs and requirements of the users in the Humanities community. In this paper we analyze the reasons for the limited uptake of procedural modeling and sketch possible ways to circumvent the problem. The key insight is that we have to find matching concepts in both fields, which are on the one hand grounded in the way shape is explained, e.g., in art history, but which can also be formalized to make them accessible to digital computers.
BibTeX:
@incollection{Havemann*13LNCS,
  author = {Havemann, Sven and Wagener, Olaf and Fellner, Dieter W.},
  editor = {Ioannides, Marinos and Quak, Ewald},
  title = {Procedural shape modeling in digital humanities: Potentials and issues},
  booktitle = {3D Challenges in Cultural Heritage},
  publisher = {Springer},
  year = {2013},
  volume = {8355},
  pages = {64-77},
  series = {Lecture Notes in Computer Science (LNCS)},
  doi = {http://dx.doi.org/10.1007/978-3-662-44630-0_5}
}
Janetzko, H., Jäckle, D. & Schreck, T., (2013), "Comparative Visual Analysis of Large Customer Feedback Based on Self-Organizing Sentiment Maps", Proc. International Conference on Advances in Information Mining and Management, pp.12-17, IARIA.
Abstract: Textual customer feedback data, e.g., received by surveys or incoming customer email notifications, can be a rich source of information with many applications in Customer Relationship Management (CRM). Nevertheless, to date this valuable source of information is often neglected in practice, as service managers would have to read manually through potentially large amounts of feedback text documents to extract actionable information. As a purely manual approach is not feasible in many cases, we propose an automatic visualization technique to enable the geospatial-aware visual comparison of customer feedback. Our approach is based on integrating geospatial significance calculations, textual sentiment analysis, and visual clustering and aggregation based on Self-Organizing Maps in an interactive analysis application. Showing significant location dependencies of key concepts and sentiments expressed by the customer feedback, our approach helps to deal with large unstructured customer feedback data. We apply our technique to real-world customer feedback data in a case study, showing the capabilities of our method by highlighting interesting findings.
BibTeX:
@inproceedings{Janetzko*IMMM,
  author = {H. Janetzko and D. Jäckle and T. Schreck},
  title = {Comparative Visual Analysis of Large Customer Feedback Based on Self-Organizing Sentiment Maps},
  booktitle = {Proc. International Conference on Advances in Information Mining and Management},
  publisher = {IARIA},
  year = {2013},
  pages = {12-17},
  note = {Peer-reviewed full paper, online publication}
}
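The Self-Organizing Map underlying the sentiment maps of the entry above can be stated compactly. The sketch below is a generic online SOM trainer in Python, not the paper's sentiment-specific pipeline; grid size and the learning-rate and neighborhood schedules are illustrative assumptions:

import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Generic online Self-Organizing Map training."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr, sigma = lr0 * (1 - t), sigma0 * (1 - t) + 0.5
            # best-matching unit: grid cell with the closest prototype vector
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)),
                                   (h, w))
            # Gaussian neighborhood pulls nearby prototypes toward the sample
            d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            nbh = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * nbh * (x - weights)
            step += 1
    return weights

After training, similar feedback documents (as feature vectors) map to nearby grid cells, which is what makes the map usable as a visual clustering and aggregation layout.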
Kahn, S., Bockholt, U., Kuijper, A. & Fellner, D.W., (2013), "Towards Precise Real-Time 3D Difference Detection for Industrial Applications", Computers in Industry, Vol.64(9), pp.1115-1128.
Abstract: 3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of this object. We present an approach for 3D difference detection with a hand-held depth camera. In contrast to previous approaches, the presented approach detects geometric differences in real-time and from arbitrary viewpoints. The 3D difference detection accuracy is improved by two means: first, the precision of the depth camera's pose estimation is improved by coupling the depth camera with a high-precision industrial measurement arm. Second, the influence of the depth measurement noise is reduced by integrating a 3D surface reconstruction algorithm. The effects of both enhancements are quantified by a ground-truth based quantitative evaluation, both for a time-of-flight (SwissRanger 4000) and a structured-light depth camera (Kinect). With the proposed enhancements, differences of a few millimeters can be detected from a measurement distance of 1 m.
BibTeX:
@article{Kahn*13CI,
  author = {Kahn, Svenja and Bockholt, Ulrich and Kuijper, Arjan and Fellner, Dieter W.},
  title = {Towards Precise Real-Time 3D Difference Detection for Industrial Applications},
  journal = {Computers in Industry},
  year = {2013},
  volume = {64},
  number = {9},
  pages = {1115-1128},
  doi = {http://dx.doi.org/10.1016/j.compind.2013.04.004}
}
Kahn, S., Keil, J., Müller, B., Bockholt, U. & Fellner, D.W., (2013), "Capturing of Contemporary Dance for Preservation and Presentation of Choreographies in Online Scores", 2013 Digital Heritage International Congress. Vol I, pp.273-280, The Institute of Electrical and Electronics Engineers (IEEE), New York.
Abstract: In this paper, we present a generic and affordable approach for automated and markerless capturing of movements in dance, developed in the Motion Bank / The Forsythe Company project (www.motionbank.org). Within Motion Bank we consider the complete digitization workflow, starting with the setup of the camera array and ending with a web-based presentation of "Online Scores" visualizing different elements of choreography. Within our project, we have used our technology in two modern dance projects: one "Large Motion Space Performance" covering a large stage in solos and trios, and one "Restricted Motion Space Performance" that is suited to be captured with range cameras. The project is realized in close cooperation with different choreographers and dance companies of modern ballet and with multi-media artists creating the visual representations of dance.
BibTeX:
@inproceedings{Kahn*13DH,
  author = {Kahn, Svenja and Keil, Jens and Müller, Benedikt and Bockholt, Ulrich and Fellner, Dieter W.},
  title = {Capturing of Contemporary Dance for Preservation and Presentation of Choreographies in Online Scores},
  booktitle = {2013 Digital Heritage International Congress. Vol I},
  publisher = {The Institute of Electrical and Electronics Engineers (IEEE), New York},
  year = {2013},
  pages = {273-280}
}
Keim, D., Krstajic, M., Rohrdantz, C. & Schreck, T., (2013), "Real-Time Visual Analytics for Text Streams", IEEE Computer, Special Issue on Visual Analytics, Vol.46(7), pp.47-55.
Abstract: Combining automated analysis and visual-interactive displays helps analysts rapidly sort through volumes of raw text to detect critical events and identify surrounding issues.
BibTeX:
@article{Keim*13comptext,
  author = {D. Keim and M. Krstajic and C. Rohrdantz and T. Schreck},
  title = {Real-Time Visual Analytics for Text Streams},
  journal = {IEEE Computer, Special Issue on Visual Analytics},
  year = {2013},
  volume = {46},
  number = {7},
  pages = {47--55},
  doi = {http://dx.doi.org/10.1109/MC.2013.152}
}
Kim, H., Schinko, C., Havemann, S. & Fellner, D.W., (2013), "Tiled Projection onto Bent Screens using Multi-Projectors", IADIS International Conferences Computer Graphics, Visualization, Computer Vision and Image Processing, pp.67-74.
Abstract: We provide a quick and efficient method to project a coherent image that is seamless and perspectively corrected for one particular viewpoint, using an arbitrary number of projectors. The rationale is that wide-angle high-resolution cameras have become much more affordable than short-throw projectors, and only one such camera is sufficient for calibration. Our method is suitable for ad-hoc installations since no 3D reconstruction is required. We provide our method as an open-source solution, including a demonstrative client program for the Processing framework.
BibTeX:
@inproceedings{Kim*13mccsis,
  author = {Kim, Hyosun and Schinko, Christoph and Havemann, Sven and Fellner, Dieter W.},
  title = {Tiled Projection onto Bent Screens using Multi-Projectors},
  booktitle = {IADIS International Conferences Computer Graphics, Visualization, Computer Vision and Image Processing},
  year = {2013},
  pages = {67-74}
}
Landesberger, T. v., Bremm, S., Schreck, T. & Fellner, D.W., (2013), "Feature-based Automatic Identification of Interesting Data Segments in Group Movement Data", Information Visualization, pp.1-23.
Abstract: The study of movement data is an important task in a variety of domains such as transportation, biology, or finance. Often, the data objects are grouped (e.g. countries by continents). We distinguish three main categories of movement data analysis, based on the focus of the analysis: (a) movement characteristics of an individual in the context of its group, (b) the dynamics of a given group, and (c) the comparison of the behavior of multiple groups. Examination of group movement data can be effectively supported by data analysis and visualization. In this respect, approaches based on analysis of derived movement characteristics (called features in this article) can be useful. However, current approaches are limited as they do not cover a broad range of situations and typically require manual feature monitoring. We present an enhanced set of movement analysis features and add automatic analysis of the features for filtering the interesting parts in large movement data sets. Using this approach, users can easily detect new interesting characteristics such as outliers, trends, and task-dependent data patterns even in large sets of data points over long time horizons. We demonstrate the usefulness with two real-world data sets from the socioeconomic and the financial domains.
BibTeX:
@article{Landesberger*13IV-1,
  author = {Landesberger, Tatiana von and Bremm, Sebastian and Schreck, Tobias and Fellner, Dieter W.},
  title = {Feature-based Automatic Identification of Interesting Data Segments in Group Movement Data},
  journal = {Information Visualization},
  year = {2013},
  pages = {1-23},
  doi = {http://dx.doi.org/10.1177/1473871613477851}
}
Leeb, R., Lancelle, M., Kaiser, V., Fellner, D.W. & Pfurtscheller, G., (2013), "Thinking Penguin: Multimodal Brain-Computer Interface Control of a VR Game", IEEE Transactions on Computational Intelligence and AI in Games, Vol.5(2), pp.117-128.
Abstract: In this paper, we describe a multimodal brain-computer interface (BCI) experiment, situated in a highly immersive CAVE. A subject sitting in the virtual environment controls the main character of a virtual reality game: a penguin that slides down a snowy mountain slope. While the subject can trigger a jump action via the BCI, additional steering with a game controller as a secondary task was tested. Our experiment profits from the game as an attractive task where the subject is motivated to achieve a higher score through better BCI performance. A BCI based on the so-called brain switch was applied, which allows discrete asynchronous actions. Fourteen subjects participated, of whom 50% achieved the required performance to test the penguin game. Comparing the BCI performance during the training and the game showed that a transfer of skills is possible, in spite of the changes in visual complexity and task demand. Finally and most importantly, our results showed that the use of a secondary motor task, in our case the joystick control, did not deteriorate the BCI performance during the game. Through these findings, we conclude that our chosen approach is a suitable multimodal or hybrid BCI implementation, in which the user can even perform other tasks in parallel.
BibTeX:
@article{Leeb*13ieeeTCI,
  author = {Leeb, Robert and Lancelle, Marcel and Kaiser, Vera and Fellner, Dieter W. and Pfurtscheller, Gert},
  title = {Thinking Penguin: Multimodal Brain-Computer Interface Control of a VR Game},
  journal = {IEEE Transactions on Computational Intelligence and AI in Games},
  year = {2013},
  volume = {5},
  number = {2},
  pages = {117-128},
  doi = {http://dx.doi.org/10.1109/TCIAIG.2013.2242072}
}
Nazemi, K., Retz, R., Bernard, J., Kohlhammer, J. & Fellner, D.W., (2013), "Adaptive Semantic Visualization for Bibliographic Entries", Advances in Visual Computing. 9th International Symposium, ISVC 2013, Vol.8034, pp.13-24, Springer, Berlin, Heidelberg, New York.
Abstract: Adaptive visualizations aim to reduce the complexity of visual representations and convey information using interactive visualizations. Although research on adaptive visualizations has grown in recent years, the existing approaches do not make use of the variety of adaptable visual variables. Furthermore, the existing approaches often presuppose experts, who have to model the initial visualization design. In addition, current approaches incorporate either user behavior or data types; to our knowledge, a combination of both has not been proposed. This paper introduces the instantiation of our previously proposed model that combines both: involving different influencing factors and adapting various levels of visual peculiarities, on visual layout and visual presentation, in a multiple-visualization environment. Based on data type and users' behavior, our system adapts a set of applicable visualization types. Moreover, retinal variables of each visualization type are adapted to meet individual or canonical requirements on both data types and users' behavior. Our system does not require initial expert modeling.
BibTeX:
@inproceedings{Nazemi*13LNCS,
  author = {Nazemi, Kawa and Retz, Reimond and Bernard, Jürgen and Kohlhammer, Jörn and Fellner, Dieter W.},
  title = {Adaptive Semantic Visualization for Bibliographic Entries},
  booktitle = {Advances in Visual Computing. 9th International Symposium, ISVC 2013},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2013},
  volume = {8034},
  pages = {13-24},
  series = {Lecture Notes in Computer Science (LNCS)},
  doi = {http://dx.doi.org/10.1007/978-3-642-41939-3_2}
}
Pan, X., Schröttner, M., Havemann, S., Schiffer, T., Berndt, R., Hecher, M. & Fellner, D.W., (2013), "A Repository Infrastructure for Working with 3D Assets in Cultural Heritage", International Journal of Heritage in the Digital Era, Vol.2(1), pp.144-166.
Abstract: The development of a European market for digital cultural heritage assets is impeded by the lack of a suitable digital marketplace, i.e., a commonly accepted exchange platform for digital assets. We have developed the technology for such a platform over the last two years: The 3D-COFORM Repository Infrastructure (RI) is a secure content management infrastructure for the distributed processing of large-volume datasets. Three of the key features of this system are (1) owners have complete control over their data, (2) binary data must have attached metadata, and (3) processing histories are documented. Our system can support the complete production pipeline for digital assets, from data acquisition (photo, 3D scan) over processing (cleaning, hole filling) to interactive presentation and content delivery over the internet. In this paper we present the components of the system and their interplay. One particular focus of the software development was to make it as easy as possible to connect client-side applications to the RI. Therefore, we present the RI API in some detail and present several RI-enabled client-side applications that use it.
BibTeX:
@article{Pan*13IJHDE,
  author = {Pan, Xueming and Schröttner, Martin and Havemann, Sven and Schiffer, Thomas and Berndt, Rene and Hecher, Martin and Fellner, Dieter W.},
  title = {A Repository Infrastructure for Working with 3D Assets in Cultural Heritage},
  journal = {International Journal of Heritage in the Digital Era},
  year = {2013},
  volume = {2},
  number = {1},
  pages = {144-166},
  doi = {http://dx.doi.org/10.1260/2047-4970.2.1.143}
}
Peña Serna, S., Stork, A. & Fellner, D.W., (2013), "Embodiment Discrete Processing", Smart Product Engineering, pp.421-429, Springer, Berlin, Heidelberg, New York.
Abstract: The phases of the embodiment stage are conceived sequentially, and in some domains even cyclically. Nevertheless, there is no seamless integration between them, causing longer development processes, increased time lags, loss of momentum, greater misunderstandings, and conflicts. Embodiment Discrete Processing enables the seamless integration of three building blocks. 1) Dynamic Discrete Representation: it is capable of concurrently handling the design and the analysis phases. 2) Dynamic Discrete Design: it deals with the needed modeling operations while keeping the consistency of the discrete shape. 3) Dynamic Discrete Analysis: it efficiently maps the dynamic changes of the shape within the design phase, while streamlining the interpretation processes. These integrated building blocks support the multidisciplinary work between designers and analysts, which was previously unusual. This creates a new understanding of integral processing for phases that were previously regarded as independent. Finally, it opens new opportunities toward general-purpose processing.
BibTeX:
@inproceedings{PenaSerna*13LNPE,
  author = {Peña Serna, Sebastian and Stork, André and Fellner, Dieter W.},
  title = {Embodiment Discrete Processing},
  booktitle = {Smart Product Engineering},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2013},
  pages = {421-429},
  series = {Lecture Notes in Production Engineering (LNPE)},
  doi = {http://dx.doi.org/10.1007/978-3-642-30817-8_41}
}
Peter, C., Kreiner, A., Schröter, M., Kim, H., Bieber, G., Öhberg, F., Hoshi, K., Waterworth, E., Waterworth, J. & Ballesteros, S., (2013), "AGNES: Connecting people in a multimodal way", Journal on multimodal user interfaces, Vol.7(3), pp.229-245.
Abstract: Western societies are confronted with a number of challenges caused by the increasing number of older citizens. One important aspect is the need and wish of older people to live as long as possible in their own home and maintain an independent life. As people grow older, their social networks disperse, with friends and families moving to other parts of town, other cities or even countries. Additionally, people become less mobile with age, leading to less active participation in societal life. Combined, this normal, age-related development leads to increased loneliness and social isolation of older people, with negative effects on their mental and physical health. In the AGNES project, a home-based system has been developed that allows connecting the elderly with their families, friends and other significant people over the Internet. As most older people have limited experience with computers and often special requirements on technology, one focus of AGNES was to develop, together with the users, novel technological means for interacting with their social network. The resulting system uses ambient displays, tangible interfaces and wearable devices providing ubiquitous options for interaction with the network, and secondary sensors for additionally generating carefully chosen information on the person to be relayed to significant persons. Evaluations show that the chosen modalities for interaction are well adopted by the users. Furthermore, it was found that use of the AGNES system had positive effects on the mental state of the users, compared to the control group without the technology.
BibTeX:
@article{Peter*13JMUI,
  author = {Peter, Christian and Kreiner, Andreas and Schröter, Martin and Kim, Hyosun and Bieber, Gerald and Öhberg, Fredrik and Hoshi, Kei and Waterworth, Eva and Waterworth, John and Ballesteros, Soledad},
  title = {AGNES: Connecting people in a multimodal way},
  journal = {Journal on multimodal user interfaces},
  year = {2013},
  volume = {7},
  number = {3},
  pages = {229-245},
  doi = {http://dx.doi.org/10.1007/s12193-013-0118-z}
}
Schaefer, M., Zhang, L., Schreck, T., Tatu, A., Lee, J., Verleysen, M. & Keim, D., (2013), "Improving Projection-based Data Analysis by Feature Space Transformations", Proc. IS&T/SPIE Conference on Visualization and Data Analysis, Vol.86540H, pp.15.
Abstract: Generating an effective visual embedding of high-dimensional data is difficult -- the analyst expects to see the structure of the data in the visualization, as well as patterns and relations. Given the high dimensionality, noise, and imperfect embedding techniques, it is hard to come up with a satisfactory embedding that preserves the data structure well while highlighting patterns and avoiding visual clutter at the same time. In this paper, we introduce a generic framework for improving the quality of an existing embedding in terms of both structural preservation and class separation by feature space transformations. A compound quality measure based on structural preservation and visual clutter avoidance is proposed to assess the quality of embeddings. We evaluate the effectiveness of our approach by applying it to several widely used embedding techniques using a set of benchmark data sets, and the results look promising.
BibTeX:
@inproceedings{Schaefer*13vda,
  author = {M. Schaefer and L. Zhang and T. Schreck and A. Tatu and J. Lee and M. Verleysen and D. Keim},
  title = {Improving Projection-based Data Analysis by Feature Space Transformations},
  booktitle = {Proc. IS&T/SPIE Conference on Visualization and Data Analysis},
  year = {2013},
  volume = {86540H},
  pages = {15},
  doi = {http://dx.doi.org/10.1117/12.2000701}
}
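A minimal illustration of the framework's idea from the abstract above (not the paper's compound quality measure): apply a feature-space transformation, re-embed, and score how well classes separate. In the sketch, per-feature weighting is the transformation, PCA the embedding, and a between/within scatter ratio a stand-in quality score:

import numpy as np

def pca_2d(X):
    """Project data onto its first two principal axes via SVD."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

def separation_score(Y, labels):
    """Between-class vs. within-class scatter in the 2D embedding."""
    overall = Y.mean(0)
    between = within = 0.0
    for c in np.unique(labels):
        Yc = Y[labels == c]
        between += len(Yc) * np.sum((Yc.mean(0) - overall) ** 2)
        within += np.sum((Yc - Yc.mean(0)) ** 2)
    return between / max(within, 1e-12)

def score_weighting(X, labels, w):
    """Quality of the embedding after scaling each feature by weight vector w."""
    return separation_score(pca_2d(X * w), labels)

Searching over weight vectors w (e.g., with any optimizer) then improves the embedding with respect to this score, which is the spirit of transforming the feature space before projection.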
Scherer, M., v. Landesberger, T. & Schreck, T., (2013), "Visual-Interactive Querying for Multivariate Research Data Repositories Using Bag-of-Words", Proc. ACM/IEEE Joint Conference on Digital Libraries, pp.285-294.
Abstract: Large amounts of multivariate data are collected in different areas of scientific research and industrial production. These data are collected, archived and made publicly available by research data repositories. In addition to meta-data based access, content-based approaches are highly desirable to effectively retrieve, discover and analyze data sets of interest. Several such methods, that allow users to search for particular curve progressions, have been proposed. However, a major challenge when providing content-based access -- interactive feedback during query formulation -- has not received much attention yet. This is important because it can substantially improve the user's search effectiveness. In this paper, we present a novel interactive feedback approach for content-based access to multivariate research data. Thereby, we enable query modalities that were not available for multivariate data before. We provide instant search results and highlight query patterns in the result set. Real-time search suggestions give an overview of important patterns to look for in the data repository. For this purpose, we develop a bag-of-words index for multivariate data as the back-end of our approach. We apply our method to a large repository of multivariate data from the climate research domain. We describe a use-case for the discovery of interesting patterns in maritime climate research using our new visual-interactive query tools.
BibTeX:
@inproceedings{Scherer*13jcdl,
  author = {M. Scherer and T. v. Landesberger and T. Schreck},
  title = {Visual-Interactive Querying for Multivariate Research Data Repositories Using Bag-of-Words},
  booktitle = {Proc. ACM/IEEE Joint Conference on Digital Libraries},
  year = {2013},
  pages = {285--294},
  doi = {http://dx.doi.org/10.1145/2467696.2467705}
}
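The bag-of-words back-end described above can be sketched with a SAX-like quantization: windows of a z-normalized curve become symbol strings ("words"), and an inverted index maps words to records. All concrete choices below (window length, three quantization bins, conjunctive matching) are illustrative assumptions, not the authors' exact descriptor:

import numpy as np
from collections import defaultdict

def curve_words(series, window=8, bins=(-0.5, 0.5)):
    """Quantize z-normalized windows of a 1D series into symbol words."""
    s = (series - series.mean()) / (series.std() + 1e-12)
    words = []
    for i in range(0, len(s) - window + 1, window):
        symbols = np.digitize(s[i:i + window], bins)  # symbol 0/1/2 per sample
        words.append("".join(map(str, symbols)))
    return words

def build_index(dataset):
    """Inverted index: word -> set of record ids containing it."""
    index = defaultdict(set)
    for rec_id, series in dataset.items():
        for word in curve_words(np.asarray(series, float)):
            index[word].add(rec_id)
    return index

def query(index, sketch):
    """Records matching all words of a query curve (enables instant feedback)."""
    words = curve_words(np.asarray(sketch, float))
    hits = [index.get(w, set()) for w in words]
    return set.intersection(*hits) if hits else set()

Because lookups are simple set intersections, such an index can return results while the user is still formulating the query, which is the interactive-feedback property the paper targets.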
Schiffer, T. & Fellner, D.W., (2013), "Ray Tracing: Lessons Learned and Future Challenges", IEEE Potentials, Vol.32(5), pp.34-37.
Abstract: Ray tracing on massively parallel hardware allows for the computation of images with a high visual quality in an increasingly short time. However, mapping the computations to such architectures in an efficient manner is a challenging task.
BibTeX:
@article{Schiffer-Fellner13IEEE,
  author = {Schiffer, Thomas and Fellner, Dieter W.},
  title = {Ray Tracing: Lessons Learned and Future Challenges},
  journal = {IEEE Potentials},
  year = {2013},
  volume = {32},
  number = {5},
  pages = {34-37},
  doi = {http://dx.doi.org/10.1109/MPOT.2012.2233273}
}
Schiffer, T. & Fellner, D.W., (2013), "Towards Multi-Kernel Ray Tracing for GPUs", VMV 2013: Vision, Modeling & Visualization, pp.227-228, EG.
Abstract: Ray tracing is a widely used algorithm to compute images with high visual quality. Mapping ray tracing computations to massively parallel hardware architectures in an efficient manner is a difficult task. Based on an analysis of current ray tracing algorithms on GPUs, a new ray traversal scheme called batch tracing is proposed. It decomposes the task into multiple kernels, each of which is designed for efficient execution. Our algorithm achieves comparable performance to state-of-the-art approaches and represents a promising avenue for future research.
BibTeX:
@inproceedings{Schiffer-Fellner13VMV,
  author = {Schiffer, Thomas and Fellner, Dieter W.},
  title = {Towards Multi-Kernel Ray Tracing for GPUs},
  booktitle = {VMV 2013: Vision, Modeling & Visualization},
  publisher = {EG},
  year = {2013},
  pages = {227-228},
  doi = {http://dx.doi.org/10.2312/PE.VMV.VMV13.227-228}
}
Schreck, T., Omer, I., Bak, P. & Lerman, Y., (2013), "A Visual Analytics Approach for Assessing Pedestrian Friendliness of Urban Environments", Springer Lecture Notes in Geoinformation and Cartography (Proc. AGILE International Conference on Geographic Information Science), pp.353-368.
Abstract: The availability of efficient transportation facilities is vital to the function and development of modern cities. Promoting walking is crucial for supporting livable communities and cities. Assessing the quality of pedestrian facilities and constructing appropriate pedestrian walking facilities are important tasks in public city planning. Additionally, walking facilities in a community affect commercial activities including private investment decisions such as those of retailers. However, analyzing what we call pedestrian friendliness in an urban environment involves multiple data perspectives, such as street networks, land use, and other multivariate observation measurements, and consequently poses significant challenges. In this study, we investigate the effect of urban environment properties on pedestrian movement in different locations in the metropolitan region of Tel Aviv. The first urban area we investigated was the inner city of the Tel Aviv metropolitan region, one of the central regions in Tel Aviv, a city that serves many non-local residents. For simplicity, we refer to this area as Tel Aviv. We also investigated Bat Yam, a small city, whose residents use many of the services of Tel Aviv. We apply an improved tool for visual analysis of the correlation between multiple independent and one dependent variable in geographical context. We use the tool to investigate the effect of functional and topological properties on the volume of pedestrian movement. The results of our study indicate that these two urban areas differ greatly. The urban area of Tel Aviv has much more correspondence and interdependency among the functional and topological properties of the urban environment that might influence pedestrian movement. We also found that the pedestrian movements as well as the related urban environment properties in this region are distributed geographically in a more equal and organized form.
BibTeX:
@inproceedings{Schreck*13LNGC,
  author = {T. Schreck and I. Omer and P. Bak and Y. Lerman},
  title = {A Visual Analytics Approach for Assessing Pedestrian Friendliness of Urban Environments},
  booktitle = {Springer Lecture Notes in Geoinformation and Cartography (Proc. AGILE International Conference on Geographic Information Science)},
  year = {2013},
  pages = {353--368},
  doi = {http://dx.doi.org/10.1007/978-3-319-00615-4_20}
}
Schreck, T. & Keim, D., (2013), "Visual Analysis of Social Media Data", IEEE Computer, Special Issue on Cutting-Edge Research in Visualization, Vol.46(5), pp.68-75.
Abstract: The application of visual analytics, which combines the advantages of computational knowledge discovery and interactive visualization, to social media data highlights the many benefits of this integrated approach.
BibTeX:
@article{Schreck-Keim13computer,
  author = {T. Schreck and D. Keim},
  title = {Visual Analysis of Social Media Data},
  journal = {IEEE Computer, Special Issue on Cutting-Edge Research in Visualization},
  year = {2013},
  volume = {46},
  number = {5},
  pages = {68--75},
  doi = {http://dx.doi.org/10.1109/MC.2012.430}
}
Schwenk, K., Behr, J. & Fellner, D.W., (2013), "Filtering Noise in Progressive Stochastic Ray Tracing: Four Optimizations to Improve Speed and Robustness", The Visual Computer, Vol.29(5), pp.359-368.
Abstract: We present an improved version of a state-of-the-art noise reduction technique for progressive stochastic rendering. Our additions make the method significantly faster at the cost of an acceptable loss in quality. Additionally, we improve the robustness of the method in the presence of difficult features like glossy reflection, caustics, and antialiased edges. We show with visual and numerical comparisons that our extensions improve the overall performance of the original approach and make it more broadly applicable.
BibTeX:
@article{Schwenk*13VisComput,
  author = {Schwenk, Karsten and Behr, Johannes and Fellner, Dieter W.},
  title = {Filtering Noise in Progressive Stochastic Ray Tracing: Four Optimizations to Improve Speed and Robustness},
  journal = {The Visual Computer},
  year = {2013},
  volume = {29},
  number = {5},
  pages = {359-368},
  doi = {http://dx.doi.org/10.1007/s00371-012-0738-4}
}
Senaratne, H., Bröring, A. & Schreck, T., (2013), "Using Reverse Viewshed Analysis to Assess the Location Correctness of Visually Generated VGI", Wiley-Blackwell Transactions in GIS, Vol.17(3), pp.369-386.
Abstract: With the increased availability of user-generated data, assessing the quality and credibility of such data becomes important. In this article, we propose to assess the location correctness of visually generated Volunteered Geographic Information (VGI) as a quality reference measure. The location correctness is determined by checking the visibility of the point of interest from the position of the visually generated VGI (observer point); as an example we utilize Flickr photographs. Therefore, we first collect all Flickr photographs that correspond to a certain point of interest through their textual labelling. Then we conduct a reverse viewshed analysis for the point of interest to determine if it lies within the area of visibility of the observer points. If the point of interest lies outside the visibility of a given observer point, the respective geotagged image is considered to be incorrectly geotagged. In this way, we analyze sample datasets of photographs and make observations regarding the dependency between certain user/photo metadata and (in)correct geotags and labels. In the future, the dependency relationship between location correctness and user/photo metadata can be used to automatically infer user credibility. In other words, attributes such as profile completeness, together with location correctness, can serve as a weighted score to assess credibility.
BibTeX:
@article{Senaratne*13txgis,
  author = {H. Senaratne and A. Bröring and T. Schreck},
  title = {Using Reverse Viewshed Analysis to Assess the Location Correctness of Visually Generated VGI},
  journal = {Wiley-Blackwell Transactions in GIS},
  year = {2013},
  volume = {17},
  number = {3},
  pages = {369--386},
  doi = {http://dx.doi.org/10.1111/tgis.12039}
}
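The viewshed computation used above reduces, per observer point, to a line-of-sight test over a digital elevation model. A simplified grid-based sketch follows (nearest-neighbor terrain sampling; Earth curvature and refraction, which production GIS tools account for, are ignored):

import numpy as np

def visible(dem, observer, target, obs_height=1.7):
    """Line-of-sight between two DEM cells: no terrain sample along the
    ray may rise above the straight line from observer eye to target."""
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0, c0] + obs_height
    z1 = dem[r1, c1]
    n = int(max(abs(r1 - r0), abs(c1 - c0)))
    if n == 0:
        return True
    for t in np.linspace(0, 1, n + 1)[1:-1]:          # interior samples only
        r, c = r0 + t * (r1 - r0), c0 + t * (c1 - c0)
        terrain = dem[int(round(r)), int(round(c))]   # nearest-neighbor sample
        if terrain > z0 + t * (z1 - z0):              # sight line blocked
            return False
    return True

A reverse viewshed then runs this test from the point of interest against each observer point (here, each geotagged photo position) and flags photos whose claimed position cannot actually see the landmark.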
Sipiran, I., Bustos, B. & Schreck, T., (2013), "Data-aware 3D Partitioning for Generic Shape Retrieval", Computers & Graphics Special Issue on 3D Object Retrieval, Vol.37(5), pp.460-472.
Abstract: In this paper, we present a new approach for generic 3D shape retrieval based on a mesh partitioning scheme. Our method combines a global mesh description and mesh partition descriptions to represent a 3D shape. The partitioning is useful because it helps us to extract additional information in a more local sense. Thus, part descriptions can mitigate the semantic gap imposed by global description methods. We propose to find spatial agglomerations of local features to generate mesh partitions. The definition of a distance function is then stated as an optimization problem to find the best match between two shape representations. We show that mesh partitions are representative and therefore help to improve the effectiveness in retrieval tasks. We present exhaustive experimentation using the SHREC'09 Generic Shape Retrieval Benchmark.
BibTeX:
@article{Sipiran*13cga,
  author = {I. Sipiran and B. Bustos and T. Schreck},
  title = {Data-aware 3D Partitioning for Generic Shape Retrieval},
  journal = {Computers & Graphics Special Issue on 3D Object Retrieval},
  year = {2013},
  volume = {37},
  number = {5},
  pages = {460--472},
  doi = {http://dx.doi.org/10.1016/j.cag.2013.04.002}
}
Sturm, W.J., Berndt, R., Halm, A., Ullrich, T., Eggeling, E. & Fellner, D.W., (2013), "Energy Balance: A Web-based Visualization of Energy for Automotive Engineering using X3DOM", International Conference on Creative Content Technologies, pp.1-10.
Abstract: Automotive systems can be very complex when using multiple forms of energy. To achieve better energy efficiency, engineers require specialized tools to cope with that complexity and to comprehend how energy is spread and consumed. This is especially essential for developing hybrid systems, which generate electricity from various available forms of energy. Therefore, highly specialized visualizations of multiple measured energies are needed. This paper examines several three-dimensional glyph-based visualization techniques for spatial multivariate data. Besides animated glyphs, two-dimensional visualization techniques for temporal data that allow detailed trend analysis are considered as well. Investigations revealed that Scaled Data-Driven Spheres are best suited for a detailed 3D exploration of measured data. To gain a better overview of the spatial data, Cumulative Glyphs are introduced. For trend analysis, Theme River and Stacked Area Graphs are used. All these visualization techniques are implemented as a web-based prototype, without the need for additional web browser plugins, using X3DOM and Data-Driven Documents.
BibTeX:
@inproceedings{Sturm*13Content,
  author = {Sturm, Werner J. and Berndt, René and Halm, Andreas and Ullrich, Torsten and Eggeling, Eva and Fellner, Dieter W.},
  title = {Energy Balance: A Web-based Visualization of Energy for Automotive Engineering using X3DOM},
  booktitle = {International Conference on Creative Content Technologies},
  year = {2013},
  pages = {1-10}
}
Thaller, W., Zmugg, R., Krispel, U., Posch, M., Havemann, S. & Fellner, D.W., (2013), "Creating Procedural Window Building Blocks using the Generative Fact Labeling Method", Proceedings of the 5th ISPRS International Workshop 3D-ARCH 2013, pp.235-242.
Abstract: The generative surface reconstruction problem can be stated like this: Given a finite collection of 3D shapes, create a small set of functions that can be combined to generate the given shapes procedurally. We propose generative fact labeling (GFL) as an attempt to organize the iterative process of shape analysis and shape synthesis in a systematic way. We present our results for the reconstruction of complex windows of neo-classical buildings in Graz, followed by a critical discussion of the limitations of the approach.
BibTeX:
@inproceedings{Thaller*13-3DArch,
  author = {Thaller, Wolfgang and Zmugg, René and Krispel, Ulrich and Posch, Martin and Havemann, Sven and Fellner, Dieter W.},
  title = {Creating Procedural Window Building Blocks using the Generative Fact Labeling Method},
  booktitle = {Proceedings of the 5th ISPRS International Workshop 3D-ARCH 2013},
  year = {2013},
  pages = {235-242},
  doi = {http://dx.doi.org/10.5194/isprsarchives-XL-5-W1-235-2013}
}
Thaller, W., Krispel, U., Zmugg, R., Havemann, S. & Fellner, D.W., (2013), "Shape Grammars on Convex Polyhedra", Computers & Graphics, Vol.37(6), pp.707-717.
Abstract: Shape grammars are the method of choice for procedural modeling of architecture. State of the art shape grammar systems define a bounding box for each shape; various operations can then be applied based on this bounding box. Most notably, the box can be split into smaller boxes along any of its three axes. We argue that a greater variety can be obtained by using convex polyhedra as bounding volumes instead. Split operations on convex polyhedra are no longer limited to the three principal axes but can use arbitrary planes. Such splits permit a volumetric decomposition into convex elements; as convex polyhedra can represent many shapes more faithfully than boxes, shape grammar rules can adapt to a much wider array of different contexts. We generalize established shape operations and introduce new operations that now become possible.
BibTeX:
@article{Thaller*13CG,
  author = {Thaller, Wolfgang and Krispel, Ulrich and Zmugg, René and Havemann, Sven and Fellner, Dieter W.},
  title = {Shape Grammars on Convex Polyhedra},
  journal = {Computers & Graphics},
  year = {2013},
  volume = {37},
  number = {6},
  pages = {707-717},
  doi = {http://dx.doi.org/10.1016/j.cag.2013.05.012}
}
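The key operation of the entry above, splitting a convex volume with an arbitrary plane, is easiest to see in its 2D analogue: clipping a convex polygon against a half-plane. The sketch below shows this 2D case only; the paper's 3D setting additionally has to maintain faces and the newly created cap polygon:

import numpy as np

def clip_halfplane(poly, n, d):
    """Keep the part of convex polygon `poly` with n·x <= d.
    poly: (k, 2) vertices in order; n: split-line normal; d: offset."""
    out = []
    k = len(poly)
    for i in range(k):
        p, q = poly[i], poly[(i + 1) % k]
        sp, sq = np.dot(n, p) - d, np.dot(n, q) - d
        if sp <= 0:                      # p lies on the kept side
            out.append(p)
        if sp * sq < 0:                  # edge crosses the splitting line
            t = sp / (sp - sq)
            out.append(p + t * (q - p))  # add intersection point
    return np.array(out)

def split(poly, n, d):
    """Split a convex polygon into the two parts on either side of n·x = d."""
    n = np.asarray(n, float)
    return clip_halfplane(poly, n, d), clip_halfplane(poly, -n, -d)

square = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], float)
left, right = split(square, n=[1.0, 0.0], d=1.0)  # vertical split at x = 1

Because the split plane is arbitrary rather than axis-aligned, rules are no longer restricted to the three principal axes of a bounding box, which is exactly the added expressiveness the paper argues for.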
Thaller, W., Krispel, U., Zmugg, R., Havemann, S. & Fellner, D.W., (2013), "A Graph-Based Language for Direct Manipulation of Procedural Models", International Journal on Advances in Software, Vol.6(3 & 4), pp.225-236.
Abstract: Creating 3D content requires a lot of expert knowledge and is often a very time-consuming task. Procedural modeling can simplify this process for several application domains. However, creating procedural descriptions is still a complicated task. Graph-based visual programming languages can ease the creation workflow; however, direct manipulation of procedural 3D content rather than of a visual program is desirable, as it resembles established techniques in 3D modeling. In this paper, we present a dataflow language that features novel contributions towards direct interactive manipulation of procedural 3D models: we eliminate the need to manually program loops (via implicit handling of nested repetitions), we introduce partial reevaluation strategies for efficient execution, and we show the integration of stateful external libraries (scene graphs) into the dataflow model of the proposed language.
BibTeX:
@article{Thaller*13IARIA,
  author = {Thaller, Wolfgang and Krispel, Ulrich and Zmugg, René and Havemann, Sven and Fellner, Dieter W.},
  title = {A Graph-Based Language for Direct Manipulation of Procedural Models},
  journal = {International Journal on Advances in Software},
  year = {2013},
  volume = {6},
  number = {3 & 4},
  pages = {225-236}
}
Ullrich, T., Schinko, C., Schiffer, T. & Fellner, D.W., (2013), "Procedural descriptions for analyzing digitized artifacts", Applied Geomatics, Vol.5(3), pp.185-192.
Abstract: Within the last few years, generative modeling techniques have gained attention especially in the context of cultural heritage. As a generative model describes an idealized object rather than a real one, generative techniques are a basis for object description and classification. This procedural knowledge differs from other kinds of knowledge, such as declarative knowledge, in a significant way: it is an algorithm, which reflects the way objects are designed. Consequently, generative models are not a replacement for established geometry descriptions (based on points, triangles, etc.) but a semantic enrichment. In combination with variance analysis techniques, generative descriptions can be used to validate reconstructions. Detailed mesh comparisons can reveal the smallest changes and damages. These analysis and documentation tasks are needed not only in the context of cultural heritage but also in engineering and manufacturing. Our contribution to this problem is a workflow, which automatically combines generative/procedural descriptions with reconstructed artifacts and performs a nominal/actual value comparison. The reference surface is a procedural model whose accuracy and systematics describe the semantic properties of an object, whereas the actual object is a real-world data set (laser scan or photogrammetric reconstruction) without any additional semantic information.
BibTeX:
@article{Ullrich*13AG,
  author = {Torsten Ullrich and Christoph Schinko and Thomas Schiffer and Dieter W. Fellner},
  title = {Procedural descriptions for analyzing digitized artifacts},
  journal = {Applied Geomatics},
  year = {2013},
  volume = {5},
  number = {3},
  pages = {185-192},
  doi = {http://dx.doi.org/10.1007/s12518-013-0107-7}
}
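The nominal/actual comparison in the workflow above amounts to measuring, for each scanned point, its distance to the procedural reference surface. A common approximation, assumed here purely for illustration (the paper does not prescribe this particular library), is to sample the reference model densely and query a KD-tree:

import numpy as np
from scipy.spatial import cKDTree

def nominal_actual_deviation(reference_points, scan_points):
    """Per-point deviation of a scan from a densely sampled reference model."""
    tree = cKDTree(reference_points)   # reference: sampled procedural model
    dist, _ = tree.query(scan_points)  # nearest reference point per scan point
    return dist

# deviations above a tolerance flag potential damage or reconstruction errors
# deviations = nominal_actual_deviation(ref_pts, scan_pts)
# flagged = scan_pts[deviations > 0.002]   # e.g., 2 mm tolerance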
Ullrich, T., Silva, N., Eggeling, E. & Fellner, D.W., (2013), "Generative Modeling and Numerical Optimization for Energy Efficient Buildings", Proceedings IECON 2013 -- 39th Annual Conference of the IEEE Industrial Electronics Society, pp.4756-4761.
Abstract: A procedural model is a script, which generates a geometric object. The script's input parameters offer a simple way to specify and modify the scripting output. Due to its algorithmic character, a procedural model is perfectly suited to describe geometric shapes with well-organized structures and repetitive forms. In this paper, we interpret a generative script as a function, which is nested into an objective function. Thus, the script's parameters can be optimized according to an objective. We demonstrate this approach using architectural examples: each generative script creates a building with several free parameters. The objective function is an energy-efficiency simulation that approximates a building's annual energy consumption. Consequently, the nested objective function reads a set of building parameters and returns the energy needs for the corresponding building. This nested function is passed to a minimization and optimization process. The outcome is the best building (within the family of buildings described by its script) with respect to energy efficiency. Our contribution is a new way of modeling. The generative approach separates design and engineering: the complete design is encoded in a script, and the script ensures that all parameter combinations (within a fixed range) generate a valid design. Then the design can be optimized numerically.
BibTeX:
@inproceedings{Ullrich*13IECON,
  author = {Ullrich, Torsten and Silva, Nelson and Eggeling, Eva and Fellner, Dieter W.},
  title = {Generative Modeling and Numerical Optimization for Energy Efficient Buildings},
  booktitle = {Proceedings IECON 2013 -- 39th Annual Conference of the IEEE Industrial Electronics Society},
  year = {2013},
  pages = {4756-4761},
  doi = {http://dx.doi.org/10.1109/IECON.2013.6699904}
}
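The nesting described above, a generative script inside an objective function, maps directly onto off-the-shelf optimizers. In the sketch below, generate_building and simulate_energy are hypothetical placeholders for the paper's generative script and energy simulation:

from scipy.optimize import minimize

def generate_building(params):
    """Placeholder for the generative script: parameters -> building model."""
    width, depth, window_ratio = params
    return {"width": width, "depth": depth, "window_ratio": window_ratio}

def simulate_energy(building):
    """Placeholder for the annual energy simulation (kWh/a)."""
    area = building["width"] * building["depth"]
    return 50 * area + 2000 * abs(building["window_ratio"] - 0.3)

def objective(params):
    # the generative script is nested inside the objective function
    return simulate_energy(generate_building(params))

# bounds mirror the fixed parameter ranges within which every combination
# is guaranteed to yield a valid design
result = minimize(objective, x0=[10, 12, 0.5],
                  bounds=[(5, 30), (5, 30), (0.1, 0.9)], method="L-BFGS-B")
best_building = generate_building(result.x)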
Ullrich, T. & Schinko, C., (2013), "Bibliotheksdienste und semantische Auszeichnungen für digitale Artefakte", Kulturelles Erbe in der Cloud, Vol.4, pp.68-70.
BibTeX:
@inproceedings{Ullrich-Schinko13DB,
  author = {Ullrich, Torsten and Schinko, Christoph},
  title = {Bibliotheksdienste und semantische Auszeichnungen für digitale Artefakte},
  booktitle = {Kulturelles Erbe in der Cloud},
  year = {2013},
  volume = {4},
  pages = {68-70}
}
Weber, D., Bender, J., Schnös, M., Stork, A. & Fellner, D.W., (2013), "Efficient GPU Data Structures and Methods to Solve Sparse Linear Systems in Dynamics Applications", Computer Graphics Forum, Vol.32(1), pp.16-26.
Abstract: We present graphics processing unit (GPU) data structures and algorithms to efficiently solve sparse linear systems that are typically required in simulations of multi-body systems and deformable bodies. Thereby, we introduce an efficient sparse matrix data structure that can handle arbitrary sparsity patterns and outperforms current state-of-the-art implementations for sparse matrix vector multiplication. Moreover, an efficient method to construct global matrices on the GPU is presented where hundreds of thousands of individual element contributions are assembled in a few milliseconds. A finite-element-based method for the simulation of deformable solids as well as an impulse-based method for rigid bodies are introduced in order to demonstrate the advantages of the novel data structures and algorithms. These applications share the characteristic that a major computational effort consists of building and solving systems of linear equations in every time step. Our solving method results in a speed-up factor of up to 13 in comparison to other GPU methods.
BibTeX:
@article{Weber*13CGF,
  author = {Weber, Daniel and Bender, Jan and Schnös, Markus and Stork, André and Fellner, Dieter W.},
  title = {Efficient GPU Data Structures and Methods to Solve Sparse Linear Systems in Dynamics Applications},
  journal = {Computer Graphics Forum},
  year = {2013},
  volume = {32},
  number = {1},
  pages = {16-26},
  doi = {http://dx.doi.org/10.1111/j.1467-8659.2012.03227.x}
}
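The core kernel accelerated in the entry above is sparse matrix-vector multiplication. For reference, the standard compressed sparse row (CSR) formulation is shown below as a plain Python baseline; the paper's contribution is a GPU data structure for arbitrary sparsity patterns that outperforms such implementations:

import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix in compressed sparse row (CSR) format.
    values:  nonzero entries, stored row by row
    col_idx: column index of each nonzero
    row_ptr: row i occupies values[row_ptr[i]:row_ptr[i+1]]"""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 2x2 example: A = [[4, 1], [0, 2]]
y = csr_spmv(np.array([4.0, 1.0, 2.0]), np.array([0, 1, 1]),
             np.array([0, 2, 3]), np.array([1.0, 1.0]))  # -> [5., 2.]

In a time-stepped simulation this product sits inside an iterative solver (e.g., conjugate gradients) that is executed every frame, which is why both fast assembly and fast SpMV matter.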
Wientapper, F., Wuest, H., Rojtberg, P. & Fellner, D.W., (2013), "A Camera-Based Calibration for Automotive Augmented Reality Head-Up-Displays", 12th IEEE International Symposium on Mixed and Augmented Reality 2013 (ISMAR), pp.189-197, IEEE Computer Society, Los Alamitos, Calif..
Abstract: Using Head-up-Displays (HUD) for Augmented Reality requires an accurate internal model of the image generation process, so that 3D content can be visualized perspectively correctly from the viewpoint of the user. We present a generic and cost-effective camera-based calibration for an automotive HUD which uses the windshield as a combiner. Our proposed calibration model encompasses the view-independent spatial geometry, i.e., the exact location, orientation and scaling of the virtual plane, and a view-dependent image warping transformation for correcting the distortions caused by the optics and the irregularly curved windshield. View-dependency is achieved by extending the classical polynomial distortion model for cameras and projectors to a generic five-variate mapping with the head position of the viewer as additional input. The calibration involves capturing an image sequence from varying viewpoints while displaying a known target pattern on the HUD. The accurate registration of the camera path is retrieved with state-of-the-art vision-based tracking. As all necessary data is acquired directly from the images, no external tracking equipment needs to be installed. After calibration, the HUD can be used together with a head-tracker to form a head-coupled display which ensures a perspectively correct rendering of any 3D object in vehicle coordinates from a large range of possible viewpoints. We evaluate the accuracy of our model quantitatively and qualitatively.
BibTeX:
@inproceedings{Wientapper*13ISMAR,
  author = {Wientapper, Folker and Wuest, Harald and Rojtberg, Pavel and Fellner, Dieter W.},
  title = {A Camera-Based Calibration for Automotive Augmented Reality Head-Up-Displays},
  booktitle = {12th IEEE International Symposium on Mixed and Augmented Reality 2013 (ISMAR)},
  publisher = {IEEE Computer Society, Los Alamitos, Calif.},
  year = {2013},
  pages = {189-197},
  doi = {http://dx.doi.org/10.1109/ISMAR.2013.6671779}
}
Zmugg, R., Thaller, W., Krispel, U., Edelsbrunner, J., Havemann, S. & Fellner, D.W., (2013), "Deformation-Aware Split Grammars for Architectural Models", 2013 International Conference on Cyberworlds (CW), pp.4-11.
Abstract: With the current state of video games growing in scale, manual content creation may no longer be feasible in the future. Split grammars are a promising technology for large scale procedural generation of urban structures, which are very common in video games. Buildings with curved parts, however, can currently only be approximated by static pre-modeled assets, and rules apply only to planar surface parts. We present an extension to current split grammar systems that allows the generation of curved architecture through free-form deformations that can be introduced at any level in a grammar. Further subdivision rules can then adapt to these deformations to maintain length constraints, and repetitions can adjust to more or less space.
BibTeX:
@inproceedings{Zmugg*13CW,
  author = {Zmugg, Rene and Thaller, Wolfgang and Krispel, Ulrich and Edelsbrunner, Johannes and Havemann, Sven and Fellner, Dieter W.},
  title = {Deformation-Aware Split Grammars for Architectural Models},
  booktitle = {2013 International Conference on Cyberworlds (CW)},
  year = {2013},
  pages = {4-11},
  doi = {http://dx.doi.org/10.1109/CW.2013.11}
}

2012

Barmak, K., Eggeling, E., Emelianenko, M., Epshteyn, Y., Kinderlehrer, D., Ta'asan, S. & Sharp, R., (2012), "A first approach toward a Proper Generalized Decomposition", Discrete and continuous dynamical systems (Series A 30), Vol.715-716, pp.279-285.
BibTeX:
@article{Barmak*12MCF,
  author = {K. Barmak and E. Eggeling and M. Emelianenko and Y. Epshteyn and D. Kinderlehrer and S. Ta'asan and R. Sharp},
  title = {A first approach toward a Proper Generalized Decomposition},
  journal = {Discrete and continuous dynamical systems (Series A 30)},
  year = {2012},
  volume = {715-716},
  pages = {279-285},
  doi = {http://dx.doi.org/10.4028/www.scientific.net/MSF.715-716.279}
}
Barmak, K., Eggeling, E., Sharp, R., Roberts, S., Shyu, T., Sun, T., Yao, B., Ta'asan, S., Kinderlehrer, D., Rollett, A.D. & Coffey, K., (2012), "Grain Growth and the Puzzle of its Stagnation in Thin Films: A Detailed Comparison of Experiments and Simulations", Materials Science Forum, Vol.715-716, pp.473-479.
BibTeX:
@article{Barmak*12MSF,
  author = {Katayun Barmak and Eva Eggeling and Richard Sharp and Scott Roberts and Terry Shyu and Tik Sun and Bo Yao and Shlomo Ta'asan and David Kinderlehrer and Anthony D. Rollett and Kevin Coffey},
  title = {Grain Growth and the Puzzle of its Stagnation in Thin Films: A Detailed Comparison of Experiments and Simulations},
  journal = {Materials Science Forum},
  year = {2012},
  volume = {715-716},
  pages = {473-479},
  doi = {http://dx.doi.org/10.4028/www.scientific.net/MSF.715-716.473}
}
Barmak, K., Eggeling, E., Emelianenko, M., Epshteyn, Y., Kinderlehrer, D., Sharp, R. & Ta'asan, S., (2012), "Predictive Theory for the Grain Boundary Character Distribution", Materials Science Forum, Vol.715-716, pp.279-285.
BibTeX:
@article{Barmak*12MSF-2,
  author = {Barmak, Katayun and Eggeling, Eva and Emelianenko, Maria and Epshteyn, Yekaterina and Kinderlehrer, David and Sharp, Richard and Ta'asan, Shlomo},
  title = {Predictive Theory for the Grain Boundary Character Distribution},
  journal = {Materials Science Forum},
  year = {2012},
  volume = {715-716},
  pages = {279-285},
  doi = {http://dx.doi.org/10.4028/www.scientific.net/MSF.715-716.279}
}
Bein, M., Peña Serna, S., Stork, A. & Fellner, D.W., (2012), "Completing Digital Cultural Heritage Objects by Sketching Subdivision Surfaces toward Restoration Planning", Progress in Cultural Heritage Preservation, Vol.7616, pp.301-309, Springer, Berlin, Heidelberg, New York.
Abstract: In the restoration planning process a curator evaluates the condition of a Cultural Heritage (CH) object and accordingly develops a set of hypotheses for improving it. This iterative process is complex, time consuming, and requires many manual interventions. In this context, we propose interactive modeling techniques, based on subdivision surfaces, which can support the completion of CH objects toward restoration planning. The proposed technique starts with a scanned and incomplete object, represented by a triangle mesh, from which a subdivision surface can be generated. Based on the mixed representation, sketching techniques and modeling operations can be combined to extend and refine the subdivision surface according to the curator's hypothesis. Thus, curators without rigorous modeling experience can directly create and manipulate surfaces in a similar way as they would on a piece of paper. We present the capabilities of the proposed technique on two interesting CH objects.
BibTeX:
@inproceedings{Bein*12lncs,
  author = {Bein, Matthias and Peña Serna, Sebastian and Stork, André and Fellner, Dieter W.},
  title = {Completing Digital Cultural Heritage Objects by Sketching Subdivision Surfaces toward Restoration Planning},
  booktitle = {Progress in Cultural Heritage Preservation},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2012},
  volume = {7616},
  pages = {301-309},
  series = {Lecture Notes in Computer Science (LNCS)},
  doi = {http://dx.doi.org/10.1007/978-3-642-34234-9_30}
}
Bender, J., Kuijper, A., Fellner, D.W. & Guérin, E. (ed.) (2012), "VRIPHYS 12: 9th Workshop in Virtual Reality Interactions and Physical Simulations", Eurographics Association, Goslar.
BibTeX:
@proceedings{Bender*12VRIPHYS,
  editor = {Bender, Jan and Kuijper, Arjan and Fellner, Dieter W. and Guérin, Eric},
  title = {VRIPHYS 12: 9th Workshop in Virtual Reality Interactions and Physical Simulations},
  publisher = {Eurographics Association, Goslar},
  year = {2012}
}
Bernard, J., Ruppert, T., Scherer, M., Schreck, T. & Kohlhammer, J., (2012), "Guided Discovery of Interesting Relationships Between Time-Series Clusters and Metadata Properties", Proc. International Conference on Knowledge Management and Knowledge Technologies, Special Track on Theory and Applications of Visual Analytics, pp.8, ACM ICPS.
Abstract: Visual cluster analysis provides valuable tools that help analysts to understand large data sets in terms of representative clusters and relationships thereof. Often, the found clusters are to be understood in the context of associated categorical, numerical, or textual metadata, which are given for the data elements. While often not part of the clustering process, such metadata play an important role and need to be considered during the interactive cluster exploration process. Traditionally, linked views allow analysts to relate (or, loosely speaking, correlate) clusters with metadata or other properties of the underlying cluster data. Manually inspecting the distribution of metadata for each cluster in a linked-view approach is tedious, especially for large data sets, where a large search problem arises. Fully interactive search for potentially useful or interesting cluster-to-metadata relationships can be a cumbersome and long process. To remedy this problem, we propose a novel approach for guiding users in discovering interesting relationships between clusters and associated metadata. Its goal is to guide the analyst through the potentially huge search space. In our work we focus on metadata of categorical type, which can be summarized for a cluster in the form of a histogram. We start from a given visual cluster representation and compute certain measures of interestingness defined on the distribution of metadata categories for the clusters. These measures are used to automatically score and rank the clusters for potential interestingness regarding the distribution of categorical metadata. Identified interesting relationships are highlighted in the visual cluster representation for easy inspection by the user. We present a system implementing an encompassing, yet extensible, set of interestingness scores for categorical metadata, which can also be extended to numerical metadata. Appropriate visual representations are provided for showing the visual correlations, as well as the calculated ranking scores. Focusing on clusters of time series data, we test our approach on a large real-world data set of time-oriented scientific research data, demonstrating how specific interesting views are automatically identified, supporting the analyst in discovering interesting and visually understandable relationships.
BibTeX:
@inproceedings{Bernard*12iKnow,
  author = {J. Bernard and T. Ruppert and M. Scherer and T. Schreck and J. Kohlhammer},
  title = {Guided Discovery of Interesting Relationships Between Time-Series Clusters and Metadata Properties},
  booktitle = {Proc. International Conference on Knowledge Management and Knowledge Technologies, Special Track on Theory and Applications of Visual Analytics},
  publisher = {ACM ICPS},
  year = {2012},
  pages = {8},
  doi = {http://dx.doi.org/10.1145/2362456.2362485}
}
Bernard, J., Ruppert, T., Scherer, M., Kohlhammer, J. & Schreck, T., (2012), "Content-Based Layouts for Exploratory Metadata Search in Scientific Research Data", Proc. ACM/IEEE Joint Conference on Digital Libraries, pp.139-148.
Abstract: Today's digital libraries (DLs) archive vast amounts of information in the form of text, videos, images, data measurements, etc. User access to DL content can rely on similarity between metadata elements, or similarity between the data itself (content-based similarity). We consider the problem of exploratory search in large DLs of time-oriented data. We propose a novel approach for overview-first exploration of data collections based on user-selected metadata properties. In a 2D layout, entities of the selected property are laid out based on their similarity with respect to the underlying data content. The display is enhanced by compact summarizations of underlying data elements, and forms the basis for exploratory navigation of users in the data space. The approach is proposed as an interface for visual exploration, leading the user to discover interesting relationships between data items relying on content-based similarity between data items and their respective metadata labels. We apply the method on real data sets from the earth observation community, showing its applicability and usefulness.
BibTeX:
@inproceedings{Bernard*12jcdl,
  author = {J. Bernard and T. Ruppert and M. Scherer and J. Kohlhammer and T. Schreck},
  title = {Content-Based Layouts for Exploratory Metadata Search in Scientific Research Data},
  booktitle = {Proc. ACM/IEEE Joint Conference on Digital Libraries},
  year = {2012},
  pages = {139--148},
  note = {Best Student Paper Award},
  doi = {http://dx.doi.org/10.1145/2232817.2232844}
}
Bernard, J., Wilhelm, N., Scherer, M., May, T. & Schreck, T., (2012), "TimeSeriesPaths: Projection-Based Explorative Analysis of Multivariate Time Series Data", Proc. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, pp.10, University of West Bohemia, Plzen.
Abstract: The analysis of time-dependent data is an important problem in many application domains, and interactive visualization of time-series data can help in understanding patterns in large time series data. Many effective approaches already exist for visual analysis of univariate time series supporting tasks such as assessment of data quality, detection of outliers, or identification of periodically or frequently occurring patterns. However, far fewer approaches exist that support multivariate time series. The existence of multiple values per time stamp makes the analysis task per se harder, and existing visualization techniques often do not scale well. We introduce an approach for visual analysis of large multivariate time-dependent data, based on the idea of projecting multivariate measurements to a 2D display, visualizing the time dimension by trajectories. We use visual data aggregation metaphors based on grouping of similar data elements to scale with multivariate time series. Aggregation procedures can either be based on statistical properties of the data or on data clustering routines. Appropriately defined user controls allow users to navigate and explore the data and interactively steer the parameters of the data aggregation to enhance data analysis. We present an implementation of our approach and apply it to a comprehensive data set from the field of earth observation, demonstrating the applicability and usefulness of our approach.
BibTeX:
@inproceedings{Bernard*12wscg,
  author = {J. Bernard and N. Wilhelm and M. Scherer and T. May and T. Schreck},
  title = {TimeSeriesPaths: Projection-Based Explorative Analysis of Multivariate Time Series Data},
  booktitle = {Proc. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision},
  publisher = {University of West Bohemia, Plzen},
  year = {2012},
  pages = {10}
}
Berndt, R., Schinko, C., Krispel, U., Settgast, V., Havemann, S., Eggeling, E. & Fellner, D.W., (2012), "Ring's Anatomy -- Parametric Design of Wedding Rings", CONTENT 2012, pp.72-78, Xpert Publishing Services, Wilmington, USA.
Abstract: We present a use case that demonstrates the effectiveness of procedural shape modeling for mass customization of consumer products. We show a metadesign that is composed of a few well-defined procedural shape building blocks. It can generate a large variety of shapes and covers most of a design space defined by a collection of exemplars, in our case wedding rings. We describe the process of model abstraction for the shape space spanned by these shapes, arguing that the same is possible for other shape design spaces as well.
BibTeX:
@inproceedings{Berndt*12content,
  author = {René Berndt and Christoph Schinko and Ulrich Krispel and Volker Settgast and Sven Havemann and Eva Eggeling and Dieter W. Fellner},
  title = {Ring's Anatomy -- Parametric Design of Wedding Rings},
  booktitle = {CONTENT 2012},
  publisher = {Xpert Publishing Services, Wilmington, USA},
  year = {2012},
  pages = {72-78}
}
Berndt, R., Blümel, I., Sens, I., Clausen, M., Damm, D., Klein, R., Thomas, V., Wessel, R., Diet, J., Fellner, D.W. & Scherer, M., (2012), "PROBADO -- A Digital Library System for Heterogeneous Non-textual Documents", Eleed Journal, Vol.8(1), pp.5 p.
BibTeX:
@article{Berndt*12eleed,
  author = {Berndt, René and Blümel, Ina and Sens, Irina and Clausen, Michael and Damm, David and Klein, Reinhard and Thomas, Verena and Wessel, Raoul and Diet, Jürgen and Fellner, Dieter W. and Scherer, Maximilian },
  title = {PROBADO -- A Digital Library System for Heterogeneous Non-textual Documents},
  journal = {Eleed Journal},
  year = {2012},
  volume = {8},
  number = {1},
  pages = {5 p},
  note = {available from //nbn-resolving.de/urn:nbn:de:0009-5-32740}
}
Bremm, S., Heß, M., Landesberger, T. v. & Fellner, D.W., (2012), "PCDC -- On the Highway to Data. A Tool for the Fast Generation of Large Synthetic Data Sets", EuroVA 2012, pp.7-11, Eurographics Association, Goslar.
Abstract: In this paper, we present Parallel Coordinates for Data Creation (PCDC), a new visual-interactive method for the fast generation of labeled multidimensional data sets. Multivariate data need to be analyzed in various domains such as finance, biology or medicine using complex data mining techniques. For the evaluation or presentation of the techniques, e.g., for assessing their sensitivity to specific data properties, test data need to be generated. PCDC allows for a fast and intuitive creation of multivariate data with several classes. It is based on the interactive definition of data regions and data distributions in a parallel coordinates view. It offers a quick definition of data regions over several dimensions in one interface. Moreover, the users can directly see the outcome of their settings in the same view without the need for switching between data generation and output visualization. Our tool also enables an easy adjustment of the data generation parameters for creating additional similar data sets.
BibTeX:
@inproceedings{Bremm*12EuroVA,
  author = {Bremm, Sebastian and Heß, Martin and Landesberger, Tatiana von and Fellner, Dieter W.},
  title = {PCDC -- On the Highway to Data. A Tool for the Fast Generation of Large Synthetic Data Sets},
  booktitle = {EuroVA 2012},
  publisher = {Eurographics Association, Goslar},
  year = {2012},
  pages = {7-11},
  doi = {http://dx.doi.org/10.2312/PE/EuroVAST/EuroVA12/007-011}
}
Brix, T., Fellner, D.W., Krämer, B.J. & Schrader, T., (2012), "Workshop: Centers of Excellence for Research Information -- Digital Text and Data Centers for Science and Open Research", Eleed Journal, Vol.8(1), pp.2 p.
BibTeX:
@article{Brix*12eleed,
  author = {Brix, T. and Fellner, Dieter W. and Krämer, Berndt J. and Schrader, T. },
  title = {Workshop: Centers of Excellence for Research Information -- Digital Text and Data Centers for Science and Open Research},
  journal = {Eleed Journal},
  year = {2012},
  volume = {8},
  number = {1},
  pages = {2 p},
  note = {available from //nbn-resolving.de/urn:nbn:de:0009-5-32732}
}
Bustos, B., Schreck, T., Walter, M., Barrios, J., Schaefer, M. & Keim, D., (2012), "Improving 3D Similarity Search by Enhancing and Combining 3D Descriptors", Springer Multimedia Tools and Applications, Vol.58(1), pp.81-108.
Abstract: Effective content-based retrieval in 3D model databases is an important problem that has attracted much research attention over the last years. Many individual methods proposed to date rely on calculating global 3D model descriptors based on image, surface, volumetric, or structural model properties. Descriptors such as these are then input for determining the degree of similarity between models. Traditionally, the ability of individual descriptors to perform effective 3D search is decided by benchmarking. However, in practice the data set on which 3D retrieval is to be applied may differ from the characteristics of the respective benchmark. Therefore, statically determining the descriptor to use based on a fixed benchmark may lead to suboptimal results. We propose a generic strategy to improve the retrieval effectiveness in 3D retrieval systems consisting of multiple model descriptors. The specific contribution of this paper is two-fold. First, we propose to adaptively combine multiple descriptors by forming weighted descriptor combinations, where the weight of each descriptor is decided at query time. Second, we enhance the set of global model descriptors to be combined by including partial descriptors of the same kind in the combinations. Partial descriptors are obtained by applying a given descriptor extractor on the set of parts of a model, obtained by a simple model partitioning scheme. Thereby, more model information is exposed to the 3D descriptors, leading to a more complete object description. We give a systematic discussion of the descriptor combination space involving static and query-adaptive weighting schemes, and based on descriptors of different type and focus (model global vs. partial). The combination of both global and partial model descriptors is shown to deliver improved retrieval precision, compared to policies using single descriptors or fixed-weight combinations. The resulting scheme is generic and can accommodate a large class of global 3D model descriptors.
BibTeX:
@article{Bustos*11mmtap,
  author = {B. Bustos and T. Schreck and M. Walter and J. Barrios and M. Schaefer and D. Keim},
  title = {Improving 3D Similarity Search by Enhancing and Combining 3D Descriptors},
  journal = {Springer Multimedia Tools and Applications},
  year = {2012},
  volume = {58},
  number = {1},
  pages = {81--108},
  doi = {http://dx.doi.org/10.1007/s11042-010-0689-6}
}
Fellner, D.W., Baier, K., Dürre, S., Melanie, Bornemann, H. & Mentel, K. (ed.) (2012), "Jahresbericht 2011: Fraunhofer-Institut für Graphische Datenverarbeitung IGD", Fraunhofer-Institut für Graphische Datenverarbeitung (IGD).
BibTeX:
@book{Fellner*12ar-igd,
  editor = {Fellner, Dieter W. and Baier, Konrad and Dürre, Steffen and Melanie and Bornemann, Heidrun and Mentel, Katrin},
  title = {Jahresbericht 2011: Fraunhofer-Institut für Graphische Datenverarbeitung IGD},
  publisher = {Fraunhofer-Institut für Graphische Datenverarbeitung (IGD)},
  year = {2012},
  note = {58 p.}
}
Fellner, D.W., (2012), "Informatik und Open Access -- von der idealistischen Sicht zum umsetzbaren Goldenen Weg", Informatik Spektrum, Vol.35(4), pp.250-252.
BibTeX:
@article{Fellner12infspect,
  author = {Dieter W. Fellner},
  title = {Informatik und Open Access -- von der idealistischen Sicht zum umsetzbaren Goldenen Weg},
  journal = {Informatik Spektrum},
  year = {2012},
  volume = {35},
  number = {4},
  pages = {250-252},
  doi = {http://dx.doi.org/10.1007/s00287-012-0632-5}
}
Ferreira, A., Laga, H., Schreck, T. & Veltkamp, R., (2012), "Preface to Special Issue on EG Workshop on 3D Object Retrieval 2011", Springer Visual Computer, Vol.28(9), pp.899-900.
BibTeX:
@article{Ferreira*123DOR,
  author = {A. Ferreira and H. Laga and T. Schreck and R. Veltkamp},
  title = {Preface to Special Issue on EG Workshop on 3D Object Retrieval 2011},
  journal = {Springer Visual Computer},
  year = {2012},
  volume = {28},
  number = {9},
  pages = {899--900},
  doi = {http://dx.doi.org/10.1007/s00371-012-0747-3}
}
Franke, T., Olbrich, M. & Fellner, D.W., (2012), "A Flexible Approach to Gesture Recognition and Interaction in X3D", Proceedings Web3D 2012, pp.171-174, ACM Press, New York.
Abstract: With the appearance of natural interaction devices such as the Microsoft Kinect or Asus Xtion PRO cameras, a whole new range of interaction modes has been opened up to developers. Tracking frameworks can make use of the additional depth image or skeleton tracking capabilities to recognize gestures. A popular example of one such implementation is the NITE framework from PrimeSense, which enables fine-grained gesture recognition. However, recognized gestures come with additional information such as velocity, angle or accuracy, which is not encapsulated in a standardized format and therefore cannot be integrated into X3D in a meaningful way. In this paper, we propose a flexible way to inject gesture-based metadata into X3D applications to enable fine-grained interaction. We also discuss how to recognize these gestures if the underlying framework provides no mechanism to do so.
BibTeX:
@inproceedings{Franke*12web3D,
  author = {Franke, Tobias and Olbrich, Manuel and Fellner, Dieter W.},
  title = {A Flexible Approach to Gesture Recognition and Interaction in X3D},
  booktitle = {Proceedings Web3D 2012},
  publisher = {ACM Press, New York},
  year = {2012},
  pages = {171-174},
  doi = {http://dx.doi.org/10.1145/2338714.2338743}
}
Franke, T. & Fellner, D.W., (2012), "A Scalable Framework for Image-based Material Representations", Proceedings Web3D 2012, pp.83-91, ACM Press, New York.
Abstract: Complex material-light interaction is modeled mathematically in its most basic form through the 4D BRDF or the 6D spatially varying BRDF. To alleviate the overhead of calculating correct shading with a complex BRDF consisting of many parameters, many methods resort to textures as containers for BRDF information. The most common among them is the Bidirectional Texture Function (BTF), where a set of base textures of the material under different illumination and viewing conditions is stored and used as a lookup table at runtime. A wide variety of compression algorithms have been proposed, which usually differ only in some basis notation. Several other schemes aside from the BTF also exist that make use of multiple textures as containers for surface appearance data, which compress either the surface transfer function or the response to changes in luminance with a suitable basis function. We propose a common container for image-based material descriptors, the ImageMaterial node for X3D, with a common interface to unify these different implementations and make them accessible to the X3D developer. We also introduce a new texturing node, the PolynomialTextureMap, which can display a Polynomial Texture Map binary container as a regular static texture or work in conjunction with an ImageMaterial appearance to unfold its full potential.
BibTeX:
@inproceedings{Franke-Fellner*12web3D,
  author = {Franke, Tobias and Fellner, Dieter W.},
  title = {A Scalable Framework for Image-based Material Representations},
  booktitle = {Proceedings Web3D 2012},
  publisher = {ACM Press, New York},
  year = {2012},
  pages = {83-91},
  doi = {http://dx.doi.org/10.1145/2338714.2338727}
}
Halm, A., Eggeling, E. & Fellner, D.W., Linsen, L., Hagen, H., Hamann, B. & Hege, H.-C. (ed.) (2012), "Embedding Biomolecular Information in a Scene Graph System", Visualization in Medicine and Life Sciences II, pp.249-264, Springer.
Abstract: We present the Bio Scene Graph (BioSG) for the visualization of biomolecular structures, based on the scene graph system OpenSG. The hierarchical model of primary, secondary and tertiary structures of molecules used in organic chemistry is mapped to a graph of nodes when loading molecular files. We show that, using BioSG, the display of molecules can be integrated into other applications, for example in medical applications. Additionally, existing algorithms and programs can be easily adapted to display their results with BioSG.
BibTeX:
@incollection{Halm*12vmls,
  author = {Andreas Halm and Eva Eggeling and Dieter W. Fellner},
  editor = {Lars Linsen and Hans Hagen and Bernd Hamann and Hans-Christian Hege},
  title = {Embedding Biomolecular Information in a Scene Graph System},
  booktitle = {Visualization in Medicine and Life Sciences II},
  publisher = {Springer},
  year = {2012},
  pages = {249-264},
  doi = {http://dx.doi.org/10.1007/978-3-642-21608-4_14}
}
Havemann, S., Ullrich, T. & Fellner, D., Maybury, M. (ed.) (2012), "The Meaning of Shape and Some Techniques to Extract It", Multimedia Information Extraction: Advances in Video, Audio, and Imagery Analysis for Search, Data Mining, Surveillance, and Authoring, pp.81-98, Wiley-IEEE Computer Society Press.
Abstract: This chapter aims at highlighting some of the fundamental but maybe less obvious limitations of current methods for representing and processing 3D shape, and suggests some possible solutions. We introduce semantic enrichment as a central concept relating the subjective nature of interpretation to the objective of classification. Besides a survey of more traditional approaches for extracting semantics from shape (e.g., structural recognition), we also present more far-reaching generative approaches. Their contribution is twofold: a scalable solution for model-based production of new models, and information extraction from existing models by means of automated fitting procedures. We illustrate these concepts with two very active and extremely challenging application domains, urban reconstruction and cultural heritage.
BibTeX:
@incollection{Havemann*12mmie,
  author = {Sven Havemann and Torsten Ullrich and Dieter Fellner},
  editor = {Mark Maybury},
  title = {The Meaning of Shape and Some Techniques to Extract It},
  booktitle = {Multimedia Information Extraction: Advances in Video, Audio, and Imagery Analysis for Search, Data Mining, Surveillance, and Authoring},
  publisher = {Wiley-IEEE Computer Society Press},
  year = {2012},
  pages = {81-98}
}
Havemann, S., Baker, D., Bentkowska-Kafel, A. & Denard, H. (ed.) (2012), "Intricacies and Potentials of Collecting Paradata in the 3D Modelling Workflow", Paradata. Intellectual Transparency in Historical Visualization, pp.154-162, Ashgate.
Abstract: A 3D artefact is supposed to be a faithful recording and documentation of reality. However, unlike taking a photograph, the creation of a 3D artefact is not only a matter of activating a sensor that takes samples of reality, e.g., pixel colours arranged in a regular grid. Instead of being a simple measurement, the 3D modelling workflow takes the measured raw data (3D acquisition) and transforms and combines them through a number of processing steps. They typically involve a number of sophisticated geometric algorithms, some of which are outlined below. This indeed justifies the name 3D modelling: 3D artefact creation is much more than just 3D acquisition. In many cases, in particular when high-quality results are requested, the human is also in the loop. Highly skilled and trained 3D operators fill holes in the models, remove scanning deficiencies, and use interactive tools to optimize the surface quality. The great problem with this approach is the loss of authenticity: with the finished 3D artefact it is no longer possible to clearly distinguish between measured data and data that are 'invented' by 3D modelling algorithms. Furthermore, current 3D technology has some inherent limitations that make it impossible in principle to collect, during the modelling process, the paradata that would allow assessment of the authenticity of datasets on the basis of individual triangles. It is argued here that a new type of 3D technology is required: a set of algorithms, data structures, and policies that are respected and implemented by all software tools used in the 3D modelling tool chain. Some essential requirements are formulated, and in some cases interesting new ways are indicated in which these requirements could be implemented to obtain practical solutions to the problem of collecting paradata during the creation of 3D artefacts.
BibTeX:
@incollection{Havemann10paradata,
  author = {Sven Havemann},
  editor = {Drew Baker and Anna Bentkowska-Kafel and Hugh Denard},
  title = {Intricacies and Potentials of Collecting Paradata in the 3D Modelling Workflow},
  booktitle = {Paradata. Intellectual Transparency in Historical Visualization},
  publisher = {Ashgate},
  year = {2012},
  pages = {154-162},
  series = {Digital Research in the Arts and Humanities}
}
Hecher, M., Möstl, R., Eggeling, E., Derler, C. & Fellner, D.W., Baptista, A.A., Linde, P., Lavesson, N. & de Brito, M.A. (ed.) (2012), "'Tangible Culture' -- Designing Virtual Exhibitions on Multi-Touch Devices", Social Shaping of Digital Publishing: Exploring the Interplay Between Culture and Technology, ELPUB 2012, pp.104-113, IOS Press.
Abstract: Cultural heritage institutions such as galleries, museums and libraries increasingly use digital media to present artifacts to their audience and enable them to immerse themselves in a cultural virtual world. With the application eXhibition: editor3D, museum curators and editors have a software tool at hand to interactively plan and visualize exhibitions. In this paper we present an extension to the application that enhances the workflow when designing exhibitions. By introducing multi-touch technology to the graphical user interface, the design phase of an exhibition is efficiently simplified, especially for non-technical users. Furthermore, multi-touch technology offers a novel way to integrate collaborative work into a decision-making process. A flexible export system allows storing created exhibitions in various formats to display them on websites, mobile devices or custom viewers. For example, the widespread 3D scene standard Extensible 3D (X3D) is one of the export formats, and we use it to directly incorporate a realtime preview of the exhibition in the authoring process. The combination of the tangible user interfaces with the realtime preview gives curators and exhibition planners a capable tool for efficiently presenting cultural heritage in electronic media.
BibTeX:
@inproceedings{Hecher*12elpub,
  author = {Hecher, Martin and Möstl, Robert and Eggeling, Eva and Derler, Christian and Fellner, Dieter W.},
  editor = {Ana Alice Baptista and Peter Linde and Niklas Lavesson and Miguel Abrunhosa de Brito},
  title = {'Tangible Culture' -- Designing Virtual Exhibitions on Multi-Touch Devices},
  booktitle = {Social Shaping of Digital Publishing: Exploring the Interplay Between Culture and Technology, ELPUB 2012},
  publisher = {IOS Press},
  year = {2012},
  pages = {104-113},
  doi = {http://dx.doi.org/10.3233/978-1-61499-065-9-104}
}
Hecher, M., Möstl, R., Eggeling, E. & Derler, C., Schenk, M. (ed.) (2012), "'Tangible Culture' -- Designing Virtual Exhibitions on Multi-Touch Devices", 15. IFF-Wissenschaftstage 2012. 'Digitales Engineering zum Planen, Testen und Betreiben technischer Systeme', pp.237-243, Fraunhofer Verlag.
BibTeX:
@inproceedings{Hecher*12iff,
  author = {Hecher, Martin and Möstl, Robert and Eggeling, Eva and Derler, Christian},
  editor = {Schenk, Michael},
  title = {'Tangible Culture' -- Designing Virtual Exhibitions on Multi-Touch Devices},
  booktitle = {15. IFF-Wissenschaftstage 2012. 'Digitales Engineering zum Planen, Testen und Betreiben technischer Systeme'},
  publisher = {Fraunhofer Verlag},
  year = {2012},
  pages = {237-243}
}
Kuijper, A., Sourin, A. & Fellner, D.W. (ed.) (2012), "2012 International Conference on Cyberworlds. Proceedings", IEEE Computer Society Conference Publishing Services (CPS), Los Alamitos, Calif.
Abstract: Created intentionally or spontaneously, cyberworlds are information spaces and communities that immensely augment the way we interact, participate in business and receive information throughout the world. Cyberworlds seriously impact our lives and the evolution of the world economy by taking such forms as social networking services, 3D shared virtual communities and massively multiplayer online role-playing games. Cyberworlds 2012 was held 25-27 September 2012 and was organized by Fraunhofer IGD and TU Darmstadt, Germany, in cooperation with EUROGRAPHICS Association and supported by the IFIP Workgroup Computer Graphics and Virtual Worlds.
BibTeX:
@proceedings{Kuijper*2012CW,
  editor = {Kuijper, Arjan and Sourin, Alexei and Fellner, Dieter W.},
  title = {2012 International Conference on Cyberworlds. Proceedings},
  publisher = {IEEE Computer Society Conference Publishing Services (CPS), Los Alamitos, Calif.},
  year = {2012},
  doi = {http://dx.doi.org/10.1109/CW.2012.59}
}
Landesberger, T. v., Schreck, T., Fellner, D.W. & Kohlhammer, J., (2012), "Visual Search and Analysis in Complex Information Spaces -- Approaches and Research Challenges", Expanding the Frontiers of Visual Analytics and Visualization, pp.45-67, Springer, Berlin, Heidelberg, New York.
Abstract: One of the central motivations for visual analytics research is the so-called information overload -- implying the challenge for human users in understanding and making decisions in the presence of too much information [37]. Visual-interactive systems, integrated with automatic data analysis techniques, can help in making use of such large data sets [35]. Visual analytics solutions not only need to cope with data volumes that are large on the nominal scale, but also with data that show high complexity. An important characteristic of complex data is that the data items are difficult to compare in a meaningful way based on the raw data. Also, the data items may be composed of different base data types, giving rise to multiple analytical perspectives. Example data types include research data composed of several base data types, multimedia data composed of different media modalities, etc. In this paper, we discuss the role of data complexity for visual analysis and search, and identify implications for designing respective visual analytics applications. We first introduce a data complexity model, and present current example visual analysis approaches based on it, for a selected number of complex data types. We also outline important research challenges for visual search and analysis that we deem important.
BibTeX:
@incollection{Landesberger*12efvav,
  author = {Landesberger, Tatiana von and Schreck, Tobias and Fellner, Dieter W. and Kohlhammer, Jörn},
  title = {Visual Search and Analysis in Complex Information Spaces -- Approaches and Research Challenges},
  booktitle = {Expanding the Frontiers of Visual Analytics and Visualization},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2012},
  pages = {45-67},
  doi = {http://dx.doi.org/10.1007/978-1-4471-2804-5_4}
}
Pan, X., Schiffer, T., Schröttner, M., Berndt, R., Hecher, M., Havemann, S. & Fellner, D.W., (2012), "An Enhanced Distributed Repository for Working with 3D Assets in Cultural Heritage", Progress in Cultural Heritage Preservation, Vol.7616, pp.349-358, Springer, Berlin, Heidelberg, New York.
Abstract: The development of a European market for digital cultural heritage assets is impeded by the lack of a suitable marketplace, i.e., a commonly accepted distributed exchange platform for digital assets. We have developed such a platform over the last two years: a centralized content management system with distributed storage capability and semantic query functionality. It supports the complete pipeline from data acquisition (photo, 3D scan) through processing (cleaning, hole filling) to interactive presentation, and allows collecting a complete process description (paradata) alongside. In this paper we present the components of the system and explain their interplay. Furthermore, we present and explain which functional components, from transactions to permission management, are needed to operate the system. Finally, we demonstrate the suitability of the API and present a few software applications that use it.
BibTeX:
@inproceedings{Pan*12lncs,
  author = {Pan, Xueming and Schiffer, Thomas and Schröttner, Martin and Berndt, Rene and Hecher, Martin and Havemann, Sven and Fellner, Dieter W.},
  title = {An Enhanced Distributed Repository for Working with 3D Assets in Cultural Heritage},
  booktitle = {Progress in Cultural Heritage Preservation},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2012},
  volume = {7616},
  pages = {349-358},
  series = {Lecture Notes in Computer Science (LNCS)},
  doi = {http://dx.doi.org/10.1007/978-3-642-34234-9_35}
}
Pan, X., Schiffer, T., Hecher, M., Havemann, S., Berndt, R., Fellner, D.W. & Schröttner, M., (2012), "A Scalable Repository Infrastructure for CH Digital Object Management", Proceedings of the VSMM 2012, pp.219-226, IEEE.
Abstract: In recent decades, researchers working on archaeological 3D digitization have found that collecting and archiving intermediate processing data are extremely tiresome tasks. They require a great deal of manpower and material resources, and even so, mistakes can occur and break the whole working chain. The traditional documentation of the digitization process is also a pending challenge: although the ISO standard CIDOC-CRM (ISO 21127:2006) was introduced to archaeologists and museum professionals years ago, there are still some obvious gaps between practice and theory: (1) How to connect the dispersed archaeologists, museums, CH research institutions, and the public? (2) How to ensure the integrity of the whole digitization process and simplify it? (3) How to maximize the usability of public digital objects in the CH community? (4) How to preserve the huge amount of data in the long term? (5) How to present and disseminate the digital objects to the public? This paper presents an operational and optimized infrastructure that realizes not only a distributed storage system, but also a content management system. This infrastructure works as the backbone of the whole digitization process and provides a complete solution suite for archaeologists, museum professionals, museum visitors, and IT technicians.
BibTeX:
@inproceedings{Pan*12vsmm,
  author = {Pan, Xueming and Schiffer, Thomas and Hecher, Martin and Havemann, Sven and Berndt, René and Fellner, Dieter W. and Schröttner, Martin},
  title = {A Scalable Repository Infrastructure for CH Digital Object Management},
  booktitle = {Proceedings of the VSMM 2012},
  publisher = {IEEE},
  year = {2012},
  pages = {219-226},
  doi = {http://dx.doi.org/10.1109/VSMM.2012.6365928}
}
Peter, C. & Urban, B., Dill, J., Earnshaw, R., Kasik, D., Vince, J. & Wong, P. (ed.) (2012), "Emotion in Human-Computer Interaction", Expanding the Frontiers of Visual Analytics and Visualization, pp.239-262, Springer.
Abstract: Affect and emotion are probably the most important facets of our lives. They make our lives worth living by enabling us to enjoy experiences, to value the behavior of others, and to make decisions more easily. They reinforce or fade out the memory of distinct events and make some of them unique in the sequence of episodes that we undergo each day. They also function as a modulator of information when interacting with other people and play an essential role in fine-tuning our communication. The ability to express and understand emotional signs can hence be considered vital for interacting with human beings. Leveraging the power of emotion recognition to enhance technology seems obligatory when designing technology for people. This chapter introduces the physiological background of emotion recognition, describes the general approach to detecting emotion using physiological sensors, and gives two examples of affective applications.
BibTeX:
@incollection{Peterurban*12springer,
  author = {C. Peter and B. Urban},
  editor = {J. Dill and R. Earnshaw and D. Kasik and J. Vince and P.C. Wong},
  title = {Emotion in Human-Computer Interaction},
  booktitle = {Expanding the Frontiers of Visual Analytics and Visualization},
  publisher = {Springer},
  year = {2012},
  pages = {239-262},
  doi = {http://dx.doi.org/10.1007/978-1-4471-2804-5_14}
}
Riemenschneider, H., Krispel, U., Thaller, W., Donoser, M., Havemann, S., Fellner, D.W. & Bischof, H., (2012), "Irregular lattices for complex shape grammar facade parsing", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.1640-1647, IEEE.
Abstract: High-quality urban reconstruction requires more than multi-view reconstruction and local optimization. The structure of facades depends on the general layout, which has to be optimized as a global task. Shape grammars are an established method to express hierarchical spatial relationships, and are therefore suited as a constrained representation for semantic facade interpretation. Usually the inference is carried out using numerical approximations, or is specifically tuned to a hard-coded grammar scheme. Existing methods inspired by classical grammar parsing are not practical on real-world images due to their prohibitively high complexity. This work provides feasible generic facade reconstruction by combining low-level classifiers with mid-level object detectors to infer an irregular lattice. The irregular lattice preserves the logical structure of the facade while reducing the search space to a manageable size. Furthermore, we introduce a method for handling symmetry and repetition within the grammar. We show competitive results on two datasets, namely Paris2010 and GT50. The former includes only Haussmannian buildings, while the latter includes the Baroque, Classicism, Historism, Renaissance, and Art Nouveau architectural styles.
BibTeX:
@inproceedings{Riemenschneider*12cvpr,
  author = {Riemenschneider, Hayko and Krispel, Ulrich and Thaller, Wolfgang and Donoser, Michael and Havemann, Sven and Fellner, Dieter W. and Bischof, Horst},
  title = {Irregular lattices for complex shape grammar facade parsing},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  publisher = {IEEE},
  year = {2012},
  pages = {1640-1647}
}
Scherer, M., v. Landesberger, T. & Schreck, T., (2012), "A Benchmark for Content-Based Retrieval in Bivariate Data Collections", Proc. International Conference on Theory and Practice of Digital Libraries, pp.286-297.
Abstract: Huge amounts of various research data are produced and made publicly available in digital libraries. An important category is bivariate data (measurements of one variable versus the other). Examples of bivariate data include observations of temperature and ozone levels (e.g., in environmental observation), domestic production and unemployment (e.g., in economics), or education and income levels (in the social sciences). For accessing these data, content-based retrieval is an important query modality. It allows researchers to search for specific relationships among data variables (e.g., quadratic dependence of temperature on altitude). However, such retrieval is to date a challenge, as it is not clear which similarity measures to apply. Various approaches have been proposed, yet no benchmarks to compare their retrieval effectiveness have been defined. In this paper, we construct a benchmark for retrieval of bivariate data. It is based on a large collection of bivariate research data. To define similarity classes, we use category information that was annotated by domain experts. The resulting similarity classes are used to compare several recently proposed content-based retrieval approaches for bivariate data, by means of precision and recall. This study is the first to present an encompassing benchmark data set and compare the performance of respective techniques. We also identify potential research directions based on the results obtained for bivariate data. The benchmark and implementations of similarity functions are made available, to foster research in this emerging area of content-based retrieval.
BibTeX:
@inproceedings{Scherer*12tpdl,
  author = {M. Scherer and T. v. Landesberger and T. Schreck},
  title = {A Benchmark for Content-Based Retrieval in Bivariate Data Collections},
  booktitle = {Proc. International Conference on Theory and Practice of Digital Libraries},
  year = {2012},
  pages = {286-297},
  doi = {http://dx.doi.org/10.1007/978-3-642-33290-6_31}
}
Schiffer, T., Aurenhammer, F. & Demuth, M., (2012), "Computing Convex Quadrangulations", Discrete Applied Mathematics, pp.648-656.
BibTeX:
@article{Schiffer*12DAM,
  author = {T. Schiffer and F. Aurenhammer and M. Demuth},
  title = {Computing Convex Quadrangulations},
  journal = {Discrete Applied Mathematics},
  year = {2012},
  pages = {648-656},
  doi = {http://dx.doi.org/10.1016/j.dam.2011.11.002}
}
Schinko, C., Ullrich, T. & Fellner, D.W., (2012), "Minimally Invasive Interpreter Construction", COMPUTATION TOOLS 2012, pp.38-44, Xpert Publishing Services, Wilmington, USA.
Abstract: Scripting languages are easy to use and very popular in various contexts. Their simplicity reduces a user's threshold of inhibitions to start programming -- especially, if the user is not a computer science expert. As a consequence, our generative modeling framework Euclides for non-expert users is based on a JavaScript dialect. It consists of a JavaScript compiler including a front-end (lexer, parser, etc.) and backends for several platforms. In order to reduce our users' development times and for fast feedback, we integrated an interactive interpreter based on the already existing compiler. Instead of writing large proportions of new code, whose behavior has to be consistent with the already existing compiler, we used a minimally invasive solution, which allows us to reuse most parts of the compiler's front- and back-end.
BibTeX:
@inproceedings{Schinko*12CT,
  author = {Schinko, Christoph and Ullrich, Torsten and Fellner, Dieter W.},
  title = {Minimally Invasive Interpreter Construction},
  booktitle = {COMPUTATION TOOLS 2012},
  publisher = {Xpert Publishing Services, Wilmington, USA},
  year = {2012},
  pages = {38-44}
}
Schröttner, M., Havemann, S., Theodoridou, M., Doerr, M. & Fellner, D.W., (2012), "A Generic Approach for Generating Cultural Heritage Metadata", Progress in Cultural Heritage Preservation, Vol.7616, pp.231-240, Springer, Berlin, Heidelberg, New York.
Abstract: Rich metadata is crucial for the documentation and retrieval of 3D datasets in cultural heritage. Generating metadata is expensive as it is a very time consuming semi-manual process. The exponential increase of digital assets requires novel approaches for the mass generation of metadata. We present an approach that is generic, minimizes user assistance, and is customizable for different metadata schemes and storage formats as it is based on generic forms. It scales well and was tested with a large database of digital CH objects.
BibTeX:
@inproceedings{Schroettner*12lncs,
  author = {Schröttner, Martin and Havemann, Sven and Theodoridou, Maria and Doerr, Martin and Fellner, Dieter W.},
  title = {A Generic Approach for Generating Cultural Heritage Metadata},
  booktitle = {Progress in Cultural Heritage Preservation},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2012},
  volume = {7616},
  pages = {231-240},
  series = {Lecture Notes in Computer Science (LNCS)},
  doi = {http://dx.doi.org/10.1007/978-3-642-34234-9_23}
}
Schwenk, K., Kuijper, A., Behr, J. & Fellner, D.W., (2012), "Practical Noise Reduction for Progressive Stochastic Ray Tracing with Perceptual Control", IEEE Computer Graphics and Applications, Vol.32(6), pp.46-55.
Abstract: A proposed method reduces noise in stochastic ray tracing for interactive progressive rendering. The method accumulates high-variance light paths in a separate buffer, which is filtered by a high-quality edge-preserving filter. Then, this method adds a combination of the noisy unfiltered samples and the less noisy (but biased) filtered samples to the low-variance samples to form the final image. A novel per-pixel blending operator combines both contributions in a way that respects a user-defined threshold on perceived noise. This method can provide fast, reliable previews, even in the presence of complex features such as specular surfaces and high-frequency textures. At the same time, it's consistent in that the bias due to filtering vanishes in the limit.
BibTeX:
@article{Schwenk*12CGA,
  author = {Schwenk, Karsten and Kuijper, Arjan and Behr, Johannes and Fellner, Dieter W.},
  title = {Practical Noise Reduction for Progressive Stochastic Ray Tracing with Perceptual Control},
  journal = {IEEE Computer Graphics and Applications},
  year = {2012},
  volume = {32},
  number = {6},
  pages = {46-55},
  doi = {http://dx.doi.org/10.1109/MCG.2012.30}
}
Settgast, V., Eggeling, E. & Fellner, D.W., Schenk, M. (ed.) (2012), "The Preparation of 3D-Content for Interactive Visualization", 15. IFF-Wissenschaftstage 2012, 'Digitales Engineering zum Planen, Testen und Betreiben technischer Systeme', Vol.9, pp.187-192, Fraunhofer Verlag, Stuttgart.
BibTeX:
@inproceedings{Settgast*12iff,
  author = {Settgast, Volker and Eggeling, Eva and Fellner, Dieter W.},
  editor = {Schenk, Michael},
  title = {The Preparation of 3D-Content for Interactive Visualization},
  booktitle = {15. IFF-Wissenschaftstage 2012, 'Digitales Engineering zum Planen, Testen und Betreiben technischer Systeme'},
  publisher = {Fraunhofer Verlag, Stuttgart},
  year = {2012},
  volume = {9},
  pages = {187-192}
}
Settgast, V., Lancelle, M., Bauer, D. & Fellner, D.W., Geiger, C., Herder, J. & Vierjahn, T. (ed.) (2012), "Hands-Free Navigation in Immersive Environments for the Evaluation of the Effectiveness of Indoor Navigation Systems", Virtuelle und Erweiterte Realität : 9. Workshop der GI-Fachgruppe VR/AR, pp.107-118, Herzogenrath : Shaker.
Abstract: While navigation systems for cars are in widespread use, indoor navigation systems based on smartphone apps have only recently become technically feasible. Hence, tools to plan and evaluate particular designs of information provision are needed. Since tests in real infrastructures are costly and environmental conditions cannot be held constant, one must resort to virtual infrastructures. In this paper we present hands-free navigation in such virtual worlds using the Microsoft Kinect in our four-sided Definitely Affordable Virtual Environment (DAVE). We designed and implemented navigation controls using the user's gestures and postures as the input to the controls. The installation of expensive and bulky hardware like treadmills is avoided while still giving the user a good impression of the distance she has travelled in virtual space. An advantage in comparison to approaches using head-mounted augmented reality is that the DAVE allows the users to interact with their smartphone. Thus the effects of different indoor navigation systems can be evaluated already in the planning phase using the resulting system.
BibTeX:
@inproceedings{Settgast*12VRAR,
  author = {Settgast, Volker and Lancelle, Marcel and Bauer, Dietmar and Fellner, Dieter W.},
  editor = {Christian Geiger and Jens Herder and Tom Vierjahn},
  title = {Hands-Free Navigation in Immersive Environments for the Evaluation of the Effectiveness of Indoor Navigation Systems},
  booktitle = {Virtuelle und Erweiterte Realität : 9. Workshop der GI-Fachgruppe VR/AR},
  publisher = {Herzogenrath : Shaker},
  year = {2012},
  pages = {107-118}
}
Stork, A. & Fellner, D.W., (2012), "3D-COFORM -- Tools and Expertise for 3D Collection Formation", Proceedings EVA 2012, pp.35-49, Gesellschaft zur Förderung angewandter Informatik e.V., Berlin.
Abstract: 3D-COFORM has the overall aim to make 3D documentation the standard approach in cultural heritage institutions for collection formation and management. 3D-COFORM is addressing the whole life cycle of digital 3D objects (also called 3D documents) spanning the whole chain from acquisition to processing, and from semantic enrichment to modeling and high-quality presentation -- all that on the basis of an integrated repository infrastructure. The paper will give an overview of 3D-COFORM and present its current results and contributions.
BibTeX:
@inproceedings{Stork-Fellner*12EVA,
  author = {Stork, André and Fellner, Dieter W.},
  title = {3D-COFORM -- Tools and Expertise for 3D Collection Formation},
  booktitle = {Proceedings EVA 2012},
  publisher = {Gesellschaft zur Förderung angewandter Informatik e.V., Berlin},
  year = {2012},
  pages = {35-49}
}
Tatu, A., Zhang, L., Bertini, E., Schreck, T., Keim, D., Bremm, S. & von Landesberger, T., (2012), "ClustNails: Visual Analysis of Subspace Clusters", Tsinghua Science and Technology, Vol.17(4), pp.419-428.
Abstract: Subspace clustering addresses an important problem in clustering multi-dimensional data. In sparse multi-dimensional data, many dimensions are irrelevant and obscure the cluster boundaries. Subspace clustering helps by mining the clusters present in only locally relevant subsets of dimensions. However, understanding the result of subspace clustering by analysts is not trivial. In addition to the grouping information, relevant sets of dimensions and overlaps between groups, both in terms of dimensions and records, need to be analyzed. We introduce a visual subspace cluster analysis system called ClustNails. It integrates several novel visualization techniques with various user interaction facilities to support navigating and interpreting the result of subspace clustering. We demonstrate the effectiveness of the proposed system by applying it to the analysis of real world data and comparing it with existing visual subspace cluster analysis systems.
BibTeX:
@article{Tatu*12tsinghua,
  author = {A. Tatu and L. Zhang and E. Bertini and T. Schreck and D. Keim and S. Bremm and T. von Landesberger},
  title = {ClustNails: Visual Analysis of Subspace Clusters},
  journal = {Tsinghua Science and Technology},
  year = {2012},
  volume = {17},
  number = {4},
  pages = {419--428},
  doi = {http://dx.doi.org/10.1109/TST.2012.6297588}
}
Tatu, A., Maaß, F., Färber, I., Bertini, E., Schreck, T., Seidl, T. & Keim, D., (2012), "Subspace Search and Visualization to Make Sense of Alternative Clusterings in High-Dimensional Data", Proc. IEEE Conference on Visual Analytics Science and Technology, pp.63-72.
Abstract: In explorative data analysis, the data under consideration often resides in a high-dimensional (HD) data space. Currently many methods are available to analyze this type of data. So far, proposed automatic approaches include dimensionality reduction and cluster analysis, whereby visual-interactive methods aim to provide effective visual mappings to show, relate, and navigate HD data. Furthermore, almost all of these methods conduct the analysis from a singular perspective, meaning that they consider the data in either the original HD data space, or a reduced version thereof. Additionally, HD data spaces often consist of combined features that measure different properties, in which case the particular relationships between the various properties may not be clear to the analysts a priori, since they can only be revealed if appropriate feature combinations (subspaces) of the data are taken into consideration. Considering just a single subspace is, however, often not sufficient since different subspaces may show complementary, conjoint, or contradictory relations between data items. Useful information may consequently remain embedded in sets of subspaces of a given HD input data space. Relying on the notion of subspaces, we propose a novel method for the visual analysis of HD data in which we employ an interestingness-guided subspace search algorithm to detect a candidate set of subspaces. Based on appropriately defined subspace similarity functions, we visualize the subspaces and provide navigation facilities to interactively explore large sets of subspaces. Our approach allows users to effectively compare and relate subspaces with respect to involved dimensions and clusters of objects. We apply our approach to synthetic and real data sets. We thereby demonstrate its support for understanding HD data from different perspectives, effectively yielding a more complete view on HD data.
BibTeX:
@inproceedings{Tatu*12vast,
  author = {A. Tatu and F. Maaß and I. Färber and E. Bertini and T. Schreck and T. Seidl and D. Keim},
  title = {Subspace Search and Visualization to Make Sense of Alternative Clusterings in High-Dimensional Data},
  booktitle = {Proc. IEEE Conference on Visual Analytics Science and Technology},
  year = {2012},
  pages = {63--72},
  doi = {http://dx.doi.org/10.1109/VAST.2012.6400488}
}
Thaller, W., Krispel, U., Havemann, S. & Fellner, D., Ullrich, T. & Lorenz, P. (ed.) (2012), "Implicit Nested Repetition in Dataflow for Procedural Modeling", COMPUTATION TOOLS 2012, pp.45-50, IARIA.
Abstract: Creating 3D content requires a lot of expert knowledge and is often a very time-consuming task. Procedural modeling can simplify this process for several application domains. However, creating procedural descriptions is still a complicated task. Graph-based visual programming languages can ease the creation workflow; however, direct manipulation of the procedural 3D content, rather than of a visual program, is desirable, as it resembles established techniques in 3D modeling. In this paper, we present a dataflow language that features a novel approach to handling loops in the context of direct interactive manipulation of procedural 3D models, and we show compilation techniques to translate it to traditional languages used in procedural modeling.
BibTeX:
@inproceedings{Thaller*12CT,
  author = {W. Thaller and U. Krispel and S. Havemann and D. Fellner},
  editor = {T. Ullrich and P. Lorenz},
  title = {Implicit Nested Repetition in Dataflow for Procedural Modeling},
  booktitle = {COMPUTATION TOOLS 2012},
  publisher = {IARIA},
  year = {2012},
  pages = {45-50}
}
Weber, D., Peña Serna, S., Stork, A. & Fellner, D.W., (2012), "Schnelle Strömungsberechnungen mit GPU: Rapid CFD für die frühe konzeptionelle Designphase", Digital Engineering Magazin, Vol.15(5), pp.44-47.
Abstract: A new wing is designed on the computer. Is its lift actually better than that of conventional designs? A computer simulation can provide the answer. Conventional simulations usually deliver the desired results only after several hours or days; only then can the geometry be modified to improve its properties. A new method now delivers the first simulation results in real time. It uses the processors of graphics cards (graphics processing units, GPUs) for the necessary computations.
BibTeX:
@article{Weber*12DEM,
  author = {Weber, Daniel and Peña Serna, Sebastian and Stork, André and Fellner, Dieter W.},
  title = {Schnelle Strömungsberechnungen mit GPU: Rapid CFD für die frühe konzeptionelle Designphase},
  journal = {Digital Engineering Magazin},
  year = {2012},
  volume = {15},
  number = {5},
  pages = {44-47}
}
Weber, D., Peña Serna, S., Stork, A. & Fellner, D.W., (2012), "Rapid CFD für die frühe konzeptionelle Design Phase", NAFEMS Online Magazin, Vol.21(1), pp.70-79.
Abstract: An important part of the product development cycle is the optimization of the fluid-mechanical or structural-mechanical properties of a component, which normally takes place in an iterative and very laborious process. Besides modifying, simplifying, and meshing the component geometry, the simulation itself can take hours to days. In early conceptual design phases, various material parameters as well as different geometries must be tried out and compared in order to arrive at an optimal design for the later product. This time-consuming process clearly limits the number of alternatives that can be analyzed. This work presents the 'Rapid CFD' framework, which makes it possible to use fast flow simulations in the early conceptual design phase. To achieve this speed, the computation and visualization of two-dimensional flows are combined in real time. This enables the interactive modification of parameters and boundary conditions, and thus a fast analysis and evaluation of different geometries and an early optimization of a component. The framework performs all computations on the graphics card (graphics processing unit, GPU) and thereby avoids costly copying between CPU and GPU main memory. The computations are carried out on a standard desktop PC, so that the simulation results remain in graphics memory and can be used directly for visualization. B-splines are used to model the geometry, so that users can locally modify the shape via individual control points. The discretization is likewise performed on the GPU. A single time step, even for millions of unknowns, is computed in fractions of a second. The intuitive geometric manipulation, combined with the immediate visualization of simulation quantities such as pressure and velocity, enables direct analysis of the influence of geometry and parameter changes. Although this novel simulation technique does not yet reach the high precision of conventional simulations, it makes it possible to observe trends and tendencies.
BibTeX:
@article{Weber*12nafems,
  author = {Weber, Daniel and Peña Serna, Sebastian and Stork, André and Fellner, Dieter W.},
  title = {Rapid CFD für die frühe konzeptionelle Design Phase},
  journal = {NAFEMS Online Magazin},
  year = {2012},
  volume = {21},
  number = {1},
  pages = {70-79}
}
Zhang, L., Stoffel, A., Behrisch, M., Mittelstädt, S., Schreck, T., Pompl, R., Weber, S., Last, H. & Keim, D., (2012), "Visual Analytics for the Big Data Era -- A Comparative Review of State-of-the-Art Commercial Systems", Proc. IEEE Symposium on Visual Analytics Science and Technology, pp.173-182.
Abstract: Visual analytics (VA) system development started in academic research institutions where novel visualization techniques and open source toolkits were developed. Simultaneously, small software companies, sometimes spin-offs from academic research institutions, built solutions for specific application domains. In recent years we observed the following trend: some small VA companies grew exponentially; at the same time some big software vendors such as IBM and SAP started to acquire successful VA companies and integrated the acquired VA components into their existing frameworks. Generally the application domains of VA systems have broadened substantially. This phenomenon is driven by the generation of more and more data of high volume and complexity, which leads to an increasing demand for VA solutions from many application domains. In this paper we survey a selection of state-of-the-art commercial VA frameworks, complementary to an existing survey on open source VA tools. From the survey results we identify several improvement opportunities as future research directions.
BibTeX:
@inproceedings{Zhang*12vast,
  author = {L. Zhang and A. Stoffel and M. Behrisch and S. Mittelstädt and T. Schreck and R. Pompl and S. Weber and H. Last and D. Keim},
  title = {Visual Analytics for the Big Data Era -- A Comparative Review of State-of-the-Art Commercial Systems},
  booktitle = {Proc. IEEE Symposium on Visual Analytics Science and Technology},
  year = {2012},
  pages = {173--182},
  doi = {http://dx.doi.org/10.1109/VAST.2012.6400554}
}
Zmugg, R., Thaller, W., Hecher, M., Schiffer, T., Havemann, S. & Fellner, D.W., (2012), "Authoring Animated Interactive 3D Museum Exhibits using a Digital Repository", VAST 2012, pp.73-80, Eurographics Association, Goslar.
Abstract: We present the prototype of a software system to streamline the serial production of simple interactive 3D animations for display in museum exhibitions. We propose dividing the authoring process into two phases, a designer phase and a curator phase. The designer creates a set of configurable 3D scene templates that fit the look of the physical exhibition, while the curator inserts 3D models and configures the scene templates; the finished scenes are uploaded to 3D kiosks in the museum. Distinguishing features of our system are the tight integration with an asset repository and the simplified scene graph authoring. We demonstrate the usefulness with a few examples.
BibTeX:
@inproceedings{Zmugg*12VAST,
  author = {Zmugg, René and Thaller, Wolfgang and Hecher, Martin and Schiffer, Thomas and Havemann, Sven and Fellner, Dieter W.},
  title = {Authoring Animated Interactive 3D Museum Exhibits using a Digital Repository},
  booktitle = {VAST 2012},
  publisher = {Eurographics Association, Goslar},
  year = {2012},
  pages = {73-80},
  doi = {http://dx.doi.org/10.2312/VAST/VAST12/073-080}
}

2011

Augsdörfer, U.H., Dodgson, N.A. & Sabin, M.A., (2011), "Artifact analysis on B-splines, box-splines and other surfaces defined by quadrilateral polyhedra", Computer Aided Geometric Design, Vol.28(3), pp.177-197, Elsevier Science Publishers B. V..
Abstract: When using NURBS or subdivision surfaces as a design tool in engineering applications, designers face certain challenges. One of these is the presence of artifacts. An artifact is a feature of the surface that cannot be avoided by movement of control points by the designer. This implies that the surface contains spatial frequencies greater than one cycle per two control points. These are seen as ripples in the surface and are found in NURBS and subdivision surfaces and potentially in all surfaces specified in terms of polyhedrons of control points. Ideally, this difference between designer intent and what emerges as a surface should be eliminated. The first step to achieving this is by understanding and quantifying the artifact observed in the surface. We present methods for analysing the magnitude of artifacts in a surface defined by a quadrilateral control mesh. We use the subdivision process as a tool for analysis. Our results provide a measure of surface artifacts with respect to initial control point sampling for all B-Splines, quadrilateral box-spline surfaces and regular regions of subdivision surfaces. We use four subdivision schemes as working examples: the three box-spline subdivision schemes, Catmull-Clark (cubic B-spline), 4-3, 4-8; and Kobbelt's interpolating scheme.
BibTeX:
@article{Augsdoerfer*11cagd,
  author = {Ursula H. Augsdörfer and Neil A. Dodgson and Malcolm A. Sabin},
  title = {Artifact analysis on B-splines, box-splines and other surfaces defined by quadrilateral polyhedra},
  journal = {Computer Aided Geometric Design},
  publisher = {Elsevier Science Publishers B. V.},
  year = {2011},
  volume = {28},
  number = {3},
  pages = {177-197},
  doi = {http://dx.doi.org/10.1016/j.cagd.2010.04.002}
}
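The artifact criterion quoted in the abstract can be stated compactly. Writing h for the control point spacing (our notation, not the paper's), a surface feature is an artifact when its spatial frequency exceeds the Nyquist limit of the control mesh,

\[
  f_{\mathrm{artifact}} \;>\; \frac{1}{2h},
\]

i.e. more than one cycle per two control points: such components cannot be shaped by moving control points and therefore appear as ripples inherent to the scheme.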
Augsdörfer, U.H., Dodgson, N.A. & Sabin, M.A., (2011), "Artifact analysis on triangular box-splines and subdivision surfaces defined by triangular polyhedra", Computer Aided Geometric Design, Vol.28(3), pp.198-211, Elsevier Science Publishers B. V..
Abstract: Surface artifacts are features in a surface which cannot be avoided by movement of control points. They are present in B-splines, box splines and subdivision surfaces. We showed how the subdivision process can be used as a tool to analyse artifacts in surfaces defined by quadrilateral polyhedra (Sabin et al., 2005; Augsdorfer et al., 2011). In this paper we are utilising the subdivision process to develop a generic expression which can be employed to determine the magnitude of artifacts in surfaces defined by any regular triangular polyhedra. We demonstrate the method by analysing box-splines and regular regions of subdivision surfaces based on triangular meshes: Loop subdivision, Butterfly subdivision and a novel interpolating scheme with two smoothing stages. We compare our results for surfaces defined by triangular polyhedra to those for surfaces defined by quadrilateral polyhedra.
BibTeX:
@article{Augsdoerfer*11cagd2,
  author = {Ursula H. Augsdörfer and Neil A. Dodgson and Malcolm A. Sabin},
  title = {Artifact analysis on triangular box-splines and subdivision surfaces defined by triangular polyhedra},
  journal = {Computer Aided Geometric Design},
  publisher = {Elsevier Science Publishers B. V.},
  year = {2011},
  volume = {28},
  number = {3},
  pages = {198-211},
  doi = {http://dx.doi.org/10.1016/j.cagd.2011.01.003}
}
Barmak, K., Eggeling, E., Emelianenko, M., Epshteyn, Y., Kinderlehrer, D., Sharp, R. & Ta'asan, S., (2011), "An entropy based theory of the grain boundary character distribution", Discrete and continuous dynamical systems (Series A 30), pp.427-454.
Abstract: Cellular networks are ubiquitous in nature. They exhibit behavior on many different length and time scales and are generally metastable. Most technologically useful materials are polycrystalline microstructures composed of a myriad of small monocrystalline grains separated by grain boundaries. The energetics and connectivity of the grain boundary network play a crucial role in determining the properties of a material across a wide range of scales. A central problem in materials science is to develop technologies capable of producing an arrangement of grains -- a texture -- appropriate for a desired set of material properties. Here we discuss the role of energy in texture development, measured by a character distribution. We derive an entropy-based theory based on mass transport and a Kantorovich-Rubinstein-Wasserstein metric to suggest that, to first approximation, this distribution behaves like the solution to a Fokker-Planck equation.
BibTeX:
@article{Barmak*11dcds-a,
  author = {K. Barmak and E. Eggeling and M. Emelianenko and Y. Epshteyn and D. Kinderlehrer and R. Sharp and S. Ta'asan},
  title = {An entropy based theory of the grain boundary character distribution},
  journal = {Discrete and continuous dynamical systems (Series A 30)},
  year = {2011},
  pages = {427-454}
}
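The Fokker-Planck behaviour referred to in this and the following entry can be written, in generic one-dimensional notation (ρ for the character distribution, ψ for the grain boundary energy, σ for a temperature-like parameter; the symbols are ours, not the papers'), as

\[
  \partial_t \rho \;=\; \partial_\xi \bigl( \sigma\, \partial_\xi \rho + \rho\, \partial_\xi \psi \bigr),
  \qquad
  \rho_\infty(\xi) \;\propto\; e^{-\psi(\xi)/\sigma},
\]

whose stationary state is exactly the Boltzmann distribution that the simulations discussed in the companion paper below recover.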
Barmak, K., Eggeling, E., Emelianenko, M., Epshteyn, Y., Kinderlehrer, D., Sharp, R. & Ta'asan, S., (2011), "Critical events, entropy, and the grain boundary character distribution", Phys. Rev. B, Vol.83, pp.134117, American Physical Society.
Abstract: Mesoscale experiment and simulation permit harvesting information about both geometric features and texture in polycrystals. The grain boundary character distribution (GBCD) is an empirical distribution of the relative length [in two dimensions (2D)] or area (in 3D) of an interface with a given lattice misorientation and normal. During the growth process, an initially random distribution of boundary types reaches a steady state that is strongly correlated to the interfacial energy density. In simulation, it is found that if the given energy density depends only on lattice misorientation, then the steady-state GBCD and the energy are related by a Boltzmann distribution. This is among the simplest nonrandom distributions, corresponding to independent trials with respect to the energy. In this paper, we derive an entropy-based theory that suggests that the evolution of the GBCD satisfies a Fokker-Planck equation, an equation whose stationary state is a Boltzmann distribution. Cellular structures coarsen according to a local evolution law, curvature-driven growth, and are limited by space-filling constraints. The interaction between the evolution law and the constraints is governed primarily by the force balance at triple junctions, the natural boundary condition associated with curvature-driven growth, and determines a dissipation relation. A simplified coarsening model is introduced that is driven by the boundary conditions and reflects the network level dissipation relation of the grain growth system. It resembles an ensemble of inertia-free spring-mass dashpots. Application is made of the recent characterization of Fokker-Planck kinetics as a gradient flow for a free energy in deriving the theory. The theory predicts the results of large-scale two-dimensional simulations and is consistent with experiment.
BibTeX:
@article{Barmak*11physRevB,
  author = {Barmak, K. and Eggeling, E. and Emelianenko, M. and Epshteyn, Y. and Kinderlehrer, D. and Sharp, R. and Ta'asan, S.},
  title = {Critical events, entropy, and the grain boundary character distribution},
  journal = {Phys. Rev. B},
  publisher = {American Physical Society},
  year = {2011},
  volume = {83},
  pages = {134117},
  doi = {http://dx.doi.org/10.1103/PhysRevB.83.134117}
}
Bein, M., Fellner, D.W. & Stork, A., (2011), "Genetic B-Spline Approximation on Combined B-Reps", The Visual Computer, Vol.27(6-8), pp.485-494.
Abstract: We present a genetic algorithm for approximating densely sampled curves with uniform cubic B-Splines suitable for Combined B-reps. A feature of this representation is the ability to alter the continuity property of the B-Spline at any knot, allowing freeform curves and polygonal parts to be combined within one representation. Naturally, there is a trade-off between different approximation properties such as accuracy and the number of control points needed. Our algorithm creates very accurate B-Splines with few control points, as shown in Fig. 1. Since the approximation problem is highly nonlinear, we approach it with genetic methods, leading to better results compared to classical gradient-based methods. Parallelization and adapted evolution strategies are used to produce results very quickly.
BibTeX:
@article{Bein*11cgi,
  author = {Bein, Matthias and Fellner, Dieter W. and Stork, André},
  title = {Genetic B-Spline Approximation on Combined B-Reps},
  journal = {The Visual Computer},
  year = {2011},
  volume = {27},
  number = {6-8},
  pages = {485-494},
  doi = {http://dx.doi.org/10.1007/s00371-011-0592-9}
}
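As a toy illustration of the genetic approach (not the paper's algorithm, which evolves control points and per-knot continuity flags for Combined B-reps), the following SciPy sketch evolves only the number of control points of a least-squares cubic B-spline fit, trading accuracy against control point count:

import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)

# densely sampled target curve (toy example)
u = np.linspace(0.0, 1.0, 400)
y = np.sin(2 * np.pi * u) + 0.3 * np.sin(7 * np.pi * u)

def fitness(n_ctrl, weight=1e-3):
    """Approximation error plus a penalty on the number of control points."""
    k = 3                                        # cubic
    # interior knots for a clamped uniform cubic spline with n_ctrl coefficients
    t_int = np.linspace(0, 1, n_ctrl - k + 1)[1:-1]
    knots = np.concatenate(([0.0] * (k + 1), t_int, [1.0] * (k + 1)))
    spl = make_lsq_spline(u, y, knots, k)
    err = np.mean((spl(u) - y) ** 2)
    return err + weight * n_ctrl

# (mu + lambda)-style evolution over the control point count
pop = rng.integers(5, 40, size=8)
for _ in range(20):
    children = np.clip(pop + rng.integers(-3, 4, size=pop.size), 5, 60)
    both = np.concatenate([pop, children])
    pop = both[np.argsort([fitness(n) for n in both])][:8]

print("best control point count:", pop[0])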
Bernard, J., von Landesberger, T., Bremm, S. & Schreck, T., (2011), "Multiscale visual quality assessment for cluster analysis with Self-Organizing Maps", IS&T/SPIE Conference on Visualization and Data Analysis, pp.78680N.1-78680N.12, SPIE Press.
Abstract: Cluster analysis is an important data mining technique for analyzing large amounts of data, reducing many objects to a limited number of clusters. Cluster visualization techniques aim at supporting the user in better understanding the characteristics and relationships among the found clusters. While promising approaches to visual cluster analysis already exist, these usually fall short of incorporating the quality of the obtained clustering results. However, due to the nature of the clustering process, quality plays an important role, as for most practical data sets, typically many different clusterings are possible. Being aware of clustering quality is important to judge the expressiveness of a given cluster visualization, or to adjust the clustering process with refined parameters, among others. In this work, we present an encompassing suite of visual tools for quality assessment of an important visual clustering algorithm, namely, the Self-Organizing Map (SOM) technique. We define, measure, and visualize the notion of SOM cluster quality along a hierarchy of cluster abstractions. The quality abstractions range from simple scalar-valued quality scores up to the structural comparison of a given SOM clustering with the output of additional supportive clustering methods. The suite of methods allows the user to assess the SOM quality on the appropriate abstraction level, and arrive at improved clustering results. We implement our tools in an integrated system, apply it to experimental data sets, and show its applicability.
BibTeX:
@inproceedings{Bernard*11vda,
  author = {J. Bernard and T. von Landesberger and S. Bremm and T. Schreck},
  title = {Multiscale visual quality assessment for cluster analysis with Self-Organizing Maps},
  booktitle = {IS&T/SPIE Conference on Visualization and Data Analysis},
  publisher = {SPIE Press},
  year = {2011},
  pages = {78680N.1--78680N.12},
  doi = {http://dx.doi.org/10.1117/12.872545}
}
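One of the simple scalar-valued scores at the bottom of the quality hierarchy described above is the quantization error of a trained SOM. A minimal NumPy sketch, assuming the codebook is already trained (training itself is omitted):

import numpy as np

def quantization_error(data, codebook):
    """Mean distance from each sample to its best-matching SOM unit --
    one of the simplest scalar quality scores for a SOM clustering."""
    # pairwise distances: (n_samples, n_units)
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 4))
codebook = rng.normal(size=(25, 4))     # a 5x5 SOM grid, flattened
print(quantization_error(data, codebook))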
Berndt, R., Griesbaum, J., Mandl, T. & Womser-Hacker, C. (ed.) (2011), "3D-Modelle in bibliothekarischen Angeboten", Information und Wissen: global, sozial und frei?, Vol.58, pp.498-499.
BibTeX:
@inproceedings{Berndt11isi,
  author = {René Berndt},
  editor = {J. Griesbaum and T. Mandl and C. Womser-Hacker},
  title = {3D-Modelle in bibliothekarischen Angeboten},
  booktitle = {Information und Wissen: global, sozial und frei?},
  year = {2011},
  volume = {58},
  pages = {498-499},
  series = {Schriften zur Informationswissenschaft}
}
Bhatti, N. & Fellner, D.W., Dogru, A.H. & Bicer, V. (ed.) (2011), "Visual Semantic Analysis to Support Semi-Automatic Modeling of Semantic Service Descriptions", Modern Software Engineering Concepts and Practices: Advanced Approaches, pp.151-195, IGI Global.
Abstract: The service-oriented architecture has become one of the most popular approaches for distributed business applications. A new trend, the service ecosystem, is emerging, where service providers can augment their core services with available business service delivery functionalities such as distribution and delivery. Producing semantic service descriptions for business service delivery will become a bottleneck in the service ecosystem. In this chapter, the Visual Semantic Analysis approach is presented to support semi-automatic modeling of semantic service descriptions by combining machine learning and interactive visualization techniques. Furthermore, two application scenarios from the project THESEUS-TEXO (funded by the German Federal Ministry of Economics and Technology) are presented as an evaluation of the Visual Semantic Analysis approach.
BibTeX:
@incollection{Bhatti-Fellner11,
  author = {Nadeem Bhatti and Dieter W. Fellner},
  editor = {Ali H. Dogru and Veli Bicer},
  title = {Visual Semantic Analysis to Support Semi-Automatic Modeling of Semantic Service Descriptions},
  booktitle = {Modern Software Engineering Concepts and Practices: Advanced Approaches},
  publisher = {IGI Global},
  year = {2011},
  pages = {151-195},
  doi = {http://dx.doi.org/10.4018/978-1-60960-215-4.ch007}
}
Bieber, G., Haescher, M., Peter, C., Aehnelt, M., Richter, C. & Gohlke, H., (2011), "Handsfree Interaction mittels Handgelenkssensoren für mobile Assistenzsysteme", 6. Kongress Multimediatechnik Wismar 2011, pp.34-40, Hochschule Wismar.
BibTeX:
@inproceedings{Bieber*11aal,
  author = {Gerald Bieber and Marian Haescher and Christian Peter and Mario Aehnelt and Claas Richter and Holger Gohlke},
  title = {Handsfree Interaction mittels Handgelenkssensoren für mobile Assistenzsysteme},
  booktitle = {6. Kongress Multimediatechnik Wismar 2011},
  publisher = {Hochschule Wismar},
  year = {2011},
  pages = {34-40},
  note = {extended abstract}
}
Bieber, G., Luthardt, A., Peter, C. & Urban, B., (2011), "The hearing trousers pocket -- activity recognition by alternative sensors", Association for Computing Machinery (ACM): The 4th ACM International Conference on PErvasive Technologies Related to Assistive Environments : PETRA 2011, pp.123-128, ACM.
Abstract: In daily life, mobile phones accompany the user permanently and are often carried in the front pocket of the trousers. The sensors included in today's mobile phones can hence be used for ubiquitous assistance. For instance, the acceleration sensor can be used to analyze the person's bodily activity, and the microphone can be used to analyze environmental noise levels. A sensor fusion provides additional and more reliable environmental and context information. This work presents new methods of activity recognition based on acceleration and sound sensors, using the sensors included in commercially available smartphones during everyday life. We found that sounds provide valuable additional information on a user's situation that allows a better assessment of the person's current context.
BibTeX:
@inproceedings{Bieber*11petra,
  author = {Bieber, G. and Luthardt, A. and Peter, C. and Urban, B.},
  title = {The hearing trousers pocket -- activity recognition by alternative sensors},
  booktitle = {Association for Computing Machinery (ACM): The 4th ACM International Conference on PErvasive Technologies Related to Assistive Environments : PETRA 2011},
  publisher = {ACM},
  year = {2011},
  pages = {123-128}
}
Binotto, A., Pereira, C.E., Kuijper, A., Stork, A. & Fellner, D.W., (2011), "An Effective Dynamic Scheduling Runtime and Tuning System for Heterogeneous Multi and Many-Core Desktop Platforms", Proceedings 2011 IEEE International Conference on High Performance Computing and Communications, pp.78-85, IEEE Computer Society Conference Publishing Services (CPS), Los Alamitos, Calif..
Abstract: A personal computer can be considered as a one-node heterogeneous cluster that simultaneously processes several application tasks. It can be composed of, for example, an asymmetric CPU and GPUs. This way, a high-performance heterogeneous platform is built on a desktop for data-intensive engineering calculations. From our perspective, the workload distribution over the Processing Units (PUs) plays a key role in such systems. This issue presents challenges, since the cost of a task on a PU is non-deterministic and can be affected by parameters not known a priori. This paper presents a context-aware runtime and tuning system based on a compromise between reducing the execution time of engineering applications -- due to appropriate dynamic scheduling -- and the cost of computing such a schedule on a platform composed of a CPU and GPUs. Results obtained in experimental case studies are encouraging, and a performance gain of 21.77 percent was achieved in comparison to the static assignment of all tasks to the GPU.
BibTeX:
@inproceedings{Binotto*11hpcc,
  author = {Binotto, Alecio and Pereira, Carlos Eduardo and Kuijper, Arjan and Stork, André and Fellner, Dieter W.},
  title = {An Effective Dynamic Scheduling Runtime and Tuning System for Heterogeneous Multi and Many-Core Desktop Platforms},
  booktitle = {Proceedings 2011 IEEE International Conference on High Performance Computing and Communications},
  publisher = {IEEE Computer Society Conference Publishing Services (CPS), Los Alamitos, Calif.},
  year = {2011},
  pages = {78-85},
  doi = {http://dx.doi.org/10.1109/HPCC.2011.20}
}
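A stripped-down sketch of the scheduling idea: assign each task to the processing unit with the lowest estimated finish time, and refine per-unit cost estimates from measured runtimes. All names are illustrative, and the actual runtime is context-aware in more dimensions than this:

import time

class DynamicScheduler:
    """Toy cost-model scheduler in the spirit of the abstract: assign each
    task to the processing unit (PU) whose estimated finish time is lowest,
    and refine the per-PU cost estimates from measured runtimes."""

    def __init__(self, pus):
        self.cost = {pu: {} for pu in pus}       # task kind -> avg seconds
        self.busy_until = {pu: 0.0 for pu in pus}

    def pick(self, kind):
        now = time.perf_counter()
        def finish(pu):
            est = self.cost[pu].get(kind, 1e-3)  # optimistic default cost
            return max(self.busy_until[pu], now) + est
        return min(self.cost, key=finish)

    def run(self, kind, impls):
        """impls maps each PU name to a callable implementing the task."""
        pu = self.pick(kind)
        t0 = time.perf_counter()
        result = impls[pu]()                     # execute on the chosen PU
        dt = time.perf_counter() - t0
        avg = self.cost[pu].get(kind, dt)
        self.cost[pu][kind] = 0.8 * avg + 0.2 * dt   # moving average
        self.busy_until[pu] = time.perf_counter()
        return result

# usage: sched.run("axpy", {"cpu": cpu_kernel, "gpu": gpu_kernel}) migrates
# "axpy" tasks toward whichever unit has proven faster so far
sched = DynamicScheduler(["cpu", "gpu"])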
Bremm, S., Landesberger, T. v., Bernard, J. & Schreck, T., (2011), "Assisted Descriptor Selection Based on Visual Comparative Data Analysis", Wiley-Blackwell Computer Graphics Forum (Proc. EuroVis 2011), Vol.30(3), pp.891-900.
Abstract: Exploration and selection of data descriptors representing objects using a set of features are important components in many data analysis tasks. Usually, for a given dataset, an optimal data description does not exist, as the suitable data representation is strongly use-case dependent. Many solutions for selecting a suitable data description have been proposed. In most instances, they require data labels and often are black-box approaches. Non-expert users have difficulty comprehending the coherency of input, parameters, and output of these algorithms. Alternative approaches, interactive systems for visual feature selection, overburden the user with an overwhelming set of options and data views. Therefore, it is essential to offer users guidance in this analytical process. In this paper, we present a novel system for data description selection, which facilitates the user's access to the data analysis process. As finding a suitable data description consists of several steps, we support the user with guidance. Our system combines automatic data analysis with interactive visualizations. In this way, the system provides a recommendation for suitable data descriptor selections. It supports the comparison of data descriptors with differing dimensionality for unlabeled data. We propose specialized scores and interactive views for descriptor comparison. The visualization techniques are scatterplot-based and grid-based. For the latter case, we apply Self-Organizing Maps as adaptive grids, which are well suited for large multi-dimensional data sets. As an example, we demonstrate the usability of our system on a real-world biochemical application.
BibTeX:
@article{Bremm*11eurovis,
  author = {Bremm, S. and Landesberger, T. von and Bernard, J. and Schreck, T.},
  title = {Assisted Descriptor Selection Based on Visual Comparative Data Analysis},
  journal = {Wiley-Blackwell Computer Graphics Forum (Proc. EuroVis 2011)},
  year = {2011},
  volume = {30},
  number = {3},
  pages = {891--900},
  doi = {http://dx.doi.org/10.1111/j.1467-8659.2011.01938.x}
}
Bremm, S., v. Landesberger, T., Heß, M., Schreck, T., Weil, P. & Hamacher, K., (2011), "Interactive Comparison of Multiple Trees", Proc. IEEE Conference on Visual Analytics Science and Technology, pp.31-40, IEEE Computer Society.
Abstract: Traditionally, the visual analysis of hierarchies, or trees, is conducted by focusing on one given hierarchy. However, in many research areas multiple, differing hierarchies need to be analyzed simultaneously in a comparative way -- in particular to highlight differences between them, which can sometimes be subtle. A prominent example is the analysis of so-called phylogenetic trees in biology, reflecting hierarchical evolutionary relationships among a set of organisms. Typically, the analysis considers multiple phylogenetic trees, either to account for statistical significance or for differences in the derivation of such evolutionary hierarchies; for example, based on different input data, such as the 16S ribosomal RNA and protein sequences of highly conserved enzymes. The simultaneous analysis of a collection of such trees leads to more insight into the evolutionary process. We introduce a novel visual analytics approach for the comparison of multiple hierarchies focusing on both global and local structures. A new tree comparison score has been elaborated for the identification of interesting patterns. We developed a set of linked hierarchy views showing the results of automatic tree comparison at various levels of detail. This combined approach offers detailed assessment of local and global tree similarities. The approach was developed in close cooperation with experts from the evolutionary biology domain. We apply it to a phylogenetic data set on bacterial ancestry, demonstrating its application benefit.
BibTeX:
@inproceedings{Bremm*11vast,
  author = {S. Bremm and T. v. Landesberger and M. Heß and T. Schreck and P. Weil and K. Hamacher},
  title = {Interactive Comparison of Multiple Trees},
  booktitle = {Proc. IEEE Conference on Visual Analytics Science and Technology},
  publisher = {IEEE Computer Society},
  year = {2011},
  pages = {31--40},
  doi = {http://dx.doi.org/10.1109/VAST.2011.6102439}
}
Breuel, F., Berndt, R., Ullrich, T., Eggeling, E. & Fellner, D.W., Tonta, Y., Al, U., Erdogan, P.L. & Baptista, A.A. (ed.) (2011), "Mate in 3D -- Publishing Interactive Content in PDF3D", Publishing in the Networked World: Transforming the Nature of Communication, pp.110-119, Hacettepe University Department of Information Management.
BibTeX:
@inproceedings{Breuel*11elpub,
  author = {Frank Breuel and René Berndt and Torsten Ullrich and Eva Eggeling and Dieter W. Fellner},
  editor = {Yasar Tonta and Umut Al and Phyllis Lepon Erdogan and Ana Alice Baptista},
  title = {Mate in 3D -- Publishing Interactive Content in PDF3D},
  booktitle = {Publishing in the Networked World: Transforming the Nature of Communication},
  publisher = {Hacettepe University Department of Information Management},
  year = {2011},
  pages = {110-119}
}
Burkhardt, D., Nazemi, K., Stab, C., Breyer, M., Wichert, R. & Fellner, D.W., (2011), "Natürliche Gesteninteraktion mit Beschleunigungssensorbasierten Eingabegeräten in unterstützenden Umgebungen", Ambient Assisted Living, pp.10, VDE-Verl., Berlin u.a..
Abstract: Using modern interaction methods and devices enables a more natural and intuitive interaction. Currently, it is mainly smartphones and game consoles supporting gesture-based interaction that sell well, which indicates that such devices are no longer bought only by technically experienced consumers. Interaction with these devices has become so easy that older people, too, play or work with them. Older people in particular often have handicaps: they find it hard to read small text, such as the labels printed on remote controls, and they are quickly overwhelmed, so larger technical systems are of little help to them. If devices can be controlled with gestures, these problems can often be avoided. To enable intuitive and simple gesture interaction, however, gestures must be supported that are easy to understand and to reproduce. For this reason, in this paper we identify intuitive gestures for common interaction scenarios on computer-based systems for use in ambient assisted environments. In the evaluation, participants contribute their preferred gestures for the presented interaction scenarios. Based on these results, an intuitively usable system employing an accelerometer-based input device can later be developed, with which users can communicate in an intuitive way.
BibTeX:
@inproceedings{Burkhardt*11aal,
  author = {Burkhardt, Dirk and Nazemi, Kawa and Stab, Christian and Breyer, Matthias and Wichert, Reiner and Fellner, Dieter W.},
  title = {Natürliche Gesteninteraktion mit Beschleunigungssensorbasierten Eingabegeräten in unterstützenden Umgebungen},
  booktitle = {Ambient Assisted Living},
  publisher = {VDE-Verl., Berlin u.a.},
  year = {2011},
  pages = {10}
}
Daoudi, M. & Schreck, T., (2011), "Eurographics 2010 Workshop on 3D Object Retrieval in Cooperation with ACM SIGGRAPH", Wiley-Blackwell Computer Graphics Forum, Vol.30(1), pp.229-230.
BibTeX:
@article{Daoudi-Schreck10,
  author = {M. Daoudi and T. Schreck},
  title = {Eurographics 2010 Workshop on 3D Object Retrieval in Cooperation with ACM SIGGRAPH},
  journal = {Wiley-Blackwell Computer Graphics Forum},
  year = {2011},
  volume = {30},
  number = {1},
  pages = {229--230},
  note = {Event Report}
}
Fellner, D.W., Baier, K., Klingelmeyer, M., Bornemann, H. & Mentel, K. (ed.) (2011), "Jahresbericht 2010: Fraunhofer-Institut für Graphische Datenverarbeitung IGD", Fraunhofer-Institut für Graphische Datenverarbeitung (IGD).
BibTeX:
@book{Fellner*11ar-igd,
  editor = {Fellner, Dieter W. and Baier, Konrad and Klingelmeyer, Melanie and Bornemann, Heidrun and Mentel, Katrin},
  title = {Jahresbericht 2010: Fraunhofer-Institut für Graphische Datenverarbeitung IGD},
  publisher = {Fraunhofer-Institut für Graphische Datenverarbeitung (IGD)},
  year = {2011},
  note = {58 S.}
}
Fellner, D.W., Havemann, S., Beckmann, P. & Pan, X., (2011), "Practical 3D reconstruction of cultural heritage artefacts from photographs -- potentials and issues", VAR. Virtual Archaeology Review [online], Vol.2(4), pp.95-103.
Abstract: A new technology is on the rise that allows the 3D-reconstruction of Cultural Heritage objects from image sequences taken by ordinary digital cameras. We describe the first experiments we made as early adopters in a community-funded research project whose goal is to develop it into a standard CH technology. The paper describes in detail a step-by-step procedure that can be reproduced using free tools by any CH professional. We also give a critical assessment of the workflow and describe several ideas for developing it further into an automatic procedure for 3D reconstruction from images.
BibTeX:
@article{Fellner*11var,
  author = {Fellner, Dieter W. and Havemann, Sven and Beckmann, Philipp and Pan, Xueming},
  title = {Practical 3D reconstruction of cultural heritage artefacts from photographs -- potentials and issues},
  journal = {VAR. Virtual Archaeology Review [online]},
  year = {2011},
  volume = {2},
  number = {4},
  pages = {95-103},
  note = {Proc. ARQUEOLOGICA 2.0, Alfredo Grande and Victor Lopez-Menchero, eds.}
}
Fellner, D.W. & Schaub, J. (ed.) (2011), "Selected Readings in Computer Graphics 2010", Fraunhofer IGD, Darmstadt.
Abstract: The Fraunhofer Institute for Computer Graphics Research IGD with offices in Darmstadt as well as in Rostock, Singapore, and Graz, the partner institutes at the respective universities, the Interactive Graphics Systems Group of Technische Universität Darmstadt, the Computergraphics and Communication Group of the Institute of Computer Science at Rostock University, Nanyang Technological University (NTU), Singapore, and the Visual Computing Cluster of Excellence of Graz University of Technology, cooperate closely within projects and research and development in the field of Computer Graphics. The "Selected Readings in Computer Graphics 2010" consist of 45 articles selected from a total of 186 scientific publications contributed by all these institutions. All articles previously appeared in various scientific books, journals, conferences and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings should give a fairly good and detailed overview of the scientific developments in Computer Graphics in the year 2010. They are published by Professor Dieter W. Fellner, the director of Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, at the same time professor at the Department of Computer Science at Technische Universität Darmstadt, and professor at the Faculty of Computer Science at Graz University of Technology.
BibTeX:
@book{Fellner-Schaub11sr,
  editor = {Fellner, Dieter W. and Schaub, Jutta},
  title = {Selected Readings in Computer Graphics 2010},
  publisher = {Fraunhofer IGD, Darmstadt},
  year = {2011},
  series = {Selected Readings in Computer Graphics; 21}
}
Havemann, S. & Fellner, D.W., Calude, C.S., Rozenberg, G. & Salomaa, A. (ed.) (2011), "Towards a New Shape Description Paradigm Using the Generative Modeling Language", Rainbow of Computer Science, Vol.6570, pp.200-214, Springer, Berlin, Heidelberg, New York.
Abstract: A procedural description of a three-dimensional shape has undeniable advantages over conventional descriptions that are all based on the exhaustive enumeration paradigm. Although it is a true generalization, a procedural description of a given shape class is not always easy to obtain. The main problem is that procedural descriptions are typically Turing-complete, which makes 3D shape design formally (and practically) a programming task. We describe an approach that circumvents this problem, is efficient, extensible, and conceptually simple. We demonstrate the broad applicability with a number of examples from different domains and sketch possible future applications. But we also discuss some practical and theoretical limitations of the generative paradigm.
BibTeX:
@incollection{Havemann-Fellner11,
  author = {Havemann, Sven and Fellner, Dieter W.},
  editor = {Cristian S. Calude and Grzegorz Rozenberg and Arto Salomaa},
  title = {Towards a New Shape Description Paradigm Using the Generative Modeling Language},
  booktitle = {Rainbow of Computer Science},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2011},
  volume = {6570},
  pages = {200-214},
  series = {Lecture Notes in Computer Science LNCS},
  doi = {http://dx.doi.org/10.1007/978-3-642-19391-0_15}
}
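To make the contrast between exhaustive enumeration and a generative description concrete, here is a plain-Python toy (deliberately not GML syntax): the shape is a small parametric program, and evaluating it with concrete parameters enumerates the geometry on demand.

import math

def column_profile(radius, flutes, n=64):
    """Generative description of a fluted column cross-section: a closed
    polygon computed from a few semantic parameters rather than stored as
    an enumerated point list. Changing 'flutes' regenerates the whole
    geometry consistently. (Illustrative plain Python, not GML syntax.)"""
    pts = []
    for i in range(n):
        a = 2.0 * math.pi * i / n
        r = radius * (1.0 + 0.05 * math.cos(flutes * a))   # fluting
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

outline = column_profile(radius=1.0, flutes=12)   # evaluation = enumeration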
Hecher, M., Möstl, R., Eggeling, E., Derler, C. & Fellner, D.W., (2011), "'Tangible Culture' -- Designing Virtual Exhibitions on Multi-Touch Devices", Ercim News, Vol.86, pp.21-22, ERCIM EEIG.
BibTeX:
@inproceedings{Hecher*11ercim,
  author = {Hecher, Martin and Möstl, Robert and Eggeling, Eva and Derler, Christian and Fellner, Dieter W.},
  title = {'Tangible Culture' -- Designing Virtual Exhibitions on Multi-Touch Devices},
  booktitle = {Ercim News},
  publisher = {ERCIM EEIG},
  year = {2011},
  volume = {86},
  pages = {21-22},
  note = {short article overview of Hecher*12elpub}
}
Huff, R., Gierlinger, T., Kuijper, A., Stork, A. & Fellner, D.W., (2011), "A Comparison of xPU Platforms Exemplified with Ray Tracing Algorithms", XIII Symposium on Virtual Reality, pp.1-8, IEEE Computer Society Conference Publishing Services (CPS), Los Alamitos, Calif..
Abstract: Over the years, faster hardware -- with higher clock rates -- has been the usual way to improve computing times in computer graphics. Aside from highly costly parallel solutions affordable only by big industries -- like the movie industry -- there was no alternative available to desktop users. Nevertheless, this scenario is changing dramatically with the introduction of more and more parallelism in current desktop PCs. Multi-core CPUs are a common basis in current PCs, and the power of modern GPUs -- which have been multi-core for a long time now -- is being unveiled to developers. nVidia's CUDA is a powerful tool for exploiting GPU parallelism. Yet its specific target -- nVidia graphics cards only -- does not provide any solution for other parallel hardware present. OpenCL is a new royalty-free cross-platform standard intended to be portable across different hardware manufacturers or even different platforms. In this paper we focus on a comparison of the advantages and disadvantages of xPU platforms with OpenCL and CUDA in terms of time efficiency. As an example application we use ray tracing algorithms. Three kinds of ray tracers were developed in order to conduct a fair comparison: one is CPU-based, while the other two are GPU-based -- using CUDA and OpenCL, respectively. Finally, the implementations are compared and the results are presented and analyzed, showing that the CUDA implementation has the best frame rate but is very closely followed by the OpenCL implementation. Visually, the results are identical, showing the high potential of OpenCL as an alternative to CUDA with comparable performance.
BibTeX:
@inproceedings{Huff*11vr,
  author = {Huff, Rafael and Gierlinger, Thomas and Kuijper, Arjan and Stork, André and Fellner, Dieter W.},
  title = {A Comparison of xPU Platforms Exemplified with Ray Tracing Algorithms},
  booktitle = {XIII Symposium on Virtual Reality},
  publisher = {IEEE Computer Society Conference Publishing Services (CPS), Los Alamitos, Calif.},
  year = {2011},
  pages = {1-8},
  doi = {http://dx.doi.org/10.1109/SVR.2011.18}
}
Jung, Y., Kuijper, A., Fellner, D.W., Kipp, M., Miksatko, J., Gratch, J. & Thalmann, D., John, N. & Wyvill, B. (ed.) (2011), "Believable Virtual Characters in Human-Computer Dialogs", Eurographics 2011. State of the Art Reports (STARs), pp.75-100, Eurographics Association.
Abstract: For many application areas, where a task is most naturally represented by talking or where standard input devices are difficult to use or not available at all, virtual characters can be well suited as an intuitive man-machine interface due to their inherent ability to simulate verbal as well as nonverbal communicative behavior. This type of interface is made possible with the help of multimodal dialog systems, which extend common speech dialog systems with additional modalities, just as in human-human interaction. Multimodal dialog systems consist at least of an auditive and a graphical component, and communication is based on speech and nonverbal communication alike. However, employing virtual characters as personal and believable dialog partners in multimodal dialogs entails several challenges, because this requires not only reliable and consistent motion and dialog behavior but also convincing nonverbal communication and affective components. Besides modeling the 'mind' and creating intelligent communication behavior on the encoding side, which is an active field of research in artificial intelligence, the visual representation of a character including its perceivable behavior, from a decoding perspective, such as facial expressions and gestures, belongs to the domain of computer graphics and likewise implicates many open issues concerning natural communication. Therefore, in this report we give a comprehensive overview of how to go from communication models to actual animation and rendering.
BibTeX:
@inproceedings{Jung*11eg,
  author = {Yvonne Jung and Arjan Kuijper and Dieter W. Fellner and Michael Kipp and Jan Miksatko and Jonathan Gratch and Daniel Thalmann},
  editor = {N. John and B. Wyvill},
  title = {Believable Virtual Characters in Human-Computer Dialogs},
  booktitle = {Eurographics 2011. State of the Art Reports (STARs)},
  publisher = {Eurographics Association},
  year = {2011},
  pages = {75-100}
}
Laga, H., Schreck, T., Ferrera, A., Godil, A. & Veltkamp, R., (2011), "Eurographics 2011 Workshop on 3D Object Retrieval in Cooperation with ACM SIGGRAPH", Wiley-Blackwell Computer Graphics Forum, Vol.30(6), pp.1865-1866.
BibTeX:
@article{Laga*11,
  author = {H. Laga and T. Schreck and A. Ferrera and A. Godil and R. Veltkamp},
  title = {Eurographics 2011 Workshop on 3D Object Retrieval in Cooperation with ACM SIGGRAPH},
  journal = {Wiley-Blackwell Computer Graphics Forum},
  year = {2011},
  volume = {30},
  number = {6},
  pages = {1865--1866},
  note = {Event Report}
}
Lancelle, M. & Fellner, D.W., (2011), "Smooth Transitions for Large Scale Changes in Multi-Resolution Images", 16th International Workshop on Vision, Modeling, and Visualization (VMV), pp.81-87.
Abstract: Today's super zoom cameras offer a large optical zoom range of over 30x. It is easy to take a wide-angle photograph of the scene together with a few zoomed-in high-resolution crops. Little work has been done on appropriately displaying the high-resolution photo as an inset. Usually, alpha blending is used to hide the resolution transition; visible transition boundaries or ghosting artifacts may result. In this paper we introduce a different, novel approach to overcome these problems. Across the transition, we gradually attenuate the maximum image frequency. We achieve this with a Gaussian blur with an exponentially increasing standard deviation.
BibTeX:
@inproceedings{Lancelle-Fellner11VMV,
  author = {Marcel Lancelle and Dieter W. Fellner},
  title = {Smooth Transitions for Large Scale Changes in Multi-Resolution Images},
  booktitle = {16th International Workshop on Vision, Modeling, and Visualization (VMV)},
  year = {2011},
  pages = {81-87},
  doi = {http://dx.doi.org/10.2312/PE/VMV/VMV11/081-087}
}
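The core idea, attenuating the maximum image frequency with an exponentially growing Gaussian across the transition band, can be sketched in a few lines of SciPy. This is a deliberately naive version that re-blurs the full image per column; a practical implementation would work with a blur pyramid:

import numpy as np
from scipy.ndimage import gaussian_filter

def blend_band(image, x0, x1, sigma_max=8.0):
    """Blur each column of the transition band [x0, x1) of a grayscale
    image with a Gaussian whose standard deviation grows exponentially
    from 1 to sigma_max toward the low-resolution side."""
    out = image.astype(float).copy()
    width = max(1, x1 - x0 - 1)
    for x in range(x0, x1):
        t = (x - x0) / width                  # 0 .. 1 across the band
        sigma = sigma_max ** t                # exponential growth
        out[:, x] = gaussian_filter(image.astype(float), sigma)[:, x]
    return out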
Lancelle, M. & Fellner, D.W., (2011), "Soft Edge and Soft Corner Blending", Workshop Virtuelle & Erweiterte Realität (VR/AR), pp.63-71.
Abstract: We address artifacts at corners in soft edge blend masks for tiled projector arrays. We compare existing and novel modifications of the commonly used weighting function and analyze the first order discontinuities of the resulting blend masks. In practice, e.g. when the projector lamps are not equally bright or with rear projection screens, these discontinuities may lead to visible artifacts. By using first order continuous weighting functions, we achieve significantly smoother results compared to commonly used blend masks.
BibTeX:
@inproceedings{Lancelle-Fellner11vrar,
  author = {Lancelle, Marcel and Fellner, Dieter W.},
  title = {Soft Edge and Soft Corner Blending},
  booktitle = {Workshop Virtuelle & Erweiterte Realität (VR/AR)},
  year = {2011},
  pages = {63-71}
}
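One first-order continuous weighting function of the kind advocated here is the cubic smoothstep (a common choice, used for illustration; the paper compares several variants):

\[
  w(x) \;=\; 3x^{2} - 2x^{3}, \qquad x \in [0,1].
\]

It satisfies w(0)=0, w(1)=1 and w'(0)=w'(1)=0, so the blend mask meets the unblended regions with a continuous first derivative, unlike the linear ramp w(x)=x; moreover, two overlapping projectors weighted with w(x) and w(1-x) still sum to one everywhere in the band.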
Lancelle, M., (2011), "Visual Computing in Virtual Environments".
Abstract: This thesis covers research on new and alternative ways of interacting with computers. Virtual Reality and multi-touch setups are discussed with a focus on three-dimensional rendering and photographic applications in the field of Computer Graphics. Virtual Reality (VR) and Virtual Environments (VE) were once thought to be the future interface to computers. However, many problems prevent everyday use. This work shows solutions to some of these problems and discusses remaining issues. Hardware for Virtual Reality is diverse, and many new devices are still being developed. An overview of historic and current devices and VE setups is given, and our setups are described. The DAVE, an immersive projection room, and the HEyeWall Graz, a large high-resolution display with multi-touch input, are presented. Available processing power and, in some areas, rapidly decreasing prices lead to a continuous change in the best choice of hardware. A major influence on this choice is the application. VR and multi-touch setups often require sensing or tracking the user, optical tracking being a common choice. Hardware and software of an optical 3D marker tracking system and an optical multi-touch system are explained. The Davelib, a software framework for rendering 3D models in Virtual Environments, is presented. It allows existing 3D applications to be easily ported to immersive setups with stereoscopic rendering and head tracking. Display calibration and rendering issues that are special to VR setups are explained. User interfaces for navigation and manipulation are described, focusing on interaction techniques for the DAVE and for multi-touch screens. Intuitive methods are shown that are easy to learn and use, even for computer illiterates. Exemplary applications demonstrate the potential of immersive and non-immersive setups, showing which applications can benefit most from Virtual Environments. Finally, some image processing applications in the area of computational photography are explained that help to better depict the captured scene.
BibTeX:
@phdthesis{Lancelle11:Phd,
  author = {Lancelle, Marcel},
  title = {Visual Computing in Virtual Environments},
  school = {TU Graz, Diss., 2011},
  year = {2011},
  note = {228 p. //diglib.eg.org/EG/DL/dissonline/doc/lancelle.pdf}
}
Landesberger, T. v., Kuijper, A., Schreck, T., Kohlhammer, J., van Wijk, J., Fekete, J.-D. & Fellner, D.W., (2011), "Visual Analysis of Large Graphs: State-of-the-Art and Future Research Challenges", Computer Graphics Forum, Vol.30(6), pp.1719-1749.
Abstract: The analysis of large graphs plays a prominent role in various fields of research and is relevant in many important application areas. Effective visual analysis of graphs requires appropriate visual presentations in combination with respective user interaction facilities and algorithmic graph analysis methods. How to design appropriate graph analysis systems depends on many factors, including the type of graph describing the data, the analytical task at hand, and the applicability of graph analysis methods. The most recent surveys of graph visualization and navigation techniques cover techniques that had been introduced until 2000 or concentrate only on graph layouts published until 2002. Recently, new techniques have been developed covering a broader range of graph types, such as time-varying graphs. Also, in accordance with ever-growing amounts of graph-structured data becoming available, the inclusion of algorithmic graph analysis and interaction techniques becomes increasingly important. In this State-of-the-Art Report, we survey available techniques for the visual analysis of large graphs. Our review first considers graph visualization techniques according to the type of graphs supported. The visualization techniques form the basis for the presentation of interaction approaches suitable for visual graph exploration. As an important component of visual graph analysis, we discuss various graph algorithmic aspects useful for the different stages of the visual graph analysis process. We also present the main open research challenges in this field.
BibTeX:
@article{Landesberger*11cgf,
  author = {Landesberger, Tatiana von and Kuijper, Arjan and Schreck, Tobias and Kohlhammer, Jörn and van Wijk, Jarke and Fekete, Jean-Daniel and Fellner, Dieter W.},
  title = {Visual Analysis of Large Graphs: State-of-the-Art and Future Research Challenges},
  journal = {Computer Graphics Forum},
  year = {2011},
  volume = {30},
  number = {6},
  pages = {1719-1749},
  doi = {http://dx.doi.org/10.1111/j.1467-8659.2011.01898.x}
}
Nazemi, K., Burkhardt, D., Stab, C., Breyer, M., Wichert, R. & Fellner, D.W., (2011), "Natural Gesture Interaction with Accelerometer-based Devices in Ambient Assisted Environments", Ambient Assisted Living, pp.75-90, Springer Science and Business Media, Berlin.
Abstract: Using modern interaction methods and devices enables a more natural and intuitive interaction. Currently, it is mainly smartphones and game consoles supporting gesture-based interaction that sell well, which indicates that such devices are no longer bought only by technically experienced consumers. Interaction with these devices has become so easy that older people, too, play or work with them. Older people in particular often have handicaps: they find it hard to read small text, such as the labels printed on remote controls, and they are quickly overwhelmed, so larger technical systems are of little help to them. If devices can be controlled with gestures, these problems can often be avoided. To enable intuitive and simple gesture interaction, however, gestures must be supported that are easy to understand and to reproduce. For this reason, in this paper we identify intuitive gestures for common interaction scenarios on computer-based systems for use in ambient assisted environments. In the evaluation, participants contribute their preferred gestures for the presented interaction scenarios. Based on these results, an intuitively usable system employing an accelerometer-based input device can later be developed, with which users can communicate in an intuitive way.
BibTeX:
@inproceedings{Nazemi*10aal,
  author = {Nazemi, Kawa and Burkhardt, Dirk and Stab, Christian and Breyer, Matthias and Wichert, Reiner and Fellner, Dieter W.},
  title = {Natural Gesture Interaction with Accelerometer-based Devices in Ambient Assisted Environments},
  booktitle = {Ambient Assisted Living},
  publisher = {Springer Science and Business Media, Berlin},
  year = {2011},
  pages = {75-90},
  series = {Advanced Technologies and Societal Change; 1},
  doi = {http://dx.doi.org/10.1007/978-3-642-18167-2_6}
}
Peña Serna, S., Stork, A. & Fellner, D.W., (2011), "Interactive Exploration of Design Variations", A World of Engineering Simulation: Industrial Needs, Best Practice, Visions for the Future, pp.18, NAFEMS, Glasgow.
Abstract: The digital exploration of design variations is a key procedure in the embodiment phase of engineering design, needed to efficiently develop optimal solutions. This procedure requires the combination of modeling and simulation capabilities, enabling the engineer to assess the physical and functional behavior of the proposed solution. Nowadays, this procedure is performed by iterating between designers and analysts with their corresponding tools, demanding reciprocal understanding between them. With the currently available tools and technology this is nonetheless a very time-consuming activity. Even advanced Computer Aided Design (CAD) systems, which can cope with almost any modeling requirement and presently provide direct connections (i.e. meshing) to analysis modules for models of limited complexity, cannot deal with the interactive exploration of design variations. Moreover, the promising isogeometric analysis, which aims to simulate 3D NURBS representations, also requires special transformations (i.e. meshing), which do not allow for interactive exploration of design variations. On the other hand, Computer Aided Engineering (CAE) systems offering morphing support are only able to explore restricted variations, since large variations or deformations of the model involve expensive remeshing processes. In order to overcome the above-mentioned issues and to enable a fully interactive exploration of design variations within an analysis environment, we enhance the simulation model with a high-level representation for interacting with semantic features rather than with single elements, we combine morphing techniques with local mesh modification to preserve the stability of the numerical model during large variations, and we decouple the storage of the linear system entries from the sequential matrix-vector multiplication used to compute the solution, so that local entries of the matrix representing the local mesh modifications can be updated without rebuilding the entire system. Our methodology allows engineers to independently and interactively explore conceptual design variations without restrictions. Hence, the influence of different design features can be investigated, understood, and evaluated easily and quickly, and optimal solutions for the design requirements can be closely approached.
BibTeX:
@inproceedings{Pena*11nwc,
  author = {Peña Serna, Sebastian and Stork, André and Fellner, Dieter W.},
  title = {Interactive Exploration of Design Variations},
  booktitle = {A World of Engineering Simulation: Industrial Needs, Best Practice, Visions for the Future},
  publisher = {NAFEMS, Glasgow},
  year = {2011},
  pages = {18}
}
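The decoupling described above (local matrix updates without a rebuild of the entire system) can be illustrated with a small sketch; the class layout and names are ours, not the paper's:

import numpy as np

class UpdatableSystem:
    """Sketch of the decoupling idea: store system entries in a coordinate
    map so that a local mesh modification only touches the affected
    entries, while the matrix-vector product used by the solver iterates
    over the same map -- no rebuild of the entire system. (Illustrative
    only.)"""

    def __init__(self, n):
        self.n = n
        self.a = {}                        # (row, col) -> value

    def set_entry(self, i, j, v):
        self.a[(i, j)] = v                 # local update after a mesh edit

    def matvec(self, x):
        y = np.zeros(self.n)
        for (i, j), v in self.a.items():   # sequential multiply, decoupled
            y[i] += v * x[j]               # from how entries were written
        return y

sys_ = UpdatableSystem(4)
sys_.set_entry(0, 0, 2.0); sys_.set_entry(1, 1, 3.0)
print(sys_.matvec(np.ones(4)))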
Peña Serna, S., Stork, A. & Fellner, D.W., (2011), "Considerations toward a Dynamic Mesh Data Structure", SIGRAD 2011, pp.83-90, Linköping University Electronic Press, Linköping.
Abstract: The use of 3D shapes in different domains such as engineering, entertainment, cultural heritage or medicine is essential for representing 3D physical reality. Regardless of whether the 3D shapes represent physically or digitally born objects, meshes are a versatile and common representation for 3D reality. Nonetheless, the mesh generation process does not always produce high-quality results; thus incomplete, non-orientable or non-manifold meshes are frequently the input for the domain application. The domain application itself also imposes special requirements, e.g. an engineering simulation requires a volumetric mesh, either tetrahedral or hexahedral, while cultural heritage color enhancement uses a triangular or quadrangular mesh, and in both cases even hybrid meshes occur. Moreover, the processes applied to the meshes (e.g. modeling, simulation, visualization) need to support operations such as querying neighboring information or enabling dynamic changes of geometry and topology. These operations need to be robust, so that the neighboring information can be consistently updated during dynamic changes. Dealing with this mesh diversity usually requires dedicated data structures for performing in the given domain application. This paper compiles considerations toward designing a data structure for dynamic meshes in a generic and robust manner, regardless of the type and quality of the input mesh. These aspects enable a flexible representation of 3D shapes toward general-purpose geometry processing for dynamic meshes in 2D and 3D.
BibTeX:
@inproceedings{Pena*11sigrad,
  author = {Peña Serna, Sebastian and Stork, André and Fellner, Dieter. W.},
  title = {Considerations toward a Dynamic Mesh Data Structure},
  booktitle = {SIGRAD 2011},
  publisher = {Linköping University Electronic Press, Linköping},
  year = {2011},
  pages = {83-90},
  series = {Linköping Electronic Conference Proceedings; 65}
}
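A minimal sketch of the kind of structure under discussion: a triangle store with vertex-to-face adjacency that stays consistent under dynamic insertions and deletions and answers a typical neighborhood query. It assumes nothing about manifoldness or orientability, and is of course far simpler than the generic design the paper considers:

class DynamicTriMesh:
    """Triangle store with vertex->face adjacency kept consistent under
    dynamic insertions and deletions (illustrative sketch only)."""

    def __init__(self):
        self.faces = {}            # face id -> (v0, v1, v2)
        self.vf = {}               # vertex id -> set of incident face ids
        self._next = 0

    def add_face(self, v0, v1, v2):
        fid = self._next
        self._next += 1
        self.faces[fid] = (v0, v1, v2)
        for v in (v0, v1, v2):
            self.vf.setdefault(v, set()).add(fid)
        return fid

    def remove_face(self, fid):
        for v in self.faces.pop(fid):
            self.vf[v].discard(fid)
            if not self.vf[v]:
                del self.vf[v]     # drop now-isolated vertices

    def one_ring_faces(self, v):
        """Neighborhood query of the sort modeling and simulation need."""
        return self.vf.get(v, set())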
Pfister, H.-R., Wollstädter, S. & Peter, C., (2011), "Affective Responses to System Messages in Human-Computer-Interaction: Effects of Modality and Message Type", Interacting with Computers, Vol.23, pp.372-383.
Abstract: Affective responses of users to system messages in human-computer interaction are a key to studying user satisfaction. However, little is known about the particular affective patterns elicited by various types of system messages. In this experimental study we examined if and how different system messages, presented in different modalities, influence users' affective responses. Three types of messages, input requests, status notifications, and error messages, were presented either as text or speech, and either alone or in combination with icons or sounds, while users worked on several typical computer tasks. Affective responses following system messages were assessed employing a multi-modal approach, using subjective rating scales as well as physiological measures. Results show that affective responses vary systematically depending on the type of message, and that spoken messages generally elicit more positive affect than written messages. Implications on how to enhance user satisfaction by appropriate message design are discussed.
BibTeX:
@article{Pfister*11,
  author = {Pfister, H.-R. and Wollstädter, S. and Peter, C. },
  title = {Affective Responses to System Messages in Human-Computer-Interaction: Effects of Modality and Message Type},
  journal = {Interacting with Computers},
  year = {2011},
  volume = {23},
  pages = {372-383},
  doi = {http://dx.doi.org/10.1016/j.intcom.2011.05.006}
}
Pratikakis, I., Schreck, T., Theoharis, T. & Veltkamp, R., (2011), "Preface to Special Issue on EG Workshop on 3D Object Retrieval 2010", Springer Visual Computer, Vol.27(11), pp.949-950.
BibTeX:
@article{Pratikakis*11,
  author = {I. Pratikakis and T. Schreck and T. Theoharis and R. Veltkamp},
  title = {Preface to Special Issue on EG Workshop on 3D Object Retrieval 2010},
  journal = {Springer Visual Computer},
  year = {2011},
  volume = {27},
  number = {11},
  pages = {949--950}
}
Roth, P.M., Settgast, V., Widhalm, P., Lancelle, M., Birchbauer, J., Brändle, N., Havemann, S. & Bischof, H., (2011), "Next-Generation 3D Visualization for Visual Surveillance", 2011 8th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS), pp.6.
Abstract: Existing visual surveillance systems typically require that a human operator observes video streams from different cameras. Since, due to the ever increasing number of cameras, this becomes more and more infeasible, automatic systems are required. However, such systems often cannot cope with complex scenes and are not reliable enough. Thus, in this paper, we present a novel combination of automatic visual surveillance systems and interactive visualization methods. Our novel visualization takes advantage of a high-resolution display and given 3D information to focus the operator's attention on interesting/critical parts of the observed area. This is realized by embedding the results of automatic scene analysis techniques into the visualization. By providing different visualization modes, the user can easily switch between them and select the mode which provides the most information. The system is demonstrated in a real setup on a university campus.
BibTeX:
@inproceedings{Roth*11avss,
  author = {Roth, Peter M. and Settgast, Volker and Widhalm, Peter and Lancelle, Marcel and Birchbauer, Josef and Brändle, Norbert and Havemann, Sven and Bischof, Horst},
  title = {Next-Generation 3D Visualization for Visual Surveillance},
  booktitle = {2011 8th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS)},
  year = {2011},
  pages = {6}
}
Scherer, M., Bernard, J. & Schreck, T., (2011), "Retrieval and Exploratory Search in Multivariate Research Data Repositories using Regressional Features", Proc. ACM/IEEE Joint Conference on Digital Libraries, pp.363-372.
Abstract: Increasing amounts of data are collected in many areas of research and application. The degree to which this data can be accessed, retrieved, and analyzed is decisive for progress in fields such as scientific research or industrial production. We present a novel method supporting content-based retrieval and exploratory search in repositories of multivariate research data. In particular, functional dependencies are a key characteristic of data that researchers are often interested in. Our methods are able to describe the functional form of such dependencies, e.g., the relationship between inflation and unemployment in economics. Our basic idea is to use feature vectors based on the goodness-of-fit of a set of regression models to describe the data mathematically. We denote this approach Regressional Features and use it for content-based search and, since our approach motivates an intuitive definition of interestingness, for exploring the most interesting data. We apply our method to considerable real-world research datasets, showing the usefulness of our approach for user-centered access to research data in a Digital Library system.
BibTeX:
@inproceedings{Scherer*11jcdl,
  author = {Scherer, M. and Bernard, J. and Schreck, T.},
  title = {Retrieval and Exploratory Search in Multivariate Research Data Repositories using Regressional Features},
  booktitle = {Proc. ACM/IEEE Joint Conference on Digital Libraries},
  year = {2011},
  pages = {363--372},
  doi = {http://dx.doi.org/10.1145/1998076.1998144}
}
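To make the feature vector above concrete, here is a minimal sketch under illustrative assumptions: the model set is taken to be polynomials of increasing degree and goodness-of-fit is measured by R^2. The paper's actual model set and normalization are richer.

import numpy as np

def regressional_features(x, y, degrees=(1, 2, 3)):
    """Describe (x, y) data by the goodness-of-fit (R^2) of a set of
    regression models -- here polynomials of increasing degree."""
    ss_tot = np.sum((y - y.mean()) ** 2)
    features = []
    for d in degrees:
        coeffs = np.polyfit(x, y, d)              # least-squares fit
        ss_res = np.sum((y - np.polyval(coeffs, x)) ** 2)
        features.append(1.0 - ss_res / ss_tot)    # R^2 of this model
    return np.array(features)

# Data sets with different functional forms get different signatures,
# which can then support similarity search over a repository.
x = np.linspace(0.0, 1.0, 100)
rng = np.random.default_rng(0)
print(regressional_features(x, 2.0 * x + 0.05 * rng.standard_normal(100)))
print(regressional_features(x, x ** 3 + 0.05 * rng.standard_normal(100)))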
Schiffer, T., Schinko, C., Ullrich, T. & Fellner, D.W., (2011), "Real-World Geometry and Generative Knowledge", Ercim News, Vol.86, pp.15-16.
Abstract: The current methods of describing the shape of three-dimensional objects can be classified into two groups: composition of primitives and procedural description. As a 3D acquisition device simply returns an agglomeration of elementary objects (e.g. a laser scanner returns points), a real-world data set is always a -- more or less noisy -- composition of primitives. A generative model, on the other hand, describes an ideal object rather than a real one. Owing to this abstract view of an object, generative techniques are often used to describe objects semantically. Consequently, generative models, rather than being a replacement for established geometry descriptions (based on points, triangles, etc.), offer a sensible semantic enrichment.
BibTeX:
@article{Schiffer*11ercim,
  author = {Schiffer, Thomas and Schinko, Christoph and Ullrich, Torsten and Fellner, Dieter W.},
  title = {Real-World Geometry and Generative Knowledge},
  journal = {Ercim News},
  year = {2011},
  volume = {86},
  pages = {15-16}
}
Schinko, C., Strobl, M., Ullrich, T. & Fellner, D.W., (2011), "Scripting Technology for Generative Modeling", International Journal on Advances in Software, Vol.4(3-4), pp.308-326.
Abstract: In the context of computer graphics, a generative model is the description of a three-dimensional shape: each class of objects is represented by one algorithm M, and each described object is a set of high-level parameters x, which reproduces the object if an interpreter evaluates M(x). This procedural knowledge differs from other kinds of knowledge, such as declarative knowledge, in a significant way. Generative models are designed by programming. In order to make generative modeling accessible to non-computer scientists, we created a generative modeling framework based on the easy-to-use scripting language JavaScript (JS). Furthermore, we did not implement yet another interpreter, but a JS translator and compiler. As a consequence, our framework can translate generative models from JavaScript to various platforms. In this paper we present an overview of Euclides and quintessential examples of supported platforms: Java, Differential Java, and GML. Java is a target language because all frontend and framework components are written in Java, making it easy to embed in an integrated development environment. The Differential Java backend can compute derivatives of functions, which is a necessary task in many applications of scientific computing, e.g., validating reconstruction and fitting results of laser-scanned surfaces. The postfix notation of GML is very similar to that of Adobe's PostScript. It allows the creation of high-level shape operators from low-level shape operators. The GML serves as a platform for a number of applications because it is extensible and comes with an integrated visualization engine. This innovative meta-modeler concept allows a user to export generative models to other platforms without losing their main feature -- the procedural paradigm. In contrast to other modelers, the source code does not need to be interpreted or unfolded; it is translated. Therefore, it can still be a very compact representation of a complex model.
BibTeX:
@article{Schinko*11ijas,
  author = {Schinko, Christoph and Strobl, Martin and Ullrich, Torsten and Fellner, Dieter W.},
  title = {Scripting Technology for Generative Modeling},
  journal = {International Journal on Advances in Software},
  year = {2011},
  volume = {4},
  number = {3-4},
  pages = {308-326}
}
Schinko, C., Ullrich, T. & Fellner, D., (2011), "Simple and Efficient Normal Encoding with Error Bounds", Theory and Practice of Computer Graphics, pp.63-66, Eurographics.
Abstract: Normal maps and bump maps are commonly used techniques to make 3D scenes more realistic. Consequently, the efficient storage of normal vectors is an important task in computer graphics. This work presents a fast, lossy compression/decompression algorithm for arbitrary resolutions. The complete source code is listed in the appendix and is ready to use.
BibTeX:
@inproceedings{Schinko*11tpcg,
  author = {Christoph Schinko and Torsten Ullrich and Dieter Fellner},
  title = {Simple and Efficient Normal Encoding with Error Bounds},
  booktitle = {Theory and Practice of Computer Graphics},
  publisher = {Eurographics},
  year = {2011},
  pages = {63-66},
  doi = {http://dx.doi.org/10.2312/LocalChapterEvents/TPCG/TPCG11/063-065}
}
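The paper's own algorithm is listed in its appendix; as a stand-in illustration of lossy normal encoding with a bounded error, the following sketch quantizes spherical coordinates to a fixed bit budget (this parameterization is an assumption for illustration, not necessarily the paper's scheme):

import numpy as np

def encode_normal(n, bits=8):
    """Quantize a unit normal into two integers (polar and azimuth index).
    The worst-case angular error is bounded by the quantization step."""
    n = np.asarray(n) / np.linalg.norm(n)
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))    # polar angle in [0, pi]
    phi = np.arctan2(n[1], n[0]) % (2.0 * np.pi)   # azimuth in [0, 2*pi)
    scale = (1 << bits) - 1
    return round(theta / np.pi * scale), round(phi / (2.0 * np.pi) * scale)

def decode_normal(code, bits=8):
    scale = (1 << bits) - 1
    theta = code[0] / scale * np.pi
    phi = code[1] / scale * 2.0 * np.pi
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

n = np.array([0.267, 0.534, 0.802])
print(decode_normal(encode_normal(n)))   # close to n; error shrinks with more bits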
Schröder, M., Pirker, H., Lamolle, M., Burkhardt, F., Peter, C. & Zovato, E., Petta, P., Cowie, R. & Pelachaud, C. (ed.) (2011), "Representing emotions and related states in technological systems", Emotion-Oriented Systems - The Humaine Handbook, pp.367-386, Springer.
Abstract: In many cases when technological systems are to operate on emotions and related states, they need to represent these states. Existing representations are limited to application-specific solutions that fall short of representing the full range of concepts that have been identified as relevant in the scientific literature. The present chapter presents a broad conceptual view on the possibility of creating a generic representation of emotions that can be used in many contexts and for many purposes. Potential use cases and resulting requirements are identified and compared to the scientific literature on emotions. Options for the practical realisation of an Emotion Markup Language are discussed in the light of the requirement to extend the language to different emotion concepts and vocabularies, and ontologies are investigated as a means to provide limited "mapping" mechanisms between different emotion representations.
BibTeX:
@incollection{Schroeder*11,
  author = {Schröder, M. and Pirker, H. and Lamolle, M. and Burkhardt, F. and Peter, C. and Zovato, E},
  editor = {Petta, P. and Cowie, R. and Pelachaud, C. },
  title = {Representing emotions and related states in technological systems},
  booktitle = {Emotion-Oriented Systems - The Humaine Handbook},
  publisher = {Springer},
  year = {2011},
  pages = {367-386},
  doi = {http://dx.doi.org/10.1007/978-3-642-15184-2_19}
}
Schröder, M., Baggia, P., Burkhardt, F., Pelachaud, C., Peter, C. & Zovato, E., (2011), "EmotionML -- an upcoming standard for representing emotions and related states", Proc. Affective Computing and Intelligent Interaction, Vol.6975, pp.316-325, Springer.
Abstract: The present paper describes the specification of Emotion Markup Language (EmotionML) 1.0, which is undergoing standardisation at the World Wide Web Consortium (W3C). The language aims to strike a balance between practical applicability and scientific well-foundedness. We briefly review the history of the process leading to the standardisation of EmotionML. We describe the syntax of EmotionML as well as the vocabularies that are made available to describe emotions in terms of categories, dimensions, appraisals and/or action tendencies. The paper concludes with a number of relevant aspects of emotion that are not covered by the current specification.
BibTeX:
@inproceedings{Schroeder*11lncs,
  author = {Marc Schröder and Paolo Baggia and Felix Burkhardt and Catherine Pelachaud and Christian Peter and Enrico Zovato},
  title = {EmotionML -- an upcoming standard for representing emotions and related states},
  booktitle = {Proc. Affective Computing and Intelligent Interaction},
  publisher = {Springer},
  year = {2011},
  volume = {6975},
  pages = {316-325},
  series = {Lecture Notes in Computer Science}
}
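For readers unfamiliar with the markup, here is a minimal sketch of an EmotionML document generated from Python, patterned after the style of examples in the specification; the namespace and vocabulary URIs below should be checked against the final W3C documents:

import xml.etree.ElementTree as ET

EMOTIONML_NS = "http://www.w3.org/2009/10/emotionml"
BIG6 = "http://www.w3.org/TR/emotion-voc/xml#big6"   # one published category vocabulary

root = ET.Element("emotionml", xmlns=EMOTIONML_NS)
emotion = ET.SubElement(root, "emotion", {"category-set": BIG6})
ET.SubElement(emotion, "category", name="fear", value="0.8")   # scaled intensity
print(ET.tostring(root, encoding="unicode"))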
Schröder, M., Baggia, P., Burkhardt, F., Pelachaud, C., Peter, C. & Zovato, E., (2011), "Emotion Markup Language (EmotionML) 1.0".
Abstract: As the web is becoming ubiquitous, interactive, and multimodal, technology needs to deal increasingly with human factors, including emotions. The specification of Emotion Markup Language 1.0 aims to strike a balance between practical applicability and scientific well-foundedness. The language is conceived as a 'plug-in' language suitable for use in three different areas: (1) manual annotation of data; (2) automatic recognition of emotion-related states from user behavior; and (3) generation of emotion-related system behavior.
BibTeX:
@techreport{Schroeder*11TR,
  author = {M. Schröder and P. Baggia and F. Burkhardt and C. Pelachaud and Christian Peter and E. Zovato},
  title = {Emotion Markup Language (EmotionML) 1.0},
  year = {2011},
  note = {//www.w3.org/TR/emotionml/}
}
Schröder, M., Pelachaud, C., Ashimura, K., Baggia, P., Burkhardt, F., Oltramari, A., Peter, C. & Zovato, E., (2011), "Vocabularies for EmotionML".
Abstract: This document provides a list of emotion vocabularies that can be used with EmotionML to represent emotions and related states. EmotionML provides mechanisms to represent emotions in terms of scientifically valid descriptors: categories, dimensions, appraisals, and action tendencies. Given the lack of agreement in the community, EmotionML does not provide a single vocabulary of emotion terms, but gives users a choice to select the most suitable emotion vocabulary in their annotations. In order to promote interoperability, publicly defined vocabularies should be used where possible and reasonable from the point of view of the target application. The present document provides a number of emotion vocabularies that can be used for this purpose.
BibTeX:
@techreport{Schroeder*11TR2,
  author = {Schröder, M. and Pelachaud, C. and Ashimura, K. and Baggia, P. and Burkhardt, F. and Oltramari, A. and Peter, C. and Zovato, E.},
  title = {Vocabularies for EmotionML},
  year = {2011},
  note = {//www.w3.org/TR/emotion-voc/}
}
Schwenk, K., Behr, J. & Fellner, D.W., (2011), "An Error Bound for Decoupled Visibility with Application to Relighting", Eurographics 2011. Short Papers, pp.25-28, Eurographics Association.
Abstract: Monte Carlo estimation of direct lighting is often dominated by visibility queries. If an error is tolerable, the calculations can be sped up by using a simple scalar occlusion factor per light source to attenuate radiance, thus decoupling the expensive estimation of visibility from the comparatively cheap sampling of unshadowed radiance and BRDF. In this paper we analyze the error associated with this approximation and derive an upper bound. We demonstrate in a simple relighting application how our result can be used to reduce noise by introducing a controlled error if a reliable estimate of the visibility is already available.
BibTeX:
@inproceedings{Schwenk*11EG,
  author = {Schwenk, Karsten and Behr, Johannes and Fellner, Dieter W.},
  title = {An Error Bound for Decoupled Visibility with Application to Relighting},
  booktitle = {Eurographics 2011. Short Papers},
  publisher = {Eurographics Association},
  year = {2011},
  pages = {25-28}
}
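A toy one-dimensional illustration of the approximation analyzed above, with made-up radiance and visibility along an area light: the decoupled estimate V_bar * E[L] deviates from the coupled reference exactly when visibility and radiance are correlated, which is the error the paper bounds.

import random

def mc(f, n=200_000):
    """Plain Monte Carlo estimate of the mean of f over [0, 1)."""
    return sum(f(random.random()) for _ in range(n)) / n

radiance = lambda x: 1.0 + 0.5 * x                 # unshadowed radiance
visibility = lambda x: 1.0 if x < 0.7 else 0.0     # 30% of the light occluded

coupled = mc(lambda x: visibility(x) * radiance(x))   # reference estimator
v_bar = mc(visibility)                                # scalar occlusion factor
decoupled = v_bar * mc(radiance)                      # decoupled approximation
print(coupled, decoupled)   # gap stems from the visibility/radiance correlation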
Schwenk, K., Behr, J. & Fellner, D.W., (2011), "CommonVolumeShader: Simple and Portable Specification of Volumetric Light Transport in X3D", Proceedings Web3D 2011, pp.39-44, ACM Press, New York.
Abstract: A new technology is on the rise that allows the 3D-reconstruction of Cultural Heritage objects from image sequences taken by ordinary digital cameras. We describe the first experiments we made as early adopters in a community-funded research project whose goal is to develop it into a standard CH technology. The paper describes in detail a step-by-step procedure that can be reproduced using free tools by any CH professional. We also give a critical assessment of the workflow and describe several ideas for developing it further into an automatic procedure for 3D reconstruction from images.
BibTeX:
@inproceedings{Schwenk*11web3d,
  author = {Schwenk, Karsten and Behr, Johannes and Fellner, Dieter W.},
  title = {CommonVolumeShader: Simple and Portable Specification of Volumetric Light Transport in X3D},
  booktitle = {Proceedings Web3D 2011},
  publisher = {ACM Press, New York},
  year = {2011},
  pages = {39-44},
  doi = {http://dx.doi.org/10.1145/2010425.2010432}
}
Thaller, W., Krispel, U., Havemann, S., Redi, I., Redi, A. & Fellner, D., Remondino, F. & El-Hakim, S. (ed.) (2011), "Developing Parametric Building Models - the GANDIS Use Case", Proceedings of the 4th ISPRS International Workshop 3D-ARCH 2011, ISPRS.
Abstract: In the course of a project related to green building design, we have created a group of eight parametric building models that can be manipulated interactively with respect to dimensions, number of floors, and a few other parameters. We report on the commonalities and differences between the models and the abstractions that we were able to identify.
BibTeX:
@inproceedings{Thaller*10gandis,
  author = {Wolfgang Thaller and Ulrich Krispel and Sven Havemann and Ivan Redi and Andrea Redi and Dieter Fellner},
  editor = {Fabio Remondino and Sabry El-Hakim},
  title = {Developing Parametric Building Models - the GANDIS Use Case},
  booktitle = {Proceedings of the 4th ISPRS International Workshop 3D-ARCH 2011},
  publisher = {ISPRS},
  year = {2011}
}
Tzompanaki, K., Doerr, M., Theodoridou, M. & Havemann, S., (2011), "3D-COFORM: A Large-Scale Digital Production Environment", Ercim News, Vol.86, pp.18-19.
Abstract: The systematic large-scale production of digital scientific objects, such as 3D models, requires much more infrastructure than a classical digital archive connected to a workflow manager. The size of the data to be handled, the distribution of expertise, acquisition and production sites, and the complexity of the processes involved require an innovative integrated environment that combines content management and information retrieval (IR) services with a centralized knowledge management in order to monitor, manage and document processes and products in a flexible manner.
BibTeX:
@article{Tzompanaki*11Ercim,
  author = {Tzompanaki, Katerina and Doerr, Martin and Theodoridou, Maria and Havemann, Sven},
  title = {3D-COFORM: A Large-Scale Digital Production Environment},
  journal = {Ercim News},
  year = {2011},
  volume = {86},
  pages = {18-19},
  note = {European Research Consortium for Informatics and Mathematics}
}
Ullrich, T., Schiffer, T., Schinko, C. & Fellner, D.W., (2011), "Variance Analysis and Comparison in Computer-Aided Design", Proceedings of the 4th ISPRS International Workshop 3D-ARCH 2011, pp.5.
Abstract: The need to analyze and visualize differences of very similar objects arises in many research areas: mesh compression, scan alignment, nominal/actual value comparison, quality management, and surface reconstruction, to name a few. In computer graphics, for example, differences of surfaces are used for analyzing mesh processing algorithms such as mesh compression. They are also used to validate reconstruction and fitting results of laser-scanned surfaces. As laser scanning has become very important for the acquisition and preservation of artifacts, scanned representations are used for documentation as well as analysis of ancient objects. Detailed mesh comparisons can reveal the smallest changes and damages. These analysis and documentation tasks are needed not only in the context of cultural heritage but also in engineering and manufacturing, where differences of surfaces are analyzed to check the quality of production. Our contribution to this problem is a workflow which compares a reference/nominal surface with an actual, laser-scanned data set. The reference surface is a procedural model whose accuracy and systematics describe the semantic properties of an object, whereas the laser-scanned object is a real-world data set without any additional semantic information.
BibTeX:
@inproceedings{Ullrich*11arch3d,
  author = {Ullrich, Torsten and Schiffer, Thomas and Schinko, Christoph and Fellner, Dieter W.},
  title = {Variance Analysis and Comparison in Computer-Aided Design},
  booktitle = {Proceedings of the 4th ISPRS International Workshop 3D-ARCH 2011},
  year = {2011},
  pages = {5},
  series = {The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences; XXXVIII-5/W16}
}
Ullrich, T. & Fellner, D.W., Laga, H., Schreck, T., Ferreira, A., Godil, A., Pratikakis, I. & Veltkamp, R. (ed.) (2011), "Generative Object Definition and Semantic Recognition", 3D Object Retrieval 2011, Eurographics Symposium Proceedings, pp.1-8, Eurographics Association.
Abstract: What is the difference between a cup and a door? These kinds of questions have to be answered in the context of digital libraries. This semantic information, which describes an object on a high, abstract level, is needed in order to provide digital library services such as indexing, markup and retrieval. In this paper we present a new approach to encode and extract such semantic information. We use generative modeling techniques to describe a class of objects: each class is represented by one algorithm; and each object is one set of high-level parameters, which reproduces the object if passed to the algorithm. Furthermore, the algorithm is annotated with semantic information, i.e. a human-readable description of the object class it represents. We use such an object description to recognize objects in real-world data, e.g. laser scans. Using an algorithmic object description, we are able to identify 3D subparts which can be described and generated by the algorithm. Furthermore, we can determine the needed input parameters. In this way, we can classify objects, recognize them semantically, and determine their parameters (a cup's height, radius, etc.).
BibTeX:
@inproceedings{Ullrich-Fellner*11eg3dor,
  author = {Torsten Ullrich and Dieter W. Fellner},
  editor = {H. Laga and T. Schreck and A. Ferreira and A. Godil and I. Pratikakis and R. Veltkamp},
  title = {Generative Object Definition and Semantic Recognition},
  booktitle = {3D Object Retrieval 2011, Eurographics Symposium Proceedings},
  publisher = {Eurographics Association},
  year = {2011},
  pages = {1-8}
}
Ullrich, T. & Fellner, D.W., (2011), "Linear Algorithms in Sublinear Time -- a tutorial on statistical estimation", IEEE Computer Graphics and Applications, Vol.31(2), pp.58-66.
Abstract: In this tutorial we present techniques from probability theory to boost linear algorithms. The main idea is based on statistics and uses educated guesses instead of comprehensive calculations. As estimates can be calculated in sublinear time, many algorithms can benefit from statistical estimation. In our examples, linear algorithms are boosted significantly without negative effects on the algorithms' results. We demonstrate this technique on a RANSAC algorithm, an image processing algorithm, and a geometrical reconstruction. The theoretical foundation of these techniques takes advantage of the fact that in many cases the amount of information in a data set increases asymptotically sublinearly as its size or sampling density increases. Conversely, an algorithm with expected sublinear running time can extract most of the information.
BibTeX:
@article{Ullrich-Fellner10cga,
  author = {Ullrich, Torsten and Fellner, Dieter W.},
  title = {Linear Algorithms in Sublinear Time -- a tutorial on statistical estimation},
  journal = {IEEE Computer Graphics and Applications},
  year = {2011},
  volume = {31},
  number = {2},
  pages = {58-66},
  doi = {http://dx.doi.org/10.1109/MCG.2010.21}
}
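A minimal sketch of the "educated guess" idea for one simple case: estimating a mean by random sampling until a confidence interval is tight enough, so that the number of elements touched is independent of the data size. The paper's RANSAC, image-processing, and reconstruction examples are more involved; all names here are illustrative.

import random
import statistics

def estimate_mean(data, eps=0.05, z=2.58, batch=1000):
    """Estimate mean(data) from random samples instead of a full pass;
    stop once the ~99% confidence half-width drops below eps."""
    samples = []
    while True:
        samples.extend(random.choice(data) for _ in range(batch))
        half_width = z * statistics.stdev(samples) / len(samples) ** 0.5
        if half_width < eps:
            return statistics.fmean(samples), len(samples)

data = [random.gauss(5.0, 2.0) for _ in range(1_000_000)]
mean, used = estimate_mean(data)
print(mean, used)   # `used` is a tiny fraction of len(data)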
Ullrich, T., (2011), "Reconstructive Geometry".
Abstract: The thesis "Reconstructive Geometry" by Torsten Ullrich presents a new collision detection algorithm, a novel approach to generative modeling, and an innovative shape recognition technique. All these contributions are centred around the questions "how to combine acquisition data with generative model descriptions" and "how to perform this combination efficiently". Acquisition data - such as point clouds and triangle meshes - are created e.g. by a 3D scanner or a photogrammetric process. They can describe a shape's geometry very well, but do not contain any semantic information. With generative descriptions it is the other way round: a procedure describes a rather ideal object and its construction process. This thesis builds a bridge between both types of geometry descriptions and combines them into a semantic unit. An innovative shape recognition technique, presented in this thesis, determines whether a digitized real-world object might have been created by a given generative description, and if so, it identifies the high-level parameters that have been passed to the generative script. Such a generative script is a simple JavaScript function. Using the generative modeling compiler "Euclides", the function can be understood in a mathematical sense; i.e. it can be differentiated with respect to its input parameters, it can be embedded into an objective function, and it can be optimized using standard numerical analysis. This approach offers a wide range of applications for generative modeling techniques; parameters do not have to be set manually - they can be set automatically according to a reasonable objective function. In the case of shape recognition, the objective function is distance-based and measures the similarity of two objects. The techniques used to perform this task efficiently (space partitioning, hierarchical structures, etc.) are the same as in collision detection, where the question of whether two objects have distance zero is answered. To sum up, distance functions and distance calculations are a main part of this thesis, along with their applications in geometric object description, semantic enrichment, numerical analysis and more.
BibTeX:
@phdthesis{Ullrich11:PhD,
  author = {Ullrich, Torsten},
  title = {Reconstructive Geometry},
  school = {TU Graz, Diss., 2011},
  year = {2011},
  note = {322 p.}
}
Weber, D., Peña Serna, S., Stork, A. & Fellner, D.W., (2011), "Rapid CFD for the Early Conceptual Design Phase", The Integration of CFD into the Product Development Process, pp.9, NAFEMS, Glasgow.
Abstract: An important step of product development is the optimization of the components' physical behavior, which is usually done in a costly iterative process. Besides the modification, simplification, and (re-)meshing of the component's geometry, simulating its behavior can take hours or even days. In the early conceptual design phase, different material properties and shapes need to be tested and compared in order to optimally design the component. Nonetheless, time-consuming simulations limit the realm of possibilities. We have developed a framework enabling rapid Computational Fluid Dynamics (CFD) for the early conceptual design phase. In order to achieve this, we combine the computation and visualization of 2D fluid flow in real time with the modification of fluid parameters, boundary conditions and geometry. This allows for the rapid assessment and analysis of different shapes and therefore the optimization of the component. Our framework is completely based on graphics processing units (GPUs), i.e., all computations are performed on the GPU, avoiding costly memory transfers between graphics hardware and CPU memory. The computations are performed on a single desktop PC, thus the simulation results can reside in GPU memory and can directly be visualized. B-spline curves are used for modelling the geometry, and the user can interactively modify it by inserting and moving control points or applying local smooth deformations, with the corresponding rapid update of the discretization on the GPU. Computing one single time step takes fractions of a second, even if the fluid flow is modelled with about one million degrees of freedom. The fast geometric manipulation combined with the direct visualization of quantities like the velocity or pressure field allows for immediate feedback on shape or parameter changes. Although fast simulations do not yet achieve the high precision of conventional simulations, their results are suitable for analyzing trends.
BibTeX:
@inproceedings{Weber*11cfd,
  author = {Weber, Daniel and Peña Serna, Sebastian and Stork, André and Fellner, Dieter W.},
  title = {Rapid CFD for the Early Conceptual Design Phase},
  booktitle = {The Integration of CFD into the Product Development Process},
  publisher = {NAFEMS, Glasgow},
  year = {2011},
  pages = {9}
}
Weber, D., Kalbe, T., Stork, A., Fellner, D.W. & Goesele, M., (2011), "Interactive Deformable Models with Quadratic Bases in Bernstein-Bézier-Form", The Visual Computer, Vol.27(6-8), pp.473-483.
Abstract: We present a physically based interactive simulation technique for deformable objects. Our method models the geometry as well as the displacements using quadratic basis functions in Bernstein-Bézier form on a tetrahedral finite element mesh. The Bernstein-Bézier formulation yields significant advantages compared to approaches using the monomial form. The implementation is simplified, as spatial derivatives and integrals of the displacement field are obtained analytically, avoiding the need for numerical evaluations of the elements' stiffness matrices. We introduce a novel traversal accounting for adjacency in order to accelerate the reconstruction of the global matrices. We show that our proposed method can compensate for the additional effort introduced by the co-rotational formulation to a large extent. We validate our approach on several models and demonstrate new levels of accuracy and performance in comparison to the current state of the art.
BibTeX:
@article{Weber*11vc,
  author = {Weber, Daniel and Kalbe, Thomas and Stork, André and Fellner, Dieter W. and Goesele, Michael},
  title = {Interactive Deformable Models with Quadratic Bases in Bernstein-Bézier-Form},
  journal = {The Visual Computer},
  year = {2011},
  volume = {27},
  number = {6-8},
  pages = {473-483},
  doi = {http://dx.doi.org/10.1007/s00371-011-0579-6}
}
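For intuition about the basis the method above builds on, a small generic sketch (not the authors' implementation) evaluates the quadratic Bernstein-Bézier basis at barycentric coordinates -- length 3 for a triangle, 4 for a tetrahedron as in the paper -- and checks the partition-of-unity property that makes analytic derivatives and integrals convenient:

from itertools import combinations_with_replacement
from math import factorial

def quadratic_bernstein(lam):
    """Quadratic Bernstein-Bezier basis values B_alpha(lam) = (2!/alpha!) * lam^alpha
    for all multi-indices alpha with |alpha| = 2."""
    values = {}
    for i, j in combinations_with_replacement(range(len(lam)), 2):
        alpha = [0] * len(lam)
        alpha[i] += 1
        alpha[j] += 1
        coeff = factorial(2)
        for a in alpha:
            coeff //= factorial(a)             # multinomial coefficient 2!/alpha!
        value = float(coeff)
        for a, l in zip(alpha, lam):
            value *= l ** a                    # lam^alpha
        values[tuple(alpha)] = value
    return values

vals = quadratic_bernstein([0.1, 0.2, 0.3, 0.4])   # a point inside a tetrahedron
print(len(vals), sum(vals.values()))               # 10 basis functions, sum 1.0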

2010

Andrienko, G., Andrienko, N., Bremm, S., Schreck, T., von Landesberger, T., Bak, P. & Keim, D., (2010), "Space-in-Time and Time-in-Space Self-Organizing Maps for Exploring Spatiotemporal Patterns", Wiley-Blackwell Computer Graphics Forum, Vol.29(3), pp.913-922.
Abstract: Spatiotemporal data pose serious challenges to analysts in geographic and other domains. Owing to the complexity of the geospatial and temporal components, this kind of data cannot be analyzed by fully automatic methods but requires the involvement of the human analyst's expertise. For a comprehensive analysis, the data need to be considered from two complementary perspectives: (1) as spatial distributions (situations) changing over time and (2) as profiles of local temporal variation distributed over space. In order to support the visual analysis of spatiotemporal data, we suggest a framework based on the "Self-Organizing Map" (SOM) method combined with a set of interactive visual tools supporting both analytic perspectives. SOM can be considered a combination of clustering and dimensionality reduction. In the first perspective, SOM is applied to the spatial situations at different time moments or intervals. In the other perspective, SOM is applied to the local temporal evolution profiles. The integrated visual analytics environment includes interactive coordinated displays enabling various transformations of spatiotemporal data and post-processing of SOM results. The SOM matrix display offers an overview of the groupings of data objects and their two-dimensional arrangement by similarity. This view is linked to a cartographic map display, a time series graph, and a periodic pattern view. The linkage of these views supports the analysis of SOM results in both the spatial and temporal contexts. The variable SOM grid coloring serves as an instrument for linking the SOM with the corresponding items in the other displays. The framework has been validated on a large dataset with real city traffic data, where expected spatiotemporal patterns have been successfully uncovered. We also describe the use of the framework for the discovery of previously unknown patterns in a 41-year time series of 7 crime-rate attributes in the states of the USA.
BibTeX:
@article{Andrienko*10eurovis,
  author = {G. Andrienko and N. Andrienko and S. Bremm and T. Schreck and T. von Landesberger and P. Bak and D. Keim},
  title = {Space-in-Time and Time-in-Space Self-Organizing Maps for Exploring Spatiotemporal Patterns},
  journal = {Wiley-Blackwell Computer Graphics Forum},
  year = {2010},
  volume = {29},
  number = {3},
  pages = {913--922},
  note = {(Proceedings of Eurographics/IEEE-VGTC Symposium on Visualization 2010)},
  doi = {http://dx.doi.org/10.1111/j.1467-8659.2009.01664.x}
}
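To make the two perspectives concrete, here is a compact, self-contained SOM in NumPy (illustrative only; the paper's environment adds linked interactive views). Feeding it one row per time step, with one column per location, yields the "space-in-time" view; one row per location, with one column per time step, yields the "time-in-space" view.

import numpy as np

def train_som(data, grid=(8, 8), epochs=10, lr0=0.5, sigma0=3.0, seed=0):
    """Tiny Self-Organizing Map: grid cells hold prototype vectors that are
    pulled toward random samples, with a shrinking neighborhood kernel."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]
    steps = epochs * len(data)
    for step in range(steps):
        x = data[rng.integers(len(data))]
        t = step / steps
        lr, sigma = lr0 * (1.0 - t), sigma0 * (1.0 - t) + 0.5
        bmu = np.unravel_index(((weights - x) ** 2).sum(-1).argmin(), (h, w))
        d2 = (ys - bmu[0]) ** 2 + (xs - bmu[1]) ** 2
        weights += lr * np.exp(-d2 / (2.0 * sigma ** 2))[..., None] * (x - weights)
    return weights

# "Space-in-time": one row per time step, one column per location.
situations = np.random.rand(200, 16)
som = train_som(situations)
print(som.shape)   # (8, 8, 16): similar situations end up in nearby cells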
Andrienko, G., Andrienko, N., Bak, P., Bremm, S., Keim, D., von Landesberger, T., Pölitz, C. & Schreck, T., (2010), "A Framework for Using Self-Organizing Maps to Analyze Spatio-Temporal Patterns, Exemplified by Analysis of Mobile Phone Usage", Taylor & Francis Journal of Location Based Services, Vol.4(3-4), pp.200-221.
Abstract: We suggest a visual analytics framework for the exploration and analysis of spatially and temporally referenced values of numeric attributes. The framework supports two complementary perspectives on spatio-temporal data: as a temporal sequence of spatial distributions of attribute values (called spatial situations) and as a set of spatially referenced time series of attribute values representing local temporal variations. To handle a large amount of data, we use the self-organising map (SOM) method, which groups objects and arranges them according to similarity of relevant data features. We apply the SOM approach to spatial situations and to local temporal variations and obtain two types of SOM outcomes, called space-in-time SOM and time-in-space SOM, respectively. The examination and interpretation of both types of SOM outcomes are supported by appropriate visualisation and interaction techniques. This article describes the use of the framework by an example scenario of data analysis. We also discuss how the framework can be extended from supporting explorative analysis to building predictive models of the spatio-temporal variation of attribute values. We apply our approach to phone call data showing its usefulness in real-world analytic scenarios.
BibTeX:
@article{Andrienko*10ijlbs,
  author = {G. Andrienko and N. Andrienko and P. Bak and S. Bremm and D. Keim and T. von Landesberger and C. Pölitz and Tobias Schreck},
  title = {A Framework for Using Self-Organizing Maps to Analyze Spatio-Temporal Patterns, Exemplified by Analysis of Mobile Phone Usage},
  journal = {Taylor & Francis Journal of Location Based Services},
  year = {2010},
  volume = {4},
  number = {3--4},
  pages = {200--221},
  doi = {http://dx.doi.org/10.1080/17489725.2010.532816}
}
Augsdörfer, U.H., Dodgson, N.A. & Sabin, M.A., (2010), "Variations on the four-point subdivision scheme ", Computer Aided Geometric Design, Vol.27(1), pp.78-95, Elsevier Science Publishers B. V..
Abstract: A step of subdivision can be considered to be a sequence of simple, highly local stages. By manipulating the stages of a subdivision step we can create families of schemes, each designed to meet different requirements. We postulate that such modification can lead to improved behaviour. We demonstrate this using the four-point scheme as an example. We explain how it can be broken into stages and how these stages can be manipulated in various ways. Six variants that all improve on the quality of the limit curve are presented and analysed. We present schemes which perfectly preserve circles, schemes which improve the Hölder continuity, and schemes which relax the interpolating property to achieve higher smoothness.
BibTeX:
@article{Augsdoerfer*10cagd,
  author = {Ursula H. Augsdörfer and Neil A. Dodgson and Malcolm A. Sabin},
  title = {Variations on the four-point subdivision scheme },
  journal = {Computer Aided Geometric Design},
  publisher = {Elsevier Science Publishers B. V.},
  year = {2010},
  volume = {27},
  number = {1},
  pages = {78-95},
  doi = {http://dx.doi.org/10.1016/j.cagd.2009.09.002}
}
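For reference, the baseline scheme whose stages the paper decomposes and manipulates: the classical four-point step keeps the old points and inserts a new point between each neighboring pair using the stencil -w, 1/2+w, 1/2+w, -w with w = 1/16. A minimal sketch for a closed polygon:

def four_point_step(points, w=1.0 / 16.0):
    """One step of the interpolatory four-point scheme on a closed polygon:
    old points are kept, and a new point is inserted between p1 and p2 as
    (1/2 + w)(p1 + p2) - w(p0 + p3)."""
    n = len(points)
    refined = []
    for i in range(n):
        p0, p1, p2, p3 = (points[(i + k - 1) % n] for k in range(4))
        refined.append(p1)                        # interpolating: keep old point
        new = tuple((0.5 + w) * (a + b) - w * (c + d)
                    for a, b, c, d in zip(p1, p2, p0, p3))
        refined.append(new)
    return refined

curve = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # a square
for _ in range(4):
    curve = four_point_step(curve)
print(len(curve))   # 64 points approaching a smooth closed limit curve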
Bak, P., Omer, I. & Schreck, T., (2010), "Visual Analytics of Urban Environments using High-Resolution Data", Lecture Notes in Geoinformation and Cartography (Proc. AGILE International Conference on Geographic Information Science), pp.25-42, Springer.
Abstract: High-resolution urban data at house level are essential for understanding the relationship between objects of the urban built environment (e.g. streets, housing types, public resources and open spaces). However, it is rather difficult to analyze such data due to the huge amount of urban objects, their multidimensional character and the complex spatial relation between them. In this paper we propose a methodology for assessing the spatial relation between geo-referenced urban environmental variables, in order to identify typical or significant spatial configurations as well as to characterize their geographical distribution. Configuration in this sense refers to the unique combination of different urban environmental variables. We structure the analytic process by defining spatial configurations, multidimensional clustering of the individual configurations, and identifying emerging patterns of interesting configurations. This process is based on the tight combination of interactive visualization methods with automatic analysis techniques. We demonstrate the usefulness of the proposed methods and methodology in an application example on the relation between street network topology and distribution of land uses in a city.
BibTeX:
@incollection{Bak*10agile,
  author = {P. Bak and I. Omer and T. Schreck},
  title = {Visual Analytics of Urban Environments using High-Resolution Data},
  booktitle = {Lecture Notes in Geoinformation and Cartography (Proc. AGILE International Conference on Geographic Information Science)},
  publisher = {Springer},
  year = {2010},
  pages = {25--42},
  series = {Lecture Notes in Geoinformation and Cartography},
  doi = {http://dx.doi.org/10.1007/978-3-642-12326-9_2}
}
Behr, J., Jung, Y., Keil, J., Drevensek, T., Zöllner, M., Eschler, P. & Fellner, D.W., (2010), "A Scalable Architecture for the HTML5 / X3D Integration Model X3DOM", Proceedings Web3D 2010, pp.185-193, ACM Press, New York.
Abstract: We present a scalable architecture which implements and further evolves the HTML/X3D integration model X3DOM introduced in [Behr et al. 2009]. The goal of this model is to integrate and update declarative X3D content directly in the HTML DOM tree. The model was previously presented in a very abstract and generic way, only suggesting implementation strategies. The available open-source x3dom.js architecture provides concrete solutions to the previously open points and extends the generic model where necessary. The outstanding feature of the architecture is that it provides a single declarative interface to application developers and, at the same time, supports various backends through a powerful fallback model. This fallback model does not prescribe a single implementation strategy for the runtime and rendering module but supports different methods transparently. This includes native browser implementations and X3D plugins as well as a WebGL-based scene graph, which allows running the content without the need to install additional plugins on all browsers that support WebGL. The paper furthermore discusses generic aspects of the architecture like encoding and introspection, but also provides details concerning two backends. It shows how the system interfaces with X3D plugins and WebGL and also discusses implementation-specific features and limitations.
BibTeX:
@inproceedings{Behr*10web3d,
  author = {Behr, Johannes and Jung, Yvonne and Keil, Jens and Drevensek, Timm and Zöllner, Michael and Eschler, Peter and Fellner, Dieter W.},
  title = {A Scalable Architecture for the HTML5 / X3D Integration Model X3DOM},
  booktitle = {Proceedings Web3D 2010},
  publisher = {ACM Press, New York},
  year = {2010},
  pages = {185-193},
  doi = {http://dx.doi.org/10.1145/1836049.1836077}
}
Bernard, Jü., Brase, J., Fellner, D.W., Koepler, O., Kohlhammer, Jö., Ruppert, T., Schreck, T. & Sens, I., Bentes, C. & et al. (ed.) (2010), "A Visual Digital Library Approach for Time-Oriented Scientific Primary Data", Research and Advanced Technology for Digital Libraries, pp.352-363.
Abstract: Digital Library support for textual and certain types of non-textual documents has significantly advanced over the last years. While Digital Library support implies many aspects along the whole library workflow model, interactive and visual retrieval allowing effective query formulation and result presentation are important functions. Recently, new kinds of non-textual documents which merit Digital Library support, but cannot yet be accommodated by existing Digital Library technology, have come into focus. Scientific primary data, as produced, for example, by scientific experimentation, earth observation, or simulation, is such a data type. We report on a concept and first implementation of Digital Library functionality supporting visual retrieval and exploration in a specific important class of scientific primary data, namely, time-oriented data. The approach is developed in an interdisciplinary effort by experts from the library, natural sciences, and visual analytics communities. In addition to presenting the concept and discussing relevant challenges, we present results from a first implementation of our approach as applied to a real-world scientific primary data set.
BibTeX:
@inproceedings{Bernard*10ecdl,
  author = {Bernard, Jürgen and Brase, Jan and Fellner, Dieter W. and Koepler, Oliver and Kohlhammer, Jörn and Ruppert, Tobias and Schreck, Tobias and Sens, Irina},
  editor = {Bentes, Cristiana and et al.},
  title = {A Visual Digital Library Approach for Time-Oriented Scientific Primary Data},
  booktitle = {Research and Advanced Technology for Digital Libraries},
  year = {2010},
  pages = {352-363},
  doi = {http://dx.doi.org/10.1007/978-3-642-15464-5_35}
}
Blümel, I., Berndt, R., Ochmann, S., Vock, R. & Wessel, R., (2010), "PROBADO3D -- Indexing and Searching 3D CAD Databases: Supporting Planning through Content-Based Indexing and 3D Shape Retrieval", Proceedings International Conference on Design and Decision Support Systems in Architecture and Urban Planning, pp.411-425.
BibTeX:
@inproceedings{Berndt*10ddss,
  author = {Ina Blümel and Rene Berndt and Sebastian Ochmann and Richard Vock and Raoul Wessel},
  title = {PROBADO3D -- Indexing and Searching 3D CAD Databases: Supporting Planning through Content-Based Indexing and 3D Shape Retrieval},
  booktitle = {Proceedings International Conference on Design and Decision Support Systems in Architecture and Urban Planning},
  year = {2010},
  pages = {411--425}
}
Berndt, R., Blümel, I., Clausen, M., Damm, D., Diet, Jü., Fellner, D.W., Fremerey, C., Klein, R., Scherer, M., Schreck, T., Sens, I., Thomas, V. & Wessel, R., Lalmas, M. & et al. (ed.) (2010), "The PROBADO Project -- Approach and Lessons Learned in Building a Digital Library System for Heterogeneous Non-textual Documents", Research and Advanced Technology for Digital Libraries, 14th European Conference ECDL. Proceedings ECDL 2010, Vol.6273, pp.376-383, Springer.
Abstract: The PROBADO Project is a research effort to develop and operate advanced Digital Library support for non-textual documents. The main goal is to contribute to all parts of the Digital Library workflow, from content acquisition through indexing to search and presentation. While not limited in terms of supported document types, reference support is developed for classical digital music and 3D architectural models. In this paper, we review the overall goals, approaches taken, and lessons learned so far in a highly integrated effort of university researchers and library experts. We address the problem of technology transfer, aspects of repository compilation, and the problem of inter-domain retrieval. The experiences are relevant for other project efforts in the non-textual Digital Library domain.
BibTeX:
@inproceedings{Berndt*10ecdl,
  author = {Berndt, Rene and Blümel, Ina and Clausen, Michael and Damm, David and Diet, Jürgen and Fellner, Dieter~W. and Fremerey, Christian and Klein, Reinhard and Scherer, Maximilian and Schreck, Tobias and Sens, Irina and Thomas, Verena and Wessel, Raoul},
  editor = {Lalmas, Mounia and et al.},
  title = {The PROBADO Project -- Approach and Lessons Learned in Building a Digital Library System for Heterogeneous Non-textual Documents},
  booktitle = {Research and Advanced Technology for Digital Libraries, 14th European Conference ECDL. Proceedings ECDL 2010},
  publisher = {Springer},
  year = {2010},
  volume = {6273},
  pages = {376--383},
  series = {Lecture Notes in Computer Science (LNCS)},
  doi = {http://dx.doi.org/10.1007/978-3-642-15464-5_37}
}
Berndt, R., Blümel, I. & Wessel, R., (2010), "PROBADO3D -- Towards an Automatic Multimedia Indexing Workflow for Architectural 3D Models", ELPUB 2010 - Publishing in the networked world: transforming the nature of communication, pp.79-88.
BibTeX:
@inproceedings{Berndt*10elpub,
  author = {R. Berndt and I. Blümel and R. Wessel},
  title = {PROBADO3D -- Towards an Automatic Multimedia Indexing Workflow for Architectural 3D Models},
  booktitle = {ELPUB 2010 - Publishing in the networked world: transforming the nature of communication},
  year = {2010},
  pages = {79-88}
}
Berndt, R., Buchgraber, G., Havemann, S., Settgast, V. & Fellner, D.W., Ioannides, M., Fellner, D.W., Georgopoulos, A. & Hadjimitsis, D. (ed.) (2010), "A Publishing Workflow for Cultural Heritage Artifacts from 3D-Reconstruction to Internet Presentation", Digital Heritage. Third International Conference, EuroMed 2010, Vol.6436, pp.166-178, Springer.
Abstract: Publishing cultural heritage as 3D models with embedded annotations and additional information on the web is still a major challenge. This includes the acquisition of the digital 3D model, the authoring and editing of the additional information to be attached to the digital model as well as publishing it in a suitable format. These steps usually require very expensive hardware and software tools. Especially small museums cannot afford an expensive scanning campaign in order to generate the 3D models from the real artefacts. In this paper we propose an affordable publishing workflow from acquisition of the data to authoring and enriching it with the related metadata and information to finally publish it in a way suitable for access by means of a web browser over the internet. All parts of the workflow are based on open source solutions and free services.
BibTeX:
@inproceedings{Berndt*10euromed,
  author = {Rene Berndt and Gerald Buchgraber and Sven Havemann and Volker Settgast and Dieter W.~Fellner},
  editor = {Ioannides, Marinos and Fellner, Dieter~W. and Georgopoulos, Andreas and Hadjimitsis, Diofantos},
  title = {A Publishing Workflow for Cultural Heritage Artifacts from 3D-Reconstruction to Internet Presentation},
  booktitle = {Digital Heritage. Third International Conference, EuroMed 2010},
  publisher = {Springer},
  year = {2010},
  volume = {6436},
  pages = {166-178},
  series = {Lecture Notes in Computer Science (LNCS)},
  doi = {http://dx.doi.org/10.1007/978-3-642-16873-4_13}
}
Berndt, R., Blümel, I. & Wessel, R., (2010), "PROBADO3D -- New Ways of Indexing and Experiencing Architectural 3D Databases", Proceedings of FOCUS K3D Conference on Semantic 3D Media and Content, pp.89-90, INRIA.
Abstract: Nowadays, in Digital Libraries, non-textual documents are indexed and accessed based on textual metadata. This kind of metadata is expensive to obtain, and in many cases the content cannot be described completely and free of ambiguities. PROBADO3D aims to overcome this limitation by developing content-based access methods for 3D models in the architectural domain.
BibTeX:
@inproceedings{Berndt*10focus,
  author = {R. Berndt and I. Blümel and R. Wessel},
  title = {PROBADO3D -- New Ways of Indexing and Experiencing Architectural 3D Databases},
  booktitle = {Proceedings of FOCUS K3D Conference on Semantic 3D Media and Content},
  publisher = {INRIA},
  year = {2010},
  pages = {89-90}
}
Berndt, R., Blümel, I., Clausen, M., Damm, D., Diet, Jü., Fellner, D.W., Fremerey, C., Klein, R., Scherer, M., Schreck, T., Sens, I., Thomas, V. & Wessel, R., Mittermaier, B. (ed.) (2010), "Aufbau einer verteilten digitalen Bibliothek für nichttextuelle Dokumente -- Ansatz und Erfahrungen des PROBADO Projekts", eLibrary -- den Wandel gestalten, pp.219-236.
BibTeX:
@inproceedings{Berndt*10wisskom,
  author = {Rene Berndt and Ina Blümel and Michael Clausen and David Damm and Jürgen Diet and Dieter W.~Fellner and Christian Fremerey and Reinhard Klein and Maximilian Scherer and Tobias Schreck and Irina Sens and Verena Thomas and Raoul Wessel},
  editor = {Bernhard Mittermaier},
  title = {Aufbau einer verteilten digitalen Bibliothek für nichttextuelle Dokumente -- Ansatz und Erfahrungen des PROBADO Projekts},
  booktitle = {eLibrary -- den Wandel gestalten},
  year = {2010},
  pages = {219--236},
  series = {Schriften des Forschungszentrums Jülich, Reihe Bibliothek}
}
Binotto, A., Daniel, C.G., Weber, D., Kuijper, A., Stork, A., Pereira, C.E. & Fellner, D.W., (2010), "Iterative SLE Solvers over a CPU-GPU Platform", Proceedings 12th IEEE International Conference on High Performance Computing and Communications, pp.305-313.
Abstract: GPUs (Graphics Processing Units) have become one of the main co-processors that have brought high-performance computing to the desktop. Together with multi-core CPUs, they form a powerful heterogeneous execution platform for massive calculations. To improve application performance and exploit this heterogeneity, distributing the workload in a balanced way over the PUs (Processing Units) plays an important role in the system. However, this problem faces challenges, since the cost of a task on a PU is non-deterministic and can be influenced by several parameters not known a priori, like the size of the problem domain. We present a comparison of iterative SLE (Systems of Linear Equations) solvers, used in many scientific and engineering applications, on a heterogeneous CPU-GPU platform and characterize scenarios where the solvers obtain better performance. A new technique to improve memory access in the matrix-vector multiplication used by SLE solvers on GPUs is described and compared to standard implementations for CPUs and GPUs. This timing profile is analyzed, and break-even points based on problem size are identified, indicating for which problem sizes the GPU is faster to use than the CPU. Preliminary results show the importance of this study applied to a real-time CFD (Computational Fluid Dynamics) application with geometry modification.
BibTeX:
@inproceedings{Binotto*10hpcc,
  author = {Binotto, Alecio and Daniel, Christian G. and Weber, Daniel and Kuijper, Arjan and Stork, André and Pereira, Carlos Eduardo and Fellner, Dieter W.},
  title = {Iterative SLE Solvers over a CPU-GPU Platform},
  booktitle = {Proceedings 12th IEEE International Conference on High Performance Computing and Communications},
  year = {2010},
  pages = {305-313}
}
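For context on the kind of kernel under study: Jacobi iteration is one of the simplest iterative SLE solvers, and its cost per sweep is dominated by a matrix-vector product -- the operation whose memory-access pattern the paper tunes on the GPU. A CPU-side NumPy sketch, not the paper's GPU implementation:

import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=10_000):
    """Jacobi iteration for A x = b: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    d = np.diag(A)                    # diagonal of A
    R = A - np.diagflat(d)            # off-diagonal part
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d       # one sweep: a matrix-vector product
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

n = 200
rng = np.random.default_rng(0)
A = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant: Jacobi converges
b = rng.random(n)
x = jacobi(A, b)
print(np.linalg.norm(A @ x - b))         # residual near zero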
Binotto, A., Pereira, C. & Fellner, D., (2010), "Towards Dynamic Reconfigurable Load-balancing for Hybrid Desktop Platforms", 2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum, pp.1-4, IEEE.
Abstract: High-performance platforms are required by applications that use massive calculations. Currently, desktop accelerators (like GPUs) form a powerful heterogeneous platform in conjunction with multi-core CPUs. To improve application performance on these hybrid platforms, load-balancing plays an important role in distributing the workload. However, this scheduling problem faces challenges, since the cost of a task on a Processing Unit (PU) is non-deterministic and depends on parameters that cannot be known a priori, like input data, online creation of tasks, scenario changes, etc. Therefore, self-adaptive computing is a promising paradigm, as it can provide flexibility to exploit computational resources and improve performance in different execution scenarios. This paper presents ongoing PhD research focused on a dynamic and reconfigurable scheduling strategy based on timing profiling for desktop accelerators. Preliminary results analyze the performance of solvers for SLEs (Systems of Linear Equations) on a hybrid CPU and multi-GPU platform applied to a CFD (Computational Fluid Dynamics) application. The decision of choosing the best solver, as well as its scheduling, must be performed dynamically, considering online parameters in order to achieve better application performance.
BibTeX:
@inproceedings{Binotto*10ipdps,
  author = {A. Binotto and C. Pereira and D. Fellner},
  title = {Towards Dynamic Reconfigurable Load-balancing for Hybrid Desktop Platforms},
  booktitle = {2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum},
  publisher = {IEEE},
  year = {2010},
  pages = {1-4},
  doi = {http://dx.doi.org/10.1109/IPDPSW.2010.5470804}
}
Binotto, A., Pedras, B., Götz, M., Kuijper, A., Pereira, C.E., Stork, A. & Fellner, D.W., (2010), "Effective Dynamic Scheduling on Heterogeneous MultiManycore Desktop Platforms", SBAC-PADW 2010, pp.37-42.
Abstract: GPUs (Graphics Processing Units) have become one of the main co-processors that have brought high-performance computing to the desktop. Together with multi-core CPUs and other co-processors, a powerful heterogeneous execution platform is built on a desktop for data-intensive calculations. From our perspective, the modern desktop is a heterogeneous cluster that can deal with several applications' tasks at the same time. To improve application performance and exploit such heterogeneity, distributing the workload over the asymmetric PUs (Processing Units) plays an important role in the system. However, this problem faces challenges, since the cost of a task on a PU is non-deterministic and can be influenced by several parameters not known a priori, like the size of the problem domain. We present a context-aware architecture that maximizes application performance on such platforms. The approach combines a model for initial scheduling, based on an offline performance benchmark, with a runtime model that keeps track of tasks' real performance. We carried out a demonstration using a CPU-GPU platform for computing iterative SLE (Systems of Linear Equations) solvers, using the number of unknowns as the main parameter for assignment decisions. We achieved a gain of 38.3% in comparison to the static assignment of all tasks to the GPU (which is done by current programming models, such as OpenCL and CUDA for Nvidia).
BibTeX:
@inproceedings{Binotto*10wammca,
  author = {Binotto, Alecio and Pedras, Bernardo and Götz, Marcelo and Kuijper, Arjan and Pereira, Carlos Eduardo and Stork, Andre and Fellner, Dieter W.},
  title = {Effective Dynamic Scheduling on Heterogeneous MultiManycore Desktop Platforms},
  booktitle = {SBAC-PADW 2010},
  year = {2010},
  pages = {37-42},
  doi = {http://dx.doi.org/10.1109/SBAC-PADW.2010.6}
}
Bremm, S., Schreck, T., Boba, P., Held, S. & Hamacher, K., (2010), "Computing and Visually Analyzing Mutual Information in Molecular Co-evolution", BMC Bioinformatics, Vol.11:330.
Abstract: Selective pressure in molecular evolution leads to uneven distributions of amino acids and nucleotides. In fact, one observes correlations among such constituents due to a large number of biophysical mechanisms (folding properties, electrostatics, ...). To quantify these correlations the mutual information -- after proper normalization -- has proven most effective. The challenge is to navigate the large amount of data, which in a study for a typical protein cannot simply be plotted.
BibTeX:
@article{Bremm*10bmc,
  author = {S. Bremm and T. Schreck and P. Boba and S. Held and K. Hamacher},
  title = {Computing and Visually Analyzing Mutual Information in Molecular Co-evolution},
  journal = {BMC Bioinformatics},
  year = {2010},
  volume = {11:330},
  doi = {http://dx.doi.org/10.1186/1471-2105-11-330}
}
Buchgraber, G., Berndt, R. & Fellner, D.W., (2010), "FO3D -- Formatting Objects for PDF3D", Proceedings Web3D 2010, Proceedings of the 15th International Conference on Web 3D Technology, pp.63-71.
Abstract: 3D is useful in many real-world applications beyond computer games. The efficiency of communication is greatly enhanced by combining interlinked verbal descriptions with 3D content. However, there is a wide gap between the great demand for 3D content and the inconvenience and cost of delivering it. We propose using PDF, which is extremely well supported by standard content production workflows. Producing PDF with embedded 3D is currently not an easy task. As a solution to the problem we offer a freely available tool that makes embedding 3D in PDF documents an easy to use technology. Our solution is very flexible, extensible, and can be easily integrated with existing document workflow technology.
BibTeX:
@inproceedings{Buchgraber*10web3d,
  author = {Gerald Buchgraber and René Berndt and Dieter~W. Fellner},
  title = {FO3D -- Formatting Objects for PDF3D},
  booktitle = {Proceedings Web3D 2010, Proceedings of the 15th International Conference on Web 3D Technology},
  year = {2010},
  pages = {63-71},
  doi = {http://dx.doi.org/10.1145/1836049.1836059}
}
Burkhardt, D., Hofmann, C., Nazemi, K., Stab, C., Breyer, M. & Fellner, D., (2010), "Intuitive Semantic-Editing for Regarding Needs of Domain-Experts", Proceedings of ED-Media 2010; World Conference on Educational Multimedia, Hypermedia & Telecommunications [online], pp.860-869, AACE.
Abstract: Ontologies are used to represent knowledge and semantic information about different topics; their structure allows users to explore knowledge more easily and to find information faster. To build a well-filled knowledge base, editors have to be used to enter new information and to edit existing information. But most existing ontology editors are designed for experienced ontology experts. Experts from other fields, e.g. physicians, are often novices in ontology creation and need adequate tools that hide the complexity of ontology structures. In the area of e-learning these experts are often teachers as well. In this paper we present a method for taking the needs of domain experts into account, so that an editor can be designed that allows users to edit and add information without prior experience in creating ontologies. With such an editor, domain experts are able to commit their expert knowledge to the ontology.
BibTeX:
@inproceedings{Burkhardt*10edmedia,
  author = {D. Burkhardt and C. Hofmann and K. Nazemi and C. Stab and M. Breyer and D. Fellner},
  title = {Intuitive Semantic-Editing for Regarding Needs of Domain-Experts},
  booktitle = {Proceedings of ED-Media 2010; World Conference on Educational Multimedia, Hypermedia & Telecommunications [online]},
  publisher = {AACE},
  year = {2010},
  pages = {860-869}
}
Burkhardt, D., Stab, C., Nazemi, K., Breyer, M. & Fellner, D., Chova, C.G., Belenguer, D.M. & Torres, I.C. (ed.) (2010), "Approaches for 3D-Visualizations and Knowledge Worlds for Exploratory Learning", Proc. International Conference on Education and New Learning Technologies [CD-ROM], pp.006427-006437, IATED.
Abstract: Graphical knowledge representations open promising perspectives for supporting explorative learning on the web. 2D visualizations have recently been evaluated as gainful knowledge exploration systems, whereas 3D visualization systems have not yet found their way into web-based explorative learning. 3D visualizations and '3D Knowledge Worlds', as virtual environments in the context of e-learning, provide a high degree of authenticity, because the metaphors used are known to users from the real world. But challenges such as using a 3D Knowledge World without losing the learning context and the focused learning goals are rarely investigated and considered. New technologies provide the opportunity to introduce 3D visualizations and environments on the web to support web-based explorative learning. It is therefore necessary to investigate the prospects of 3D visualization for transferring and adopting knowledge on the web. The following paper describes different approaches to using 3D visualizations and Knowledge Worlds for conveying knowledge in web-based systems using web-based contents. The approaches for 3D visualization are classified by layout algorithm, and the Knowledge Worlds are classified by interaction character.
BibTeX:
@inproceedings{Burkhardt*10edulearn,
  author = {D. Burkhardt and C. Stab and K. Nazemi and M. Breyer and D. Fellner},
  editor = {C. Gomez Chova and D. Marti Belenguer and I. Candel Torres},
  title = {Approaches for 3D-Visualizations and Knowledge Worlds for Exploratory Learning},
  booktitle = {Proc. International Conference on Education and New Learning Technologies [CD-ROM]},
  publisher = {IATED},
  year = {2010},
  pages = {006427-006437}
}
Echizen, I., Pan, J.-S., Fellner, D.W., Nouak, A., Kuijper, A. & Jain, L.C. (ed.) (2010), "Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing. Proceedings: IIH-MSP 2010", IEEE Computer Society, Los Alamitos, Calif..
BibTeX:
@proceedings{Echizen*10iih,
  editor = {Echizen, Isao and Pan, Jeng-Shyang and Fellner, Dieter W. and Nouak, Alexander and Kuijper, Arjan and Jain, Lakhmi C.},
  title = {Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing. Proceedings: IIH-MSP 2010},
  publisher = {IEEE Computer Society, Los Alamitos, Calif.},
  year = {2010}
}
Fellner, D.W., Baier, K., Fey, T., Bornemann, H., Wehner, D. & Mentel, K. (ed.) (2010), "Annual Report 2009 : Fraunhofer Institute for Computer Graphics Research IGD", Fraunhofer-Institut für Graphische Datenverarbeitung (IGD).
BibTeX:
@book{Fellner*10ar-igd,
  editor = {Fellner, Dieter W. and Baier, Konrad and Fey, Thekla and Bornemann, Heidrun and Wehner, Detlef and Mentel, Katrin},
  title = {Annual Report 2009 : Fraunhofer Institute for Computer Graphics Research IGD},
  publisher = {Fraunhofer-Institut für Graphische Datenverarbeitung (IGD)},
  year = {2010}
}
Fellner, D.W. & Schaub, J. (ed.) (2010), "Selected Readings in Computer Graphics 2009", Fraunhofer Verlag, Stuttgart.
Abstract: The Fraunhofer Institute for Computer Graphics Research IGD, with offices in Darmstadt as well as in Rostock, Singapore, and Graz, and the partner institutes at the respective universities (the Interactive Graphics Systems Group of Technische Universität Darmstadt, the Computergraphics and Communication Group of the Institute of Computer Science at Rostock University, Nanyang Technological University (NTU), Singapore, and the Visual Computing Cluster of Excellence of Graz University of Technology) cooperate closely in projects and in research and development in the field of Computer Graphics. The 'Selected Readings in Computer Graphics 2009' consist of 38 articles selected from a total of 183 scientific publications contributed by all these institutions. All articles previously appeared in various scientific books, journals, conferences and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings should give a fairly good and detailed overview of the scientific developments in Computer Graphics in the year 2009. They are published by Professor Dieter W. Fellner, the director of the Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, who is at the same time a professor at the Department of Computer Science at Technische Universität Darmstadt and a professor at the Faculty of Computer Science at Graz University of Technology.
BibTeX:
@book{Fellner*10sr,
  editor = {Fellner, Dieter W. and Schaub, Jutta},
  title = {Selected Readings in Computer Graphics 2009},
  publisher = {Fraunhofer Verlag, Stuttgart},
  year = {2010},
  series = {Selected Readings in Computer Graphics, 20}
}
Doerr, M., Tzompanaki, K., Theodoridou, M., Georgis, C., Axaridou, A. & Havemann, S., (2010), "A Repository for 3D Model Production and Interpretation in Culture and Beyond", VAST 2010, pp.97-104.
Abstract: In order to support the work of researchers in the production, processing and interpretation of complex digital objects, and the dissemination of valuable and diverse information to a broad audience, there is a need for an integrated high-performance environment that combines knowledge base features with content management and information retrieval (IR) technologies. In this paper we describe the design and implementation of an integrated repository to ingest, store, manipulate, and export 3D models, their related digital objects and metadata, and to enable efficient access, use, reuse and preservation of the information, ensuring referential and semantic integrity. The repository design is based on an integrated, coherent conceptual schema that models complex metadata regarding provenance information, structured models, formats, compatibility of 3D models, historical events and real-world objects. This repository is not implemented just to be a storage location for digital objects; it is meant to be a working integrated platform for distant users who participate in a process chain consisting of several steps. A first prototype in the field of Cultural Heritage has already been implemented in the context of the 3D-COFORM project, an integrated research project funded by the European Community's Seventh Framework Programme (FP7/2007-2013, no 231809), and the results are satisfactory, proving the feasibility of design decisions that are new, ambitious, and exceptionally generic for e-science.
BibTeX:
@inproceedings{Havemann*10vast,
  author = {Martin Doerr and Katerina Tzompanaki and Maria Theodoridou and Ch. Georgis and A. Axaridou and Sven Havemann},
  title = {A Repository for 3D Model Production and Interpretation in Culture and Beyond},
  booktitle = {VAST 2010},
  year = {2010},
  pages = {97-104},
  doi = {http://dx.doi.org/10.2312/VAST/VAST10/097-104}
}
Hofmann, C., Boettcher, U. & Fellner, D.W., (2010), "Change Awareness for Collaborative Video Annotation", Proceedings COOP 2010, pp.101-117, Springer.
Abstract: Collaborative video annotation is a broad field of research and is widely used in production environments. While it is easy to follow changes in small systems with few users, keeping track of all changes in large environments can easily become overwhelming. The easiest way, and a first approach, to prevent users from getting lost is to show them all changes in an appropriate way. This list of changes can itself become very large when many contributors add new information to shared data resources. To prevent users from getting lost even with a list of changes, this paper introduces a way to subscribe to parts of the system so that only the relevant changes are shown. To achieve this goal, the framework provides an approach to check the relevance of changes, which is not trivial in three-dimensional spaces, and to accumulate them for later reference by the subscribing user. The benefit for users is needing less time to stay up to date and having more time for applying their own changes.
BibTeX:
@inproceedings{Hofmann*10coop,
  author = {C. Hofmann and U. Boettcher and D.~W. Fellner},
  title = {Change Awareness for Collaborative Video Annotation},
  booktitle = {Proceedings COOP 2010},
  publisher = {Springer},
  year = {2010},
  pages = {101--117},
  doi = {http://dx.doi.org/10.1007/978-1-84996-211-7_7}
}
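The subscription idea above can be illustrated with a toy relevance check. A sketch assuming spatial (bounding-box) subscriptions, one plausible relevance criterion for three-dimensional scenes; the class names are hypothetical:

from dataclasses import dataclass

@dataclass
class Change:
    author: str
    position: tuple          # (x, y, z) location of the edit in the scene

@dataclass
class Subscription:
    lo: tuple                # axis-aligned bounding box the user watches
    hi: tuple

    def relevant(self, change: Change) -> bool:
        # A change is relevant if it falls inside the subscribed region.
        return all(l <= p <= h for l, p, h in
                   zip(self.lo, change.position, self.hi))

def accumulate(changes, subscription):
    """Keep only the changes a subscribing user should see."""
    return [c for c in changes if subscription.relevant(c)]

inbox = accumulate(
    [Change("alice", (1, 2, 3)), Change("bob", (50, 0, 0))],
    Subscription(lo=(0, 0, 0), hi=(10, 10, 10)))
print([c.author for c in inbox])   # -> ['alice']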
Hofmann, C., Burkhardt, D., Breyer, M., Nazemi, K., Stab, C. & Fellner, D., (2010), "Towards a Workflow-Based Design of Multimedia Annotation Systems", Proceedings of ED-Media 2010; World Conference on Educational Multimedia, Hypermedia & Telecommunications [online], pp.1224-1233, AACE.
Abstract: Annotation techniques for multimedia contents have found their way into multiple areas of daily use as well as professional fields. A large number of research projects can be assigned to different specific subareas of digital annotation. Nevertheless, the annotation process, which gives rise to multiple workflows depending on the application scenario, has not sufficiently been taken into consideration. A consideration of the respective processes and workflows requires detailed knowledge about practices of digital multimedia annotation. In order to establish fundamental groundwork for workflow-related research, this paper presents a comprehensive process model of multimedia annotation resulting from an empirical study we conducted. Furthermore, we provide a survey of the tasks that have to be accomplished by users and computing devices, the tools and algorithms that are used to handle specific tasks, and the types of data that are transferred between workflow steps. These aspects are assigned to the identified sub-processes of the model.
BibTeX:
@inproceedings{Hofmann*10edmedia,
  author = {C. Hofmann and D. Burkhardt and M. Breyer and K. Nazemi and C. Stab and D. Fellner},
  title = {Towards a Workflow-Based Design of Multimedia Annotation Systems},
  booktitle = {Proceedings of ED-Media 2010; World Conference on Educational Multimedia, Hypermedia & Telecommunications [online]},
  publisher = {AACE},
  year = {2010},
  pages = {1224--1233}
}
Hofmann, C. & Fellner, D.W., (2010), "Supporting Collaborative Workflows of Digital Multimedia Annotation", Proceedings COOP 2010, pp.79-99, Springer.
Abstract: Collaborative annotation techniques for digital multimedia contents have found their way into a vast number of areas of daily use as well as professional fields. Related research has produced a large number of projects that can be assigned to different specific subareas of annotation. These projects focus on one or only a few aspects of digital annotation. However, the whole annotation process as an operative unit has not sufficiently been taken into consideration, especially in collaborative settings. To address that lack of research, we present a framework that supports multiple collaborative workflows related to digital multimedia annotation. In that context, we introduce a process-based architecture model, a formalized specification of collaborative annotation processes, and a concept for personalized workflow visualization and user assistance.
BibTeX:
@inproceedings{Hofmann-Fellner10coop,
  author = {C. Hofmann and D.~W. Fellner},
  title = {Supporting Collaborative Workflows of Digital Multimedia Annotation},
  booktitle = {Proceedings COOP 2010},
  publisher = {Springer},
  year = {2010},
  pages = {79-99},
  doi = {http://dx.doi.org/10.1007/978-1-84996-211-7_6}
}
Hohmann, B., Havemann, S., Krispel, U. & Fellner, D., (2010), "A GML shape grammar for semantically enriched 3D building models", Computers & Graphics, Vol.34(4), pp.322-334.
Abstract: The creation of building and facility models is a tedious and complicated task. Existing CAD models are typically not well suited since they contain too much or not enough detail; the manual modeling approach does not scale; different views on the same model are needed, as well as different levels of detail and abstraction; and finally, conventional modeling tools are inappropriate for models with many internal parameter dependencies. As a solution to this problem we propose a combination of a procedural approach with shape grammars. The model is created in a top-down manner; high-level changeability and re-usability are much less of a problem; and it can be interactively evaluated to provide different views at runtime. We present some insights on the relation between imperative and declarative grammar descriptions, and show a detailed case study with facility surveillance as a practical application.
BibTeX:
@article{Hohmann*10CG,
  author = {B. Hohmann and S. Havemann and U. Krispel and D. Fellner},
  title = {A GML shape grammar for semantically enriched 3D building models},
  journal = {Computers & Graphics},
  year = {2010},
  volume = {34},
  number = {4},
  pages = {322-334},
  doi = {http://dx.doi.org/10.1016/j.cag.2010.05.007}
}
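GML itself is a stack-based, PostScript-like language; the split operation at the heart of such shape grammars can nevertheless be sketched in Python. A minimal, illustrative split rule (the fractions and labels are made up):

def split(box, axis, fractions, labels):
    """Split an axis-aligned box (min, max corners) into labeled sub-boxes
    along one axis -- the basic operation of a split/shape grammar."""
    lo, hi = box
    out, start = [], lo[axis]
    for frac, label in zip(fractions, labels):
        end = start + frac * (hi[axis] - lo[axis])
        sub_lo, sub_hi = list(lo), list(hi)
        sub_lo[axis], sub_hi[axis] = start, end
        out.append((label, (tuple(sub_lo), tuple(sub_hi))))
        start = end
    return out

# A facade is split vertically into ground floor, two upper floors, roof;
# each non-terminal could be refined further by another rule.
facade = ((0.0, 0.0, 0.0), (10.0, 12.0, 0.5))
for label, sub in split(facade, axis=1,
                        fractions=[0.3, 0.25, 0.25, 0.2],
                        labels=["ground", "floor", "floor", "roof"]):
    print(label, sub)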
Huff, R., Neves, T., Gierlinger, T., Kuijper, A., Stork, A. & Fellner, D.W., (2010), "A General Two-Level Acceleration Structure for Interactive Ray Tracing on the GPU", Computer Graphics International 2010. Short Papers, pp.4.
Abstract: Despite the superior image quality generated by ray tracing, programmers of time-critical applications have historically avoided it because of its computational costs. Nowadays, the hardware of modern desktops allows the execution of real-time ray tracers but requires a specialized implementation based on specific characteristics of each application, such as scene complexity, kinds of motion, ray distribution, model structure and hardware. The evaluation and development of these requirements are complex and time-consuming, especially for developers with no familiarity with rendering algorithms and graphics hardware programming. The aim of our work is to provide a general and practical method to efficiently execute interactive ray tracing on most systems. We considered the most common aspects of current computer graphics applications, like the use of a scene graph and support for static and dynamic objects. In addition, we also took into account common desktop hardware. This led us to the development of a special acceleration structure and its implementation on the GPU. In this paper, we present our work, showing the combination of different techniques and our results.
BibTeX:
@inproceedings{Huff*10cgi,
  author = {Huff, Rafael and Neves, Tiago and Gierlinger, Thomas and Kuijper, Arjan and Stork, André and Fellner, Dieter W.},
  title = {A General Two-Level Acceleration Structure for Interactive Ray Tracing on the GPU},
  booktitle = {Computer Graphics International 2010. Short Papers},
  year = {2010},
  pages = {4}
}
Huff, R., Neves, T., Gierlinger, T., Kuijper, A., Stork, A. & Fellner, D.W., (2010), "OpenCL vs. CUDA for Ray Tracing", XII Symposium on Virtual and Augmented Reality, pp.4, Everton Cavalcante, Brazil.
Abstract: For many years the Graphics Processing Unit (GPU) of common desktops was just used to accelerate certain parts of the graphics pipeline. After developers gained access to the native instruction set and memory of the massively parallel computational elements of GPUs, a lot changed. GPUs became powerful and programmable. Nowadays two SDKs are most widely used for GPU programming: CUDA and OpenCL. CUDA is the most adopted general-purpose parallel computing architecture for GPUs but is restricted to Nvidia graphics cards only. In contrast, OpenCL is a new royalty-free framework for parallel programming intended to be portable across different hardware manufacturers or even different platforms. In this paper, we evaluate both solutions considering a typical parallel algorithm: ray tracing. We show our performance results and experiences in developing both implementations, which could be easily adapted to solve other problems.
BibTeX:
@inproceedings{Huff*10svr,
  author = {Huff, Rafael and Neves, Tiago and Gierlinger, Thomas and Kuijper, Arjan and Stork, André and Fellner, Dieter W.},
  title = {OpenCL vs. CUDA for Ray Tracing},
  booktitle = {XII Symposium on Virtual and Augmented Reality},
  publisher = {Everton Cavalcante, Brazil},
  year = {2010},
  pages = {4}
}
Ioannides, M., Fellner, D.W., Georgopoulos, A. & Hadjimitsis, D.G. (ed.) (2010), "Digital Heritage. Third International Conference (EuroMed 2010)", Springer, Berlin, Heidelberg, New York.
BibTeX:
@proceedings{Ioannides*10euromed,
  editor = {Ioannides, Marinos and Fellner, Dieter W. and Georgopoulos, Andreas and Hadjimitsis, Diofantos G.},
  title = {Digital Heritage. Third International Conference (EuroMed 2010)},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2010},
  series = {Lecture Notes in Computer Science (LNCS), 6436},
  doi = {http://dx.doi.org/10.1007/978-3-642-16873-4}
}
Ioannides, M., Fellner, D.W., Georgopoulos, A. & Hadjimitsis, D.G. (ed.) (2010), "Digital Heritage", Vol.6436, Springer.
BibTeX:
@proceedings{Ioannides*10lncs,
  editor = {Marinos Ioannides and Dieter W. Fellner and Andreas Georgopoulos and Diofantos G. Hadjimitsis},
  title = {Digital Heritage},
  publisher = {Springer},
  year = {2010},
  volume = {6436},
  series = {Lecture Notes in Computer Science}
}
Jung, Y., Webel, S., Olbrich, M., Drevensek, T., Franke, T., Roth, M. & Fellner, D.W., (2010), "Interactive Textures as Spatial User Interfaces in X3D", Proceedings Web3D 2010, pp.147-150.
Abstract: 3D applications, e.g. in the context of visualization or interactive design review, can require complex user interaction to manipulate certain elements, a typical task which requires standard user interface elements. However, there are still no generalized methods for selecting and manipulating objects in 3D scenes, and 3D GUI elements often fail to gather support for reasons of simplicity, leaving developers to replicate interactive elements themselves. Therefore, we present a set of nodes that introduce different kinds of 2D user interfaces to X3D. We define a base type for these user interfaces called 'InteractiveTexture', which is a 2D texture node implementing slots for input forwarding. From this node we derive several user interface representations to enable complex user interaction suitable for both desktop and immersive interaction.
BibTeX:
@inproceedings{Jung*10web3d,
  author = {Jung, Yvonne and Webel, Sabine and Olbrich, Manuel and Drevensek, Timm and Franke, Tobias and Roth, Marcus and Fellner, Dieter W.},
  title = {Interactive Textures as Spatial User Interfaces in X3D},
  booktitle = {Proceedings Web3D 2010},
  year = {2010},
  pages = {147-150},
  doi = {http://dx.doi.org/10.1145/1836049.1836071}
}
Jung, Y., Wagner, S., Jung, C., Behr, J. & Fellner, D.W., (2010), "Storyboarding and Pre-Visualization with X3D", Proceedings Web3D 2010, pp.73-81.
Abstract: This paper presents methods based on the open standard X3D to rapidly describe life-like characters and other scene elements in the context of storyboarding and pre-visualization. Current frameworks that employ virtual agents often rely on non-standardized pipelines and lack functionality to describe lighting, camera staging or character behavior in a descriptive and simple manner. Even though demand for such a system is high, ranging from edutainment to pre-visualization in the movie industry, few such systems exist. To this end, we present the ANSWER framework, which provides a set of interconnected components that aid a film director in the process of film production from the planning stage to post-production. Rich and intuitive user interfaces are used for scene authoring, and the underlying knowledge model is populated using semantic web technologies over which reasoning is applied. This transforms the user input into animated pre-visualizations that enable a director to experience and understand certain film making decisions before production begins. In this context we also propose some extensions to the current X3D standard for describing cinematic contents.
BibTeX:
@inproceedings{Jung*10web3d-2,
  author = {Jung, Yvonne and Wagner, Sebastian and Jung, Christoph and Behr, Johannes and Fellner, Dieter W.},
  title = {Storyboarding and Pre-Visualization with X3D},
  booktitle = {Proceedings Web3D 2010},
  year = {2010},
  pages = {73-81},
  doi = {http://dx.doi.org/10.1145/1836049.1836060}
}
Kahn, S., Wuest, H., Stricker, D. & Fellner, D.W., (2010), "3D Discrepancy Check via Augmented Reality", 9th IEEE International Symposium on Mixed and Augmented Reality 2010, pp.241-242, IEEE Computer Society, Los Alamitos, Calif..
Abstract: For many tasks like markerless model-based camera tracking it is essential that the 3D model of a scene accurately represents the real geometry of the scene. It is therefore very important to detect deviations between a 3D model and a scene. We present an innovative approach which is based on the insight that camera tracking can not only be used for Augmented Reality visualization but also to solve the correspondence problem between 3D measurements of a real scene and their corresponding positions in the 3D model. We combine a time-of-flight camera (which acquires depth images in real time) with a custom 2D camera (used for the camera tracking) and developed an analysis-by-synthesis approach to detect deviations between a scene and a 3D model of the scene.
BibTeX:
@inproceedings{Kahn*10ismar,
  author = {Kahn, Svenja and Wuest, Harald and Stricker, Didier and Fellner, Dieter W.},
  title = {3D Discrepancy Check via Augmented Reality},
  booktitle = {9th IEEE International Symposium on Mixed and Augmented Reality 2010},
  publisher = {IEEE Computer Society, Los Alamitos, Calif.},
  year = {2010},
  pages = {241-242},
  doi = {http://dx.doi.org/10.1109/ISMAR.2010.5643587}
}
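The discrepancy check reduces, per pixel, to comparing measured depth against depth synthesized from the model at the tracked camera pose. A minimal numpy sketch with an arbitrary threshold; not the authors' pipeline:

import numpy as np

def discrepancy_map(measured_depth, synthetic_depth, threshold=0.05):
    """Analysis-by-synthesis sketch: compare a depth image measured by a
    time-of-flight camera with a depth image rendered from the 3D model
    at the tracked camera pose. Pixels whose difference exceeds the
    threshold (in meters) are flagged as deviations."""
    diff = np.abs(measured_depth - synthetic_depth)
    valid = np.isfinite(measured_depth) & np.isfinite(synthetic_depth)
    return valid & (diff > threshold)

measured = np.array([[1.00, 1.02], [0.80, 2.50]])
rendered = np.array([[1.01, 1.00], [0.82, 1.20]])
print(discrepancy_map(measured, rendered))  # only the 2.50 vs 1.20 pixel deviates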
Kahn, S., Wuest, H. & Fellner, D., (2010), "Time-of-Flight Based Scene Reconstruction with a Mesh Processing Tool for Model Based Camera Tracking", Proceedings VISIGRAPP 2010; International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, pp.302-309, INSTICC Press.
Abstract: The most challenging algorithmic task for markerless Augmented Reality applications is the robust estimation of the camera pose. With a given 3D model of a scene, the camera pose can be estimated via model-based camera tracking without the need to manipulate the scene with fiducial markers. Up to now, the bottleneck of model-based camera tracking has been the availability of such a 3D model. Recently, time-of-flight cameras were developed which acquire depth images in real time. With a sensor-fusion approach combining the color data of a 2D color camera and the 3D measurements of a time-of-flight camera, we acquire a textured 3D model of a scene. We propose a semi-manual reconstruction step in which the alignment of several submeshes with a mesh processing tool is supervised by the user to ensure a correct alignment. The evaluation of our approach shows its applicability for reconstructing a 3D model which is suitable for model-based camera tracking, even for objects which are difficult to measure reliably with a time-of-flight camera due to their demanding surface characteristics.
BibTeX:
@inproceedings{Kahn*10visapp,
  author = {S. Kahn and H. Wuest and D. Fellner},
  title = {Time-of-Flight Based Scene Reconstruction with a Mesh Processing Tool for Model Based Camera Tracking},
  booktitle = {Proceedings VISIGRAPP 2010; International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications},
  publisher = {INSTICC Press},
  year = {2010},
  pages = {302-309}
}
Krispel, U., Havemann, S. & Fellner, D.W., (2010), "FaMoS -- A visual editor for hierarchical volumetric modeling", Tagungsband 05. Kongress Multimediatechnik Wismar, pp.1-6.
Abstract: Shape grammar systems are a suitable method for modeling hierarchically structured 3D objects. A great variety of similar objects can be modeled using only a small set of rules. We present a prototypical graphical user interface for hierarchical volumetric modeling using the split grammar approach. Our focus is on creating split rules interactively rather than through scripting. Furthermore, we extend the concept of subdividing boxes to a more general representation and evaluate it in the context of generating 3D facades of complex buildings.
BibTeX:
@inproceedings{Krispel*10wismar,
  author = {Krispel, Ulrich and Havemann, Sven and Fellner, Dieter W.},
  title = {FaMoS -- A visual editor for hierarchical volumetric modeling},
  booktitle = {Tagungsband 05. Kongress Multimediatechnik Wismar},
  year = {2010},
  pages = {1-6}
}
Landesberger, T., Kuijper, A., Schreck, T., Kohlhammer, J., van Wijk, J., Fekete, J.-D. & Fellner, D., Hauser, H. & Reinhard, E. (ed.) (2010), "Visual Analysis of Large Graphs", Eurographics 2010. State of the Art Reports (STARs), pp.113-136, Eurographics.
Abstract: The analysis of large graphs plays a prominent role in various fields of research and is relevant in many important application areas. Effective visual analysis of graphs requires appropriate visual presentations in combination with respective user interaction facilities and algorithmic graph analysis methods. How to design appropriate graph analysis systems depends on many factors, including the type of graph describing the data, the analytical task at hand, and the applicability of graph analysis methods. The most recent surveys of graph visualization and navigation techniques were presented by Herman et al. [HMM00] and Diaz [DPS02]. The first work surveyed the main techniques for visualization of hierarchies and graphs in general that had been introduced until 2000. The second work concentrated on graph layouts introduced until 2002. Recently, new techniques have been developed covering a broader range of graph types, such as time-varying graphs. Also, in accordance with ever-growing amounts of graph-structured data becoming available, the inclusion of algorithmic graph analysis and interaction techniques becomes increasingly important. In this State-of-the-Art Report, we survey available techniques for the visual analysis of large graphs. Our review first considers graph visualization techniques according to the type of graphs supported. The visualization techniques form the basis for the presentation of interaction approaches suitable for visual graph exploration. As an important component of visual graph analysis, we discuss various graph algorithmic aspects useful for the different stages of the visual graph analysis process.
BibTeX:
@inproceedings{Landesberger*10EG,
  author = {T. Landesberger and A. Kuijper and T. Schreck and J. Kohlhammer and J. van Wijk and J.-D. Fekete and D. Fellner},
  editor = {H. Hauser and E. Reinhard},
  title = {Visual Analysis of Large Graphs},
  booktitle = {Eurographics 2010. State of the Art Reports (STARs)},
  publisher = {Eurographics},
  year = {2010},
  pages = {113-136}
}
Nazemi, K., Breyer, M., Stab, C., Burkhardt, D. & Fellner, D., Chova, C.G., Belenguer, D.M. & Torres, I.C. (ed.) (2010), "Intelligent Exploration System -- an Approach for User-Centered Exploratory Learning", Proc. International Conference on Education and New Learning Technologies [CD-ROM], pp.006476-006484, IATED.
Abstract: The following paper describes the conceptual design of an Intelligent Exploration System (IES) that offers a user-adapted graphical environment for web-based knowledge repositories to support and optimize explorative learning. The paper starts with a short definition of learning by exploring and introduces Intelligent Tutoring Systems and semantic technologies for developing such an Intelligent Exploration System. The IES itself is described with a short overview of existing learner and user analysis methods, visualization techniques for exploring knowledge with semantic technology, and an explanation of the characteristics of adaptation that offer a more efficient learning environment.
BibTeX:
@inproceedings{Nazemi*10edulearn,
  author = {K. Nazemi and M. Breyer and C. Stab and D. Burkhardt and D. Fellner},
  editor = {C. Gomez Chova and D. Marti Belenguer and I. Candel Torres},
  title = {Intelligent Exploration System -- an Approach for User-Centered Exploratory Learning},
  booktitle = {Proc. International Conference on Education and New Learning Technologies [CD-ROM]},
  publisher = {IATED},
  year = {2010},
  pages = {006476-006484}
}
Nazemi, K., Stab, C. & Fellner, D.W., (2010), "Interaction Analysis for Adaptive User Interfaces", Advanced Intelligent Computing Theories and Applications, pp.362-371.
Abstract: Adaptive user interfaces are able to facilitate the handling of computer systems through automatic adaptation to users' needs and preferences. For the realization of these systems, information about the individual user is needed. This user information can be extracted from user events by applying analytical methods, without active information input by the user. In this paper we introduce a reusable interaction analysis system based on probabilistic methods that predicts user interactions, recognizes user activities and detects user preferences on different levels of abstraction. The evaluation reveals that the prediction quality of the developed algorithm outperforms that of other established prediction methods.
BibTeX:
@inproceedings{Nazemi*10icic,
  author = {Nazemi, Kawa and Stab, Christian and Fellner, Dieter~W.},
  title = {Interaction Analysis for Adaptive User Interfaces},
  booktitle = {Advanced Intelligent Computing Theories and Applications},
  year = {2010},
  pages = {362-371},
  doi = {http://dx.doi.org/10.1007/978-3-642-14922-1_45}
}
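Interaction prediction of this kind can be illustrated with a first-order Markov sketch over event sequences. This shows only the general task; the paper's probabilistic algorithm is more elaborate:

from collections import defaultdict, Counter

class InteractionPredictor:
    """Count which interaction event follows which, then predict the
    most likely next event given the last one -- a minimal stand-in
    for the probabilistic user analysis the paper describes."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, events):
        for prev, nxt in zip(events, events[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, last_event):
        following = self.transitions[last_event]
        return following.most_common(1)[0][0] if following else None

p = InteractionPredictor()
p.observe(["open", "zoom", "annotate", "save", "open", "zoom", "pan"])
print(p.predict("open"))   # -> 'zoom'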
Nazemi, K., Stab, C. & Fellner, D.W., (2010), "Interaction Analysis: An Algorithm for Interaction Prediction and Activity Recognition in Adaptive Systems", Proceedings IEEE International Conference on Intelligent Computing and Intelligent Systems, pp.607-612.
Abstract: Predictive statistical models are used in the area of adaptive user interfaces to model user behavior and to infer user information from interaction events in an implicit and non-intrusive way. This information constitutes the basis for tailoring the user interface to the needs of the individual user. Consequently, the user analysis process should model the user with information which can be used in various systems to recognize user activities, intentions and roles, in order to accomplish an adequate adaptation to the given user and his current task. In this paper we present the improved prediction algorithm KO*/19 which, besides predicting interactions, is able to recognize behavioral patterns for identifying user activities. By means of this extension, the evaluation shows that the KO*/19 algorithm improves the Mean Prediction Rank by more than 19% compared to other well-established prediction algorithms.
BibTeX:
@inproceedings{Nazemi*10icis,
  author = {Nazemi, Kawa and Stab, Christian and Fellner, Dieter W.},
  title = {Interaction Analysis: An Algorithm for Interaction Prediction and Activity Recognition in Adaptive Systems},
  booktitle = {Proceedings IEEE International Conference on Intelligent Computing and Intelligent Systems},
  year = {2010},
  pages = {607-612},
  doi = {http://dx.doi.org/10.1109/ICICISYS.2010.5658514}
}
Nazemi, K., Burkhardt, D., Breyer, M., Stab, C. & Fellner, D.W., (2010), "Semantic Visualization Cockpit: Adaptable Composition of Semantics-Visualization Techniques for Knowledge-Exploration", ICL 2010 Proceedings, International Conference Interactive Computer Aided Learning Academic and Corporate E-Learning in a Global Context, pp.163-173.
Abstract: Semantic Web and ontology-based information processing systems are established technologies and techniques well beyond research areas and institutions. Various worldwide projects and enterprises have already identified the added value of semantic technologies and work on different sub-topics for gathering and conveying knowledge. While the process of gathering and structuring semantic information plays a key role in most developed applications, the process of transferring knowledge to, and its adoption by, humans is neglected, although the complex structure of knowledge design opens many research questions. The following paper describes a new approach for visualizing semantic information as a composition of different adaptable ontology-visualization techniques. We start with a categorized description of existing ontology visualization techniques and show potential gaps. After that, the new approach is described along with its added value over existing systems. A case study within the largest German program for semantic information processing shows the usage of the system in real scenarios.
BibTeX:
@inproceedings{Nazemi*10icl,
  author = {Nazemi, Kawa and Burkhardt, Dirk and Breyer, Matthias and Stab, Christian and Fellner, Dieter W.},
  title = {Semantic Visualization Cockpit: Adaptable Composition of Semantics-Visualization Techniques for Knowledge-Exploration},
  booktitle = {ICL 2010 Proceedings, International Conference Interactive Computer Aided Learning Academic and Corporate E-Learning in a Global Context},
  year = {2010},
  pages = {163-173}
}
Nazemi, K., Breyer, M., Burkhardt, D. & Fellner, D.W., (2010), "Visualization Cockpit: Orchestration of Multiple Visualizations for Knowledge-Exploration", International Journal of Advanced Corporate Learning, Vol.3(4), pp.26-34.
Abstract: Semantic Web technologies and ontology-based information processing systems are established techniques well beyond research areas and institutions. Various worldwide projects and enterprises have already identified the added value of semantic technologies and work on different sub-topics for gathering and conveying knowledge. While the process of gathering and structuring semantic information plays a key role in most developed applications, the process of transferring knowledge to, and its adoption by, humans is neglected, although the complex structure of knowledge design opens many research questions. The customization of the presentation itself and of the interaction techniques with these presentation artifacts is a key question for gainful and effective work with semantic information. The following paper describes a new approach for visualizing semantic information as a composition of different adaptable ontology-visualization techniques. We start with a categorized description of existing ontology visualization techniques and show potential gaps.
BibTeX:
@article{Nazemi*10jacl,
  author = {Nazemi, Kawa and Breyer, Matthias and Burkhardt, Dirk and Fellner, Dieter W.},
  title = {Visualization Cockpit: Orchestration of Multiple Visualizations for Knowledge-Exploration},
  journal = {International Journal of Advanced Corporate Learning},
  year = {2010},
  volume = {3},
  number = {4},
  pages = {26-34},
  doi = {http://dx.doi.org/10.3991/ijac.v3i4.1473}
}
Omer, I., Bak, P. & Schreck, T., (2010), "Using space-time visual analytic methods for exploring the dynamics of ethnic groups' residential patterns", Taylor & Francis International Journal of Geographical Information Science, Vol.24(10), pp.1481-1496.
Abstract: In this article, we present a methodological framework, based on georeferenced house-level socio-demographic and infrastructure data, for investigating minority (or ethnic) group residential pattern dynamics in cities. This methodology, which uses visual analytical tools, is meant to help researchers examine how local land-use configurations shape minorities' residential dynamics and, thereby, affect the level of minority-majority segregation. This methodology responds to the need to refer to the relationship between local land-use configurations and the identity of a building's residents, without simultaneously revealing sensitive house-related details. The research was instantiated on the residential patterns exhibited by the Arab community in Jaffa, Israel. The residential data were collected over 40 years at four different moments, each associated with the population and housing censuses conducted by Israel's Central Bureau of Statistics and the Ministry of the Interior. Using this methodology enabled us to remain on the level of the individual building when identifying the relationships between spatial land-use configurations and rates of change in ethnic composition and the Arab community's residence pattern dynamics at different geographical scales. It likewise allowed us to identify the qualitative changes in the population's residential preferences during the pattern's development.
BibTeX:
@article{Omer*10ijgis,
  author = {I. Omer and P. Bak and T. Schreck},
  title = {Using space-time visual analytic methods for exploring the dynamics of ethnic groups' residential patterns},
  journal = {Taylor & Francis International Journal of Geographical Information Science},
  year = {2010},
  volume = {24},
  number = {10},
  pages = {1481--1496},
  doi = {http://dx.doi.org/10.1080/13658816.2010.513982}
}
Pan, X., Beckmann, P., Havemann, S., Tzompanaki, K., Doerr, M. & Fellner, D.W., (2010), "A Distributed Object Repository for Cultural Heritage", VAST 2010, pp.105-114.
Abstract: This paper describes the design and the implementation of a distributed object repository that offers cultural heritage experts and practitioners a working platform to access, use, share and modify digital content. The principle of collecting paradata to document each step in a potentially long sequence of processing steps implies a number of design decisions for the data repository, which are described and explained. Furthermore, we provide a description of the concise API of our implementation. Our intention is to provide an easy-to-understand recipe that may also be valuable for other data repository implementations that incorporate and operationalize the more theoretical concepts of intellectual transparency, collecting paradata, and compatibility with semantic networks.
BibTeX:
@conference{Pan*10vast,
  author = {Pan, Xueming and Beckmann, Philipp and Havemann, Sven and Tzompanaki, Katerina and Doerr, Martin and Fellner, Dieter W.},
  title = {A Distributed Object Repository for Cultural Heritage},
  booktitle = {VAST 2010},
  year = {2010},
  pages = {105-114},
  doi = {http://dx.doi.org/10.2312/VAST/VAST10/105-114}
}
Peña Serna, S., Stork, A. & Fellner, D.W., (2010), "Tetrahedral Mesh-Based Embodiment Design", ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, pp.10.
Abstract: Engineering design is a systematic approach within the product development process, which is composed of several phases and supported by different tools. Computer-Aided Design (CAD) and Computer-Aided Engineering (CAE) tools are particularly dedicated to the embodiment phase; they enable engineers to design and analyze a potential solution. Nonetheless, the lack of integration between CAD and CAE restricts the exploration of design variations. Hence, we aim at incorporating functionalities of a CAD system within a CAE environment by building a high-level representation of the mesh and allowing the engineer to handle and manipulate semantic features, avoiding the direct manipulation of single elements. Thus, the engineer will be able to perform extruding, rounding or dragging operations regardless of geometrical and topological limitations. We present in this paper the intelligence that a simulation mesh needs to support in order to enable such operations.
BibTeX:
@inproceedings{Pena*10asme,
  author = {Peña Serna, Sebastian and Stork, André and Fellner, Dieter W.},
  title = {Tetrahedral Mesh-Based Embodiment Design},
  booktitle = {ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference},
  year = {2010},
  pages = {10}
}
Peña Serna, S., Stork, A. & Fellner, D.W., (2010), "Embodiment Mesh Processing", Research in Interactive Design. Volume 3, pp.6, Springer, Paris, Berlin, Heidelberg.
Abstract: During the last two decades, several approaches have been proposed to deal with integration in the embodiment phase of engineering design. This phase deals with virtual product development and is supported by Computer-Aided Design (CAD) and Computer-Aided Engineering (CAE). Nonetheless, this integration has not really been achieved. There is well-established communication from design to analysis, but there is a lack of design operations and functionalities within an analysis environment. This lack of integration will persist as long as different representation schemes are used for design and analysis. Hence, Embodiment Mesh Processing (EMP) is based on a common mesh representation and aims to provide mesh-based modeling functionalities within an analysis environment. We present our reasoning behind EMP and the building blocks needed for enabling a fully integrated design-analysis interaction loop and the exploration of design variations.
BibTeX:
@inproceedings{Pena*10idmme,
  author = {Peña Serna, Sebastian and Stork, André and Fellner, Dieter W.},
  title = {Embodiment Mesh Processing},
  booktitle = {Research in Interactive Design. Volume 3},
  publisher = {Springer, Paris, Berlin, Heidelberg},
  year = {2010},
  pages = {6}
}
Ruppert, T., May, T., Kohlhammer, J. & Schreck, T., (2010), "Allgemeinbildung in Deutschland - Erkenntnisse aus dem SPIEGEL-Studentenpisa-Test", pp.87-104, VS Verlag für Sozialwissenschaften (Springer Fachmedien).
Abstract: When scientists analyze data (e.g., from the SPIEGEL Studentenpisa test), they commonly formulate hypotheses and then test them. This contribution presents a different procedure: an exploratory approach that makes it possible to find hidden relationships in large and complex data collections. To this end, the data are examined by means of interactive graphical representations, without formulating a specific research question beforehand. The contribution explains this approach and introduces three techniques developed for this purpose at the Fraunhofer Institute for Computer Graphics Research in Darmstadt and at TU Darmstadt. With these and comparable techniques, the multitude of possible statements about an unknown dataset is reduced to those that are worth investigating more closely.
BibTeX:
@inbook{Ruppert*10,
  author = {T. Ruppert and T. May and J. Kohlhammer and T. Schreck},
  title = {Allgemeinbildung in Deutschland - Erkenntnisse aus dem SPIEGEL-Studentenpisa-Test},
  publisher = {VS Verlag für Sozialwissenschaften (Springer Fachmedien)},
  year = {2010},
  pages = {87--104},
  note = {Editors: S. Trepte and M. Verbeet}
}
Scherer, M., Walter, M. & Schreck, T., (2010), "Histograms of Oriented Gradients for 3D Model Retrieval", International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, pp.41-48, University of West Bohemia, Plzen.
Abstract: 3D object retrieval has received much research attention during the last years. To automatically determine the similarity between 3D objects, the global descriptor approach is very popular, and many competing methods for extracting global descriptors have been proposed to date. However, no single descriptor has yet shown to outperform all other descriptors on all retrieval benchmarks or benchmark classes. Instead, combinations of different descriptors usually yield improved performance over any single method. Therefore, enhancing the set of candidate descriptors is an important prerequisite for implementing effective 3D object retrieval systems. Inspired by promising recent results from image processing, in this paper we adapt the Histogram of Oriented Gradients (HOG) 2D image descriptor to the 3D domain. We introduce a concept for transferring the HOG descriptor extraction algorithm from 2D to 3D. We provide an implementation framework for extracting 3D HOG features from 3D mesh models, and present a systematic experimental evaluation of the retrieval effectiveness of this novel 3D descriptor. The results show that our 3D HOG implementation provides competitive retrieval performance, and is able to boost the performance of one of the best existing 3D object descriptors when used in a combined descriptor.
BibTeX:
@inproceedings{Scherer*10wscg,
  author = {M. Scherer and M. Walter and T. Schreck},
  title = {Histograms of Oriented Gradients for 3D Model Retrieval},
  booktitle = {International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision},
  publisher = {University of West Bohemia, Plzen},
  year = {2010},
  pages = {41--48}
}
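The flavor of a HOG-style 3D descriptor (binning local orientations into a normalized histogram) can be sketched on face normals. A loose illustration only; the paper's 3D HOG is defined differently in detail:

import numpy as np

def normal_orientation_histogram(vertices, faces, bins=8):
    """Loose sketch of a HOG-style 3D descriptor: bin the orientations of
    face normals into a histogram (azimuth bins only, for brevity) and
    normalize. The paper's 3D HOG extraction differs in detail."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces)
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    azimuth = np.arctan2(n[:, 1], n[:, 0])          # in [-pi, pi)
    hist, _ = np.histogram(azimuth, bins=bins, range=(-np.pi, np.pi))
    return hist / hist.sum()                        # normalized descriptor

# A single triangle in the xy-plane: all normals point along +z.
print(normal_orientation_histogram([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                                   [(0, 1, 2)]))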
Schiefer, A., Berndt, R., Settgast, V., Ullrich, T. & Fellner, D.W., (2010), "Service-oriented scene graph manipulation", Proceedings Web3D 2010, Proceedings of the 15th International Conference on Web 3D Technology, pp.55-62.
Abstract: In this paper we present a software architecture for the integration of a RESTful web service interface in OpenSG applications. The proposed architecture can be integrated into any OpenSG application with minimal changes to the sources. Extending a scene graph application with a web service interface offers many new possibilities. Without much effort it is possible to review and control the scene and its components using a web browser. New ways of (browser based) user interactions can be added on all kinds of web enabled devices. As an example we present the integration of 'SweetHome3D' into an existing virtual reality setup.
BibTeX:
@inproceedings{Schiefer*10web3d,
  author = {Andreas Schiefer and René Berndt and Volker Settgast and Torsten Ullrich and Dieter W.~Fellner},
  title = {Service-oriented scene graph manipulation},
  booktitle = {Proceedings Web3D 2010, Proceedings of the 15th International Conference on Web 3D Technology},
  year = {2010},
  pages = {55--62},
  doi = {http://dx.doi.org/10.1145/1836049.1836057}
}
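The RESTful style of scene manipulation described above can be sketched with Python's standard library. A toy server, assuming a dictionary in place of a live OpenSG scene graph; the resource layout is invented:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy scene graph: node name -> transform parameters. A real OpenSG
# integration would reflect live scene-graph nodes instead.
scene = {"lamp": {"translation": [0.0, 1.0, 0.0]}}

class SceneHandler(BaseHTTPRequestHandler):
    """Minimal RESTful sketch: GET reads a node's state, PUT replaces
    it -- the style of web-service scene manipulation the paper
    describes, not its actual API."""

    def do_GET(self):
        node = self.path.strip("/")
        body = json.dumps(scene.get(node, {})).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_PUT(self):
        node = self.path.strip("/")
        length = int(self.headers["Content-Length"])
        scene[node] = json.loads(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SceneHandler).serve_forever()

A browser or any web-enabled device could then read or change a node, e.g. with curl http://localhost:8000/lamp.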
Schiffer, T., Schiefer, A., Berndt, R., Ullrich, T., Settgast, V. & Fellner, D.W., (2010), "Enlightened by the Web -- A service-oriented architecture for real-time photorealistic rendering ", Tagungsband 05. Kongress Multimediatechnik Wismar, pp.1-8.
BibTeX:
@inproceedings{Schiffer*10wismar,
  author = {Schiffer, Thomas and Schiefer, Andreas and Berndt, René and Ullrich, Torsten and Settgast, Volker and Fellner, Dieter W.},
  title = {Enlightened by the Web -- A service-oriented architecture for real-time photorealistic rendering },
  booktitle = {Tagungsband 05. Kongress Multimediatechnik Wismar},
  year = {2010},
  pages = {1-8}
}
Schinko, C., Strobl, M., Ullrich, T. & Fellner, D.W., (2010), "Modeling Procedural Knowledge: A Generative Modeler for Cultural Heritage", Digital Heritage. Third International Conference, EuroMed 2010, pp.153-165.
Abstract: Within the last few years generative modeling techniques have gained attention, especially in the context of cultural heritage. As a generative model describes an idealized object rather than a real one, generative techniques are a basis for object description and classification. This procedural knowledge differs from other kinds of knowledge, such as declarative knowledge, in a significant way: it can be applied to a task. This similarity to algorithms is reflected in the way generative models are designed: they are programmed. In order to make generative modeling accessible to cultural heritage experts, we created a generative modeling framework which accounts for their special needs. The result is a generative modeler (http://www.cgv.tugraz.at/euclides) based on an easy-to-use scripting language (JavaScript). The generative model meets the demands of documentation standards and fulfils sustainability conditions. Its integrated meta-modeler approach makes it independent of hardware, software and platforms.
BibTeX:
@inproceedings{Schinko*10euromed,
  author = {Schinko, Christoph and Strobl, Martin and Ullrich, Torsten and Fellner, Dieter W.},
  title = {Modeling Procedural Knowledge: A Generative Modeler for Cultural Heritage},
  booktitle = {Digital Heritage. Third International Conference, EuroMed 2010},
  year = {2010},
  pages = {153-165},
  doi = {http://dx.doi.org/10.1007/978-3-642-16873-4_12}
}
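The notion of procedural knowledge as an executable description can be illustrated with a parametric generator. A Python sketch (the Euclides modeler itself uses JavaScript); the shape and parameters are arbitrary:

import math

def column(radius=0.5, height=4.0, flutes=16):
    """Procedural knowledge as a program: a parametric column generator.
    Re-running it with other parameters yields the whole family of
    shapes the parameters span -- the core idea of generative modeling."""
    profile = [(radius * (1 + 0.05 * math.cos(flutes * t)) * math.cos(t),
                radius * (1 + 0.05 * math.cos(flutes * t)) * math.sin(t))
               for t in [2 * math.pi * i / 64 for i in range(64)]]
    # Extrude the fluted profile from z=0 to z=height.
    return [(x, y, z) for z in (0.0, height) for x, y in profile]

print(len(column()), "vertices")   # the code itself documents the shape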
Schreck, T., von Landesberger, T. & Bremm, S., (2010), "Techniques for Precision-Based Visual Analysis of Projected Data", Palgrave MacMillan Information Visualization, Vol.9(3), pp.181-193.
Abstract: The analysis of high-dimensional data is an important, yet inherently difficult problem. Projection techniques such as Principal Component Analysis, Multi-dimensional Scaling and Self-Organizing Map can be used to map high-dimensional data to 2D display space. However, projections typically incur a loss in information. Often, uncertainty exists regarding the precision of the projection as compared with its original data characteristics. While the output quality of these projection techniques can be discussed in terms of aggregate numeric error values, visualization is often helpful for better understanding the projection results. We address the visual assessment of projection precision by an approach integrating an appropriately designed projection precision measure directly into the projection visualization. To this end, a flexible projection precision measure is defined that allows the user to balance the degree of locality at which the measure is evaluated. Several visual mappings are designed for integrating the precision measure into the projection visualization at various levels of abstraction. The techniques are implemented in an interactive system, including methods supporting the user in finding appropriate settings of relevant parameters. We demonstrate the usefulness of the approach for visual analysis of classified and unclassified high-dimensional data sets. We show how our interactive precision quality visualization system helps to examine the preservation of original data properties in projected space.
BibTeX:
@article{Schreck*10ivs,
  author = {T. Schreck and T. von Landesberger and S. Bremm},
  title = {Techniques for Precision-Based Visual Analysis of Projected Data},
  journal = {Palgrave MacMillan Information Visualization},
  year = {2010},
  volume = {9},
  number = {3},
  pages = {181--193},
  doi = {http://dx.doi.org/10.1057/ivs.2010.2}
}
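A projection precision measure can be illustrated by checking how well k-nearest neighborhoods survive the projection. A sketch of the general idea, not the paper's locality-adjustable measure:

import numpy as np

def local_precision(X_high, X_2d, k=5):
    """For each point, the fraction of its k nearest neighbors in the
    original space that are also among its k nearest neighbors in the
    2D projection (1.0 = neighborhood fully preserved)."""
    def knn(X):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return np.argsort(d, axis=1)[:, :k]
    hi, lo = knn(np.asarray(X_high)), knn(np.asarray(X_2d))
    return np.array([len(set(a) & set(b)) / k for a, b in zip(hi, lo)])

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
P = X[:, :2]                     # naive projection: keep two coordinates
print(local_precision(X, P).mean())

In a visualization, such per-point values could be mapped to color to show where the projection is trustworthy, which is the spirit of the integration the paper proposes.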
Schreck, T., von Landesberger, T. & Bremm, S., (2010), "Techniques for Precision-Based Visual Analysis of Projected Data", IS&T/SPIE Conference on Visualization and Data Analysis, pp.7500E.1-7500E.12, SPIE Press.
Abstract: The analysis of high-dimensional data is an important, yet inherently difficult problem. Projection techniques such as Principal Component Analysis, Multi-dimensional Scaling and Self-Organizing Map can be used to map high-dimensional data to 2D display space. However, projections typically incur a loss in information. Often, uncertainty exists regarding the precision of the projection as compared with its original data characteristics. While the output quality of these projection techniques can be discussed in terms of aggregate numeric error values, visualization is often helpful for better understanding the projection results. We address the visual assessment of projection precision by an approach integrating an appropriately designed projection precision measure directly into the projection visualization. To this end, a flexible projection precision measure is defined that allows the user to balance the degree of locality at which the measure is evaluated. Several visual mappings are designed for integrating the precision measure into the projection visualization at various levels of abstraction. The techniques are implemented in an interactive system, including methods supporting the user in finding appropriate settings of relevant parameters. We demonstrate the usefulness of the approach for visual analysis of classified and unclassified high-dimensional data sets. We show how our interactive precision quality visualization system helps to examine the preservation of original data properties in projected space.
BibTeX:
@inproceedings{Schreck*10vda,
  author = {T. Schreck and T. von Landesberger and S. Bremm},
  title = {Techniques for Precision-Based Visual Analysis of Projected Data},
  booktitle = {IS&T/SPIE Conference on Visualization and Data Analysis},
  publisher = {SPIE Press},
  year = {2010},
  pages = {7500E.1--7500E.12},
  doi = {http://dx.doi.org/10.1057/ivs.2010.2}
}
Schreck, T., (2010), "Self-Organizing Maps", pp.83-96, Intech.
Abstract: Based on the Self-Organizing Map (SOM) algorithm, development of effective solutions for visual analysis and retrieval in complex data is possible. Example application domains include retrieval in multimedia data bases, and analysis in financial, text, and general high-dimensional data sets. While early work defined basic concepts for data representation and visual mappings for SOM-based analysis, recent work contributed advanced visual representations of the output of the SOM algorithm, and explored innovative application concepts. In this article, we review a selection of classic and more recent approaches to SOM-based visual analysis. We argue that important improvements have been achieved which allow effective visual representation and interaction with the output of the SOM algorithm. We identify promising directions for future research, which will support new application areas and provide additional advanced visualization approaches.
BibTeX:
@inbook{Schreck10Intech,
  author = {Tobias Schreck},
  title = {Self-Organizing Maps},
  publisher = {Intech},
  year = {2010},
  pages = {83--96},
  note = {G. Matsopoulos, editor},
  doi = {http://dx.doi.org/10.5772/9171}
}
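For reference, the SOM training loop that such visual-analysis work builds on is compact. A minimal numpy sketch with arbitrary grid size and learning parameters:

import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr=0.5, sigma=2.0):
    """Minimal Self-Organizing Map training loop: for each sample, find
    the best-matching unit (BMU) and pull it and its grid neighborhood
    toward the sample, with decaying learning rate and radius."""
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for x in rng.permutation(data):
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)   # the BMU
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) /
                       (2 * (sigma * decay) ** 2))
            weights += (lr * decay) * g[..., None] * (x - weights)
    return weights

som = train_som(np.random.default_rng(1).random((200, 3)))
print(som.shape)                 # (8, 8, 3): a topology-preserving map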
Schwenk, K., Franke, T., Drevensek, T., Kuijper, A., Bockholt, U. & Fellner, D., Lensch, H. & Seipel, S. (ed.) (2010), "Adapting Precomputed Radiance Transfer to Real-time Spectral Rendering", Eurographics 2010. Short Papers, pp.49-52, Eurographics.
Abstract: Spectral rendering takes the full visible spectrum into account when calculating light-surface interaction and can overcome the well-known deficiencies of rendering with tristimulus color models. We present a variant of the precomputed radiance transfer algorithm that is tailored towards real-time spectral rendering on modern graphics hardware. Our method renders diffuse, self-shadowing objects with spatially varying spectral reflectance properties under distant, dynamic, full-spectral illumination. To achieve real-time frame rates and practical memory requirements we split the light transfer function into an achromatic part that varies per vertex and a wavelength-dependent part that represents a spectral albedo texture map. As an additional optimization, we project reflectance and illuminant spectra into an orthonormal basis. One area of application for our research is virtual design applications that require relighting objects with high color fidelity at interactive framerates.
BibTeX:
@inproceedings{Schwenk*10EG,
  author = {K. Schwenk and T. Franke and T. Drevensek and A. Kuijper and U. Bockholt and D. Fellner},
  editor = {H. Lensch and S. Seipel},
  title = {Adapting Precomputed Radiance Transfer to Real-time Spectral Rendering},
  booktitle = {Eurographics 2010. Short Papers},
  publisher = {Eurographics},
  year = {2010},
  pages = {49-52}
}
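The basis-projection optimization mentioned at the end of the abstract can be illustrated compactly: spectra sampled at many wavelength bins are replaced by a few coefficients with respect to an orthonormal basis derived from the data itself. A minimal sketch in Python/numpy; the toy spectra and the 8-coefficient truncation are assumptions for illustration only:

import numpy as np

# toy spectra: 100 reflectance samples over 40 wavelength bins
rng = np.random.default_rng(1)
spectra = np.clip(rng.normal(0.5, 0.2, (100, 40)), 0, 1)

# orthonormal basis from the right singular vectors of the sample set
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
basis = Vt[:8]                        # keep 8 basis spectra (illustrative)

coeffs = (spectra - mean) @ basis.T   # project: 40 bins -> 8 coefficients
recon = coeffs @ basis + mean         # reconstruct for shading/display

print("max reconstruction error:", np.abs(recon - spectra).max())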
Schwenk, K., Jung, Y., Behr, J. & Fellner, D.W., (2010), "A Modern Declarative Surface Shader for X3D", Proceedings Web3D 2010, pp.7-15.
Abstract: This paper introduces a modern, declarative surface shader for the X3D standard that allows for a compact, expressive, and implementation-independent specification of surface appearance. X3D's Material node is portable, but its feature set has become inadequate over the last years. Explicit shader programs, on the other hand, offer the expressive power to specify advanced shading techniques, but are highly implementation-dependent. The motivation for our proposal is to bridge the gap between these two worlds -- to provide X3D with renderer-independent support for modern materials and to increase interoperability with DCC tools. At the core of our proposal is the CommonSurfaceShader node. This node provides no explicit shader code, only a slim declarative interface consisting of a set of parameters with clearly defined semantics. Implementation details are completely hidden and portability is maximized. It supports diffuse and glossy surface reflection, bump mapping, and perfect specular reflection and refraction. This feature set can capture the appearance of many common materials accurately and is easily mappable to the material descriptions of other software packages and file formats. To verify our claims, we have implemented and analyzed the proposed node in three different rendering pipelines: a renderer based on hardware accelerated rasterization, an interactive ray tracer, and a path tracer.
BibTeX:
@inproceedings{Schwenk*10web3d,
  author = {Schwenk, Karsten and Jung, Yvonne and Behr, Johannes and Fellner, Dieter W.},
  title = {A Modern Declarative Surface Shader for X3D},
  booktitle = {Proceedings Web3D 2010},
  year = {2010},
  pages = {7-15},
  doi = {http://dx.doi.org/10.1145/1836049.1836051}
}
Stab, C., Breyer, M., Nazemi, K., Burkhardt, D., Hofmann, C. & Fellner, D., (2010), "SemaSun: Visualization of Semantic Knowledge Based on an Improved Sunburst Visualization Metaphor", Proceedings of ED-Media 2010; World Conference on Educational Multimedia, Hypermedia & Telecommunications [online], pp.911-919, AACE.
Abstract: Ontologies have become an established data model for conceptualizing knowledge entities and describing semantic relationships between them. They are used to model the concepts of specific domains and are widespread in the areas of the semantic web, digital libraries and multimedia database management. To gain the most possible benefit from this data model, it is important to offer adequate visualizations, so that users can easily acquire the knowledge. Most ontology visualization techniques are based on hierarchical or graph-based visualization metaphors. This may result in information loss, visual clutter, cognitive overload or context loss. In this paper we describe a new ontology visualization technique called SemaSun that is based on the sunburst visualization metaphor. We extended this metaphor, which is naturally designed for displaying hierarchical data, to the tasks of displaying multiple inheritance and semantic relations. The approach also offers incremental ontology exploration to reduce the cognitive load without losing the informational context.
BibTeX:
@inproceedings{Stab*10edmedia,
  author = {C. Stab and M. Breyer and K. Nazemi and D. Burkhardt and C. Hofmann and D. Fellner},
  title = {SemaSun: Visualization of Semantic Knowledge Based on an Improved Sunburst Visualization Metaphor},
  booktitle = {Proceedings of ED-Media 2010; World Conference on Educational Multimedia, Hypermedia & Telecommunications [online]},
  publisher = {AACE},
  year = {2010},
  pages = {911-919}
}
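The sunburst metaphor that SemaSun builds on assigns each node an angular span proportional to the number of leaves below it and a ring according to its depth. A minimal layout sketch follows (plain Python; multiple inheritance and the paper's incremental-exploration extensions are deliberately omitted):

import math

def sunburst_layout(node, start=0.0, end=2 * math.pi, depth=0, out=None):
    """Assign each node an angular span proportional to its leaf count
    and a ring index equal to its depth: the basic sunburst layout."""
    if out is None:
        out = []
    out.append((node["name"], depth, start, end))
    kids = node.get("children", [])
    leaves = [count_leaves(k) for k in kids]
    total = sum(leaves)
    a = start
    for k, l in zip(kids, leaves):
        b = a + (end - start) * l / total
        sunburst_layout(k, a, b, depth + 1, out)
        a = b
    return out

def count_leaves(node):
    kids = node.get("children", [])
    return 1 if not kids else sum(count_leaves(k) for k in kids)

tree = {"name": "Thing", "children": [
    {"name": "Person"},
    {"name": "Work", "children": [{"name": "Book"}, {"name": "Film"}]}]}
for name, depth, a, b in sunburst_layout(tree):
    print(f"{name}: ring {depth}, {math.degrees(a):.0f}-{math.degrees(b):.0f} deg")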
Stab, C., Nazemi, K. & Fellner, D.W., (2010), "SemaTime - Timeline Visualization of Time-Dependent Relations and Semantics", Advances in Visual Computing. 6th International Symposium, ISVC 2010, pp.514-523, Springer, Berlin, Heidelberg, New York.
Abstract: Timeline-based visualizations arrange time-dependent entities along a time axis and are used in many different domains like digital libraries, criminal investigation and medical information systems to support users in understanding chronological structures. By the use of semantic technologies, the information is categorized in a domain-specific, hierarchical schema and specified by semantic relations. Commonly, semantic relations in timeline visualizations are depicted by interconnecting entities with a directed edge. However, it is possible that semantic relations change in the course of time. In this paper we introduce a new timeline visualization for time-dependent semantics called SemaTime that offers a hierarchical categorization of time-dependent entities, including navigation and filtering features. We also present a novel concept for visualizing time-dependent relations that allows the illustration of time-varying semantic relations and affords an easily understandable visualization of complex, time-dependent interrelations.
BibTeX:
@inproceedings{Stab*10lncs,
  author = {Stab, Christian and Nazemi, Kawa and Fellner, Dieter W.},
  title = {SemaTime - Timeline Visualization of Time-Dependent Relations and Semantics},
  booktitle = {Advances in Visual Computing. 6th International Symposium, ISVC 2010},
  publisher = {Springer, Berlin, Heidelberg, New York},
  year = {2010},
  pages = {514-523},
  series = {Lecture Notes in Computer Science (LNCS); 6455}
}
Strobl, M., Schinko, C. & Ullrich, T., (2010), "Euclides -- A JavaScript to PostScript Translator", Proceedings of the International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking (Computation Tools), pp.14-21.
Abstract: Offering easy access to programming languages that are difficult to approach directly dramatically reduces the inhibition threshold. The Generative Modeling Language is such a language and can be described as being similar to Adobe's PostScript. A major drawback of all PostScript dialects is their unintuitive reverse Polish notation, which makes both reading and writing a cumbersome task. A language should offer a structured and intuitive syntax in order to increase efficiency and avoid frustration during the creation of code. To overcome this issue, we present a new approach to translate JavaScript code to GML automatically. While this translation is basically a simple infix-to-postfix notation rewrite for mathematical expressions, the correct translation of control flow structures is a non-trivial task, due to the fact that there is no concept of 'goto' in the PostScript language and its dialects. The main contribution of this work is the complete translation of JavaScript into a PostScript dialect including all control flow statements. To the best of our knowledge, this is the first complete translator.
BibTeX:
@inproceedings{Strobl*10ct,
  author = {Strobl, Martin and Schinko, Christoph and Ullrich, Torsten},
  title = {Euclides -- A JavaScript to PostScript Translator},
  booktitle = {Proceedings of the International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking (Computation Tools)},
  year = {2010},
  pages = {14-21}
}
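The easy half of the translation, rewriting infix expressions to postfix, can be sketched compactly; the paper's actual contribution, translating control flow without 'goto', is not shown here. A toy sketch using Python's ast module (add/sub/mul/div are PostScript operator names; whether the emitted stream matches GML exactly is an assumption):

import ast

OPS = {ast.Add: "add", ast.Sub: "sub", ast.Mult: "mul", ast.Div: "div"}

def to_postfix(node):
    """Emit a PostScript-style postfix token stream for an infix expression."""
    if isinstance(node, ast.Expression):
        return to_postfix(node.body)
    if isinstance(node, ast.BinOp):
        return (to_postfix(node.left) + to_postfix(node.right)
                + [OPS[type(node.op)]])
    if isinstance(node, ast.Constant):
        return [str(node.value)]
    if isinstance(node, ast.Name):
        return [node.id]
    raise NotImplementedError(type(node).__name__)

print(" ".join(to_postfix(ast.parse("a * (b + 3) - c", mode="eval"))))
# -> a b 3 add mul c sub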
Ullrich, T., Settgast, V. & Berndt, R., (2010), "Semantic Enrichment for 3D Documents: Techniques and Open Problems", ELPUB 2010 - Publishing in the networked world: transforming the nature of communication, pp.79-88.
BibTeX:
@inproceedings{Ullrich*10elpub,
  author = {T. Ullrich and V. Settgast and R. Berndt},
  title = {Semantic Enrichment for 3D Documents: Techniques and Open Problems},
  booktitle = {ELPUB 2010 - Publishing in the networked world: transforming the nature of communication},
  year = {2010},
  pages = {79-88}
}
Ullrich, T., Schiefer, A. & Fellner, D.W., (2010), "Modeling with Subdivision Surfaces", Proceedings of the 18th WSCG International Conference on Computer Graphics, Visualization and Computer Vision, pp.1-8.
BibTeX:
@inproceedings{Ullrich*10wscg,
  author = {Ullrich, Torsten and Schiefer, Andreas and Fellner, Dieter W.},
  title = {Modeling with Subdivision Surfaces},
  booktitle = {Proceedings of the 18th WSCG International Conference on Computer Graphics, Visualization and Computer Vision},
  year = {2010},
  pages = {1-8}
}
Ullrich, T., Schinko, C. & Fellner, D.W., Skala, V. (ed.) (2010), "Procedural Modeling in Theory and Practice", Proceedings of the 18th WSCG International Conference on Computer Graphics, Visualization and Computer Vision, pp.5-8.
Abstract: Procedural modeling is a technique to describe 3D objects by a constructive, generative description. In order to tap the full potential of this technique, the content creator needs to be familiar with two worlds -- procedural modeling techniques and computer graphics on the one hand, as well as domain-specific expertise and specialized knowledge on the other. This article presents a JavaScript-based approach to combine both worlds. It describes a modeling tool for generative modeling whose target audience consists of beginners and intermediate learners of procedural modeling techniques. Our approach will be beneficial in various contexts. JavaScript is a widespread, easy-to-use language. With our tool, procedural models can be translated from JavaScript to various generative modeling and rendering systems.
BibTeX:
@inproceedings{Ullrich*10wscg2,
  author = {Ullrich, Torsten and Schinko, Christoph and Fellner, Dieter W.},
  editor = {Vaclav Skala},
  title = {Procedural Modeling in Theory and Practice},
  booktitle = {Proceedings of the 18th WSCG International Conference on Computer Graphics, Visualization and Computer Vision},
  year = {2010},
  pages = {5-8}
}
Wendt, L., Stork, A., Kuijper, A. & Fellner, D., (2010), "3D Reconstruction from Line Drawings", Proceedings VISIGRAPP 2010; International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, pp.65-71, INSTICC Press.
Abstract: In this work we introduce an approach for reconstructing digital 3D models from multiple perspective line drawings. One major goal is to keep the required user interaction simple and at a minimum, while imposing no constraints on the object's shape. Such a system provides a useful extension for the digitization of paper-based styling concepts, which today is still a time-consuming process. In the presented method the line drawings are first decomposed into curves assembling a network of curves. In a second step, the positions of the endpoints of the curves are determined in 3D, using multiple sketches and a virtual camera model given by the user. Then the shapes of the 3D curves between the reconstructed 3D endpoints are inferred. This leads to a network of 3D curves, which can be used for first visual evaluations in 3D. During the whole process only little user interaction is needed, which only takes place in the pre- and post-processing phases. The approach has been applied to multiple sketches, and it is shown that it creates plausible results within reasonable time.
BibTeX:
@inproceedings{Wendt*10visapp,
  author = {L. Wendt and A. Stork and A. Kuijper and D. Fellner},
  title = {3D Reconstruction from Line Drawings},
  booktitle = {Proceedings VISIGRAPP 2010; International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications},
  publisher = {INSTICC Press},
  year = {2010},
  pages = {65-71}
}
Yoon, S., Scherer, M., Schreck, T. & Kuijper, A., (2010), "Sketch-Based 3D Model Retrieval using Diffusion Tensor Fields of Suggestive Contours", ACM Multimedia, pp.193-200, ACM.
Abstract: The number of available 3D models in various areas increases steadily. Effective methods to search for those 3D models by content, rather than textual annotations, are crucial. For this purpose, we propose a new approach for content-based 3D model retrieval by hand-drawn sketch images. This approach to retrieve visually similar mesh models from a large database consists of three major steps: (1) suggestive contour renderings from different viewpoints to compare against the user-drawn sketches; (2) descriptor computation by analyzing diffusion tensor fields of suggestive contour images or the query sketch, respectively; (3) similarity measurement to retrieve the models and the most probable viewpoint from which a model was sketched. Our proposed sketch-based 3D model retrieval system is very robust against variations of shape, pose or partial occlusion of the user-drawn sketches. Experimental results are presented and indicate the effectiveness of our approach for sketch-based 3D model retrieval.
BibTeX:
@inproceedings{Yoon*10mm,
  author = {S. Yoon and M. Scherer and T. Schreck and A. Kuijper},
  title = {Sketch-Based 3D Model Retrieval using Diffusion Tensor Fields of Suggestive Contours},
  booktitle = {ACM Multimedia},
  publisher = {ACM},
  year = {2010},
  pages = {193--200}
}
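As a rough illustration of descriptors computed from tensor fields of contour images, the sketch below smooths a per-pixel gradient structure tensor and histograms the dominant orientations weighted by edge strength. It is a simplified stand-in (Python/numpy/scipy), not the diffusion tensor descriptor of the paper:

import numpy as np
from scipy import ndimage

def orientation_descriptor(img, bins=8, sigma=2.0):
    """Toy contour-image descriptor: smooth the gradient structure tensor,
    take the dominant orientation per pixel, and histogram the
    orientations weighted by edge strength."""
    gy, gx = np.gradient(img.astype(float))
    # tensor components, Gaussian-smoothed
    jxx = ndimage.gaussian_filter(gx * gx, sigma)
    jyy = ndimage.gaussian_filter(gy * gy, sigma)
    jxy = ndimage.gaussian_filter(gx * gy, sigma)
    # dominant orientation and anisotropy of the smoothed tensor
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
    strength = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi / 2, np.pi / 2),
                           weights=strength)
    return hist / (hist.sum() + 1e-12)   # compare sketches vs. renderings

img = np.zeros((64, 64)); img[20:44, 30] = 1.0   # a single vertical stroke
print(orientation_descriptor(img).round(2))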
Zmugg, R., Havemann, S. & Fellner, D.W., (2010), "Towards a Voting Scheme for Calculating Light Source Positions from a given Target Illumination", Eurographics Italian Chapter Conference (EG-IT 2010), pp.41-48.
Abstract: Lighting conditions can make the difference between success and failure of an architectural space. The vision of space-light co-design is that architects can control the impression of an illuminated space already at an early design stage, instead of first designing spaces and then searching for a good lighting setup. As a first step towards this vision we propose a novel method to calculate potential light source positions from a given user-defined target illumination. The method is independent of the tessellation of the scene and assumes a homogeneous diffuse Lambertian material. This allows using a voting system that determines potential positions for standard light sources with chosen size and brightness. Votes are cast from an illuminated surface point to all potential positions of a light source that would yield this illumination. Vote clusters consequently indicate a more probable light source position. With a slight extension the method can also identify mid-air light source positions.
BibTeX:
@inproceedings{Zmugg*10egit,
  author = {Zmugg, René and Havemann, Sven and Fellner, Dieter W.},
  title = {Towards a Voting Scheme for Calculating Light Source Positions from a given Target Illumination},
  booktitle = {Eurographics Italian Chapter Conference (EG-IT 2010)},
  year = {2010},
  pages = {41-48}
}
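The voting idea can be made concrete for a point light over a Lambertian surface: given a target irradiance E at a sample with normal n, every direction d with n.d > 0 admits exactly one light distance r satisfying E = I cos(theta) / r^2, so each sample votes along a hemisphere of candidate positions and vote clusters indicate plausible lights. A minimal sketch (Python/numpy; grid resolution, extent and direction sampling are arbitrary choices):

import numpy as np

def vote_light_positions(samples, intensity=1.0, grid=16, extent=4.0,
                         n_dirs=256, seed=0):
    """Cast votes for point-light positions that would produce the target
    irradiance at each Lambertian surface sample (inverse-square falloff)."""
    rng = np.random.default_rng(seed)
    votes = np.zeros((grid, grid, grid))
    for p, n, e in samples:
        d = rng.normal(size=(n_dirs, 3))
        d /= np.linalg.norm(d, axis=1, keepdims=True)
        cos = d @ n
        d, cos = d[cos > 0], cos[cos > 0]   # lights in the upper hemisphere
        r = np.sqrt(intensity * cos / e)    # from E = I*cos(theta)/r^2
        pos = p + d * r[:, None]
        idx = np.floor((pos + extent / 2) / extent * grid).astype(int)
        ok = np.all((idx >= 0) & (idx < grid), axis=1)
        np.add.at(votes, tuple(idx[ok].T), 1.0)
    return votes

# one sample: point at the origin, normal +z, desired irradiance 0.25
votes = vote_light_positions([(np.zeros(3), np.array([0, 0, 1.0]), 0.25)])
print("strongest cell:", np.unravel_index(votes.argmax(), votes.shape))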

2009

Augsdörfer, U.H., Dodgson, N.A. & Sabin, M.A., (2009), "Removing polar rendering artifacts in subdivision surfaces", Journal of Graphics, GPU, & Game Tools, Vol.14(2), pp.61-76.
Abstract: A polar artifact occurs in subdivision surfaces around high valency vertices. It manifests as large polygons in an otherwise finely subdivided mesh. It is particularly noticeable in subdivision schemes that have been tuned to improve the appearance and behaviors of the limit surface. Using the bounded curvature Catmull-Clark scheme as an example, we describe three practical methods by which this rendering artifact can be removed, thereby allowing us to benefit from the improved character of such tuned schemes.
BibTeX:
@article{Augsdoerfer*09jgt,
  author = {Ursula H. Augsdörfer and Neil A. Dodgson and Malcolm A. Sabin},
  title = {Removing polar rendering artifacts in subdivision surfaces},
  journal = {Journal of Graphics, GPU, & Game Tools},
  year = {2009},
  volume = {14},
  number = {2},
  pages = {61-76},
  doi = {http://dx.doi.org/10.1080/2151237X.2009.10129278}
}
Augsdörfer, U.H., Cashman, T.J., Dodgson, N.A. & Sabin, M.A., (2009), "Numerical checking of C1 for arbitrary degree subdivision schemes", Mathematics of Surfaces XIII, Vol.5654, pp.45-54, Springer.
Abstract: We derive a numerical method to confirm that a subdivision scheme based on quadrilateral meshes is C1 at the extraordinary points. We base our work on Theorem 5.25 in Peters and Reif's book 'Subdivision Surfaces', which expresses C1 continuity as a condition on the derivatives within the characteristic ring around the extraordinary vertex (EV). This note instead identifies a sufficient condition on the control points in the natural configuration from which the conditions of Theorem 5.25 can be established.
BibTeX:
@inproceedings{Augsdoerfer*09lncs,
  author = {Ursula H. Augsdörfer and Thomas J. Cashman and Neil A. Dodgson and Malcolm A. Sabin},
  title = {Numerical checking of C1 for arbitrary degree subdivision schemes},
  booktitle = {Mathematics of Surfaces XIII},
  publisher = {Springer},
  year = {2009},
  volume = {5654},
  pages = {45-54},
  series = {Lecture Notes in Computer Science, LNCS},
  doi = {http://dx.doi.org/10.1007/978-3-642-03596-8_3}
}
Bein, M., Havemann, S., Stork, A. & Fellner, D.W., Grimm, C. & LaViola, J. (ed.) (2009), "Sketching Subdivision Surfaces", Proc. 6th Eurographics Symposium on Sketch-Based Interfaces and Modeling 2009, pp.61-68, ACM SIGGRAPH.
Abstract: We describe a 3D modeling system that combines subdivision surfaces with sketch-based modeling in order to meet two conflicting goals: ease of use and fine-grained shape control. Because of the excellent control it offers, low-poly modeling is still the method of choice for creating high-quality 3D models, e.g., in the games industry. However, direct mesh editing can be very tedious and time-consuming. Our idea is to also include stroke-based techniques for rapidly modeling regular surface parts. We propose a simple and efficient algorithm for converting a 2D stroke to a control polygon suitable for Catmull/Clark subdivision surfaces. We have realized a small but reasonably rich set of interactive modeling tools to assess the expressiveness of stroke-based mesh design with a number of examples.
BibTeX:
@inproceedings{Bein*09sbim,
  author = {M. Bein and S. Havemann and A. Stork and D.~W. Fellner},
  editor = {C. Grimm and J. LaViola},
  title = {Sketching Subdivision Surfaces},
  booktitle = {Proc. 6th Eurographics Symposium on Sketch-Based Interfaces and Modeling 2009},
  publisher = {ACM SIGGRAPH},
  year = {2009},
  pages = {61--68},
  doi = {http://dx.doi.org/10.1145/1572741.1572753}
}
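The core conversion step, reducing a dense input stroke to a coarse control polygon, can be approximated by uniform arc-length resampling; the paper's algorithm is more refined, so the following is only a baseline sketch (Python/numpy; the fixed number of control points is an illustrative assumption):

import numpy as np

def stroke_to_control_polygon(stroke, n_ctrl=8):
    """Resample a dense 2D input stroke to a small control polygon by
    uniform arc length, the kind of reduction needed before using the
    points as a Catmull/Clark control polygon."""
    stroke = np.asarray(stroke, float)
    seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
    s = np.concatenate([[0], np.cumsum(seg)])   # arc length per vertex
    targets = np.linspace(0, s[-1], n_ctrl)
    ctrl = np.empty((n_ctrl, 2))
    for k, t in enumerate(targets):
        i = min(np.searchsorted(s, t, side="right") - 1, len(seg) - 1)
        a = 0.0 if seg[i] == 0 else (t - s[i]) / seg[i]
        ctrl[k] = (1 - a) * stroke[i] + a * stroke[i + 1]
    return ctrl

# noisy mouse stroke along a sine arc, reduced to 8 control points
t = np.linspace(0, np.pi, 200)
stroke = np.c_[t, np.sin(t)] + np.random.default_rng(0).normal(0, 0.01, (200, 2))
print(stroke_to_control_polygon(stroke).round(2))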
Berndt, R., Blümel, I., Krottmaier, H., Wessel, R. & Schreck, T., (2009), "Demonstration of User Interfaces for Querying in 3D Architectural Content in PROBADO 3D", Research and Advanced Technology for Digital Libraries, pp.491-492.
Abstract: The PROBADO project is a research effort to develop Digital Library support for non-textual documents. The main goal is to contribute to all parts of the Digital Library workflow from content acquisition over semi-automatic indexing to search and presentation. PROBADO3D is a part of the PROBADO framework designed to support 3D documents, with a focus on the Architectural domain. This demonstration will present a set of specialized user interfaces that were developed for content-based querying in this document domain.
BibTeX:
@inproceedings{Berndt*09,
  author = {Berndt, Rene and Blümel, Ina and Krottmaier, Harald and Wessel, Raoul and Schreck, Tobias},
  title = {Demonstration of User Interfaces for Querying in 3D Architectural Content in PROBADO 3D},
  booktitle = {Research and Advanced Technology for Digital Libraries},
  year = {2009},
  pages = {491-492}
}
Berndt, R., Krottmaier, H., Havemann, S. & Schreck, T., (2009), "The PROBADO-Framework: Content-based Queries for Non-textual Documents", ELPUB 2009: 13th International Conference on Electronic Publishing, pp.17.
Abstract: In this paper we describe the system architecture of PROBADO, a project funded by the German Research Foundation (DFG). Its main goal is to provide a general library infrastructure for dealing with non-textual documents, in particular for content-based searching. PROBADO provides an infrastructure that allows integrating existing data repositories and content-based search engines into one common framework. The system architecture has three layers interconnected by a service-oriented architecture (SOA), currently using SOAP 1.1 as the communication protocol. The layers are: [1] a front-end layer, responsible for providing the user interface, [2] a core layer, responsible for scheduling requests from the interface to different repositories, and [3] a repository wrapper layer, responsible for enabling existing repositories and search engines to interface with the system. The functionality of each layer is described in detail. The general architecture is complemented by a brief introduction to the domain-dependent functionality currently provided.
BibTeX:
@inproceedings{Berndt*09elpub,
  author = {Berndt, Rene and Krottmaier, Harald and Havemann, Sven and Schreck, Tobias},
  title = {The PROBADO-Framework: Content-based Queries for Non-textual Documents},
  booktitle = {ELPUB 2009: 13th International Conference on Electronic Publishing},
  year = {2009},
  pages = {17}
}
Berndt, R., Havemann, S. & Fellner, D.W., Spencer, S.N. (ed.) (2009), "3D Modeling in a Web Browser to Formulate Content-Based 3D Queries", Proceedings of the 14th International Conference on 3D Web Technology (Web3D 2009), pp.111-118, ACM Press.
Abstract: We present a framework for formulating domain-dependent 3D search queries suitable for content-based 3D search over the web. Users are typically not willing to spend much time to create a 3D query object. They expect to quickly see a result set in which they can navigate by further differentiating the query object. Our system innovates by using a streamlined parametric 3D modeling engine on both client and server side. Parametric tools have greater expressiveness; they allow shape manipulation through a few high-level parameters, as well as incremental assembly of query objects. Short command strings are sent from client to server to keep the query objects on both sides in sync. This reduces turnaround times and allows asynchronous updates of live result sets.
BibTeX:
@inproceedings{Berndt*09web3d,
  author = {Rene Berndt and Sven Havemann and Dieter W.~Fellner},
  editor = {Stephen N. Spencer},
  title = {3D Modeling in a Web Browser to Formulate Content-Based 3D Queries},
  booktitle = {Proceedings of the 14th International Conference on 3D Web Technology (Web3D 2009)},
  publisher = {ACM Press},
  year = {2009},
  pages = {111--118}
}
Blümel, I., Diet, J. & Krottmaier, H., (2009), "Integrating Multimedia Repositories into the PROBADO Framework", Proc. Intern. Conference on Digital Information Management (ICDIM 2008), pp.178-183.
Abstract: In this paper, we describe a digital library initiative for non-textual documents. The proposed framework will integrate different types of content-repositories -- each one specialized for a specific multimedia domain -- into one seamless system and will add features such as automatic annotation, full-text retrieval and recommender services to non-textual documents. Two multimedia domains, 3D graphics and music, will be introduced. The repositories can be searched using both textual (metadata-based) and non-textual retrieval mechanisms (e.g. using a complex sketch-based interface for searching in 3D-models or a query-by-humming interface for music). Domain-specific metadata models are developed and workflows for automated content-based data analysis and indexing proposed.
BibTeX:
@inproceedings{Bluemel*08,
  author = {Ina Blümel and Jürgen Diet and Harald Krottmaier},
  title = {Integrating Multimedia Repositories into the PROBADO Framework},
  booktitle = {Proc. Intern. Conference on Digital Information Management (ICDIM 2008)},
  year = {2009},
  pages = {178--183},
  doi = {http://dx.doi.org/10.1109/ICDIM.2008.4746720}
}
Bremm, S., Maier, S., von Landesberger, T. & Schreck, T., (2009), "Explorative Visual Sequence Analysis (in German)", Dpunkt Datenbank Spektrum, Vol.31, pp.8-16.
BibTeX:
@article{Bremm*09dbs,
  author = {S. Bremm and S. Maier and T. von Landesberger and T. Schreck},
  title = {Explorative Visual Sequence Analysis (in German)},
  journal = {Dpunkt Datenbank Spektrum},
  year = {2009},
  volume = {31},
  pages = {8--16}
}
Bustos, B. & Schreck, T., (2009), "Encyclopedia of Database Systems", pp.1125-1128, Springer.
Abstract: This article introduces basic concepts relevant to 3D object retrieval. It gives a problem definition, surveys related fields, and presents a general process model for 3D feature extraction. Furthermore, the problem of database indexing is introduced, key 3D object retrieval applications are sketched, and future directions are outlined.
BibTeX:
@inbook{Bustos-Schreck09Springer,
  author = {B. Bustos and T. Schreck},
  title = {Encyclopedia of Database Systems},
  publisher = {Springer},
  year = {2009},
  pages = {1125--1128},
  note = {Editors: L. Liu and T. Özsu},
  doi = {http://dx.doi.org/10.1007/978-0-387-39940-9_161}
}
Cashman, T.J., Augsdörfer, U.H. & Sabin, M.A., (2009), "NURBS with extraordinary points: high-degree non-uniform subdivision surfaces", ACM Transactions on Graphics, Vol.28(3), pp.Article 46.
Abstract: We present a subdivision framework that adds extraordinary vertices to NURBS of arbitrarily high degree. The surfaces can represent any odd degree NURBS patch exactly. Our rules handle non-uniform knot vectors, and are not restricted to midpoint knot insertion. In the absence of multiple knots at extraordinary points, the limit surfaces have bounded curvature.
BibTeX:
@article{Cashman*09TOG,
  author = {Thomas J. Cashman and Ursula H. Augsdörfer and Malcolm A. Sabin},
  title = {NURBS with extraordinary points: high-degree non-uniform subdivision surfaces},
  journal = {ACM Transactions on Graphics},
  year = {2009},
  volume = {28},
  number = {3},
  pages = {Article 46},
  doi = {http://dx.doi.org/10.1145/1576246.1531352}
}
Dodgson, N.A., Augsdörfer, U.H., Cashman, T.J. & Sabin, M.A., (2009), "Deriving Box-Spline Subdivision Schemes", Mathematics of Surfaces XIII, Vol.5654, pp.106-123, Springer.
Abstract: We describe and demonstrate an arrow notation for deriving box-spline subdivision schemes. We compare it with the z-transform, matrix, and mask convolution methods of deriving the same. We show how the arrow method provides a useful graphical alternative to the three numerical methods. We demonstrate the properties that can be derived easily using the arrow method: mask, stencils, continuity in regular regions, safe extrusion directions. We derive all of the symmetric quadrilateral binary box-spline subdivision schemes with up to eight arrows and all of the symmetric triangular binary box-spline subdivision schemes with up to six arrows. We explain how the arrow notation can be extended to handle ternary schemes. We introduce two new binary dual quadrilateral box-spline schemes and one new √2 box-spline scheme. With appropriate extensions to handle extraordinary cases, these could each form the basis for a new subdivision scheme.
BibTeX:
@inproceedings{Dodgson*09lncs,
  author = { Neil A. Dodgson and Ursula H. Augsdörfer and Thomas J. Cashman and Malcolm A. Sabin},
  title = {Deriving Box-Spline Subdivision Schemes},
  booktitle = {Mathematics of Surfaces XIII},
  publisher = {Springer},
  year = {2009},
  volume = {5654},
  pages = {106-123},
  series = {Lecture Notes in Computer Science, LNCS},
  doi = {http://dx.doi.org/10.1007/978-3-642-03596-8_7}
}
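Of the three numerical methods the arrow notation is compared against, mask convolution is the easiest to demonstrate: the subdivision mask of a box spline is the convolution of one elementary mask per direction vector. The sketch below (Python/numpy/scipy) derives the biquadratic B-spline mask, whose parity classes are the familiar {1,3,3,9}/16 stencils; the normalization convention is the usual one for binary schemes (mask sums to 4):

import numpy as np
from scipy.signal import convolve2d

def box_spline_mask(directions):
    """Binary subdivision mask of a 2D box spline: convolve one elementary
    mask per direction vector, then normalize so the mask sums to 4."""
    mask = np.ones((1, 1))
    for (dx, dy) in directions:
        # elementary mask for direction (dx, dy): ones at (0,0) and (dy,dx)
        e = np.zeros((abs(dy) + 1, abs(dx) + 1))
        e[0, 0] = e[abs(dy), abs(dx)] = 1
        mask = convolve2d(mask, e)
    return mask * 4 / mask.sum()

# biquadratic B-spline = box spline with each axis direction repeated 3x
mask = box_spline_mask([(1, 0)] * 3 + [(0, 1)] * 3)
print(mask)
for py in range(2):                      # stencils = parity classes
    for px in range(2):
        print((py, px), mask[py::2, px::2].ravel())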
Encarnação, J.L., Fellner, D.W. & Schaub, J. (ed.) (2009), "Selected Readings in Computer Graphics 2008", Fraunhofer Verlag, Stuttgart.
BibTeX:
@book{Encarnacao*09sr,
  editor = {Encarnação, José L. and Fellner, Dieter W. and Schaub, Jutta},
  title = {Selected Readings in Computer Graphics 2008},
  publisher = {Fraunhofer Verlag, Stuttgart},
  year = {2009},
  series = {Selected Readings in Computer Graphics; 19}
}
Fellner, D.W., Baier, K., Wehner, D. & Toll, A. (ed.) (2009), "Jahresbericht 2008 : Fraunhofer-Institut für Graphische Datenverarbeitung IGD", Fraunhofer-Institut für Graphische Datenverarbeitung (IGD).
BibTeX:
@book{Fellner*08igd,
  editor = {Fellner, Dieter W. and Baier, Konrad and Wehner, Detlef and Toll, Andrea},
  title = {Jahresbericht 2008 : Fraunhofer-Institut für Graphische Datenverarbeitung IGD},
  publisher = {Fraunhofer-Institut für Graphische Datenverarbeitung (IGD)},
  year = {2009}
}
Fellner, D.W., Müller-Wittig, W. & Unbescheiden, M., (2009), "Virtual and augmented reality", Technology guide: Principles, Applications, Trends, pp.250-255, Springer.
Abstract: The rapid development of microprocessors and graphics processing units (GPUs) has had an impact on information and communication technologies (ICT) over recent years. "Shaders" offer real-time visualisation of complex, computer-generated 3D models with photorealistic quality. Shader technology includes hardware and software modules which colour virtual 3D objects and model reflective properties. These developments have laid the foundations for mixed reality systems which enable both immersion into and real-time interaction with the environment. These environments are based on Milgram's mixed reality continuum, where reality is a gradated spectrum ranging from real to virtual spaces.
BibTeX:
@incollection{Fellner*09vr,
  author = {D.~W. Fellner and W. Müller-Wittig and M. Unbescheiden},
  title = {Virtual and augmented reality},
  booktitle = {Technology guide: Principles, Applications, Trends},
  publisher = {Springer},
  year = {2009},
  pages = {250--255}
}
Fellner, D.W., Behr, J. & Bockholt, U., Ma, D., Gausemeier, J., Fan, X. & Grafe, M. (ed.) (2009), "Instantreality -- a framework for industrial augmented and virtual reality applications", Proc. Sino-German Workshop "Virtual Reality & Augmented Reality in Industry", Vol.2, pp.78-83, Springer.
Abstract: Rapid developments in processing power, graphics cards and mobile computers open up a wide domain for Mixed Reality applications. The Mixed Reality continuum covers the complete spectrum from Virtual Reality using immersive projection technology to Augmented Reality using mobile systems like smartphones and UMPCs. At the Fraunhofer Institute for Computer Graphics (IGD) the Mixed Reality framework instantreality (www.instantreality.org) has been developed as a single and consistent interface for AR/VR developers. This framework provides a comprehensive set of features to support classic Virtual Reality (VR) as well as mobile Augmented Reality (AR). The goal is to provide a very simple application interface which includes the latest research results in the fields of highly realistic rendering, 3D user interaction and fully immersive display technology. The system design is based on various industry standards to facilitate application development and deployment.
BibTeX:
@inproceedings{Fellner*09vrar,
  author = {D.~W.~Fellner and J. Behr and U. Bockholt},
  editor = {D. Ma and J. Gausemeier and X. Fan and M. Grafe},
  title = {Instantreality -- a framework for industrial augmented and virtual reality applications},
  booktitle = {Proc. Sino-German Workshop "Virtual Reality & Augmented Reality in Industry"},
  publisher = {Springer},
  year = {2009},
  volume = {2},
  pages = {78--83}
}
Fünfzig, C., Ullrich, T., Fellner, D.W. & Bachelder, W.-D., (2009), "Terrain and Model Queries Using Scalar Representation With Wavelet Compression", IEEE Transactions on Instrumentation and Measurement, Vol.58, pp.1-1.
BibTeX:
@article{Fuenfzig*08,
  author = {C. Fünfzig and T. Ullrich and D.~W.~Fellner and W.-D. Bachelder},
  title = {Terrain and Model Queries Using Scalar Representation With Wavelet Compression},
  journal = {IEEE Transactions on Instrumentation and Measurement},
  year = {2009},
  volume = {58},
  pages = {1--1},
  doi = {http://dx.doi.org/10.1109/TIM.2009.2016879}
}
Havemann, S., Settgast, V., Berndt, R., Eide, Ø. & Fellner, D.W., (2009), "The Arrigo Showcase Reloaded -- Towards a Sustainable Link between 3D and Semantics", ACM Journal on Computing and Cultural Heritage (JOCCH), Vol.2(1), pp.1-13.
BibTeX:
@article{Havemann*09jocch,
  author = {Sven Havemann and Volker Settgast and René Berndt and Øyvind Eide and Dieter W.~Fellner},
  title = {The Arrigo Showcase Reloaded -- Towards a Sustainable Link between 3D and Semantics},
  journal = {ACM Journal on Computing and Cultural Heritage (JOCCH)},
  year = {2009},
  volume = {2},
  number = {1},
  pages = {1--13},
  doi = {http://dx.doi.org/10.1145/1551676.1551680}
}
Havemann, S. & Fellner, D.W., (2009), "Patterns of Shape Design", Proc. I-KNOW '09 and I-SEMANTICS '09, pp.93-106.
Abstract: A fundamental problem in processing 3D shapes is insufficient knowledge engineering. On the one hand there are numerous methods to design and manufacture 3D shapes in the real world. On the other hand, numerous digital methods for representing and processing shape have been developed in computer graphics. Most of these methods make certain assumptions about the kind of 3D objects that they will be used for: A surface smoothing algorithm, for instance, is not well suited for assemblies of rectangular blocks or for pipe networks. However, it is currently not possible to formulate the properties of a given shape explicitly in a commonly agreed way. This paper is a first step towards classifying structural descriptions of man-made shape. By listing construction principles and principles for their combination it follows a phenomenological approach. The purpose is to illustrate the inherent complexity of the domain, and to lay out the foundation for subsequent thorough knowledge engineering.
BibTeX:
@inproceedings{Havemann-Fellner09iknow,
  author = {Havemann, Sven and Fellner, Dieter W.},
  title = {Patterns of Shape Design},
  booktitle = {Proc. I-KNOW '09 and I-SEMANTICS '09},
  year = {2009},
  pages = {93-106}
}
Hofmann, C., Hollender, N. & Fellner, D.W., Cordeiro, J. & et al. (ed.) (2009), "A workflow model for collaborative video annotation -- Supporting the Workflow of Collaborative Video Annotation and Analysis performed in Educational Settings", Proceedings of the International Conference on Computer Supported Education, CSEDU*09, pp.199-204, INSTICC Press.
Abstract: There is a growing number of application scenarios for computer-supported video annotation and analysis in educational settings. Related research work involves a large number of different research fields and approaches. Nevertheless, support for the annotation workflow has received little attention. As a first step towards developing a framework that assists users during the annotation process, the individual work steps, tasks and sequences of the workflow had to be identified. In this paper, a model of the underlying annotation workflow is illustrated, considering its single phases, tasks, and iterative loops that can be associated with the collaborative processes taking place.
BibTeX:
@inproceedings{Hofmann*09csedu,
  author = {C. Hofmann and N. Hollender and D.~W. Fellner},
  editor = {J. Cordeiro and et al.},
  title = {A workflow model for collaborative video annotation -- Supporting the Workflow of Collaborative Video Annotation and Analysis performed in Educational Settings},
  booktitle = {Proceedings of the International Conference on Computer Supported Education, CSEDU*09},
  publisher = {INSTICC Press},
  year = {2009},
  pages = {199--204}
}
Hofmann, C., Hollender, N. & Fellner, D.W., Schwill, A. (ed.) (2009), "Prozesse und Abläufe beim kollaborativen Wissenserwerb mittels computergestützter Videoannotation", Lernen im digitalen Zeitalter: DeLFI 2009. 7. e-Learning Fachtagung Informatik der Gesellschaft für Informatik, pp.115-126, Köllen.
Abstract: Computer-supported annotation and analysis of video content are increasingly used in a variety of teaching and learning scenarios. A number of projects have dealt with the research area of video annotation, each with different research foci, but they have always concentrated on only one or a few components of the overall annotation process. So far, the individual tasks, processes and sequences underlying the (collaborative) annotation of videos have not received sufficient attention. In this contribution, with particular consideration of applications in collaborative teaching and learning situations, we present a model that describes the phases, the tasks to be completed, and the concrete sequences within video annotation processes.
BibTeX:
@inproceedings{Hofmann*09defli,
  author = {C. Hofmann and N. Hollender and D.~W. Fellner},
  editor = {A. Schwill},
  title = {Prozesse und Abläufe beim kollaborativen Wissenserwerb mittels computergestützter Videoannotation},
  booktitle = {Lernen im digitalen Zeitalter: DeLFI 2009. 7. e-Learning Fachtagung Informatik der Gesellschaft für Informatik},
  publisher = {Köllen},
  year = {2009},
  pages = {115--126}
}
Hofmann, C., Hollender, N. & Fellner, D.W., (2009), "Workflow-based Architecture for Collaborative Video Annotation", Proceedings of the 13th International Conference on Human-Computer Interaction (HCI 09), pp.33-42, Springer.
Abstract: In video annotation research, a large number of different research fields and approaches have been involved. Nevertheless, support for the annotation workflow has received little attention; previous research projects each focus on a different essential part of the whole annotation process. In this paper, we present our results concerning an analysis of the tasks and processes involved in computer-supported collaborative video annotation and analysis. First, a model of the underlying annotation workflow is illustrated, considering its single phases and the iterative loops that can be associated with the collaborative processes taking place. Furthermore, and as our main contribution, we derive a reference architecture based on the established workflow model.
BibTeX:
@inproceedings{Hofmann*09hcii,
  author = {C. Hofmann and N. Hollender and D.~W. Fellner},
  title = {Workflow-based Architecture for Collaborative Video Annotation},
  booktitle = {Proceedings of the 13th International Conference on Human-Computer Interaction (HCI 09)},
  publisher = {Springer},
  year = {2009},
  pages = {33--42},
  series = {LNCS 5621},
  doi = {http://dx.doi.org/10.1007/978-3-642-02774-1_4}
}
Hofmann, C., Hollender, N. & Fellner, D.W., Wandke, H. & et al. (ed.) (2009), "Task- and Process-related Design of Video Annotation Systems", Mensch und Computer 2009, pp.173-182, Gesellschaft für Informatik.
Abstract: Various research projects have already addressed the design of video annotation applications. Nevertheless, collaborative application scenarios as well as the needs of users regarding the annotation workflow have received little attention. This paper discusses requirements for the design of video annotation systems. As our main contribution, we consider aspects that can be associated with collaborative use scenarios as well as requirements concerning support of the annotation workflow, considering not only the tasks but also the processes and sequences within. Our goals are to provide the reader with an understanding of the specific characteristics and requirements of video annotation, to establish a framework for evaluation, and to guide the design of video annotation tools.
BibTeX:
@inproceedings{Hofmann*09mc,
  author = {C. Hofmann and N. Hollender and D.~W. Fellner},
  editor = {H. Wandke and et al.},
  title = {Task- and Process-related Design of Video Annotation Systems},
  booktitle = {Mensch und Computer 2009},
  publisher = {Gesellschaft für Informatik},
  year = {2009},
  pages = {173--182}
}
Hohmann, B., Krispel, U., Havemann, S. & Fellner, D.W., Remondino, F., El-Hakim, S. & Gonzo, L. (ed.) (2009), "Cityfit: high-quality urban reconstructions by fitting shape grammars to images and derived textured point clouds", Proceedings of the 3rd ISPRS International Workshop 3D-ARCH 2009, ISPRS.
Abstract: Many approaches for automatic 3D city reconstruction exist, but they are still missing an important feature: detailed facades. The goal of the CityFit project is to reconstruct the facades of 80% of the buildings in the city of Graz fully automatically. The challenge is to establish a complete workflow, ranging from the acquisition of images and LIDAR data over 2D/3D feature detection and recognition to the generation of lean polygonal facade models. The desired detail level is to represent all significant facade elements larger than 50 cm by explicit polygonal geometry. All geometry shall also carry descriptive annotations (semantic enrichment). This paper presents an outline of the workflow, important design decisions, and the current state of the project. First results were obtained by case studies of facade analysis followed by manual reconstruction. These gave important hints on how to structure grammars for automatic reconstruction.
BibTeX:
@inproceedings{Hohmann*2009arch,
  author = {B. Hohmann and U. Krispel and S. Havemann and D. W. Fellner},
  editor = {F. Remondino and S. El-Hakim and L. Gonzo},
  title = {Cityfit: high-quality urban reconstructions by fitting shape grammars to images and derived textured point clouds},
  booktitle = {Proceedings of the 3rd ISPRS International Workshop 3D-ARCH 2009},
  publisher = {ISPRS},
  year = {2009}
}
von Landesberger, T., Görner, M. & Schreck, T., (2009), "Visual Analysis of Graphs with Multiple Connected Components", IEEE Symposium on Visual Analytics Science and Technology, pp.155-162, IEEE Computer Society.
Abstract: In this paper, we present a system for the interactive visualization and exploration of graphs with many weakly connected components. The visualization of large graphs has recently received much research attention. However, specific systems for visual analysis of graph data sets consisting of many components are rare. In our approach, we rely on graph clustering using an extensive set of topology descriptors. Specifically, we use the self-organizing-map algorithm in conjunction with a user-adaptable combination of graph features for clustering of graphs. It offers insight into the overall structure of the data set. The clustering output is presented in a grid containing clusters of the connected components of the input graph. Interactive feature selection and task-tailored data views allow the exploration of the whole graph space. The system also provides tools for assessment and display of cluster quality. We demonstrate the usefulness of our system by application to a shareholder network analysis problem based on a large real-world data set. While so far our approach has been applied to weighted directed graphs only, it can be used for various graph types.
BibTeX:
@inproceedings{Landesberger*09vast,
  author = {T. von Landesberger and M. Görner and T. Schreck},
  title = {Visual Analysis of Graphs with Multiple Connected Components},
  booktitle = {IEEE Symposium on Visual Analytics Science and Technology},
  publisher = {IEEE Computer Society},
  year = {2009},
  pages = {155--162},
  doi = {http://dx.doi.org/10.1109/VAST.2009.5333893}
}
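The paper's descriptor set is extensive; as a stand-in, a toy topology feature vector per connected component might look as follows (Python with networkx/numpy; the six features are illustrative, not the paper's set). Vectors like these can then be fed to a SOM or any other clustering algorithm:

import networkx as nx
import numpy as np

def topology_descriptor(g):
    """Small topology feature vector for one connected component;
    clustering many such vectors groups components of similar structure."""
    n, m = g.number_of_nodes(), g.number_of_edges()
    degs = [d for _, d in g.degree()]
    return np.array([
        n, m,
        nx.density(g),
        np.mean(degs), np.max(degs),
        nx.average_clustering(g),
    ])

comps = [nx.gnm_random_graph(20, 30, seed=s) for s in range(5)]
feats = np.array([topology_descriptor(c) for c in comps])
print(feats.round(2))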
Schreck, T., Bernard, J., Tekušová, T. & Kohlhammer, J., (2009), "Visual Cluster Analysis of Trajectory Data With Interactive Kohonen Maps", Palgrave MacMillan Information Visualization, Vol.8, pp.14-29.
Abstract: Visual-interactive cluster analysis provides valuable tools for effectively analyzing large and complex data sets. Owing to desirable properties and an inherent predisposition for visualization, the Kohonen Feature Map (or Self-Organizing Map or SOM) algorithm is among the most popular and widely used visual clustering techniques. However, the unsupervised nature of the algorithm may be disadvantageous in certain applications. Depending on initialization and data characteristics, cluster maps (cluster layouts) may emerge that do not comply with user preferences, expectations or the application context. Considering SOM-based analysis of trajectory data, we propose a comprehensive visual-interactive monitoring and control framework extending the basic SOM algorithm. The framework implements the general Visual Analytics idea to effectively combine automatic data analysis with human expert supervision. It provides simple, yet effective facilities for visually monitoring and interactively controlling the trajectory clustering process at arbitrary levels of detail. The approach allows the user to leverage existing domain knowledge and user preferences, arriving at improved cluster maps. We apply the framework on several trajectory clustering problems, demonstrating its potential in combining both unsupervised (machine) and supervised (human expert) processing, in producing appropriate cluster results.
BibTeX:
@article{Schreck*09ivs,
  author = {T. Schreck and J. Bernard and T. Tekušová and J. Kohlhammer},
  title = {Visual Cluster Analysis of Trajectory Data With Interactive Kohonen Maps},
  journal = {Palgrave MacMillan Information Visualization},
  year = {2009},
  volume = {8},
  pages = {14--29},
  doi = {http://dx.doi.org/10.1057/ivs.2008.29}
}
Settgast, V., Lancelle, M., Havemann, S. & Fellner, D.W., (2009), "Spatially Coherent Visualization of Image Detection Results using Video Textures", Proceedings of the 33rd Workshop of the Austrian Association for Pattern Recognition (AAPR/OAGM), Vol.1, pp.13-23.
Abstract: Camera-based object detection and tracking are image processing tasks that typically do not take 3D information into account. Spatial relations, however, are sometimes crucial to judge the correctness or importance of detection and tracking results. Especially in applications with a large number of image processing tasks running in parallel, traditional methods of presenting detection results do not scale. In such cases it can be very useful to transform the detection results back into their common 3D space. We present a computer graphics system that is capable of showing a large number of detection results in real-time, using different levels of abstraction, on various hardware configurations. As example application we demonstrate our system with a surveillance task involving eight cameras.
BibTeX:
@inproceedings{Settgast*2009aapr,
  author = {Settgast, Volker and Lancelle, Marcel and Havemann, Sven and Fellner, Dieter W.},
  title = {Spatially Coherent Visualization of Image Detection Results using Video Textures},
  booktitle = {Proceedings of the 33rd Workshop of the Austrian Association for Pattern Recognition (AAPR/OAGM)},
  year = {2009},
  volume = {1},
  pages = {13--23}
}
Strobl, M., Berndt, R., Havemann, S. & Fellner, D.W., Debattista, K. (ed.) (2009), "Publishing 3D Content as PDF in Cultural Heritage", VAST09: The 10th International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage, pp.117-124, Eurographics.
Abstract: Sharing 3D models with embedded annotations and additional information in a generally accessible way is still a major challenge. Using 3D technologies must become much easier, in particular in areas such as Cultural Heritage, where archaeologists, art historians, and museum curators rely on robust, easy-to-use solutions. Sustainable exchange standards are vital since, unlike in industry, no sophisticated PLM or PDM solutions are common in CH. To solve this problem we have examined the PDF file format and developed concepts and software for the exchange of annotated 3D models in a way that is not just comfortable but also sustainable. We show typical use cases for authoring and using PDF documents containing annotated 3D geometry. The resulting workflow is efficient and suitable for experienced users as well as for users working only with standard word processing tools and e-mail clients (plus, currently, Acrobat Pro Extended).
BibTeX:
@inproceedings{Strobl*09vast,
  author = {M. Strobl and R. Berndt and S. Havemann and D.~W. Fellner},
  editor = {K. Debattista},
  title = {Publishing 3D Content as PDF in Cultural Heritage},
  booktitle = {VAST09: The 10th International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage},
  publisher = {Eurographics},
  year = {2009},
  pages = {117--124},
  doi = {http://dx.doi.org/10.2312/VAST/VAST09/117-124}
}
Ullrich, T., Settgast, V., Ofenböck, C. & Fellner, D.W., Hirose, M., Schmalstieg, D., Wingrave, C.A. & Nishimura, K. (ed.) (2009), "Desktop Integration in Graphics Environments", Virtual Environments 2009; Joint Virtual Reality Conference of EGVE - ICAT - EuroVR, pp.109-112, Eurographics Association.
Abstract: In this paper, we present the usage of the Remote Desktop Protocol to integrate arbitrary, legacy applications into various environments. This approach accesses a desktop on a real computer or within a virtual machine. The result is not one image of the whole desktop, but a sequence of images of all desktop components (windows, dialogs, etc.). These components are rendered into textures and fed into a rendering framework (OpenSG), where the functional hierarchy is represented by a scene graph. In this way the desktop components can be rearranged freely and painted according to the circumstances of the graphical environment, supporting a wide range of display settings -- from immersive environments via high-resolution tiled displays to mobile devices.
BibTeX:
@inproceedings{Ullrich*09egve,
  author = {Torsten Ullrich and Volker Settgast and Christian Ofenböck and Dieter W.~Fellner},
  editor = {Hirose, M. and Schmalstieg, D. and Wingrave, Ch. A. and Nishimura, K.},
  title = {Desktop Integration in Graphics Environments},
  booktitle = {Virtual Environments 2009; Joint Virtual Reality Conference of EGVE - ICAT - EuroVR},
  publisher = {Eurographics Association},
  year = {2009},
  pages = {109--112},
  doi = {http://dx.doi.org/10.2312/EGVE/JVRC09/109-112}
}

2008

Berndt, R., Havemann, S., Settgast, V. & Fellner, D.W., Ioannides, M. (ed.) (2008), "Sustainable Markup and Annotation of 3D Geometry", Proceedings of the 14th International Conference on Virtual Systems and Multimedia (VSMM 2008), pp.187-193, International Society on Virtual Systems and MultiMedia.
Abstract: We propose a novel general method to enrich ordinary 3D models with semantic information. Based on the Collada format, this approach fits perfectly into the XML world: it allows bi-directional linking, from a web resource to (a part of) a 3D model, and the reverse direction as well. We also describe our software framework prototype for 3D annotation by non-3D-specialists, in our case cultural heritage professionals.
BibTeX:
@inproceedings{Berndt*08vsmm,
  author = {Rene Berndt and Sven Havemann and Volker Settgast and Dieter W.~Fellner},
  editor = {Marinos Ioannides},
  title = {Sustainable Markup and Annotation of 3D Geometry},
  booktitle = {Proceedings of the 14th International Conference on Virtual Systems and Multimedia (VSMM 2008)},
  publisher = {International Society on Virtual Systems and MultiMedia},
  year = {2008},
  pages = {187--193}
}
Fellner, D.W., Kamps, T., Kohlhammer, J. & Stricker, A., (2008), "Vorsprung durch Wissen", ZWF Zeitschrift für wirtschaftlichen Fabrikbetrieb, Vol.103, pp.205-208.
Abstract: For over 20 years, scientists at the Fraunhofer Institute for Computer Graphics IGD have not left knowledge management to chance. The researchers develop intelligent search solutions and information visualization technologies. With their innovations they give companies and organizations the opportunity to react to the requirements of today's dynamic information society. ConWeaver, a software solution developed at Fraunhofer IGD, offers semantic, integrated search across database boundaries. The search system automatically extracts company knowledge from heterogeneous data sources and represents it in the form of multilingual, semantic knowledge networks. The Visual Analytics group at Fraunhofer IGD deals with the visualization of data and the analysis of information. The scientists develop real-time solutions for the simulation and interactive visualization of large multidimensional amounts of data and information.
BibTeX:
@article{Fellner*08ZWF,
  author = {Dieter W.~Fellner and Thomas Kamps and Jörn Kohlhammer and Anna Stricker},
  title = {Vorsprung durch Wissen},
  journal = {ZWF Zeitschrift für wirtschaftlichen Fabrikbetrieb},
  year = {2008},
  volume = {103},
  pages = {205--208}
}
Havemann, S., Settgast, V., Berndt, R., Eide, Ø. & Fellner, D.W., (2008), "The Arrigo Showcase Reloaded -- Towards a Sustainable Link between 3D and Semantics", Proc. VAST 2008 Intl. Symp., pp.125-132, Eurographics.
Abstract: It is still a big technical problem to establish a relation between a shape and its meaning in a sustainable way. We present a solution with a markup method that allows labeling parts of a 3D object in a similar way to labeling parts of a hypertext. A 3D markup can serve both as hyperlink and as link anchor, which is the key to bi-directional linking between 3D objects and web documents. Our focus is on a sustainable 3D software infrastructure for application scenarios ranging from e-mail and internet over authoring and browsing semantic networks to interactive museum presentations. We demonstrate the workflow and the effectiveness of our tools by re-doing the Arrigo 3D showcase. We are working towards a 'best practice' example for information modeling in cultural heritage.
BibTeX:
@inproceedings{Havemann*08vast,
  author = {Sven Havemann and Volker Settgast and René Berndt and Øyvind Eide and Dieter W.~Fellner},
  title = {The Arrigo Showcase Reloaded -- Towards a Sustainable Link between 3D and Semantics},
  booktitle = {Proc. VAST 2008 Intl. Symp.},
  publisher = {Eurographics},
  year = {2008},
  pages = {125--132},
  doi = {http://dx.doi.org/10.2312/VAST/VAST08/125-132}
}
Havemann, S. & Fellner, D.W., (2008), "Progressive Combined B-reps -- Multi-Resolution Meshes for Interactive Real-time Shape Design", Journal of WSCG, Vol.16(1-3), pp.121-133.
BibTeX:
@article{Havemann*08wscg,
  author = {S. Havemann and Dieter W.~Fellner},
  title = {Progressive Combined B-reps -- Multi-Resolution Meshes for Interactive Real-time Shape Design},
  journal = {Journal of WSCG},
  year = {2008},
  volume = {16},
  number = {1-3},
  pages = {121--133}
}
Kalbe, T., Tekušová, T., Schreck, T. & Zeilfelder, F., (2008), "GPU-Accelerated 2D Point Cloud Visualization using Smooth Splines for Visual Analytics Applications", Spring Conference on Computer Graphics, pp.111-125, Comenius University, Bratislava.
Abstract: We develop an efficient point cloud visualization framework. For efficient navigation in the visualization, we introduce a spline-based technique for the smooth approximation of discrete distance field data. Implemented on the GPU, the approximation technique allows for efficient visualizations and smooth zooming in and out of the distance field data. Combined with a template set of predefined, automatically or interactively adjustable transfer functions, the smooth distance field representation allows for an effective visualization of point cloud data at arbitrary abstraction levels. Using the presented technique, sets of point clouds can be effectively analyzed for intra- and inter-point cloud distribution characteristics. The effectiveness and usefulness of our approach is demonstrated by application to various point cloud visualization problems.
BibTeX:
@inproceedings{Kalbe*08sccg,
  author = {T. Kalbe and T. Tekušová and T. Schreck and F. Zeilfelder},
  title = {GPU-Accelerated 2D Point Cloud Visualization using Smooth Splines for Visual Analytics Applications},
  booktitle = {Spring Conference on Computer Graphics},
  publisher = {Comenius University, Bratislava},
  year = {2008},
  pages = {111--125},
  doi = {http://dx.doi.org/10.1145/1921264.1921286}
}
Lancelle, M., Settgast, V. & Fellner, D., (2008), "Definitely Affordable Virtual Environment", Proc. IEEE Virtual Reality, pp.1-1, IEEE.
Abstract: The DAVE is an immersive projection environment, a four-sided CAVE. DAVE stands for 'definitely affordable virtual environment'. 'Affordable' means that by mostly using standard hardware components we can greatly reduce costs compared to other commercial systems. We show the hardware setup and some applications in the accompanying video. In 2005 we built a new version of our DAVE at Graz University of Technology, Austria. Room restrictions motivated a new compact design to optimally use the available space. The back-projection material, with a custom shape, is stretched onto the wooden frame to provide a flat surface without ripples.
BibTeX:
@inproceedings{Lancelle*08ieeevr,
  author = {M. Lancelle and V. Settgast and D. Fellner},
  title = {Definitely Affordable Virtual Environment},
  booktitle = {Proc. IEEE Virtual Reality},
  publisher = {IEEE},
  year = {2008},
  pages = {1-1}
}
Mendez, E., Schall, G., Havemann, S., Fellner, D., Schmalstieg, D. & Junghanns, S., (2008), "Generating Semantic 3D Models of Underground Infrastructure", IEEE Computer Graphics and Applications, Vol.28(3), pp.48-57.
Abstract: By combining two previously unrelated techniques -- semantic markup in a scene-graph and generative modeling -- a new framework retains semantic information until late in the rendering pipeline. This is a crucial prerequisite for achieving enhanced visualization effects and interactive behavior that doesn't compromise interactive frame rates. The proposed system creates interactive 3D visualizations from 2D geospatial databases in the domain of utility companies' underground infrastructure, creating urban models based on the companies' real-world data. The system encodes the 3D models in a scene-graph that mixes visual models with semantic markup that interactively filters and styles the models. The actual graphics primitives are generated on the fly by scripts that are attached to the scene-graph nodes.
BibTeX:
@article{Mendez*08ieeecga,
  author = {E. Mendez and G. Schall and S. Havemann and D. Fellner and D. Schmalstieg and S. Junghanns},
  title = {Generating Semantic 3D Models of Underground Infrastructure},
  journal = {IEEE Computer Graphics and Applications},
  year = {2008},
  volume = {28},
  number = {3},
  pages = {48--57},
  doi = {http://dx.doi.org/10.1109/MCG.2008.53}
}
Offen, L. & Fellner, D., Linsen, L., Hagen, H. & Hamann, B. (ed.) (2008), "BioBrowser -- Visualization of and Access to Macro-Molecular Structures", Visualization in Medicine and Life Sciences, pp.257-273, Springer.
Abstract: Based on the results of an interdisciplinary research project, the paper addresses the embedding of knowledge about the function of different parts/structures of a macromolecule (protein, DNA, RNA) directly into the 3D model of this molecule. The 3D visualization thereby becomes an important user interface component when accessing domain-specific knowledge -- similar to a web browser enabling its users to access various kinds of information. In the prototype implementation -- named BioBrowser -- various information related to bio-research is managed by a database using fine-grained access control, which also supports restricting access to parts of the material based on user privileges. The database is exposed as a SOAP web service, so that it is possible (after identifying yourself through a login procedure, of course) to query, change, or add information remotely by using the 3D model of the molecule. All these actions are performed on substructures of the molecules, which can be selected either by a simple query language or by just picking them in the 3D model with the mouse.
BibTeX:
@incollection{Offen-Fellner06vmls,
  author = {Offen, Lars and Fellner, Dieter},
  editor = {Lars Linsen and Hans Hagen and Bernd Hamann},
  title = {BioBrowser -- Visualization of and Access to Macro-Molecular Structures},
  booktitle = {Visualization in Medicine and Life Sciences},
  publisher = {Springer},
  year = {2008},
  pages = {257-273},
  series = {Mathematics + Visualization},
  doi = {http://dx.doi.org/10.1007/978-3-540-72630-2}
}
Schreck, T., Fellner, D. & Keim, D., (2008), "Towards automatic feature vector optimization for multimedia applications", SAC '08: Proceedings of the 2008 ACM symposium on Applied computing, pp.1197-1201, ACM.
Abstract: We systematically evaluate a recently proposed method for unsupervised discrimination power analysis for feature selection and optimization in multimedia applications. A series of experiments using real and synthetic benchmark data is conducted, the results of which indicate the suitability of the method for unsupervised feature selection and optimization. We present an approach for generating synthetic feature spaces of varying discrimination power, modelling main characteristics from real world feature vector extractors. A simple, yet powerful visualization is used to communicate the results of the automatic analysis to the user.
BibTeX:
@inproceedings{Schreck*08sac,
  author = {Tobias Schreck and Dieter Fellner and Daniel Keim},
  title = {Towards automatic feature vector optimization for multimedia applications},
  booktitle = {SAC '08: Proceedings of the 2008 ACM symposium on Applied computing},
  publisher = {ACM},
  year = {2008},
  pages = {1197--1201},
  doi = {http://dx.doi.org/10.1145/1363686.1363964}
}
Schreck, T., Bernard, J., Tekušová, T. & Kohlhammer, J., (2008), "Visual Cluster Analysis in Trajectory Data Using Editable Kohonen Maps", IEEE Symposium on Visual Analytics Science and Technology, pp.3-10, IEEE Computer Society.
Abstract: Visual-interactive cluster analysis provides valuable tools for effectively analyzing large and complex data sets. Due to desirable properties and an inherent predisposition for visualization, the Kohonen Feature Map (or Self-Organizing Map, SOM) algorithm is among the most popular and widely used visual clustering techniques. However, the unsupervised nature of the algorithm may be disadvantageous in certain applications. Depending on initialization and data characteristics, cluster maps (cluster layouts) may emerge that do not comply with user preferences, expectations, or the application context. Considering SOM-based analysis of trajectory data, we propose a comprehensive visual-interactive monitoring and control framework extending the basic SOM algorithm. The framework implements the general Visual Analytics idea of effectively combining automatic data analysis with human expert supervision. It provides simple, yet effective facilities for visually monitoring and interactively controlling the trajectory clustering process at arbitrary levels of detail. The approach allows the user to leverage existing domain knowledge and user preferences, arriving at improved cluster maps. We apply the framework to a trajectory clustering problem, demonstrating its potential in combining both unsupervised (machine) and supervised (human expert) processing to produce appropriate cluster results.
BibTeX:
@inproceedings{Schreck*08vast,
  author = {T. Schreck and J. Bernard and T. Tekušová and J. Kohlhammer},
  title = {Visual Cluster Analysis in Trajectory Data Using Editable Kohonen Maps},
  booktitle = {IEEE Symposium on Visual Analytics Science and Technology},
  publisher = {IEEE Computer Society},
  year = {2008},
  pages = {3--10}
}
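The basic Kohonen map that the framework above extends can be written down compactly; the paper's user editing would hook in between epochs by letting the analyst overwrite selected prototype vectors. A minimal numpy sketch with invented feature vectors and training parameters:

import numpy as np

def train_som(data, rows=4, cols=4, epochs=20, lr0=0.5, radius0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    protos = rng.normal(size=(rows * cols, data.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                    # decaying learning rate
        radius = max(radius0 * (1 - epoch / epochs), 0.5)  # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(protos - x, axis=1))  # best-matching unit
            d = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-(d ** 2) / (2 * radius ** 2))            # neighborhood kernel
            protos += lr * h[:, None] * (x - protos)
    return protos

features = np.random.default_rng(1).normal(size=(200, 16))  # toy trajectory features
print(train_som(features).shape)  # (16, 16): one prototype vector per map cell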
Schreck, T., Schuessler, M., Zeilfelder, F. & Worm, K., (2008), "Butterfly Plots for Visual Analysis of Large Point Cloud Data", International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, pp.33-40, University of West Bohemia, Plzen.
Abstract: Visualization of 2D point clouds is one of the most basic yet one of the most important problems in many visual data analysis tasks. Point clouds arise in many contexts including scatter plot analysis, or the visualization of high-dimensional or geo-spatial data. Typical analysis tasks in point cloud data include assessing the overall structure and distribution of the data, assessing spatial relationships between data elements, and identification of clusters and outliers. Standard point-based visualization methods do not scale well with respect to the data set size. Specifically, as the number of data points and data classes increases, the display quickly gets crowded, making it difficult to effectively analyze the point clouds. We propose to abstract large sets of point clouds to compact shapes, facilitating the scalability of point cloud visualization with respect to data set size. We introduce a novel algorithm for constructing compact shapes that enclose all members of a given point cloud, providing good perceptual properties and supporting visual analysis of large data sets of many overlapping point clouds. We apply the algorithm in two different applications, demonstrating the effectiveness of the technique for large point cloud data. We also present an evaluation of key shape metrics, showing the efficiency of the solution as compared to standard approaches.
BibTeX:
@inproceedings{Schreck*08wscg,
  author = {T. Schreck and M. Schuessler and F. Zeilfelder and K. Worm},
  title = {Butterfly Plots for Visual Analysis of Large Point Cloud Data},
  booktitle = {International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision},
  publisher = {University of West Bohemia, Plzen},
  year = {2008},
  pages = {33--40}
}
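The abstraction step described above -- replacing a whole point cloud by one compact enclosing shape -- in its simplest form; the paper's butterfly plots use a more refined construction, so the convex hull below (via scipy) only illustrates the principle:

import numpy as np
from scipy.spatial import ConvexHull

cloud = np.random.default_rng(2).normal(loc=[3.0, 1.0], size=(500, 2))
hull = ConvexHull(cloud)
outline = cloud[hull.vertices]  # polygon enclosing all 500 points
print(len(outline), "outline vertices stand in for", len(cloud), "points")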
Steiner, M., Reiter, P., Ofenböck, C., Settgast, V., Ullrich, T., Lancelle, M. & Fellner, D.W., (2008), "Intuitive Navigation in Virtual Environments", Proceedings of Eurographics Symposium on Virtual Environments, Vol.14, pp.5-8.
Abstract: We present several novel ways of interaction and navigation in virtual worlds. Using the optical tracking system of our four-sided Definitely Affordable Virtual Environment (DAVE), we designed and implemented navigation and movement controls based on the user's gestures and postures. Our techniques are more natural and intuitive than a standard 3D joystick-based approach, which compromises the impact of the immersion.
BibTeX:
@inproceedings{Steiner*08egve,
  author = {Steiner, Markus and Reiter, Philipp and Ofenböck, Christian and Settgast, Volker and Ullrich, Torsten and Lancelle, Marcel and Fellner, Dieter W.},
  title = {Intuitive Navigation in Virtual Environments},
  booktitle = {Proceedings of Eurographics Symposium on Virtual Environments},
  year = {2008},
  volume = {14},
  pages = {5-8},
  doi = {http://dx.doi.org/10.2312/PE/VE2008Posters/005-008}
}
Ullrich, T., Techmann, T. & Fellner, D.W., (2008), "Web-based Algorithm Tutorials in Different Learning Scenarios", World Conference on Educational Multimedia, Hypermedia and Telecommunications (ED-Media), Vol.20, pp.5467-5472.
Abstract: The combination of scripting languages with web technologies offers many possibilities in teaching. This paper presents a scripting framework that consists of a Java and JavaScript engine and an integrated editor. It allows editing scripts and source code online, writing new applications, modifying existing applications, and starting them from within the editor by a simple mouse click. This framework is a good basis for online tutorials. The included ready-to-run scripts can replace simple Java applets without drawbacks but with many more possibilities. Furthermore, these scripts work well in different teaching scenarios: demo applications can be started via web browser and modified just in time, during a lecture or within a drill-and-practice session. Examples in the context of computer graphics illustrate the usefulness of our framework in lectures.
BibTeX:
@inproceedings{Ullrich*08edmedia,
  author = {Ullrich, Torsten and Techmann, Torsten and Fellner, Dieter W.},
  title = {Web-based Algorithm Tutorials in Different Learning Scenarios},
  booktitle = {World Conference on Educational Multimedia, Hypermedia and Telecommunications (ED-Media)},
  year = {2008},
  volume = {20},
  pages = {5467-5472}
}
Ullrich, T., Settgast, V. & Fellner, D.W., (2008), "Semantic Fitting and Reconstruction", ACM Journal on Computing and Cultural Heritage (JOCCH), Vol.1(2), pp.1-20.
Abstract: The current methods to describe the shape of three-dimensional objects can be classified into two groups: methods following the composition-of-primitives approach and descriptions based on procedural shape representations. As a 3D acquisition device returns an agglomeration of elementary objects (e.g. a laser scanner returns points), the model acquisition pipeline always starts with a composition of primitives. Due to the semantic information carried with a generative description, a procedural model provides valuable metadata that make up the basis for digital library services: retrieval, indexing, and searching. An important challenge in computer graphics in the field of cultural heritage is to build a bridge between the generative and the explicit geometry description, combining both worlds -- the accuracy and systematics of generative models with the realism and the irregularity of real-world data. A first step towards a semantically enriched data description is a reconstruction algorithm based on decreasing exponential fitting. This approach is robust towards outliers and multiple dataset mixtures. It does not need a preceding segmentation and is able to fit a generative shape template to a point cloud, identifying the parameters of a shape.
BibTeX:
@article{Ullrich*08jocch,
  author = {Torsten Ullrich and Volker Settgast and Dieter W.~Fellner},
  title = {Semantic Fitting and Reconstruction},
  journal = {ACM Journal on Computing and Cultural Heritage (JOCCH)},
  year = {2008},
  volume = {1},
  number = {2},
  pages = {1--20},
  doi = {http://dx.doi.org/10.1145/1434763.1434769}
}
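The decreasing-exponential idea can be made concrete in a few lines: instead of least squares, which outliers dominate, a candidate shape is scored by sum_i exp(-(d_i/eps)^2), so distant points contribute almost nothing. A sketch fitting a circle template to a contaminated point cloud (eps, the optimizer, and all data are illustrative; the paper fits generative shape templates, not circles):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
angles = rng.uniform(0, 2 * np.pi, 200)
inliers = np.c_[2 + 1.5 * np.cos(angles), -1 + 1.5 * np.sin(angles)]
inliers += rng.normal(scale=0.02, size=inliers.shape)
outliers = rng.uniform(-4, 6, size=(100, 2))     # one third of the data is noise
pts = np.vstack([inliers, outliers])

def neg_score(theta, eps=0.5):
    cx, cy, r = theta
    d = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)  # point-to-circle distance
    return -np.sum(np.exp(-(d / eps) ** 2))      # decreasing exponential score

x0 = [pts[:, 0].mean(), pts[:, 1].mean(), 1.0]
fit = minimize(neg_score, x0, method="Nelder-Mead")
print(fit.x)  # should land near the true circle (2, -1, 1.5) despite the outliers

In practice one would shrink eps in a coarse-to-fine schedule to sharpen the fit.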
Ullrich, T., Settgast, V. & Fellner, D.W., (2008), "Distance Visualization for Geometric Analysis", Proceedings of the Conference on Virtual Systems and MultiMedia Dedicated to Digital Heritage (VSMM), pp.334-340.
Abstract: The need to analyze and visualize differences between very similar objects arises in many research areas: mesh compression, scan alignment, nominal/actual value comparison, quality management, and surface reconstruction, to name a few. Although the problem of visualizing distances may sound simple, the creation of a good scene setup including the geometry, materials, colors, and the representation of distances is challenging. Our contribution to this problem is an application which optimizes the workflow of visualizing distances. We propose a new classification scheme to group typical scenarios. For each scenario we provide reasonable defaults for color tables, material settings, etc. Complemented with predefined file exporters, which are harmonized with commonly used rendering and viewing applications, the presented application is a valuable tool. Based on web technologies, it works out of the box and does not need any configuration or installation. All users who have to analyze and document 3D geometry stand to benefit from our new application.
BibTeX:
@inproceedings{Ullrich*08vsmm,
  author = {Torsten Ullrich and Volker Settgast and Dieter W.~Fellner},
  title = {Distance Visualization for Geometric Analysis},
  booktitle = {Proceedings of the Conference on Virtual Systems and MultiMedia Dedicated to Digital Heritage (VSMM)},
  year = {2008},
  pages = {334-340}
}
Ullrich, T., Krispel, U. & Fellner, D.W., (2008), "Compilation of procedural models", Proceedings of the 13th International Symposium on 3D Web Technology (Web3D 2008), pp.75-81, ACM.
Abstract: Scripting techniques are used in various contexts. The field of application ranges from layout description languages (PostScript), user interface description languages (XUL), and classical scripting languages (JavaScript) to action nodes in scene graphs (VRMLScript) and web-based desktop applications (AJAX). All these applications have an increasing share of scripted components in common -- especially in computer graphics. As the interpretation of a geometric script is computationally more intensive than the handling of static geometry, optimization techniques, such as just-in-time compilation, are of great interest. Unfortunately, scripting languages tend to support features such as higher-order functions or self-modification, which are difficult to compile into machine or byte code. Therefore, we present a hybrid approach: an interpreter with an integrated compiler. In this way we speed up the script evaluation without having to remove any language features, e.g. the possibility of self-modification. We demonstrate its usage with XGML -- a dialect of the generative modeling language GML, which is characterized by its dynamic behavior.
BibTeX:
@inproceedings{Ullrich*08web3d,
  author = {Torsten Ullrich and Ulrich Krispel and Dieter W. Fellner},
  title = {Compilation of procedural models},
  booktitle = {Proceedings of the 13th International Symposium on 3D Web Technology (Web3D 2008)},
  publisher = {ACM},
  year = {2008},
  pages = {75--81},
  doi = {http://dx.doi.org/10.1145/1394209.1394226}
}
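The hybrid interpreter/compiler strategy reduces, in caricature, to counting evaluations and caching a compiled form once an expression turns hot. A toy Python sketch (the threshold, the expression language, and the use of Python bytecode as compilation target are all invented; GML/XGML works very differently):

HOT = 100  # invented threshold: compile after this many interpretations

class HybridEvaluator:
    def __init__(self):
        self.counts, self.compiled = {}, {}

    def eval(self, expr, env):
        self.counts[expr] = self.counts.get(expr, 0) + 1
        if expr in self.compiled:
            return eval(self.compiled[expr], {}, env)       # fast compiled path
        if self.counts[expr] >= HOT:
            self.compiled[expr] = compile(expr, "<jit>", "eval")
        return eval(expr, {}, env)                          # interpreted path

e = HybridEvaluator()
for x in range(200):
    e.eval("x * x + 1", {"x": x})
print(list(e.compiled))  # ['x * x + 1'] -- the hot expression is now cached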

2007

Bustos, B., Fellner, D., Havemann, S., Keim, D.A., Saupe, D. & Schreck, T., (2007), "Foundations of 3D Digital Libraries: Current Approaches and Urgent Research Challenges", DELOS Network of Excellence on Digital Libraries; Pre-Proceedings of the First International Workshop on Digital Libraries Foundations, pp.7-12.
Abstract: 3D documents are an indispensable data type in many important application domains such as Computer Aided Design, Simulation and Visualization, and Cultural Heritage, to name a few. The 3D document type can represent arbitrarily complex information by composing geometrical, topological, structural, or material properties, among others. It is often integrated with metadata and annotation by the various application systems that produce, process, or consume 3D documents. We argue that due to the inherent complexity of the 3D data type, in conjunction with an imminent pervasive usage and explosion of available content, there is a pressing need to address key problems of the 3D data type. These problems need to be tackled before the 3D data type can be fully supported by Digital Library technology in the sense of a generalized document, unlocking its full potential. If the problems are addressed appropriately, the expected benefits are manifold and may lead to radically improved production, processing, and consumption of 3D content.
BibTeX:
@inproceedings{Bustos*07,
  author = {B. Bustos and D. Fellner and S. Havemann and D.~A. Keim and D. Saupe and T. Schreck},
  title = {Foundations of 3D Digital Libraries: Current Approaches and Urgent Research Challenges},
  booktitle = {DELOS Network of Excellence on Digital Libraries; Pre-Proceedings of the First International Workshop on Digital Libraries Foundations},
  year = {2007},
  pages = {7-12}
}
Bustos, B., Keim, D., Saupe, D. & Schreck, T., (2007), "Content-Based 3D Object Retrieval", IEEE Computer Graphics and Applications, Special Issue on 3D Documents, Vol.27(4), pp.22-27.
Abstract: 3D objects are an important multimedia data type with many applications in domains such as Computer Aided Design, Simulation, Visualization, and Entertainment. Advancements in production, acquisition, and dissemination technology contribute to growing repositories of 3D objects. Consequently, there is a demand for advanced searching and indexing techniques to make effective and efficient use of such large repositories. Methods for automatically extracting descriptors from 3D objects are a key approach to this end. In this paper, we survey techniques for searching for similar content in databases of 3D objects. We address the basic concepts for extraction of 3D object descriptors which in turn can be used for searching and indexing. We sketch the wealth of different descriptors by two recently proposed schemes, and discuss methods for benchmarking the qualitative performance of 3D retrieval systems.
BibTeX:
@article{Bustos*07cga,
  author = {B. Bustos and D. Keim and D. Saupe and T. Schreck},
  title = {Content-Based 3D Object Retrieval},
  journal = {IEEE Computer Graphics and Applications, Special Issue on 3D Documents},
  year = {2007},
  volume = {27},
  number = {4},
  pages = {22--27},
  doi = {http://dx.doi.org/10.1109/MCG.2007.80}
}
Bustos, B., Keim, D., Saupe, D., Schreck, T. & Tatu, A., (2007), "Methods and User Interfaces for Effective Retrieval in 3D Databases (in German)", Dpunkt Datenbank Spektrum, Vol.20, pp.23-32.
BibTeX:
@article{dbs07,
  author = {B. Bustos and D. Keim and D. Saupe and T. Schreck and A. Tatu},
  title = {Methods and User Interfaces for Effective Retrieval in 3D Databases (in German)},
  journal = {Dpunkt Datenbank Spektrum},
  year = {2007},
  volume = {20},
  pages = {23--32}
}
Fellner, D.W., Saupe, D. & Krottmaier, H., (2007), "Guest Editors' Introduction: 3D Documents", IEEE CG&A, Vol.27(4), pp.20-21.
BibTeX:
@article{Fellner*07ieeecga,
  author = {D. W. Fellner and D. Saupe and H. Krottmaier},
  title = {Guest Editors' Introduction: 3D Documents},
  journal = {IEEE CG&A},
  year = {2007},
  volume = {27},
  number = {4},
  pages = {20-21},
  doi = {http://dx.doi.org/10.1109/MCG.2007.83}
}
Fellner, D.W., Saupe, D. & Krottmaier, H. (ed.) (2007), "IEEE CG&A", IEEE.
BibTeX:
@inbook{Fellner*07ieeecga2,
  editor = {D. W. Fellner and D. Saupe and H. Krottmaier},
  title = {IEEE CG&A},
  publisher = {IEEE},
  year = {2007},
  doi = {http://dx.doi.org/10.1109/MCG.2007.83}
}
Fünfzig, C., Ullrich, T., Fellner, D. & Bachelder, W.-D., (2007), "Empirical Comparison of Data Structures for Line-Of-Sight Computation", International Symposium on Intelligent Signal Processing, pp.291-296.
Abstract: Line-of-sight (LOS) computation is important for the interrogation of heightfield grids in the context of geo-information and many simulation tasks such as electromagnetic wave propagation and flight surveillance. Compared to searching the regular grid directly, more advanced data structures like a 2.5D kd-tree offer better performance. We describe the definition of a 2.5D kd-tree from the digital elevation model and its use for LOS computation on a point-reconstructed or bilinearly reconstructed terrain surface. For compact storage, we use a wavelet-like storage scheme which saves half of the storage space without considerably compromising the runtime performance. We give an empirical comparison of both approaches on practical data sets, which shows the method of choice for CPU computation of LOS.
BibTeX:
@inproceedings{Fuenfzig*07,
  author = {C. Fünfzig and T. Ullrich and D. Fellner and W.-D. Bachelder},
  title = {Empirical Comparison of Data Structures for Line-Of-Sight Computation},
  booktitle = {International Symposium on Intelligent Signal Processing},
  year = {2007},
  pages = {291--296}
}
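For contrast with the paper's 2.5D kd-tree, the baseline it is measured against -- sampling the sight line directly over the grid -- fits in a dozen lines (the nearest-neighbour height lookup and the step size are simplifications; the kd-tree prunes whole regions whose maximum height lies below the ray):

import numpy as np

def line_of_sight(height, a, b, step=0.25):
    """a, b: (row, col, elevation) endpoints; height: 2D elevation grid."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = int(np.hypot(b[0] - a[0], b[1] - a[1]) / step) + 2
    for t in np.linspace(0.0, 1.0, n):
        p = a + t * (b - a)                            # point on the sight line
        if height[int(round(p[0])), int(round(p[1]))] > p[2]:
            return False                               # terrain blocks the ray
    return True

dem = np.random.default_rng(4).uniform(0, 10, size=(100, 100))
print(line_of_sight(dem, (0, 0, 12.0), (99, 99, 12.0)))  # True: ray stays above terrain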
Hao, M., Keim, D., Dayal, U. & Schreck, T., (2007), "Multi-Resolution Techniques for Visual Exploration of Large Time-Series Data", Eurographics/IEEE-VGTC Symposium on Visualization, pp.27-34, Eurographics Association.
Abstract: Time series are a data type of utmost importance in many domains such as business management and service monitoring. We address the problem of visualizing large time-related data sets which are difficult to visualize effectively with standard techniques given the limitations of current display devices. We propose a framework for intelligent time- and data-dependent visual aggregation of data along multiple resolution levels. This idea leads to effective visualization support for long time-series data, providing both focus and context. The basic idea of the technique is that, either data-dependent or application-dependent, display space is allocated in proportion to the degree of interest of data subintervals, thereby (a) guiding the user in perceiving important information, and (b) freeing required display space to visualize all the data. The automatic part of the framework can accommodate any time series analysis algorithm yielding a numeric degree-of-interest scale. We apply our techniques to real-world data sets, compare them with the standard visualization approach, and demonstrate the usefulness and scalability of the approach.
BibTeX:
@inproceedings{Hao*07eurovis,
  author = {M. Hao and D. Keim and U. Dayal and T. Schreck},
  title = {Multi-Resolution Techniques for Visual Exploration of Large Time-Series Data},
  booktitle = {Eurographics/IEEE-VGTC Symposium on Visualization},
  publisher = {Eurographics Association},
  year = {2007},
  pages = {27--34}
}
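The allocation rule at the heart of the framework is simple to state: each subinterval receives display width in proportion to its degree-of-interest score. A numpy sketch with an invented DOI function (recency) standing in for any analysis algorithm that yields a numeric scale:

import numpy as np

doi = np.linspace(0.2, 1.0, 12)        # one score per month; newer = more interesting
widths = 800 * doi / doi.sum()         # shares of an 800-pixel-wide display
print(np.round(widths).astype(int))    # older months shrink, recent months get room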
Hao, M., Dayal, U., Keim, D. & Schreck, T., (2007), "A Visual Analysis of Multi-Attribute Data Using Pixel Matrix Displays", IS&T/SPIE Conference on Visualization and Data Analysis, pp.649504.1-649504.12, SPIE Press.
Abstract: Charts and tables are commonly used to visually analyze data. These graphics are simple and easy to understand, but charts show only highly aggregated data and present only a limited number of data values while tables often show too many data values. As a consequence, these graphics may either lose or obscure important information, so different techniques are required to monitor complex datasets. Users need more powerful visualization techniques to digest and compare detailed multi-attribute data to analyze the health of their business. This paper proposes an innovative solution based on the use of pixel-matrix displays to represent transaction-level information. With pixel matrices, users can visualize areas of importance at a glance, a capability not provided by common charting techniques. We present our solutions for using colored pixel matrices in (1) charts for visualizing data patterns and discovering exceptions, (2) tables for visualizing correlations and finding root causes, and (3) time series for visualizing the evolution of long-running transactions. The solutions have been applied with success to product sales, Internet network performance analysis, and service contract applications, demonstrating the benefits of our method over conventional graphics. The method is especially useful when detailed information is a key part of the analysis.
BibTeX:
@inproceedings{Hao*07vada,
  author = {M. Hao and U. Dayal and D. Keim and T. Schreck},
  title = {A Visual Analysis of Multi-Attribute Data Using Pixel Matrix Displays},
  booktitle = {IS&T/SPIE Conference on Visualization and Data Analysis},
  publisher = {SPIE Press},
  year = {2007},
  pages = {649504.1--649504.12},
  doi = {http://dx.doi.org/10.1117/12.706151}
}
Havemann, S., Hopp, A. & Fellner, D., Fröhlich, B., Blach, R. & v. Liere, R. (ed.) (2007), "A Single Chip DLP Projector for Stereoscopic Images of High Color Quality and Resolution", Proc. 13th EG Symposium on Virtual Environments,10th Immersive Projection Technology, pp.21-26, Eurographics.
Abstract: We present a novel stereoscopic projection system. It combines all the advantages of modern single-chip DLP technology -- attractive price, great brightness, high contrast, superior resolution and color quality -- with those of active stereoscopy: invariance to the orientation of the user and an image separation of nearly 100%. With a refresh rate of 60 Hz per eye (120 Hz in total) our system is flicker-free even for sensitive users. The system permits external projector synchronisation, which allows building affordable stereoscopic multi-projector systems, e.g. for immersive visualisation.
BibTeX:
@inproceedings{Havemann*07egve,
  author = {S. Havemann and A. Hopp and D. Fellner},
  editor = {B. Fröhlich and R. Blach and R. v.~Liere},
  title = {A Single Chip DLP Projector for Stereoscopic Images of High Color Quality and Resolution},
  booktitle = {Proc. 13th EG Symposium on Virtual Environments,10th Immersive Projection Technology},
  publisher = {Eurographics},
  year = {2007},
  pages = {21--26},
  series = {IPT-EGVE}
}
Havemann, S., Settgast, V., Lancelle, M. & Fellner, D.W., (2007), "3D-Powerpoint -- A Design Tool for Digital Exhibitions of Cultural Artifacts", Proc. VAST 2007 Intl. Symp., pp.39-46, Eurographics.
Abstract: We describe first steps towards a suite of tools for CH professionals to set up and run digital exhibitions of cultural 3D artifacts in museums. Both the authoring and the presentation views shall finally be as easy to use as, e.g., Microsoft Powerpoint. But instead of separate slides our tool uses pre-defined 3D scenes, called "layouts", containing geometric objects acting as placeholders, called "drop targets". These can be replaced quite easily, in a drag-and-drop fashion, by digitized 3D models, and also by text and images, to customize and adapt a digital exhibition to the style of the real museum. Furthermore, the tool set contains easy-to-use tools for the rapid 3D modeling of simple geometry and for the alignment of given models to a common coordinate system. The technical innovation is that the tool set is not a monolithic application. Instead it is completely based on scripted designs, using the OpenSG scene graph engine and the GML scripting language. This makes it extremely flexible: anybody capable of drag-and-drop can design 3D exhibitions, and anybody capable of GML scripting can create new designs. Finally, we claim that the presentation setup of our designs is 'grandparent-compliant', meaning that it permits the public audience the detailed inspection of beautiful cultural 3D objects without getting lost or feeling uncomfortable.
BibTeX:
@inproceedings{Havemann*07vast,
  author = {Havemann, Sven and Settgast, Volker and Lancelle, Marcel and Fellner, Dieter W.},
  title = {3D-Powerpoint -- A Design Tool for Digital Exhibitions of Cultural Artifacts},
  booktitle = {Proc. VAST 2007 Intl. Symp.},
  publisher = {Eurographics},
  year = {2007},
  pages = {39-46},
  doi = {http://dx.doi.org/10.2312/VAST/VAST07/039-046}
}
Havemann, S. & Fellner, D.W., (2007), "Seven Research Challenges of Generalized 3D Documents", IEEE Computer Graphics and Applications, Vol.27(3), pp.70-76.
Abstract: The rapid evolution of information and communication technology has always been a source for challenging new research questions in computer science. The vision of the emerging research field of semantic 3D is to establish the notion of generalized 3D documents that are full members of the family of generalized documents. This means that access would be content-based rather than based on metadata. The purpose of this article is to highlight the research issues that impede the realization of this vision today. The seven research challenges include: (1) '3D data set' can have many meanings, (2) a sustainable 3D file format, (3) representation-independent stable 3D markup, (4) representation-independent 3D query operations, (5) documenting provenance and processing history, (6) consistency between shape and meaning, and (7) closing the semantic gap.
BibTeX:
@article{Havemann-Fellner07ieeecga-SevResChal,
  author = {Havemann, Sven and Fellner, Dieter W.},
  title = {Seven Research Challenges of Generalized 3D Documents},
  journal = {IEEE Computer Graphics and Applications},
  year = {2007},
  volume = {27},
  number = {3},
  pages = {70-76},
  doi = {http://dx.doi.org/10.1109/MCG.2007.67}
}
Hopp, A., Fellner, D. & Havemann, S., (2007), "Cube 3D$^2$ -- Ein single Chip DLP stereo Projektor", IFF-Wissenschaftstage, pp.77-86, Fraunhofer-Institut für Fabrikbetrieb und -automatisierung.
Abstract: This article describes the successful development of a stereoscopy-capable digital projector built to requirements defined specifically for the VR/AR domain. Instead of assembling a VR/AR system from existing technologies, explicit requirements for such a system were formulated in order to minimize the drawbacks of known technologies. This led to the development of a completely new 3D projector.
BibTeX:
@inproceedings{Hopp*07,
  author = {A. Hopp and D. Fellner and S. Havemann},
  title = {Cube 3D$^2$ -- Ein single Chip DLP stereo Projektor},
  booktitle = {IFF-Wissenschaftstage},
  publisher = {Fraunhofer-Institut für Fabrikbetrieb und -automatisierung},
  year = {2007},
  pages = {77-86}
}
Krottmaier, H., Kurth, F., Steenweg, T., Appelrath, H.-J. & Fellner, D.W., (2007), "PROBADO -- A Generic Repository Integration Framework", Research and Advanced Technology for Digital Libraries, ECDL 2007, Vol.4675, pp.518-521, Springer.
Abstract: The number of newly generated multimedia documents (e.g. music, e-learning material, or 3D graphics) increases year by year. Today, the workflow in digital libraries focuses on textual documents only. Hence, considering content-based retrieval tasks, multimedia documents are not analyzed and indexed sufficiently. To facilitate content-based retrieval and browsing, it is necessary to introduce recent techniques for multimedia document processing into the workflow of today's digital libraries. In this short paper, we introduce the PROBADO framework, which will (a) integrate different types of content repositories -- each one specialized for a specific multimedia domain -- into one seamless system, and (b) add features available in text-based digital libraries (such as automatic annotation, full-text retrieval, or recommender services) to non-textual documents. Existing libraries will benefit from the framework since it extends existing technology for handling textual documents with features for dealing with the non-textual domain.
BibTeX:
@incollection{Krottmaier*07ecdl,
  author = {Krottmaier, H. and Kurth, F. and Steenweg, T. and Appelrath, H.-J. and Fellner, D. W.},
  title = {PROBADO -- A Generic Repository Integration Framework},
  booktitle = {Research and Advanced Technology for Digital Libraries, ECDL 2007},
  publisher = {Springer},
  year = {2007},
  volume = {4675},
  pages = {518-521},
  series = {Lecture Notes in Computer Science},
  doi = {http://dx.doi.org/10.1007/978-3-540-74851-9_57}
}
Krottmaier, H., Ball, R. (ed.) (2007), "Die Systemarchitektur von PROBADO: Der allgemeine Zugriff auf Repositorien mit nicht-textuellen Inhalte", Wissenschaftskommunikation der Zukunft, 4. Konferenz der Zentralbibliothek Forschungszentrum Jülich, pp.169-176.
Abstract: Central subject libraries and specialized information centers have so far been able to ensure the supply of text documents, above all journal articles, to their customers in research, education, and industry; classic library services such as document delivery, virtual subject libraries, and digital libraries are cases in point. The fulfillment of this mandate, however, is being transformed by new media formats and multimedia objects (e.g. music, architectural models, and e-learning material), which users increasingly request in practice and which therefore belong to a comprehensive information supply through a library portal that integrates the requirements of both text-based and audio-visual search.
BibTeX:
@inproceedings{Krottmaier07,
  author = {H. Krottmaier},
  editor = {Rafael Ball},
  title = {Die Systemarchitektur von PROBADO: Der allgemeine Zugriff auf Repositorien mit nicht-textuellen Inhalte},
  booktitle = {Wissenschaftskommunikation der Zukunft, 4. Konferenz der Zentralbibliothek Forschungszentrum Jülich},
  year = {2007},
  pages = {169--176}
}
Leeb, R., Settgast, V., Fellner, D. & Pfurtscheller, G., (2007), "Self-paced exploration of the Austrian National Library through thought", International Journal of Bioelectromagnetism, Vol.9(4), pp.237-244.
Abstract: We present the results of a self-paced Brain-Computer Interface (BCI) based on the detection of sensorimotor electroencephalogram rhythms during motor imagery. The participants were given the task of moving through a virtual model of the Austrian National Library by performing motor imagery. This work shows that five participants who were trained in a synchronous BCI could successfully perform the asynchronous experiment.
BibTeX:
@article{Leeb*07bci,
  author = {R. Leeb and V. Settgast and D. Fellner and G. Pfurtscheller},
  title = {Self-paced exploration of the Austrian National Library through thought},
  journal = {International Journal of Bioelectromagnetism},
  year = {2007},
  volume = {9},
  number = {4},
  pages = {237-244}
}
Sabin, M.A., Cashman, T.J., Augsdörfer, U.H. & Dodgson, N.A., (2007), "Bounded Curvature Subdivision Without Eigenanalysis", Mathematics of Surfaces XII, Vol.4647, pp.391-411, Springer.
Abstract: It has long been known how to achieve bounded curvature at extraordinary points of a subdivision scheme by using eigenanalysis and then adjusting the mask of each extraordinary point. This paper provides an alternative insight, based on the use of second divided differences, and applies it to three familiar schemes. A single concept is shown to work in three different contexts. In each case a bounded curvature variant results, with a very simple and elegant implementation.
BibTeX:
@inproceedings{Malcolm*07lncs,
  author = {Malcolm A. Sabin and Thomas J. Cashman and Ursula H. Augsdörfer and Neil A. Dodgson},
  title = {Bounded Curvature Subdivision Without Eigenanalysis},
  booktitle = {Mathematics of Surfaces XII},
  publisher = {Springer},
  year = {2007},
  volume = {4647},
  pages = {391-411},
  series = {Lecture Notes in Computer Science, LNCS},
  doi = {http://dx.doi.org/10.1007/978-3-540-73843-5_24}
}
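In the standard eigenanalysis notation (a reconstruction from the subdivision literature, not a quotation from the paper), the bounded-curvature condition the abstract refers to is
$\mu = \lambda^2$,
where $\lambda$ is the subdominant eigenvalue of the subdivision matrix at the extraordinary point and $\mu$ a subsubdominant eigenvalue. The paper's contribution is to reach bounded curvature via second divided differences rather than via this eigenanalysis.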
Rauch, C., Krottmaier, H. & Tochtermann, K., (2007), "File-Formats for Preservation: Evaluating the Long-Term Stability of File-Formats", Openness in Digital Publishing, pp.101-106.
Abstract: While some file formats become unreadable after short periods, others remain interpretable over the long term. Among the more than 1,000 file formats, some are better suited for long-term preservation than others. A standardized process for evaluating the stability of a file format is described in this paper, and its practical use is shown with file formats for 3D objects. Recommendations to users of 3D applications are given in the last section of this article. Some of the results are used in PROBADO, a sophisticated search engine for non-traditional objects (such as 3D documents, music, etc.).
BibTeX:
@inproceedings{Rauch*07elpub,
  author = {C. Rauch and H. Krottmaier and K. Tochtermann},
  title = {File-Formats for Preservation: Evaluating the Long-Term Stability of File-Formats},
  booktitle = {Openness in Digital Publishing},
  year = {2007},
  pages = {101--106}
}
Schreck, T., Tekušová, T., Kohlhammer, Jö. & Fellner, D., (2007), "Trajectory-based visual analysis of large financial time series data", ACM SIGKDD Explor. Newsl., Vol.9(2), pp.30-37, ACM.
BibTeX:
@article{Schreck*07sigkdd,
  author = {Tobias Schreck and Tatiana Tekušová and Jörn Kohlhammer and Dieter Fellner},
  title = {Trajectory-based visual analysis of large financial time series data},
  journal = {ACM SIGKDD Explor. Newsl.},
  publisher = {ACM},
  year = {2007},
  volume = {9},
  number = {2},
  pages = {30-37},
  doi = {http://dx.doi.org/10.1145/1345448.1345454}
}
Schreck, T. & Panse, C., (2007), "A New Metaphor for Projection-Based Visual Analysis and Data Exploration", IS&T/SPIE Conference on Visualization and Data Analysis, pp.64950L.1-64950L.12, SPIE Press.
Abstract: In many important application domains such as Business and Finance, Process Monitoring, and Security, huge and quickly increasing volumes of complex data are collected. Strong efforts are underway developing automatic and interactive analysis tools for mining useful information from these data repositories. Many data analysis algorithms require an appropriate definition of similarity (or distance) between data instances to allow meaningful clustering, classification, and retrieval, among other analysis tasks. Projection-based data visualization is highly interesting (a) for visual discrimination analysis of a data set within a given similarity definition, and (b) for comparative analysis of similarity characteristics of a given data set represented by different similarity definitions. We introduce an intuitive and effective novel approach for projection-based similarity visualization for interactive discrimination analysis, data exploration, and visual evaluation of metric space effectiveness. The approach is based on the convex hull metaphor for visually aggregating sets of points in projected space, and it can be used with a variety of different projection techniques. The effectiveness of the approach is demonstrated by application on two well-known data sets. Statistical evidence supporting the validity of the hull metaphor is presented. We advocate the hull-based approach over the standard symbol-based approach to projection visualization, as it allows a more effective perception of similarity relationships and class distribution characteristics.
BibTeX:
@inproceedings{Schreck-Panse07vda,
  author = {T. Schreck and C. Panse},
  title = {A New Metaphor for Projection-Based Visual Analysis and Data Exploration},
  booktitle = {IS&T/SPIE Conference on Visualization and Data Analysis},
  publisher = {SPIE Press},
  year = {2007},
  pages = {64950L.1--64950L.12},
  doi = {http://dx.doi.org/10.1117/12.697879}
}
Settgast, V., Ullrich, T. & Fellner, D.W., (2007), "Information Technology for Cultural Heritage", IEEE Potentials, Vol.26(4), pp.38-43.
Abstract: Information technology applications in the field of cultural heritage include various disciplines of computer science. The workflow from archaeological discovery to scientific preparation demands multidisciplinary cooperation and interaction at various levels. This article describes the information technology pipeline from the computer science point of view. The description starts with the model acquisition. Computer vision algorithms are able to generate a raw three-dimensional (3D) model using input data such as photos and scans. In the next step, computer graphics methods create an accurate, high-level model description. Besides geometric information, each model needs semantic metadata to perform digital library tasks such as storage, markup, indexing, and retrieval. A structured repository of virtual artifacts completes the pipeline -- at least from the computer science point of view.
BibTeX:
@article{Settgast*07ieeepot,
  author = {Settgast, Volker and Ullrich, Torsten and Fellner, Dieter W.},
  title = {Information Technology for Cultural Heritage},
  journal = {IEEE Potentials},
  year = {2007},
  volume = {26},
  number = {4},
  pages = {38-43},
  doi = {http://dx.doi.org/10.1109/MP.2007.4280332}
}
Ullrich, T., Fünfzig, C. & Fellner, D.W., (2007), "Two Different Views On Collision Detection", IEEE Potentials, Vol.26(1), pp.26-30.
Abstract: In this article, we present two algorithms for precise collision detection between two potentially colliding objects. The first one uses axis-aligned bounding boxes (AABB) and is a typical representative of computational geometry algorithms. The second one uses spherical distance fields, originating in image processing. Both approaches address typical challenges of collision detection algorithms: just-in-time results, low resource usage, inclusiveness, etc. Both approaches are scalable in the information they provide for collision determination and analysis up to a fixed refinement level; the collision time depends on the granularity of the bounding volumes, and it is also possible to tightly estimate the time bounds for the collision test.
BibTeX:
@article{Ullrich*07ieeepot,
  author = {Ullrich, Torsten and Fünfzig, Christoph and Fellner, Dieter W.},
  title = {Two Different Views On Collision Detection},
  journal = {IEEE Potentials},
  year = {2007},
  volume = {26},
  number = {1},
  pages = {26-30},
  doi = {http://dx.doi.org/10.1109/MP.2007.343037}
}
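The leaf-level operation of the AABB approach is a pure interval test: two axis-aligned boxes overlap iff their extents overlap on every axis. A self-contained predicate (the hierarchy traversal around it is omitted):

def aabb_overlap(min_a, max_a, min_b, max_b):
    # Boxes are given by their per-axis minima and maxima.
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b))

print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2)))  # True
print(aabb_overlap((0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3)))        # False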
Ullrich, T., Settgast, V., Krispel, U., Fünfzig, C. & Fellner, D.W., (2007), "Distance Calculation between a Point and a Subdivision Surface", VMV 2007 Conf. Proc., pp.161-169.
Abstract: This article focuses on algorithms for fast computation of the Euclidean distance between a query point and a subdivision surface. The analyzed algorithms include uniform tessellation approaches, an adaptive evaluation technique, and an algorithm using Bézier conversions. These methods are combined with a grid hashing structure for space partitioning to speed up their runtime. The results show that a pre-tessellated surface is sufficient for small models. Considering runtime, accuracy, and memory usage, an adaptive on-the-fly evaluation of the surface turns out to be the best choice.
BibTeX:
@inproceedings{Ullrich*07vmv,
  author = {Ullrich, Torsten and Settgast, Volker and Krispel, Ulrich and Fünfzig, Christoph and Fellner, Dieter W.},
  title = {Distance Calculation between a Point and a Subdivision Surface},
  booktitle = {VMV 2007 Conf. Proc.},
  year = {2007},
  pages = {161-169}
}
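The pre-tessellation strategy the abstract recommends for small models can be sketched directly: hash the surface samples into a uniform grid and answer a query by searching outward, ring of cells by ring of cells, until no closer sample can hide outside the searched window (cell size and sample counts below are illustrative):

import numpy as np
from collections import defaultdict

def build_grid(samples, cell=0.1):
    grid = defaultdict(list)
    for i, p in enumerate(samples):
        grid[tuple((p // cell).astype(int))].append(i)
    return grid

def min_distance(q, samples, grid, cell=0.1):
    cx, cy, cz = (np.asarray(q) // cell).astype(int)
    for ring in range(1, 64):                     # grow the search window
        idx = [i for x in range(cx - ring, cx + ring + 1)
                 for y in range(cy - ring, cy + ring + 1)
                 for z in range(cz - ring, cz + ring + 1)
                 for i in grid.get((x, y, z), [])]
        if idx:
            d = np.linalg.norm(samples[np.array(idx)] - q, axis=1).min()
            if d <= ring * cell:                  # nothing closer can lie outside
                return d
    return np.linalg.norm(samples - q, axis=1).min()  # brute-force fallback

samples = np.random.default_rng(5).uniform(size=(2000, 3))  # tessellation points
print(min_distance(np.array([0.5, 0.5, 0.5]), samples, build_grid(samples)))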
Ullrich, T. & Fellner, D.W., (2007), "Robust Shape Fitting and Semantic Enrichment", CIPA 2007 Conf. Proc., pp.727-732.
Abstract: A robust fitting and reconstruction algorithm has to cope with two major problems. First of all it has to be able to deal with noisy input data and outliers. Furthermore it should be capable of handling multiple data set mixtures. The decreasing exponential approach is robust towards outliers and multiple data set mixtures. It is able to fit a parametric model to a given point cloud. As parametric models use a description which may not only contain a generative shape but information about the inner structure of an object, the presented approach can enrich measured data with an ideal description. This technique offers a wide range of applications.
BibTeX:
@inproceedings{Ullrich-Fellner07cipa,
  author = {Ullrich, T. and Fellner, Dieter W.},
  title = {Robust Shape Fitting and Semantic Enrichment},
  booktitle = {CIPA 2007 Conf. Proc.},
  year = {2007},
  pages = {727-732}
}
Ullrich, T. & Fellner, D.W., (2007), "Client-side Scripting in Blended Learning Environment", ERCIM NEWS, Vol.71(71), pp.43-44.
Abstract: The computer graphics tutorial CGTutorial was developed by the Institute of Computer Graphics and Knowledge Visualization at Graz University of Technology in Austria. It combines a scripting engine and a development environment with Java-based Web technology. The result is a flexible framework which allows algorithms to be developed and studied without the need to install libraries or set up compiler configurations. Together with already written example scripts, the framework is ready to use. Each example script is a small runnable demonstration application that can be started directly within a browser. Using a scripting engine that interprets Java and JavaScript on a client, the demos can be modified and analysed by the user and then restarted. This combination of scripting engines and Web technology is thus a perfect environment for blended learning scenarios.
BibTeX:
@article{Ullrich-Fellner07ercim,
  author = {Ullrich, T. and Fellner, Dieter W.},
  title = {Client-side Scripting in Blended Learning Environment},
  journal = {ERCIM NEWS},
  year = {2007},
  volume = {71},
  number = {71},
  pages = {43-44}
}

2006

Augsdörfer, U.H., Dodgson, N.A. & Sabin, M.A., (2006), "Tuning subdivision by minimising Gaussian curvature variation near extraordinary vertices", Computer Graphics Forum, Vol.25(3), pp.263-272.
Abstract: We present a method for tuning primal stationary subdivision schemes to give the best possible behaviour near extraordinary vertices with respect to curvature variation. Current schemes lead to a limit surface around extraordinary vertices for which the Gaussian curvature diverges, as demonstrated by Karciauskas et al. [KPR04]. Even when coefficients are chosen such that the subsubdominant eigenvalues, $\mu$, equal the square of the subdominant eigenvalue, $\lambda$, of the subdivision matrix [DS78], there is still variation in the curvature of the subdivision surface around the extraordinary vertex, as shown in recent work by Peters and Reif [PR04] and illustrated by Karciauskas et al. [KPR04]. In our tuning method we optimise within the space of subdivision schemes with bounded curvature to minimise this variation in curvature around the extraordinary vertex. To demonstrate our method we present results for the Catmull-Clark [CC78], 4-8 [Vel01, VZ01] and 4-3 [PS03] subdivision schemes. We compare our results to previous work on the tuning of these schemes and show that the coefficients derived with this method give a significantly smaller curvature variation around extraordinary vertices.
BibTeX:
@article{Augsdoerfer*06cgf,
  author = {Ursula H. Augsdörfer and Neil A. Dodgson and Malcolm A. Sabin},
  title = {Tuning subdivision by minimising Gaussian curvature variation near extraordinary vertices},
  journal = {Computer Graphics Forum},
  year = {2006},
  volume = {25},
  number = {3},
  pages = {263-272},
  doi = {http://dx.doi.org/10.1111/j.1467-8659.2006.00945.x}
}
Bustos, B., Keim, D., Saupe, D., Schreck, T. & Vranić, D., (2006), "An Experimental Effectiveness Comparison of Methods for 3D Similarity Search", Springer International Journal on Digital Libraries, Special Issue on Multimedia Contents and Management, Vol.6(1), pp.39-54, Springer.
Abstract: Methods for content-based similarity search are fundamental for managing large multimedia repositories, as they make it possible to conduct queries for similar content, and to organize the repositories into classes of similar objects. 3D objects are an important type of multimedia data with many promising application possibilities. Defining the aspects that constitute the similarity among 3D objects, and designing algorithms that implement such similarity definitions is a difficult problem. Over the last few years, a strong interest in 3D similarity search has arisen, and a growing number of competing algorithms for the retrieval of 3D objects have been proposed. The contributions of this paper are to survey a body of recently proposed methods for 3D similarity search, to organize them along a descriptor extraction process model, and to present an extensive experimental effectiveness and efficiency evaluation of these methods, using several 3D databases.
BibTeX:
@article{Bustos*06ijdl,
  author = {B. Bustos and D. Keim and D. Saupe and T. Schreck and D. Vranić},
  title = {An Experimental Effectiveness Comparison of Methods for 3D Similarity Search},
  journal = {Springer International Journal on Digital Libraries, Special Issue on Multimedia Contents and Management},
  publisher = {Springer},
  year = {2006},
  volume = {6},
  number = {1},
  pages = {39--54},
  doi = {http://dx.doi.org/10.1007/s00799-005-0122-3}
}
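A representative descriptor of the kind the survey above compares is the D2 shape distribution (Osada et al.): a histogram of distances between random point pairs, which reduces an object to a fixed-length vector that can be indexed and compared. A point-cloud sketch with illustrative sample counts:

import numpy as np

def d2_descriptor(points, pairs=10000, bins=32, seed=0):
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), size=(pairs, 2))
    d = np.linalg.norm(points[i[:, 0]] - points[i[:, 1]], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0, 1))
    return hist / hist.sum()            # normalized distance histogram

rng = np.random.default_rng(6)
sphere = rng.normal(size=(1000, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)  # points on a sphere
cube = rng.uniform(-1, 1, size=(1000, 3))                # points in a cube
print(np.abs(d2_descriptor(sphere) - d2_descriptor(cube)).sum())  # L1 dissimilarity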
Fellner, D.W. & Havemann, S., Spiliopoulou, M., Kruse, R., Borgelt, C., Nürnberger, A. & Gaul, W. (ed.) (2006), "Striving for an adequate vocabulary: Next Generation 'Metadata'", From Data and Information Analysis to Knowledge Engineering, pp.13-20, Springer.
BibTeX:
@incollection{Fellner-Havemann05gfkl,
  author = {Fellner, Dieter W. and Havemann, Sven},
  editor = {M. Spiliopoulou and R. Kruse and C. Borgelt and A. Nürnberger and W. Gaul},
  title = {Striving for an adequate vocabulary: Next Generation 'Metadata'},
  booktitle = {From Data and Information Analysis to Knowledge Engineering},
  publisher = {Springer},
  year = {2006},
  pages = {13-20},
  series = {Studies in Classification, Data Analysis, and Knowledge Organization},
  doi = {http://dx.doi.org/10.1007/3-540-31314-1_2}
}
Fünfzig, C., Ullrich, T. & Fellner, D.W., (2006), "Hierarchical Spherical Distance Fields for Collision Detection", IEEE CG&A, Vol.26(1), pp.64-74.
Abstract: This article presents a fast collision detection technique for all types of rigid bodies, demonstrated using polygon soups. The new approach uses spherical distance fields, which are stored in a compact representation.
BibTeX:
@article{Fuenfzig*06ieeecga,
  author = {Fünfzig, Ch. and Ullrich, T. and Fellner, Dieter W.},
  title = {Hierarchical Spherical Distance Fields for Collision Detection},
  journal = {IEEE CG&A},
  year = {2006},
  volume = {26},
  number = {1},
  pages = {64-74},
  doi = {http://dx.doi.org/10.1109/MCG.2006.17}
}
Havemann, S., Settgast, V., Krottmaier, H. & Fellner, D.W., (2006), "On the Integration of 3D Models into Digital Cultural Heritage Libraries", Proc. VAST 2006 Intl. Symp., pp.161-169, Eurographics.
Abstract: This paper discusses the integration of 3D data into the traditional CH workflow, which is a complex issue with many different aspects. First, the notion '3D data' must be defined appropriately, since 3D may range from raw datasets of individual artifacts to complete virtual worlds including storytelling and animations. Second, a suitable 3D format must be identified among the various, and very different, possible options. Third, the chosen format needs to be supported by all tools and technologies used in the CH tool chain: all the way from field excavation through presentation in museum exhibitions, secondary exploitation, and database access to the sustainable long-term archival of digitized artifacts. An integrated solution to this complex problem will be possible only through the tight combination of two basic technologies: 3D scene graphs and XML.
BibTeX:
@inproceedings{Havemann*06vast,
  author = {Havemann, Sven and Settgast, Volker and Krottmaier, Harald and Fellner, Dieter W.},
  title = {On the Integration of 3D Models into Digital Cultural Heritage Libraries},
  booktitle = {Proc. VAST 2006 Intl. Symp.},
  publisher = {Eurographics},
  year = {2006},
  pages = {161--169}
}
Keim, D., Nietzschmann, T., Schelwies, N., Schneidewind, J., Schreck, T. & Ziegler, H., (2006), "A Spectral Visualization System for Analyzing Financial Time Series Data", Eurographics/IEEE-VGTC Symposium on Visualization, pp.195-202, Eurographics Association.
Abstract: Visual data analysis of time-related data sets has attracted much research interest recently, and a number of sophisticated visualization methods have been proposed in the past. In financial analysis, however, the most important and most common visualization technique for time series data is the traditional line or bar chart. Although these are intuitive and make it easy to spot the effect of key events on an asset's price and its return over a given period of time, price charts do not allow the easy perception of relative movements in terms of growth rates, which is the key feature of any price-related time series. This paper presents a novel Growth Matrix visualization technique for analyzing assets. It extends the ability of existing chart techniques by visualizing asset return rates not only over fixed time frames, but over the full spectrum of all subintervals present in a given time frame, in a single view. At the same time, the technique allows a comparison of subinterval return rates among groups of even a few hundred assets. This provides a powerful way of analyzing financial data, since it allows the identification of strong and weak periods of assets as compared to global market characteristics, and thus allows a more encompassing visual classification into "good" and "poor" performers than existing chart techniques. We illustrate the technique with real-world examples showing the abilities of the new approach and its high relevance for financial analysis tasks.
BibTeX:
@inproceedings{Keim*06eurovis,
  author = {D. Keim and T. Nietzschmann and N. Schelwies and J. Schneidewind and T. Schreck and H. Ziegler},
  title = {A Spectral Visualization System for Analyzing Financial Time Series Data},
  booktitle = {Eurographics/IEEE-VGTC Symposium on Visualization},
  publisher = {Eurographics Association},
  year = {2006},
  pages = {195--202},
  doi = {http://dx.doi.org/10.2312/VisSym/EuroVis06/195-202}
}
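The quantity at the heart of the technique is easy to state: the return rate over every subinterval of a price series. A minimal Python sketch (hypothetical function name, not the paper's implementation) that computes this matrix:

import numpy as np

def growth_matrix(prices):
    # Return rates over *all* subintervals of a price series:
    # G[i, j] = prices[j] / prices[i] - 1 for buy day i < sell day j.
    # Entries with j <= i are left as NaN.  This is the quantity the
    # Growth Matrix technique maps to a triangular color-coded view.
    p = np.asarray(prices, dtype=float)
    G = p[None, :] / p[:, None] - 1.0   # G[i, j] = p[j] / p[i] - 1
    G[np.tril_indices(len(p))] = np.nan # keep only buy-before-sell pairs
    return G

# toy example: five daily closing prices
print(growth_matrix([100, 102, 99, 105, 110]))
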
Keim, D., Mansmann, F., Schneidewind, J. & Schreck, T., (2006), "Monitoring Network Traffic with Radial Traffic Analyzer", IEEE Symposium on Visual Analytics Science and Technology, pp.123-128, IEEE Computer Society.
Abstract: The extensive spread of malicious code on the Internet, and also within intranets, has raised users' concern about what kind of data is transferred between their computers and other hosts on the network. Visual analysis of this kind of information is a challenging task, due to the complexity and volume of the data type considered, and requires special design of appropriate visualization techniques. In this paper, we present a scalable visualization toolkit for analyzing the network activity of computer hosts on a network. The visualization combines network packet volume and type distribution information with geographic information, enabling the analyst to use geographic distortion techniques such as the HistoMap technique to become aware of the traffic components in the course of the analysis. The presented analysis tool is especially useful for comparing important network load characteristics in a geographically aware display, for relating communication partners, and for identifying the type of network traffic occurring. The results of the analysis are helpful in understanding typical network communication activities and in anticipating potential performance bottlenecks or problems. It is suited both for off-line analysis of historic data and, via animation, for on-line monitoring of packet-based network traffic in real time.
BibTeX:
@inproceedings{Keim*06vast,
  author = {D. Keim and F. Mansmann and J. Schneidewind and T. Schreck},
  title = {Monitoring Network Traffic with Radial Traffic Analyzer},
  booktitle = {IEEE Symposium on Visual Analytics Science and Technology},
  publisher = {IEEE Computer Society},
  year = {2006},
  pages = {123--128},
  doi = {http://dx.doi.org/10.1109/VAST.2006.261438}
}
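As a rough illustration of the radial layout step, the following Python sketch (hypothetical names and packet format, not the tool's actual code) aggregates traffic volume per packet attribute and assigns each group an angular extent proportional to its share:

from collections import defaultdict

def radial_segments(packets, key):
    # Aggregate traffic volume by one packet attribute and assign each
    # group an angular extent proportional to its byte share -- the
    # basic layout step behind a radial traffic display.  'packets' is
    # assumed to be a list of dicts with 'bytes' plus the chosen key.
    volume = defaultdict(int)
    for p in packets:
        volume[p[key]] += p["bytes"]
    total = sum(volume.values())
    segments, angle = [], 0.0
    for group, v in sorted(volume.items(), key=lambda kv: -kv[1]):
        extent = 360.0 * v / total
        segments.append((group, angle, angle + extent))
        angle += extent
    return segments

packets = [
    {"dst_port": 80,  "bytes": 70_000},
    {"dst_port": 443, "bytes": 20_000},
    {"dst_port": 22,  "bytes": 10_000},
]
for port, start, end in radial_segments(packets, "dst_port"):
    print(f"port {port}: {start:.1f} to {end:.1f} degrees")
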
Lancelle, M., Offen, L., Ullrich, T., Techmann, T. & Fellner, D.W., (2006), "Minimally Invasive Projector Calibration for 3D Applications", Proc. GI Workshop Virtuelle und Erweiterte Realität, pp.193-201.
Abstract: Addressing the typically time-consuming adjustment of projector equipment in VR installations, we propose an easy-to-implement projector calibration method that effectively corrects images projected onto planar surfaces and does not require any additional hardware. For hardware-accelerated 3D applications, only the projection matrix has to be modified slightly; thus there is no performance impact, and existing applications can be adapted easily.
BibTeX:
@inproceedings{Lancelle*06giarvr,
  author = {Lancelle, Marcel and Offen, Lars and Ullrich, Torsten and Techmann, Torsten and Fellner, Dieter W.},
  title = {Minimally Invasive Projector Calibration for 3D Applications},
  booktitle = {Proc. GI Workshop Virtuelle und Erweiterte Realität},
  year = {2006},
  pages = {193--201}
}
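One common way to realize such a correction, sketched below in Python (an assumption-laden illustration, not necessarily the paper's exact formulation), is to estimate a 2D homography from four observed corner positions and pre-multiply it, lifted to 4x4, onto the projection matrix:

import numpy as np

def homography(src, dst):
    # Direct linear transform: 3x3 homography mapping four (x, y)
    # source points to four (u, v) destination points.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def corrected_projection(P, observed, desired):
    # Pre-warp a 4x4 projection matrix so that an image projected
    # obliquely onto a planar screen appears rectified.  'observed'
    # and 'desired' are corner positions in normalized device
    # coordinates; z is left untouched in this sketch, so depth is
    # only approximately preserved.
    H = homography(observed, desired)
    H4 = np.eye(4)
    H4[:2, :2] = H[:2, :2]
    H4[:2, 3] = H[:2, 2]
    H4[3, :2] = H[2, :2]
    H4[3, 3] = H[2, 2]
    return H4 @ P      # applied once; per-frame rendering cost is unchanged

An application would replace its projection matrix P by corrected_projection(P, observed, desired) once, after measuring the four projected corners.
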
Müller, K., Reusche, L. & Fellner, D.W., (2006), "Extended Subdivision Surfaces: Building a bridge between NURBS and Catmull-Clark Surfaces", ACM Transactions on Graphics, Vol.25(2), pp.268-292.
BibTeX:
@article{Mueller*05tog,
  author = {Kerstin Müller and Lars Reusche and Dieter W. Fellner},
  title = {Extended Subdivision Surfaces: Building a bridge between NURBS and Catmull-Clark Surfaces},
  journal = {ACM Transactions on Graphics},
  year = {2006},
  volume = {25},
  number = {2},
  pages = {268-292},
  doi = {http://dx.doi.org/10.1145/1138450.1138455}
}
Posch, K.-C. & Fellner, D., (2006), "Schwerpunktbildung rechnet sich: Informatik an der TU Graz führt österreichischen IT-Wettbewerb an", Forschungsjournal der Technischen Universität Graz, Vol.SS06, pp.17.
BibTeX:
@article{Posch-Fellner06,
  author = {K.-C. Posch and D. Fellner},
  title = {Schwerpunktbildung rechnet sich: Informatik an der TU Graz führt österreichischen IT-Wettbewerb an},
  journal = {Forschungsjournal der Technischen Universität Graz},
  year = {2006},
  volume = {SS06},
  pages = {17}
}
Schreck, T., Keim, D. & Panse, C., (2006), "Visual Feature Space Analysis for Unsupervised Effectiveness Estimation and Feature Engineering", IEEE International Conference on Multimedia and Expo, pp.925-928, IEEE.
Abstract: The feature vector approach is one of the most popular schemes for managing multimedia data. For many data types such as audio, images, or 3D models, an abundance of different feature vector extractors are available. The automatic (unsupervised) identification of the best suited feature extractor for a given multimedia database is a difficult and largely unsolved problem. We here address the problem of comparative unsupervised feature space analysis. We propose two interactive approaches for the visual analysis of certain feature space characteristics contributing to estimated discrimination power provided in the respective feature spaces. We apply the approaches on a database of 3D objects represented in different feature spaces, and we experimentally show the methods to be useful (a) for unsupervised comparative estimation of discrimination power and (b) for visually analyzing important properties of the components (dimensions) of the respective feature spaces. The results of the analysis are useful for feature selection and engineering.
BibTeX:
@inproceedings{Schreck*06icme,
  author = {T. Schreck and D. Keim and C. Panse},
  title = {Visual Feature Space Analysis for Unsupervised Effectiveness Estimation and Feature Engineering},
  booktitle = {IEEE International Conference on Multimedia and Expo},
  publisher = {IEEE},
  year = {2006},
  pages = {925--928},
  doi = {http://dx.doi.org/10.1109/ICME.2006.262671}
}
Schreck, T., Keim, D. & Mansmann, F., (2006), "Regular Treemap Layouts for Visual Analysis of Hierarchical Data", Spring Conference on Computer Graphics, pp.184-191, Comenius University, Bratislava.
Abstract: Hierarchical relationships play a vitally important role in many application domains. The appropriate visualization of hierarchically structured data sets can support the data analyst in effectively analyzing hierarchic structures, using visualization as a user-friendly means to communicate information. Information Visualization has contributed a number of useful techniques for visualizing hierarchically structured data sets. Yet support for the regularity requirements arising from many data element types still has to be improved. In this paper, we analyze an existing variant of the popular TreeMap family of hierarchical layout algorithms, and we introduce a novel TreeMap algorithm supporting space-efficient layout of hierarchical data sets that provides globally regular layouts. We detail our algorithm, and we present applications on a real-world data set as well as experiments performed on a synthetic data set, showing its applicability and usefulness.
BibTeX:
@inproceedings{Schreck*06sccg,
  author = {T. Schreck and D. Keim and F. Mansmann},
  title = {Regular Treemap Layouts for Visual Analysis of Hierarchical Data},
  booktitle = {Spring Conference on Computer Graphics},
  publisher = {Comenius University, Bratislava},
  year = {2006},
  pages = {184--191},
  doi = {http://dx.doi.org/10.1145/2602161.2602183}
}
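For reference, the classic slice-and-dice treemap against which such regular layouts are measured can be stated in a few lines of Python (hypothetical names and a toy tree, not the paper's algorithm):

def weight_of(node):
    # Total weight of a subtree; a node is a (name, weight, children) triple.
    name, weight, children = node
    return weight + sum(weight_of(c) for c in children)

def slice_and_dice(node, x, y, w, h, depth=0, out=None):
    # Classic slice-and-dice treemap layout: split the rectangle among
    # children proportionally to their subtree weight, alternating the
    # split axis per level.
    if out is None:
        out = []
    name, weight, children = node
    out.append((name, x, y, w, h))
    if children:
        total = sum(weight_of(c) for c in children)
        offset = 0.0
        for c in children:
            frac = weight_of(c) / total
            if depth % 2 == 0:   # split horizontally
                slice_and_dice(c, x + offset * w, y, frac * w, h, depth + 1, out)
            else:                # split vertically
                slice_and_dice(c, x, y + offset * h, w, frac * h, depth + 1, out)
            offset += frac
    return out

tree = ("root", 0, [("a", 4, []), ("b", 2, []), ("c", 0, [("c1", 1, []), ("c2", 1, [])])])
for name, x, y, w, h in slice_and_dice(tree, 0, 0, 1, 1):
    print(f"{name}: x={x:.2f} y={y:.2f} w={w:.2f} h={h:.2f}")
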

2005

Berndt, R., Fellner, D.W. & Havemann, S., (2005), "Generative 3D Models: A Key to More Information within Less Bandwidth at Higher Quality", Proc. Web3D 2005 Intl. Symp., pp.111-122, ACM Siggraph.
Abstract: This paper proposes a novel, yet extremely compact shape representation method. Its main feature is that 3D shapes are represented in terms of functions instead of geometric primitives. Given a set of -- typically only a few -- specific parameters, the evaluation of such a function results in a model that is one instance of a general shape. Particularly important for the web context, with client systems of widely varying rendering performance, is the support of a semantic level-of-detail superior to any low-level polygon reduction scheme. The shape description language has the power of a full programming language, but an extremely simple syntax. It serves as a 'mesh creation/manipulation language' and is designed to facilitate the composition of more complex modeling operations out of simpler ones. Thus, it allows one to create high-level operators which evaluate to arbitrarily complex, parameterized shapes. The underlying low-level shape representation is a boundary representation mesh in combination with Catmull/Clark subdivision surfaces.
BibTeX:
@inproceedings{Berndt*05web3d,
  author = {Berndt, Rene and Fellner, Dieter W. and Havemann, Sven},
  title = {Generative 3D Models: A Key to More Information within Less Bandwidth at Higher Quality},
  booktitle = {Proc. Web3D 2005 Intl. Symp.},
  publisher = {ACM Siggraph},
  year = {2005},
  pages = {111-122},
  doi = {http://dx.doi.org/10.1145/1050491.1050508}
}
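The core idea, a shape as a function of a few semantic parameters, can be illustrated by a small Python sketch (a hypothetical toy model, far simpler than the paper's GML-based representation) in which the level of detail is just another parameter of the function:

import math

def vase(height=2.0, radius=0.5, bulge=0.3, segments=16, rings=8):
    # A toy generative model: the shape is a function of a few semantic
    # parameters, and each evaluation yields one concrete mesh instance.
    # 'segments'/'rings' act as a semantic level of detail -- clients
    # with weak GPUs simply evaluate with smaller values.
    verts, faces = [], []
    for i in range(rings + 1):
        t = i / rings
        r = radius + bulge * math.sin(math.pi * t)   # bulging profile curve
        for j in range(segments):
            a = 2 * math.pi * j / segments
            verts.append((r * math.cos(a), r * math.sin(a), height * t))
    for i in range(rings):
        for j in range(segments):
            a = i * segments + j
            b = i * segments + (j + 1) % segments
            faces.append((a, b, b + segments, a + segments))
    return verts, faces

v, f = vase(segments=8)            # coarse instance for a slow client
print(len(v), "vertices,", len(f), "quads")
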
Bustos, B., Keim, D., Saupe, D., Schreck, T. & Vranić, D., (2005), "Feature-based Similarity Search in 3D Object Databases", ACM Computing Surveys, Vol.37, pp.345-387, ACM.
Abstract: The development of effective content-based multimedia search systems is an important research issue due to the growing amount of digital audio-visual information. In the case of images and video, the growth of digital data has been observed since the introduction of 2D capture devices. A similar development is expected for 3D data as acquisition and dissemination technology of 3D models is constantly improving. 3D objects are becoming an important type of multimedia data with many promising application possibilities. Defining the aspects that constitute the similarity among 3D objects and designing algorithms that implement such similarity definitions is a difficult problem. Over the last few years, a strong interest in methods for 3D similarity search has arisen, and a growing number of competing algorithms for content-based retrieval of 3D objects have been proposed. We survey feature-based methods for 3D retrieval, and we propose a taxonomy for these methods. We also present experimental results, comparing the effectiveness of some of the surveyed methods.
BibTeX:
@article{Bustos*05csur,
  author = {B. Bustos and D. Keim and D. Saupe and T. Schreck and D. Vranić},
  title = {Feature-based Similarity Search in 3D Object Databases},
  journal = {ACM Computing Surveys},
  publisher = {ACM},
  year = {2005},
  volume = {37},
  pages = {345--387},
  doi = {http://dx.doi.org/10.1145/1118890.1118893}
}
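As a concrete example of the feature-vector approach surveyed here, the following Python sketch implements the classic D2 shape distribution (Osada et al.), one of the simplest global descriptors of this kind, together with a brute-force nearest-neighbor ranking (illustrative only; the names and toy data are made up):

import numpy as np

def d2_descriptor(points, bins=32, samples=10_000, rng=None):
    # D2 shape distribution: histogram of distances between random
    # surface-point pairs.  'points' is an (n, 3) array of points
    # sampled from the object's surface.
    rng = rng or np.random.default_rng(0)
    i = rng.integers(0, len(points), samples)
    j = rng.integers(0, len(points), samples)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0, 1))
    return hist / hist.sum()                     # normalized feature vector

def nearest(query_vec, database):
    # Rank database objects by L1 distance in feature space.
    return sorted(database, key=lambda item: np.abs(item[1] - query_vec).sum())

# toy database: two random point clouds standing in for 3D models
rng = np.random.default_rng(1)
db = [("sphere-ish", d2_descriptor(rng.normal(size=(500, 3)))),
      ("box-ish", d2_descriptor(rng.uniform(-1, 1, size=(500, 3))))]
q = d2_descriptor(rng.normal(size=(500, 3)))
print([name for name, _ in nearest(q, db)])
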
Bustos, B., Keim, D. & Schreck, T., (2005), "A pivot-based index structure for combination of feature vectors", ACM Symposium on Applied Computing, Multimedia and Visualization Track, pp.1180-1184, ACM Press.
Abstract: We present a novel indexing schema that provides efficient nearest-neighbor queries in multimedia databases consisting of objects described by multiple feature vectors. The benefits of the simultaneous usage of several (statically or dynamically) weighted feature vectors with respect to retrieval effectiveness have been previously demonstrated. Support for efficient multi-feature vector similarity queries is an open problem, as existing indexing methods do not support dynamically parameterized distance functions. We present a solution for this problem relying on a combination of several pivot-based metric indices. We define the index structure, present algorithms for performing nearest-neighbor queries on these structures, and demonstrate the feasibility by experiments conducted on two real-world image databases. The experimental results show a significant performance improvement over existing access methods.
BibTeX:
@inproceedings{Bustos*05sac,
  author = {B. Bustos and D. Keim and T. Schreck},
  title = {A pivot-based index structure for combination of feature vectors},
  booktitle = {ACM Symposium on Applied Computing, Multimedia and Visualization Track},
  publisher = {ACM Press},
  year = {2005},
  pages = {1180--1184},
  doi = {http://dx.doi.org/10.1145/1066677.1066945}
}
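The filtering principle can be sketched compactly: per feature space, precomputed pivot distances give a triangle-inequality lower bound, and weighted sums of these bounds remain valid lower bounds for any query-time weights. A Python illustration (hypothetical class and function names, not the paper's index structure):

import numpy as np

class PivotIndex:
    # Pivot-based filter for *one* metric feature space: precompute
    # distances from a few pivots to every database object, then use
    # |d(q, p) - d(p, x)| <= d(q, x) to discard objects without
    # computing d(q, x).
    def __init__(self, vectors, n_pivots=4):
        self.vectors = np.asarray(vectors, dtype=float)   # (n, dim)
        self.pivots = self.vectors[:n_pivots]             # naive pivot choice
        # table[k, i] = d(pivot_k, object_i), computed once
        self.table = np.linalg.norm(
            self.pivots[:, None, :] - self.vectors[None, :, :], axis=2)

    def lower_bounds(self, query):
        dq = np.linalg.norm(self.pivots - query, axis=1)  # d(q, pivot_k)
        return np.abs(dq[:, None] - self.table).max(axis=0)

def combined_nn(query_per_feature, indices, weights):
    # NN under a *dynamically weighted* sum of per-feature distances:
    # the weighted sum of per-feature lower bounds still lower-bounds
    # the combined distance, for any weights chosen at query time.
    lb = sum(w * idx.lower_bounds(q)
             for w, idx, q in zip(weights, indices, query_per_feature))
    best, best_d = None, np.inf
    for i in np.argsort(lb):                 # most promising first
        if lb[i] >= best_d:                  # everything after is pruned
            break
        d = sum(w * np.linalg.norm(idx.vectors[i] - q)
                for w, idx, q in zip(weights, indices, query_per_feature))
        if d < best_d:
            best, best_d = i, d
    return best, best_d

rng = np.random.default_rng(0)
indices = [PivotIndex(rng.normal(size=(100, 8))), PivotIndex(rng.normal(size=(100, 4)))]
print(combined_nn([rng.normal(size=8), rng.normal(size=4)], indices, weights=[0.7, 0.3]))
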
Dayal, U., Hao, M., Keim, D. & Schreck, T., (2005), "Importance Driven Visualization Layouts for Large Time-Series Data", IEEE Symposium on Information Visualization, pp.27-34, IEEE Computer Society.
Abstract: Time series are an important type of data with applications in virtually every aspect of the real world. Often a large number of time series have to be monitored and analyzed in parallel. Sets of time series may show intrinsic hierarchical relationships and varying degrees of importance among the individual time series. Effective techniques for visually analyzing large sets of time series should encode the relative importance and hierarchical ordering of the time series data by size and position, and should also provide a high degree of regularity in order to support comparability by the analyst. In this paper, we present a framework for visualizing large sets of time series. Based on the notion of inter-time-series importance relationships, we define a set of objective functions that space-filling layout schemes for time series data should obey. We develop an efficient algorithm addressing the identified problems by generating layouts that reflect hierarchy- and importance-based relationships in a regular layout with favorable aspect ratios. We apply our technique to a number of real-world data sets including sales and stock data, and we compare our technique with an aspect-ratio-aware variant of the well-known TreeMap algorithm. The examples show the advantages and practical usefulness of our layout algorithm.
BibTeX:
@inproceedings{Dayal*05infovis,
  author = {U. Dayal and M. Hao and D. Keim and T. Schreck},
  title = {Importance Driven Visualization Layouts for Large Time-Series Data},
  booktitle = {IEEE Symposium on Information Visualization},
  publisher = {IEEE Computer Society},
  year = {2005},
  pages = {27--34},
  doi = {http://dx.doi.org/10.1109/INFVIS.2005.1532148}
}
Gerth, B., Berndt, R., Havemann, S. & Fellner, D.W., (2005), "3D Modeling for Non-Expert Users with the Castle Construction Kit v0.5", Proc. VAST 2005 Intl. Symp., pp.49-57, Eurographics.
Abstract: We present first results of a system for the ergonomic and economic production of three-dimensional interactive illustrations by non-expert users such as average CH professionals. For this purpose we enter the realm of domain-dependent interactive modeling tools, in this case exemplified by the domain of medieval castles. Special emphasis is laid on creating generic modeling tools that increase usability through a unified 3D user interface, as well as on the efficiency of tool generation. On the technical level, our system innovates by combining two powerful but previously separate approaches: the Generative Modeling Language (GML) and the OpenSG scene graph engine.
BibTeX:
@inproceedings{Gerth*05vast,
  author = {Gerth, Björn and Berndt, René and Havemann, Sven and Fellner, Dieter W.},
  title = {3D Modeling for Non-Expert Users with the Castle Construction Kit v0.5},
  booktitle = {Proc. VAST 2005 Intl. Symp.},
  publisher = {Eurographics},
  year = {2005},
  pages = {49-57},
  doi = {http://dx.doi.org/10.2312/VAST/VAST05/049-057}
}
Halm, A., Offen, L. & Fellner, D., (2005), "BioBrowser: A Framework for Fast Protein Visualization", Proc. EUROGRAPHICS -- IEEE VGTC Symposium on Visualization, pp.287-294, Eurographics.
Abstract: This paper presents a protein visualization system called BioBrowser, which provides high-quality images at interactive frame rates for molecules of extreme size and complexity. This is achieved by a shift in the tessellation approach: triangle meshes are not produced a priori on a 'just-in-case' basis. Instead, tessellation happens 'just-in-time' given a certain camera position, image size, and interaction demand. Thus, our approach is based on multiresolution meshes and on new extensions of graphics hardware. The paper shows how to reduce geometric data by using subdivision surfaces for ribbon structures and molecular surfaces, and by using billboards instead of spheres consisting of triangles. It also shows how to use fragment shaders to create a three-dimensional appearance and realistic sphere intersections. The combination of these approaches leads to an image quality not yet seen in interactive visualization environments for molecules of that size and complexity. All the above methods are combined to obtain a high-performance, configurable visualization system on standard hardware.
BibTeX:
@inproceedings{Halm*05eurovis,
  author = {Halm, A. and Offen, L. and Fellner, D.},
  title = {BioBrowser: A Framework for Fast Protein Visualization},
  booktitle = {Proc. EUROGRAPHICS -- IEEE VGTC Symposium on Visualization},
  publisher = {Eurographics},
  year = {2005},
  pages = {287-294},
  doi = {http://dx.doi.org/10.2312/VisSym/EuroVis05/287-294}
}
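The billboard idea replaces tessellated spheres with flat quads whose pixels compute the true sphere surface themselves. The per-pixel computation such a fragment shader would perform is shown below as plain Python (an illustration of the standard ray/sphere impostor test, not the system's shader code):

import math

def sphere_impostor_hit(ray_origin, ray_dir, center, radius):
    # Per-pixel ray/sphere intersection as a fragment shader for sphere
    # 'billboards' would do it: instead of tessellating each atom into
    # triangles, a flat quad is drawn and every covered pixel solves a
    # quadratic to find the true surface point (or discards itself).
    # ray_dir is assumed to be normalized.
    ox = [o - c for o, c in zip(ray_origin, center)]
    b = sum(d * o for d, o in zip(ray_dir, ox))
    c = sum(o * o for o in ox) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                       # pixel misses the sphere: discard
    t = -b - math.sqrt(disc)              # nearest intersection
    return [o + t * d for o, d in zip(ray_origin, ray_dir)]  # for depth & shading

# one pixel's ray hitting a unit sphere at the origin
print(sphere_impostor_hit([0, 0, 5], [0, 0, -1], [0, 0, 0], 1.0))
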
Havemann, S. & Fellner, D.W., Tochtermann, K. & Maurer, H. (ed.) (2005), "Managing Procedural Knowledge", Proc. 5th International Conference on Knowledge Management (I-KNOW'05), pp.248-255, Springer.
Abstract: Procedural knowledge is one of the most valuable assets of individuals as well as of academic institutions and commercial companies. The ability to satisfy an order relies on knowing how similar tasks have been performed in the past. Thus the preservation of this knowledge is critical. Procedural knowledge takes many different forms, which makes it very hard to reason about. We propose a method to reduce it to its very essence. This method is very simple, and as such it is not new. But we argue that it is worthwhile to take a fresh look at an existing technology from a new point of view, because it may solve the problem of knowledge preservation that has become apparent in this form only recently. Although the technique has been known for a long time, it appears that its potential for the management of procedural knowledge has not been realized so far. It is also a very elegant method: we can show that it serves as a theoretical device to better understand the nature of processes, and it can also be directly operationalized to derive a new generation of user-friendly tools that support the preservation of procedural knowledge.
BibTeX:
@inproceedings{Havemann-Fellner05iknow,
  author = {Havemann, Sven and Fellner, Dieter W.},
  editor = {K. Tochtermann and H. Maurer},
  title = {Managing Procedural Knowledge},
  booktitle = {Proc. 5th International Conference on Knowledge Management (I-KNOW'05)},
  publisher = {Springer},
  year = {2005},
  pages = {248-255}
}
Havemann, S., (2005), "Generative Mesh Modeling", PhD thesis, Institute of Computer Graphics, Braunschweig Technical University.
BibTeX:
@phdthesis{Havemann05:PhD,
  author = {Sven Havemann},
  title = {Generative Mesh Modeling},
  school = {Institute of Computer Graphics, Faculty of Computer Science, Braunschweig Technical University, Germany},
  year = {2005},
  note = {available from http://diglib.eg.org/EG/DL/dissonline/doc/havemann.pdf}
}
Kim, H., Albuquerque, G., Havemann, S. & Fellner, D.W., (2005), "Tangible 3D: Hand Gesture Interaction for Immersive 3D Modeling", Proc. Virtual Environments 2005, pp.191-199, Eurographics.
Abstract: Most interaction tasks relevant to a general three-dimensional virtual environment can be supported by 6DOF control and grab/select input. An obviously very efficient method is direct manipulation with bare hands, as in the real environment. This paper shows that it is possible to perform non-trivial tasks using only a few well-known hand gestures, so that almost no training is necessary to interact with 3D software. Using this gesture interaction we have built an immersive 3D modeling system whose 3D model representation is based on a mesh library that is optimized not only for real-time rendering but also accommodates changes of both vertex positions and mesh connectivity in real time. For the gesture interaction, the user's hand is marked with just four fingertip thimbles made of material as inexpensive as plain white paper. Within our scenario, the recognized hand gestures are used to select, create, manipulate, and deform the meshes in a spontaneous and intuitive way. All modeling tasks are performed wirelessly through camera/vision tracking of the head and hand interaction.
BibTeX:
@inproceedings{Kim*05egve,
  author = {Kim, Hyosun and Albuquerque, Georgia and Havemann, Sven and Fellner, Dieter W.},
  title = {Tangible 3D: Hand Gesture Interaction for Immersive 3D Modeling},
  booktitle = {Proc. Virtual Environments 2005},
  publisher = {Eurographics},
  year = {2005},
  pages = {191-199},
  doi = {http://dx.doi.org/10.2312/EGVE/IPT_EGVE2005/191-199}
}
Sabin, M.A., Augsdörfer, U.H. & Dodgson, N.A., (2005), "Artifacts in Box-Spline Surfaces", Mathematics of Surfaces XI, Vol.3604, pp.350-363, Springer.
Abstract: Certain problems in subdivision surfaces have provided the incentive to look at artifacts. Some of these effects are common to all box-spline surfaces, including the tensor-product B-splines widely used in the form of NURBS, and these are worthy of study. Although we use the subdivision form of box- and B-splines as the mechanism for this study, and also apply the same mechanism to subdivision schemes which are not box-splines, we are looking at problems which are not specific to subdivision surfaces, but which afflict all box- and B-splines.
BibTeX:
@inproceedings{Malcolm*05lncs,
  author = {Malcolm A. Sabin and Ursula H. Augsdörfer and Neil A. Dodgson},
  title = {Artifacts in Box-Spline Surfaces},
  booktitle = {Mathematics of Surfaces XI},
  publisher = {Springer},
  year = {2005},
  volume = {3604},
  pages = {350-363},
  series = {Lecture Notes in Computer Science, LNCS},
  doi = {http://dx.doi.org/10.1007/11537908_21}
}
Ullrich, T. & Fellner, D.W., (2005), "Computer Graphics Courseware", Eurographics 2005 -- Education Papers, pp.11-17, Eurographics Association.
Abstract: Many courseware tools suffer from the almost mutually exclusive goals of ease of use on the one hand and extensibility and flexibility on the other. In most cases the tools are either ready-to-use applications (e.g. a virtual lab) or complex tool sets which need a long period of domain-specific adjustment. This paper presents the courseware environment AlgoViz, which primarily addresses this problem. The AlgoViz project provides a software collection which is currently focused on the visualization of fundamental computer graphics algorithms and geometric modeling concepts. The intention is to build a collection of components that can easily be combined into new applications. Supporting a purely visual programming paradigm, AlgoViz offers the possibility to create new demonstration applications without having to write a single line of source code. To demonstrate its potential, AlgoViz comes with a variety of examples, already forming a valuable computer graphics tutorial.
BibTeX:
@inproceedings{Ullrich-Fellner05eg,
  author = {Ullrich, T. and Fellner, Dieter W.},
  title = {Computer Graphics Courseware},
  booktitle = {Eurographics 2005 -- Education Papers},
  publisher = {Eurographics Association},
  year = {2005},
  pages = {11-17},
  doi = {http://dx.doi.org/10.2312/Conf/EG2005/Education/011-017}
}
