CALL FOR PARTICIPATION
Conference Papers
Papers can be up to a maximum of eight (8) pages in length, including references and full-color figures throughout. We encourage the use of digital video to support the submission, particularly if part or all of the work covers interactive techniques. At least one author of an accepted paper must attend the conference to present the work, and authors will also be required to present a very brief one-minute summary of their talk at the opening papers preview session.
All papers accepted at IEEE InfoVis will appear in a special issue (Nov/Dec 2007) of IEEE Transactions on Visualization and Computer Graphics (TVCG) containing the conference proceedings. This special issue will be published before the conference, with a target publication date of October 1. Papers (including supplemental material) may undergo a revision and review cycle after initial notification of review results to ensure they are acceptable for publication and presentation in the journal.
The paper formatting page provides details and guidelines for preparing a proper submission. Authors must follow the style guidelines specified there.
To submit your paper, please click here.
Online submission instructions will be available here soon.
Abstracts due (mandatory): Wednesday, March 21, 2007 5:00pm PDT
Full papers due: Saturday, March 31, 2007 5:00pm PDT
Information for Authors
InfoVis papers typically fall into five categories: technique, system, design study, evaluation, and model. We discuss these categories below as a guide to authors. However, papers can include elements of more than one of these categories, and are not required to fit within these five.
Technique papers introduce novel techniques or algorithms that have not previously appeared in the literature, or that significantly extend known techniques or algorithms, for example by scaling to datasets of much larger size than before or by generalizing a technique to a larger class of uses.
The technique or algorithm description provided in the paper should be complete enough that a competent graduate student in visualization could implement the work, and the authors should create a prototype implementation of the methods. Relevant previous work must be referenced, and the advantage of the new methods over it should be clearly demonstrated. There should be a discussion of the tasks and datasets for which this new method is appropriate, and its limitations. Evaluation through informal or formal user studies, or other methods, will often serve to strengthen the paper, but are not mandatory.
Past work examples include: algorithms for layout and navigation of trees, graphs, and networks (graph visualization papers should go beyond pure graph layout to include navigation and interactive techniques); interaction techniques for infovis; browsing and navigation techniques in large information spaces; geometric or graphics algorithms for increased scalability of existing techniques; and techniques for visualizing very high dimensional (100+ D) spaces. This list is not exhaustive, and we welcome submissions in these and all other areas of infovis.
System papers present a blend of algorithms, technical requirements, user requirements, and design that solves a major problem. The system that is described is both novel and important, and has been implemented. The rationale for significant design decisions is provided, and the system is compared to documented, best-of-breed systems already in use. The comparison includes specific discussion of how the described system differs from and is, in some significant respects, superior to those systems. For example, the described system may offer substantial advancements in the performance or usability of infovis systems, or novel capabilities. Every effort should be made to eliminate external factors (such as advances in processor performance, memory sizes or operating system features) that would affect this comparison. For further suggestions, please review "How (and How Not) to Write a Good Systems Paper" by Roy Levin and David Redell, and "Empirical Methods in CS and AI" by Toby Walsh.
Design study papers explore the choices made when applying infovis techniques in an application area, for example relating the visual encodings and interaction techniques to the requirements of the target task. Although a significant amount of application domain background information can be useful to provide a framing context in which to discuss the specifics of the target task, the primary focus of the case study must be the infovis content. The results of the design study, including insights generated in the application domain, should be clearly conveyed. Describing new techniques and algorithms developed to solve the target problem will strengthen a design study paper, but the requirements for novelty are less stringent than in a Technique paper. The work will be judged on the design lessons learned, which future contributors can build upon. We invite submissions on any application area.
Past work examples include design lessons learned in, novel applications for, and new encodings for bioinformatics, data mining and databases, software development, finance and commerce, telecommunications and networking, information retrieval from large text corpora, and computer-supported cooperative work.
Evaluation papers present an empirical comparative study of infovis techniques or systems. Authors are not necessarily expected to implement the systems used in these studies themselves; the research contribution will be judged on the validity and importance of the experimental results as opposed to the novelty of the systems or techniques under study. The conference committee appreciates the difficulty and importance of designing and performing rigorous experiments, including the definition of appropriate hypotheses, tasks, data sets, selection of subjects, measurement, validation and conclusions. The goal of such efforts should be to move from mere description of experiments, toward prediction and explanation. We do suggest that potential authors who have not had formal training in the design of experiments involving human subjects may wish to partner with a colleague from an area such as psychology or human-computer interaction who has experience with designing rigorous experimental protocols and statistical analysis of the resulting data. Other novel forms of evaluation are also encouraged.
Past work examples include empirical comparisons of user performance with different visual representations or visualization systems, field studies and usability analyses of visualization designs, and identification and testing of new evaluation metrics and methods.
Model papers present new interpretations of the foundational theory of information visualization. Implementations are usually not relevant for papers in this category. Papers should focus on basic advancement in our understanding of how infovis techniques complement and exploit properties of human vision and cognition.
Past work examples include new taxonomies, extensions to Bertin's theories of visual encoding, analysis of metaphors, task analysis, perception, cognitive models, visual comparisons and indications of causality.
Papers can be up to a maximum of eight (8) pages in length, including full-color figures. The length of the paper should be commensurate with its contributions: for example, a useful idea presented completely and concisely in four pages is more likely to be accepted than the same idea presented in eight pages. The length limit includes figures, which may be in color because the proceedings will be printed in full color throughout, as well as references. Authors may be asked to decrease a paper's length as a condition of acceptance.
Authors may submit an accompanying video illustration to help explain their paper. (Note that this video is separate and distinct from the formal Video submission category.) Accompanying videos of accepted papers can be included on the conference DVD. Videos are limited to a maximum 5 minutes in length and 50 MB in size. Authors may submit accompanying videos through the electronic paper submission system.
Papers Review Process
Papers are peer-reviewed. The program committee consists of senior reviewers, who will recruit additional external reviewers. Each paper will be read by two senior reviewers on the program committee and by two additional external reviewers.
All papers submitted to InfoVis must be original, unpublished work. Any paper that has been previously published in equivalent or substantially similar form by any other conference or in any other journal will be rejected at an early stage of the review process. Furthermore, a paper identical or substantially similar to one submitted to InfoVis should not be under consideration for another conference or journal during the review process. If you have a previously published paper, or one that is under review, that you would like to distinguish from your InfoVis submission, please clarify the distinction in the body of your paper. However, it is not acceptable to submit substantially the same paper to multiple conferences or journals, intending to withdraw the paper from the other venues as soon as it is accepted by one of them; at the very least, this wastes the time of the program committee members and reviewers involved with the withdrawn papers.
A paper is considered published if it has appeared in a peer-reviewed journal or in published meeting proceedings that are commercially available afterward to nonattendees. Note that work described in the Interactive Posters, Contest Entries, or Late-Breaking Hot Topics venues from previous InfoVis symposia is thus not considered formally published, and may be resubmitted provided it has substantial additional new material.
Submissions are treated as confidential communications during the review process, so submission does not constitute public disclosure of any ideas therein. Submissions should contain no information or materials that will be proprietary or confidential at the time of publication (at the conference), and should cite no publications that are proprietary or confidential at the time of publication.