University of California, San Diego, La Jolla, CA 92093
EMAIL: hollan@ucsd.edu PHONE: +1.858.534.8156
Jim Hollan
Distinguished Professor of Cognitive Science & Professor of Computer Science and Engineering
Co-Director, UC San Diego Design Lab, Atkinson Hall 1601C
Short Bio
After completing a Ph.D. at the University of Florida and a postdoctoral fellowship in artificial intelligence at Stanford University, I was on the faculty at the University of California, San Diego for a decade. Along with Ed Hutchins and Don Norman, I led the Intelligent Systems Group in the Institute for Cognitive Science. I left UCSD to become Director of the MCC Human Interface Laboratory and subsequently established the Computer Graphics and Interactive Media Research Group at Bellcore. In 1993, I moved to the University of New Mexico as Chair of the Computer Science Department. In 1997, I returned to UCSD as Professor of Cognitive Science, where Ed Hutchins and I created the Distributed Cognition and Human-Computer Interaction Research Group. In 2014, I joined with Don Norman and Scott Klemmer to found the UC San Diego Design Lab.

Career Overview
In the first part of my career, I explored dynamic graphical interfaces to support simulation-based training. The Steamer project produced one of the first object-oriented graphics editors and a series of seminal training systems. The science that accompanied the development efforts made significant contributions to the understanding of direct manipulation interfaces and played an influential role in initiating mental models research.

The next phase of my research focused on multimodal interfaces to high-functionality systems. I led the Human Interface Lab at MCC, one of the largest HCI research labs in the world, in designing and building a multimodal interface prototyping environment. We were among the first to demonstrate integration of gestures, graphics, and natural language within a common interface development framework. One significant contribution was a hybrid software architecture that combined neural networks with symbolic representations using an integrated knowledge base. Other work begun at MCC on history-enriched digital objects and collaborative filtering continued when I moved to Bellcore. This work resulted in a series of early demonstrations of the effectiveness of collaborative filtering.

At Bellcore I started the Computer Graphics and Interactive Media research group to explore information visualization. Among other efforts, I initiated and led the first large-scale project to explore multiscale information visualization. When I moved to the University of New Mexico and subsequently returned to UCSD, this became an expanded multi-institutional (Bellcore, University of New Mexico, New York University, University of Maryland, University of Michigan, and UCSD) effort that enabled the exploration of zoomable multiscale interfaces. The resulting system, Pad++, has been widely used by the research community and was licensed non-exclusively to Sony for $500,000. My work on multiscale interfaces and visualization has continued, focusing primarily on information navigation of complex web-based domains, personal collections of scientific documents, and tools to assist analysis of video and other time-based activity data. Supported by funding from NSF and Intel, we implemented Dynapad, the third generation of our multiscale visualization software. The approach views interface design as the creation of a physics for information that is specifically designed to exploit our perceptual abilities, reduce cognitive costs by restructuring tasks, and increase the efficacy and pleasure of interaction.

Upon returning to UC San Diego in 1997, I was the founding co-director, with Ed Hutchins, of the Distributed Cognition and Human-Computer Interaction Laboratory. Creation of the lab was motivated by the belief that distributed cognition is a particularly fertile framework for understanding cognitive, social, and technical systems. A central image for us was environments in which people pursue their activities in collaboration with the elements of the social and material world. Our core research efforts were directed at understanding such environments: what we really do in them, how we coordinate our activity in them, and what role technology should play in them. The lab's focus was on developing the theoretical and methodological foundations engendered by this broader view of cognition, extending the reach of cognition to encompass interactions between people as well as interactions with resources in the environment.

Earlier Research Groups
Distributed Cognition and Human-Computer Interaction Research Group, UC San Diego
I was founding co-director of the Distributed Cognition and Human-Computer Interaction Research Group. Our lab has been one of the leaders in the shift in cognitive science toward a view of cognition as a property of systems that are larger than isolated individuals. This extends the reach of cognition to encompass interactions between people as well as interactions with resources in the environment. Members of the Dcog-HCI lab are dedicated to developing the theoretical and methodological foundations engendered by this broader view of cognition and interaction. We are united in the belief that distributed cognition promises to be a particularly fertile framework for designing and evaluating augmented environments and digital artifacts. A central image for us is environments in which people pursue their activities in collaboration with the elements of the social and material world. Our core research efforts are directed at understanding such environments: what we really do in them, how we coordinate our activity in them, and what role technology should play in them.

Computer Graphics and Interactive Media Research Group, Bellcore
I established the Computer Graphics and Interactive Media Research Group at Bellcore. Research focused on information visualization and construction of 3D visualization and interface prototyping environments. Projects included unified graphical interfaces to heterogeneous databases, visualization of network and switching activity, visualization of software systems and programmer activities, information filtering, prototyping and exploration of interactive animations, and empirical studies of history-enriched digital objects (Bellcore Video Recommender, one of the earliest demonstrations of the effectiveness of collaborative filtering). Additional efforts were concerned with theories of telecommunications and exploration of alternatives to imitating face-to-face interactions for supporting informal communication.

Human Interface Laboratory, MCC
As Director of the Human Interface Laboratory (annual budget $5M) at MCC, I coordinated the efforts of approximately 40 researchers. Areas of research were graphics, knowledge editing, natural language, neural networks, computer supported cooperative work, and new metaphors for interaction design. Our goal was to develop the foundations for principled and efficient construction of collaborative interfaces to high-functionality systems. Research within the laboratory was coordinated around the construction of an integrated interface prototyping environment and its application to challenging interface problems. The vision was to evolve a set of human interface tools (HITS) into a general user interface design environment (GUIDE). HITS and GUIDE were experimental vehicles for grounding, motivating, and coordinating the lab's scientific and technological efforts. They served as prototypes supporting the rapid implementation, exploration, and demonstration of new human interface concepts.

Intelligent Systems Group and Future Technologies Group, UCSD/NPRDC
In my earlier work at UCSD, in collaboration with Ed Hutchins and Don Norman, I served as Director of the Intelligent Systems Group. Our research group was concerned with the application of artificial intelligence and cognitive science to the design of human-computer interfaces and the development of graphical simulation-based training systems. At NPRDC I was head of the Future Technologies Group and, in collaboration with Ed Hutchins and Michael Williams, led efforts to build advanced training systems (Moboard, Semnet, and Steamer). I was PI on a number of research projects: Theory of Graphic Representation, Declarative and Procedural Representation, Steamer: An Advanced Intelligent Computer-Assisted Instruction System (in collaboration with Larry Stead, Bruce Roberts, and Al Stevens at BBN), Qualitative Interfaces to Quantitative Process Models, AI-Based Tools for Building Simulations, and Computation via Direct Manipulation.

Software Systems
A major portion of my intellectual activity is devoted to the design and implementation of software systems. Such systems are fundamental to my research. I find that creating software and sharing it with students and the wider research community frequently has a more significant impact than traditional forms of academic publication. Software is an artifact that can mediate very productive interactions and collaborations. In addition to the software systems mentioned above (Steamer, Moboard, Semnet, and HITS), here I provide brief descriptions of recent software systems I have developed.

Pad++: Zoomable Multiscale Visualization Software
I led the effort to develop Pad++. This software made possible the first serious exploration of multiscale interfaces. It consists of 164,714 lines of code and was licensed to Sony for $500,000. My main collaborator in developing Pad++ was Ben Bederson, and much of the elegance of the software is due to Ben. The development of the software benefited from interactions with Ken Perlin and Jon Meyer at New York University, George Furnas at the University of Michigan, and feedback from a surprising number of the 4,500 people who downloaded the code.

STkPad: Scheme-Based Zoomable Multiscale Visualization Software
STkPad is a version of Pad++ developed at UCSD. It consists of 97,737 lines of code. Its main features are the replacement of the Tcl scripting language used in Pad++ with Scheme and the integration of MySQL, a relational database. The database serves to store STkPad content and to provide a mechanism to support collaborative applications. The research focus for STkPad is to explore shared activity histories and image-based information navigation. I developed it in collaboration with two recent postdocs, David Fox and Ron Hightower.

Dynapad: Scheme-Based Zoomable Multiscale Visualization Software
Dynapad is the third generation of our multiscale interface and visualization software. The name Dynapad was chosen to reflect the software's heritage from our earlier Pad++ and STkPad software as well as ideas from Dynabook and Sketchpad. It makes scale a first-class parameter of objects, supports navigation in multiscale workspaces, and provides especially effective mechanisms to maintain interactivity while rendering large numbers of graphical objects. Dynapad employs Scheme to provide a high-level programming interface to the multiscale graphical and interaction facilities in the C++ rendering substrate. Dynapad implements multiscale graphical objects that are interactive (e.g., they can be scaled or moved via user interaction) and dynamic (e.g., they can have behaviors that result from the running of attached code). Behaviors can be associated with an object, a set of objects, or a region of the multiscale workspace and are triggered by user actions, the behavior of other objects, various events, or timer interrupts. It is built using PLT Scheme and was the basis for our exploration of informational physics and the development of piles and lenses to support image-based access and organization of information.
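
To make the programming model concrete, here is a minimal sketch of the two ideas described above: scale as an ordinary, settable parameter of an object, and behaviors as callbacks attached to objects and triggered by events or timer interrupts. Dynapad's actual interface is PLT Scheme; this stand-in is written in Python for readability, and every name in it is hypothetical rather than drawn from the Dynapad API.

class MultiscaleObject:
    """Hypothetical stand-in for a Dynapad multiscale graphical object."""

    def __init__(self, x, y, scale=1.0):
        self.x, self.y = x, y
        self.scale = scale       # scale is a first-class, settable parameter
        self.behaviors = {}      # event name -> list of attached callbacks

    def attach(self, event, behavior):
        # Attach a behavior (any callable) to run when `event` fires.
        self.behaviors.setdefault(event, []).append(behavior)

    def fire(self, event, **info):
        # Trigger every behavior attached to `event` (a user action,
        # another object's behavior, or a timer interrupt).
        for behavior in self.behaviors.get(event, []):
            behavior(self, **info)

def grow_toward(target_scale, rate=0.5):
    # A behavior factory: each timer tick moves the object's scale
    # a fraction of the way toward `target_scale`.
    def behavior(obj, **info):
        obj.scale += rate * (target_scale - obj.scale)
    return behavior

thumbnail = MultiscaleObject(x=100, y=50, scale=0.05)
thumbnail.attach("timer-tick", grow_toward(target_scale=1.0))
for _ in range(3):
    thumbnail.fire("timer-tick")
print(round(thumbnail.scale, 3))   # 0.881: the thumbnail zooms toward full size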

ChronoViz: Navigation, Visualization, and Analysis of Time-Based Data
ChronoViz, developed with Adam Fouse, is designed to facilitate annotation, navigation, and analysis of multiple streams of video and other time-coded data. ChronoViz is unique in being an open-source system that was specifically designed both to visualize multiple data streams simultaneously and to be easily extended. A Python plugin architecture enables incorporation of additional types of data as well as new analysis and visualization facilities. As an example of ChronoViz's extensibility, members of my lab recently extended ChronoViz to incorporate eye-movement data from pilots in a high-fidelity simulator along with multiple streams of video, field notes recorded with digital pens, and simulator data. This allows analysts to move between eye fixations on a specific instrument and the associated video segments in which they occurred, or to touch a paper note annotation and have videos positioned at the time that note was taken. ChronoViz's plugin architecture facilitated not only incorporating eye-movement data, computing fixation locations, and associating fixations with instruments, but also computing transition probabilities between instrument fixations.
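
As a concrete illustration of that last analysis, the sketch below shows the core computation such a plugin performs: estimating the probability of fixating each instrument next, given the instrument currently fixated. It operates on a plain list of instrument labels rather than ChronoViz's actual plugin API (which is not reproduced here), and the instrument names are hypothetical.

from collections import Counter, defaultdict

def transition_probabilities(fixations):
    # fixations: a time-ordered list of instrument labels, one per fixation.
    # Returns {instrument: {next_instrument: probability}}.
    counts = defaultdict(Counter)
    for current, nxt in zip(fixations, fixations[1:]):
        counts[current][nxt] += 1
    return {
        instrument: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
        for instrument, nexts in counts.items()
    }

# Example with a hypothetical pilot scanpath:
scanpath = ["attitude", "altimeter", "attitude", "airspeed", "attitude"]
print(transition_probabilities(scanpath))
# {'attitude': {'altimeter': 0.5, 'airspeed': 0.5},
#  'altimeter': {'attitude': 1.0}, 'airspeed': {'attitude': 1.0}}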