The next phase of my research focused on multimodal interfaces to high-functionality systems. I led the Human Interface Lab at MCC, one of the largest HCI research labs in the world, in designing and building a multimodal interface prototyping environment. We were among the first to demonstrate the integration of gestures, graphics, and natural language within a common interface development framework. One significant contribution was a hybrid software architecture that combined neural networks with symbolic representations using an integrated knowledge base. Other work begun at MCC on history-enriched digital objects and collaborative filtering continued when I moved to Bellcore. This work resulted in a series of early demonstrations of the effectiveness of collaborative filtering.
At Bellcore I started the Computer Graphics and Interactive Media research group to explore information visualization. Among other efforts, I initiated and led the first large-scale project to explore multiscale information visualization. When I moved to the University of New Mexico and subsequently returned to UCSD, this became an expanded multi-institutional (Bellcore, University of New Mexico, New York University, University of Maryland, University of Michigan, and UCSD) effort that enabled the exploration of zoomable multiscale interfaces. The resulting system, Pad++, has been widely used by the research community and was licensed non-exclusively to Sony for $500,000. My work on multiscale interfaces and visualization has continued, focusing primarily on information navigation of complex web-based domains, personal collections of scientific documents, and tools to assist analysis of video and other time-based activity data. Supported by funding from NSF and Intel, we implemented Dynapad, the third generation of our multiscale visualization software. The approach views interface design as the creation of a physics for information, specifically designed to exploit our perceptual abilities, reduce cognitive costs by restructuring tasks, and increase the efficacy and pleasure of interaction.
Upon returning to UC San Diego in 1997, I was the founding co-director, with Ed Hutchins, of the Distributed Cognition and Human-Computer Interaction Laboratory. Creation of the lab was motivated by the belief that distributed cognition is a particularly fertile framework for understanding cognitive, social, and technical systems. A central image for us was environments in which people pursue their activities in collaboration with the elements of the social and material world. Our core research efforts were directed at understanding such environments: what we really do in them, how we coordinate our activity in them, and what role technology should play in them. The lab's focus was on developing the theoretical and methodological foundations engendered by this broader view of cognition, extending the reach of cognition to encompass interactions between people as well as interactions with resources in the environment.