Information Perceptualization: an Overview

Christopher Walsh

Fall 2013

Information perceptualization is an interdisciplinary field of research concerning communication between humans and their environment. Broadly, this includes the physiological processes of perception and the organization of sense data into information; more specifically, while there is a long history of mechanisms by which humans have made sense of data [1], most current research focuses on human interface design (HID), the interface between humans and computers.

From the perspective of a media artist, I seek to understand the basis and boundaries of my aesthetic decisions, and how this work relates to methods used in scientific disciplines. This paper is intended as a sketch of the general principles and underlying processes of Information Perceptualization, referencing examples that demonstrate complementary aesthetic and scientific concerns. The guiding philosophy holds that all human activities are related pursuits, and that each discipline should strive to integrate its methods into this whole, as in the humanist philosophy of Ernst Cassirer:

“Language, myth, religion, art, science, history are the constituents, the various sectors of this circle. A ‘philosophy of man’ would therefore be a philosophy which would give us insight into the fundamental structure of each of these human activities, and which at the same time would enable us to understand them as an organic whole” (Cassirer 68).

Such aims provide a reference for the goals of Information Perceptualization (IP): to fully integrate the human element into the activities of knowledge and social organization. The subject is of critical interest, as society is shifting rapidly with technology and fragmenting through specialization, fundamentally altering its evolutionary trajectory. Humans themselves are equipped with highly specialized sensory organs, tuned for survival, which under current environmental conditions are underutilized:

“Our world has grown to include abstract data spaces and virtual realities. To solidify and extend our presence in these new environments it may be necessary to engage as many of our senses and in as much detail as possible. Virtual environment technology allows us to optimize reality to suit our perceptual systems. Perhaps we can use this technology to solidify and extend perceptions that are only tenuously available in our natural experience” (Hollander).

Thus, a central theme of IP is to engage the full computing capacity of the human mind through sensory representations of information.

“As computer performance increases, the major bottleneck could be the human-computer interface. The bandwidth of this interface will be bound by the characteristics of human perception, and hence the quest for a new presentation paradigm has commenced in different scientific fields” (Jovanov).

We may extend this notion to suppose that the quality and amount of information available to the human mind correlates with its capacity for insight and discovery. To this extent, information perceptualization has two fundamental bases: the structure of information, and the various media through which information is transmitted with respect to human language and physiology. A signal can travel through different media in parallel to reach the mind from multiple sense domains, which constitutes multimodal perception [3].

Patterns can be isomorphic between senses, according to the structure of mental models. For instance, signal intensity cross-correlates with loudness, brightness, temperature, and energy. Signal duration behaves similarly across senses, except that hearing tends to represent time with greater clarity than vision; conversely, vision has greater spatial resolution than hearing or touch. As we continue through the perceptual categories of the mind, a general schema for sensory representations, with their possible advantages and combinations, begins to appear. This schema relates to the types of variables our physiology naturally perceives, and thus to the forms in which we represent information.

In this regard, signal formation begins with the study of a system to determine formalized relationships, which are then translated into spatial, temporal, visual, audible, or tactile variations corresponding to the general mental models we use to interpret the world intuitively. Perhaps the most obvious example is a simple graph of some variable over time: using two spatial dimensions, horizontal for time and vertical for intensity, one can immediately perceive the general character of the information by differentiation.
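In its simplest form, the translation of formalized relationships into perceptual variables is a linear rescaling of data into a range a sense organ can register. The sketch below, using a hypothetical data series and illustrative target ranges (not drawn from any cited study), maps the same values onto pitch, brightness, and graph height:

```python
def map_range(value, src_lo, src_hi, dst_lo, dst_hi):
    """Linearly rescale a value from a source range to a target perceptual range."""
    t = (value - src_lo) / (src_hi - src_lo)
    return dst_lo + t * (dst_hi - dst_lo)

# A hypothetical data series, e.g. daily temperature readings.
data = [3.0, 7.5, 12.0, 9.0, 15.5]
lo, hi = min(data), max(data)

# The same values mapped onto three perceptual dimensions:
pitches = [map_range(v, lo, hi, 220.0, 880.0) for v in data]    # frequency in Hz
brightness = [map_range(v, lo, hi, 0.0, 1.0) for v in data]     # normalized luminance
heights = [map_range(v, lo, hi, 0.0, 100.0) for v in data]      # vertical pixels on a graph
```

Each output list is the same pattern rendered for a different sense: the isomorphism between senses discussed above is, at this level, simply a change of target range.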

The advent of signal processing for electrical circuits led humans to systematically define waveforms and their transformations. Through the theories of electronic music, and the transduction of electrical waveforms into acoustic pressure waves by electromagnets and vibrating surfaces, engineers and artists formalized sound itself. Pierre Schaeffer conceived of a ‘musical object’, an abstracted model of sound dynamics in space and time; John Cage, defining sound by its fundamental compositional components, proposed “a total sound-space, the limits of which are ear-determined only, the position of a particular sound in this space being the result of five determinants: frequency or pitch, amplitude or loudness, overtone structure or timbre, duration, and morphology [envelope] (how the sound begins, goes on, and dies away). By the alteration of any one of these determinants, the position of the sound in sound-space changes” (Cage, 2).

Further studies reveal another element, granularity: sound events with envelopes shorter than about 50 ms, the threshold below which human physiology can distinguish them individually, fuse in juxtaposition into a unified texture. Granularity occurs naturally in rain showers and in materials such as sand, and is significant in electronic sound-synthesis techniques [4]. Digital sound synthesis enables yet another parameter: spatial location and motion. This involves multi-channel audio systems that form spatial distributions of sound sources, blended with amplitude-panning techniques to create perceptions of motion and location. The limits of audio-spatial perception are being explored at the Human Interface Technology Lab at the University of Washington: “Virtual environments may help us to understand and eventually extend the domain of auditory perception. [Hollander and Furness] performed two experiments which verified the ability of subjects to recognize geometric shapes and alphanumeric characters presented by sequential excitation of elements in a ‘virtual speaker array’” (Hollander). The robustness of a human’s mental model of an environment is supported by the crossover between the spatial-temporal specialties of each sense domain, wherein sound is especially temporal and vision especially spatial: “Although vision dominates audition for processing spatial information, audition often dominates vision for processing temporal information” (Guttman). Altogether, frequency, amplitude, timbre, duration, envelope, grain, and spatiality provide the basic dimensions for graphing information sonically. The general field of research in this sensory domain is called Auditory Display, or Sonification, which is comprehensively surveyed in The Sonification Handbook [5].
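These determinants map directly onto synthesis parameters. As a minimal sketch of such a mapping, the pure-Python fragment below renders short enveloped sine grains, exercising frequency, amplitude, duration, envelope, and grain, and places each in a stereo field with an equal-power amplitude-panning law for spatiality; the data series and parameter ranges are hypothetical illustrations, not taken from the cited studies:

```python
import math

SAMPLE_RATE = 44100

def grain(freq_hz, dur_s, amp, pan):
    """Render one enveloped sine grain as a list of (left, right) samples.

    freq_hz -- pitch (the frequency determinant)
    dur_s   -- duration; below ~0.05 s, grains fuse into a texture
    amp     -- peak amplitude (the loudness determinant)
    pan     -- stereo position in [0, 1]; 0 = hard left, 1 = hard right
    """
    n = round(SAMPLE_RATE * dur_s)
    left_gain = math.cos(pan * math.pi / 2)    # equal-power panning law:
    right_gain = math.sin(pan * math.pi / 2)   # left^2 + right^2 == 1
    samples = []
    for i in range(n):
        env = math.sin(math.pi * i / n)        # half-sine envelope: attack, then decay
        s = amp * env * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        samples.append((s * left_gain, s * right_gain))
    return samples

# Map a hypothetical data series to pitch, and each point's index to stereo
# position, producing a left-to-right sweep of 30 ms grains.
data = [0.2, 0.5, 0.9, 0.4]
stream = []
for i, v in enumerate(data):
    pan = i / (len(data) - 1)
    stream.extend(grain(220.0 + 660.0 * v, 0.03, 0.8, pan))
```

Writing `stream` to a stereo sound device or file would play the data as a sequence of pitched grains sweeping across the stereo field; more elaborate mappings simply attach further data dimensions to further determinants.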

Due to the crossover between senses, with their shared internal mental models, much of the terminology of acoustic dynamics translates into the visual field; thus, information perceptualization plays a significant role in the progress of both sonification and visualization across the range of their possible mappings.

Information, as an abstracted form, precedes any particular transmitting medium; yet in practice, information often derives from the special characteristics of the phenomenon one uses for transmission. The dynamics present in one medium may not map continuously, in a one-to-one fashion, onto the properties of a corresponding medium. For example, the photons of an electromagnetic wave are distinct from the molecular collisions of sound waves. This fact is most interesting: the goal of information perceptualization is not a unification of physical phenomena, as might be the case for the physical sciences, but an integration of perceptual processes in communicating, in this case, the differentiation of the phenomena.

Consider this example [6], a visual depiction of the scale of the universe: a vast range of physical structures, grouped by scale, is conveyed through various dimensions of visual perception, such that each specialized region of the brain, for motion, color, size, and shape, can simultaneously function to produce a differentiated view of the information. Thus we may achieve communication through the available channels, or dimensions of information, which correspond to specialized regions of the brain and are generally grouped by sense-organ function: vision, hearing, touch, taste. Information perceptualization seeks to involve these specialized functions where they integrate and overlap, maximizing human processing capability and creating a robust, dynamic internal mental model of information. One can imagine augmenting this visual depiction of the universe with sonic information wherever attention, or an extra channel of neural response, is available: for instance, sonified data characteristic of each object, such as the sounds of a theoretical particle [6], the sun [7], or earthquakes [8], appearing in response to viewer interaction.

In this simple example, much of the known universe is compressed into a two-dimensional computer screen. With each added variable, corresponding to a separate but integrated neural channel in each sense domain, the informational apparition more fully enters reality and approaches the immersion of direct learning experience. As we extend the sense-range of scientific instruments, such as electron microscopes [9] and devices like the Hubble telescope [10], which collect data from phenomena beyond the human sensory range, analysis of the observed system yields differentiable parts that are mapped to variations in color, scale, and so forth. The virtual sensory range of the human is thus constantly expanding, evolving our internal mental models of reality; reciprocally, the forms of action possible within a ‘human’ perspective expand, as a new potential for species behavior is generated.

Further, the systems formed from observed data, and the theories which follow, or perhaps precede, observation, become visceral and malleable objects of the imagination. To understand the universe, one might change fundamental forces to see how matter organizes accordingly; in a similar manner, researcher and musician Bob L. Sturm composed the motion of particles [11], and another MATSB group modeled a sonification of the cosmic microwave background radiation from the beginning of the universe [12]. While such lofty claims of human evolution do carry weight, a more immediate result of perceptualization ventures is a compelling introduction to mind-expanding subjects, which should be standard knowledge in a society with near-universal computer access.
And much as the various sensory organs are channels to a singular mental model, a unified perception, so the multitude of specialized scientific disciplines, humanities, and social interfaces, each with its own modality, senses the world and contributes to the knowledge of society, which reciprocally provides a frame of reference for human experience. Thus, we might view the overlap of sensory domains as analogous to that of disciplinary specializations in society, and their interactions as necessary for an accurate, comprehensive, and robust mental model of reality.