This paper was given at the Special Effects/Special Affects: Technologies of the Screen symposium held at the University of Melbourne, 25/3/2000.

* * *

The electronic viewing screen is re-presentational technology. It does not generate its own material – in fact it is incapable of this (1). It works via remote sensation – telaesthesia. Technologies of memory also work telaesthesically, and it is these two distinct applications that converge in a dialogue between the viewer and the electronic viewing screen to form the viewing experience. The screen as technology “crafts” the electronic visual form, and memory as technology “crafts” the narrativisation of content. An interval is then necessarily created from these remote sensations – the interval between the remote and the local – and it is this interval that forms the grammar of the interactive viewing experience.

This telaesthesic grammar is applicable not only to the electronic screen viewing experience but, by extension, to all scopic sensation. That all physiological vision is relayed and interpreted is not a new idea (2), but that this applies to theoretical screen space is rather less commonly assumed. This presentation speaks specifically of the screen experience but is consistent with forming narrative in any situation where vision is framed and performed.

Unfortunately, to consider audio technology within this field is beyond the scope of this paper.

A vision signal is relayed to the electronic screen via a variety of methods – usually cable or radio wave – and the screen interprets this vision signal and then retransmits it via light. This essential function is, of course, largely transparent, and the viewer interacts only with the resultant light transmission. The screen performs its interpretation based on a hardwired set of parameters. Thus, a television screen may be able to interpret PAL or NTSC video signals, and the computer monitor a digital RGB signal, an analog RGB signal, or a combination of these. Even a screen displaying “static” is based on the attempted interpretation of an undelivered signal (and static is an interesting term for that obviously dynamic display, speaking more of the narrative and interpretive functions of the apparatus and the viewer than of the interface). The interpretive and transmissive capabilities of the screen form an integral part of the viewing experience. The same vision source will produce a different viewing experience on a screen that can display only “black and white” than on one that can display colour, just as the viewing experience is mediated by what a computer monitor is functionally capable of displaying, which depends on a combination of its physical size and the amount of video RAM available to it. Together these determine both the scale and the palette of colours available to render the information sent to it. The screen is an example of interpretive technology in addition to being re-presentational technology.
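
As a numerical sketch of this last point – the helper function and the figures below are illustrative assumptions rather than anything specified above – the relationship between video RAM, resolution and the available palette can be expressed as follows:

```python
# A minimal sketch (not drawn from the paper) of how a monitor's available
# palette is bounded by its resolution and its video RAM.

def max_colour_depth(vram_bytes, width, height):
    """Return the largest common framebuffer bit depth one frame can use."""
    bits_available = (vram_bytes * 8) / (width * height)
    for depth in (32, 24, 16, 8, 4, 1):      # common framebuffer depths
        if bits_available >= depth:
            return depth, 2 ** depth         # bit depth and palette size
    return 0, 0

# Example: an assumed late-1990s monitor at 800 x 600 with 1 MB of video RAM
depth, palette = max_colour_depth(1 * 1024 * 1024, 800, 600)
print(f"{depth}-bit colour, {palette:,} colours")   # 16-bit colour, 65,536 colours
```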

To clarify, I use “re-presentational” in the sense that, building on its interpretive attribute, the screen merely re-presents a supplied signal within the bounds of its function. There are, however, some very convenient links if the word re-present is also applied to the temporal function of the screen. The screen both displays a moment and is, locally, the present or immediate moment, but this is a re-presented present for both apparatus and content.

The screen apparatus cannot take a signal from a remote source and interpret and retransmit this signal instantaneously, but does so in that very interesting space of time we refer to as “realtime” (3). Of course it takes time for that vision signal to leave its originating source, travel to the screen and be interpreted via a process of demodulating, decoding and recoding, before it is subsequently retransmitted as photons. This time is measurable as an interval, and it is this interval that designates the telaesthesic presence of the screen as a re-presentative device. One arguable exception to this rule is the phenomenon of video feedback, in which the screen is still remote but acts as a hyper re-presenter and simultaneous generator of algorithmic vision, a cascade of generative limens.
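
A rough sketch of that measurable interval can be made, with every figure an assumption chosen only for illustration (a nominal transmission distance, a nominal decoding time and one field of scan-out):

```python
# Illustrative only: the components of the interval between the remote source
# and the local retransmission as light. None of the figures are measurements.

SPEED_OF_LIGHT = 3.0e8   # metres per second, approximate propagation speed

def representation_interval(distance_m, decode_s, scan_s):
    """Sum of travel time, demodulation/decoding time, and the time taken
    to scan the decoded picture out as light."""
    propagation = distance_m / SPEED_OF_LIGHT
    return propagation + decode_s + scan_s

# e.g. a broadcast 50 km away, ~1 ms of decoding, one PAL field of scan-out (20 ms)
interval = representation_interval(50_000, 0.001, 0.020)
print(f"interval ~ {interval * 1000:.2f} ms")   # ~ 21.17 ms
```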

This re-presentation as televisual has been ushered in via our acceptance of Einstein’s physics, a system that has given us relativity. Space and time are no longer absolute. Past, present and future are relative to each other and to where we are in space. The viewer’s screen interaction is telaesthesic even if applied only to the form of the screen, because it takes time to render a “picture”, and that moment, that frame, occupies both time and space. The functional characteristics of the screen do, in part, determine the physical and cognitive rendering of the screen space.

The progressive scan display of the computer monitor typically refreshes (reframes) at least 75 times per second and does so from left to right, top to bottom. The domestic television, with its interlacing refresh model, on the other hand renders only 30 new full-screen pictures per second for an NTSC signal and only 25 frames per second for a PAL or SECAM signal, but does so in a way that utilises frame blending. This is achieved by not rendering from left to right, top to bottom, but by breaking the frame down into smaller units – fields – which are distinguished spatially, on assigned odd- or even-numbered horizontal lines within the frame. The screen apparatus draws one of these fields and then the other, but following that, the alternate field of the next frame. This, of course, has the effect of blurring distinctions between the frames, and particularly the temporal edges of the frame, and takes advantage of the phenomenon of persistence of vision in the eye. The human eye’s “native” refresh interval is approximately 20 milliseconds, corresponding to a rate of fifty frames per second. This is the same interval as the time taken for one field (or half a frame) to render in a PAL or SECAM frame, while an NTSC field takes roughly five-sixths of that time.
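
These timings can be set side by side in a small sketch; the 20 millisecond figure for the eye is the one cited above, and the frame rates are the nominal standards (NTSC in practice runs at 29.97 frames per second):

```python
# Field and frame intervals for the standards discussed above, compared with
# the ~20 ms interval the text cites for the eye's "native" refresh.

EYE_INTERVAL = 0.020  # seconds, as cited in the text

standards = {
    "NTSC (interlaced)": 30,              # nominal frames per second
    "PAL/SECAM (interlaced)": 25,
    "Computer monitor (progressive)": 75,
}

for name, rate in standards.items():
    frame = 1.0 / rate
    field = frame / 2 if "interlaced" in name else frame
    print(f"{name}: frame {frame*1000:.1f} ms, "
          f"field/refresh {field*1000:.1f} ms, "
          f"= {field / EYE_INTERVAL:.2f} x eye interval")
# PAL field: 20.0 ms (1.00 x); NTSC field: 16.7 ms (0.83 x, i.e. five-sixths)
```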

The interval of the screen render is equal to, or faster than, the eye’s refresh interval, providing for a machine vision – one that does not “see” (except in binary calculations), one that is imperceptible to the human eye. Using the video camera as a prosthetic vision device, one can, by controlling the shutter speed (the refresh rate) of the camera, see barreling on a video display – see the spaces “between” the smallest increment of electronic vision. This is a transcoding of the screen’s native refresh rate so that it is offset against the human eye’s native refresh rate. The refresh rates remain constant but are mediated by the prosthetic vision of the camera that imagines but does not see (4). The image is also rendered according to its context within the frame or field of vision, and machine vision is no exception. The principles of relativity are brought to the fore here, where the target picture elements are rendered according to the surrounding pixels. The same holds for interpolating digital data in image processing applications such as Adobe Photoshop™. The user has three interpolation options – “nearest neighbor”, “bilinear” and “bicubic” – each generating a successively larger sample of the surrounding pixels to interpolate the required data.
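
For illustration, the two simplest of those interpolation options can be sketched directly; the tiny two-by-two “image” and the sample points are assumptions for the example only, and the bicubic case (a four-by-four neighbourhood) is omitted for brevity:

```python
# A minimal sketch of nearest-neighbour and bilinear interpolation applied to
# a single sample point in a tiny greyscale "image".

def nearest_neighbour(img, x, y):
    """Take the value of the single closest pixel."""
    return img[round(y)][round(x)]

def bilinear(img, x, y):
    """Weight the four surrounding pixels by their distance from (x, y)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    top = img[y0][x0] * (1 - dx) + img[y0][x0 + 1] * dx
    bottom = img[y0 + 1][x0] * (1 - dx) + img[y0 + 1][x0 + 1] * dx
    return top * (1 - dy) + bottom * dy

img = [[0, 100], [100, 200]]               # 2 x 2 greyscale values
print(nearest_neighbour(img, 0.4, 0.4))    # 0     (snaps to the top-left pixel)
print(bilinear(img, 0.5, 0.5))             # 100.0 (weighted average of all four)
```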

To return to the frame: this blurring of the temporal edge of the frame through interlacing is in addition to the blurring of the spatial edge of the frame. The time/space dichotomy falls neatly within Einstein’s model of relativity, and story and narrative transpiring outside the spatial frame of the screen are often used to great effect. This applies equally to the screen as apparatus, where we have overscan and underscan frames and then further demarcations of “picture safe” and “title safe” areas within the frame, each signifying a frame edge. There is a series of rehearsal edges before we meet the frame’s “true” edge, and the particular screen object we are viewing determines where that frame edge lies for that interaction.
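
Those nested rehearsal edges can be sketched numerically; the 90 per cent and 80 per cent proportions used here are the commonly cited broadcast conventions rather than figures taken from this paper, and the PAL frame size is chosen only for illustration:

```python
# Assumed safe-area conventions (action/picture safe ~90%, title safe ~80%),
# sketched against a PAL digital frame purely for illustration.

def safe_area(width, height, fraction):
    """Return the (width, height) of a centred safe area."""
    return round(width * fraction), round(height * fraction)

full = (720, 576)                        # PAL digital frame
picture_safe = safe_area(*full, 0.90)    # outer rehearsal edge
title_safe = safe_area(*full, 0.80)      # inner rehearsal edge
print(full, picture_safe, title_safe)    # (720, 576) (648, 518) (576, 461)
```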

In the interlaced screen model, each frame gives way to the next in a cooperative way – an example of open architecture. There are dynamic and integrated changes occurring throughout the frame for the majority of the time it is present on screen. Conversely, in the progressive scan model, one frame is pushed aside in order to make way for the following frame. The transition in this space is a wipe rather than the dissolve of the analog screen. The current frame, starting with the top left corner (in the same way paper-based reading is privileged in the Western world), is discarded, literally superseded by the subsequent frame information for that pixel. The space between the frames, then – the grammar of the screen experience – is rendered in quite distinct ways by the formal properties of the screen, and this rendering is performed relative to the eye’s functional capabilities in terms of luminance, chrominance and refresh rates. This is also what allows for supposedly imperceptible digital video compression, which is a form of “lossy” compression: the more severely data is compressed, the less “quality” it retains and the further it sits from the veracity of the source data. The algorithms at the core of these compression engines treat the distinct properties of vision in sympathy with the intended audience’s capabilities. Colour information for the frame is compressed more than the brightness component of that same data, corresponding to the human eye’s acuity for each of these properties of vision. Analog video’s version of this is merely to reduce the overall resolution of the frame as a whole, usually measured by the number of horizontal lines of video in a frame.
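
The asymmetry between colour and brightness can be illustrated with a sketch of chroma subsampling; the 4:2:0 scheme assumed here is one representative digital approach rather than a codec named above:

```python
# Illustrative sketch: 4:2:0 chroma subsampling keeps the brightness (luma)
# plane at full resolution and quarters each of the two colour (chroma) planes.

def frame_bytes(width, height, chroma_subsampled=True):
    """Bytes for one 8-bit frame: full-resolution luma plus two chroma planes,
    each reduced to a quarter when 4:2:0 subsampling is applied."""
    luma = width * height
    chroma = 2 * (luma // 4 if chroma_subsampled else luma)
    return luma + chroma

full = frame_bytes(720, 576, chroma_subsampled=False)       # 1,244,160 bytes
subsampled = frame_bytes(720, 576, chroma_subsampled=True)  #   622,080 bytes
print(f"{subsampled / full:.0%} of the unsubsampled frame")  # 50%
```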

This addressing of formal and functional aspects of the screen is also, though, the grammar of narrative – the often-cited suture (5) – the building blocks of storytelling. This space too is mediated by interval across space and time. This grammar is part of a consistent structure, and it is a structure that occurs across time. The vision transmitted via light has come from another device, which in the first instance is a storage or buffering device. The optical or digital “capturing” of this material in another realtime is categorically of the past. It cannot be of this performative moment, as it belongs to another. All of the narrative generated to this point is remote, and it is this that allows for a local experience that is of the present moment. All of the telaesthesic properties of the screen apparatus and content are focused on this present moment – the viewer’s interaction with the resultant “work”. The moment of the retransmission of both form and content, of narrative and story, is the last moment of the remote. The stage is set, the lights have dimmed, one of many narratives is resolved and the story can begin. The performative moment, the present moment of the here and now, has arrived.

The story and narrative that are recorded in this new moment are of course particular to the viewer. The resonance that forms the interactive dialogue for the viewer, through the viewer’s memory rehearsal, is similar to the screen’s rehearsal period. The moment is local because the viewer has brought to the experience the telaesthesic properties of memory and subsequent memory narratives and stories. Memory is treated in this model as generative. The architecture of memory as archive is object based. The technology of memory described here – the generative memory experience – is one in which a memory is not retrieved but conjured. Phantom-like, it becomes imbricated with this present moment, tailored for it and symbiotically attached to it. It is therefore relative and again relies on these intervals of space and time in order to synthesise the viewer with the screen. Identification, as a mainstay of time-based narrative, requires this customisation in order to perform this necessarily realtime dialogue.

These two instances of re-presentation, one of form and the other of content, are each a function of the screen’s distinct properties. They are folded within the viewer’s complementary layering of memory and story to form the viewing experience and it is here that the present moment, local, “realtime” encounter happens. Here is the interface. The viewer is interacting with recorded (6) and re-recorded vision. The screen is a performative device in addition to its roles as re-presenter and interpreter.

I will now briefly examine two installations of mine from 1998 that work overtly with this model of screen space and interaction.

This first installation work – “self remembering – promises” (7) – uses a screen black calibrated to the ambient light level in the site and presents extended periods of the three monitors just “shooting black”. The effect is that the viewer firstly questions the functionality of the monitor. Is it switched on? Is this yet another of these new media installations that don’t work? But primarily the viewer is interacting with the screen apparatus, not overtly the screen content. And then, just as the “average” viewing time for such a work is over, there are maybe six frames of new vision rendered on one of the three screens. The interaction begins anew. Immediately a narrative is conjured and some form of reading is applied to the work. The imagery is evocative but not clear, figurative but temporally molecular. There does not appear to be a rhythm or key or legend by which to read that temporality. And again, more vision, different but similar, so it’s not a loop… yet. (In fact there are 27 different figurative moments in the piece, ranging from six frames in duration to four seconds.) A review of the piece in The Age (Melbourne) by Peter Timms remains contained within the intentions of the work through his narrativisation of it.

On either side of three video screens, Verdon arranges strips of back-lit film, artfully printed with fragments of text and blurred images, as though seen from a moving car. Most of the time the screens are blank, with just the occasional flicker of light, but every now and then the image of a boy fleetingly appears, as though caught in the headlights. Before he has had time to lift his head and look back at us, he is gone.

It is a simple, elliptical and achingly sad statement about transitoriness, irresolution and the existential distances between people: about gaps, absences and opportunities missed.(8)

Timms’s reading of the work is just that: a reading of his present-moment, realtime interaction with the installation.

Kylie Message, in a catalogue essay on the work, conjures in a different space and successfully distinguishes form from content by specifically considering the screen apparatus and memory narrative.

Verdon is presenting the space of narrative as process, and localising it within the spectatorial body. And yet, it is not a sense of the disembodied or detached classical body he presents, but a contemporary spectator whose relation to the technology they look upon, cannot be easily separated. Drawing the spectator into identifying not with the content exclusively, but with the screen space itself, Verdon illustrates this complex relationship whereby the conditions for viewing, that is, the position of screen as facilitator or viewing apparatus, are internalised by the archetypally distinct viewer. Neither is this a cause and effect relationship, because as the technology is perceived as an extension of the internal life of the spectatorial body, the user’s body “simultaneously and reciprocally” comes to function “as a prosthesis for that virtual environment.”

What, may, ideally, occur here, is that the reader-spectator identifies with the space of the screen itself. In the same way that the reader identifies with the narrative structure of stories in order to progress or move, the spectator identifies with the screen in order to, likewise, move through the story. The status of the screen reflects intimately the status of narrative structure. They both function as facilitators that require a certain invisibility to be fully affective. Both narrative structure and the screen are liminal sites of transgression. They are both framing devices (of seduction) and the mode of transport (the conveyance). It is through identifying with the movement that they present (rather than the characters or content), that the space “beyond” may be accessed.

In a straight reversal, Verdon suggests that the space of storytelling is localised not within the narrative that the spectators perceive or read, but within themselves. This is indicative of his broader concerns of memory, and the way this affects interaction and reading/viewing practices. Indeed, he suggests that it is memory itself that provides the shocks necessary for engagement to occur and be propelled. As such, memory is located also at this tenuous site of liminal existence; this point where internal becomes externalised (and vice versa), and the machine and body pass, converge or divert. Indeed, the story he is constructing through this series is an illustration of this non-constant continuum that memory is in the never-completed process of constructing. As such, memory is freed from its essentially internal couching.

Whereas for Proust, the (internal) memory is stimulated by the madeleine as it signifies the elsewhere of memory, for Verdon the location of the memory remains firmly material. It is not just signified by the biscuit, it “is” the biscuit. Or, in other words, the biscuit is the elsewhere of memory. Just as the narrative structure itself is the beyond of the story. In this situation, the formal structures of narrative, computer screen, and memory are not only the things they speak, present or signify, they are also the path and vehicle of movement; the trajectory.(9)

The second of the “self remembering” installations, “self remembering – home” (10), was mounted in an informal space – the Gold Vaults in Victoria’s Old Treasury Building. This was where, in the 1850s and ’60s, yield from the Victorian gold fields was stockpiled and traded by the government of the day. The installation works with the memory store but renders it generatively.

The site is a spatial mapping of open narrative: a 63-metre corridor saturated with red fluorescent light, with the gold vaults off the corridor functioning as molecular moments in that narrative, bathed in blue light and featuring monitors displaying mnemonic loops in a room echoing with nineteenth-century history, intercut with a live video feed from the corridor in the heavily tinted blue of the security-camera aesthetic. The installation worked with a circular narrative structure whereby, spatially, the viewer moved through a series of liminal vision moments. These began with the entry to the space via an elevator opening onto the red corridor, the viewer’s optic centre scanning around a neurological colour wheel in order to orientate themselves in this strange red field, trying to make white balances that cognised. The viewer then moved through a doorway into one of several vaults, through a large open lightbox structure – a super-scaled film frame – and on to one or two monitors playing out a solitary memory loop of a hand opening and closing over a variety of vision-measuring devices and childhood signifiers. This screen image then cuts to realtime vision of the corridor behind them, completing the cycle and bringing the viewer back to the entry point, dislocated across both space and time.

Both of these works aimed to treat the screen space as an amalgam of both apparatus and media – form and content. The viewing experience in these installations is heightened in that it is not clear where the edges of the experience are. In “self remembering – promises” the screen form must be considered before screen content is revealed, and in “self remembering – home” the viewers are transported spatially and temporally via the screen but are also encouraged to create for themselves a local realtime screen experience.

Screen space is crafted as a telaesthesic experience that nonetheless has a local intimacy provided by the performative nature of the technologies of screen and memory. The screen technology’s attributes and strategies of re-presentation, interpretation and performance are closely interwoven with technologies of memory to form a temporal narrative experience that is generated remotely but performed locally.

Endnotes

  1. A physical (but not logical) exception to this rule is hardwired data within industrial vision machines, where colour bars or other diagnostic patterns may be generated. These, however, are produced by an engine within the apparatus, not the screen itself.
  2. Sacks, O. 1994, The Case of the Colour Blind Painter, p. 25
  3. Paul Virilio, in The Vision Machine, writes of many kinds of time – “Extensive time”, “Intensive time”, “Real time”, “Delayed time”, “Microscopic time” and “Exposure time” – but of these only one is experiential time locally; all the other designations are remote time.
  4. Virilio, P. 1994, The Vision Machine, p. 73
  5. Heath, S. 1994, Narrative Space, p. 403
  6. Even if vision is being fed from a live camera, there is an optical-to-electrical signal movement across space and occupying time
  7. This work was first shown at the National Gallery of Victoria in Australia in January 1998 as part of the New Q Exhibition for Midsumma Festival and was seen again in an extended form in July 1999 as self remembering – premises at Bendigo Art Gallery in Victoria, Australia
  8. Timms, P. 21.01.98, “Diverse Definitions of Gay Art”, The Age
  9. Message, K. 1999, Through the Looking Glass and into the Space of the Screen: Narrative, Electronic Space and the Museum.
  10. This work was first shown as part of Melbourne’s Next Wave Festival in May 1998 at the Gold Vaults, Old Treasury Building, Melbourne, Australia.

About The Author

James Verdon is an artist and Coordinator of Electronic Design and Interactive Media, Arts Department, Swinburne University of Technology.
