
THE METAPHORICAL SLIPPAGES OF NOISE

ashley scarlett

***

The following paper will explore noise as it is conceptualized in classical information theory and experienced through imaging technologies, in an effort to expose its technological articulation, delineate its theoretical tendencies and critique its shaky (n)ontological foundations. The purpose of this exercise is not only to develop a detailed map of noise, as it is articulated through photography and diagnostic imaging, but also to expose a number of ambiguities and critical shortcomings within the contemporary conceptualization of noise as a random and intrusive variable within the transformative economics of communication systems.

:: CHANNELLING NOISE AND RUMOURS ::

In his seminal piece, A Mathematical Theory of Communication, C.E. Shannon (1948) describes noise as a variable that interrupts and distorts the accurate transmission of communicative signals. Noise arises in the event that the received signal is not necessarily the same as that sent out by the transmitter (19). While he acknowledges that these signals are frequently transmitted in order to convey meaning (as messages), the erroneous or noisy articulation of a signal is, at its most basic level, a problem of engineering rather than semantics. Noise is caused by, and indicates, the mechanical or technical failure of one or several components within a specified communication system. While philosophers may debate and deconstruct the felicity of semantic communication (throwing the efficacy of all communication into question: does the letter ever arrive?), Shannon argues that if the technological weaknesses of communication systems could be strengthened, or in some cases fixed, communicative noise would not exist; it would be ousted by the superior functions of increasingly perfect communication technologies. 1 In addition to this, and drawing upon Dretske, the semantic value of the signal or message is not necessarily contingent upon the information possessed by the signal. As a result, an exploration of noise that relies upon the semantics of fault-inducing variables will either miss the point or come too late.

A quick digression on Shannon's canonical constitution of communication systems will enable further discussion of the properties of noise and the failings of technology. According to Shannon, communication systems are comprised of five essential elements. These are: (1) An information source, which emits the signal (message) that is to be communicated, through a channel, to a presumed destination. Shannon explains that these signals can be natural or synthetic and are exhibited, or articulated, through a number of different formal qualities, such as audio, visual, continuous or digital. The form of communication system that is employed will be dictated by the type of signal that is being captured and expressed. (2) The communication system also requires a form of transmitter, which will capture the original signal and translate it into a (potentially different) signal that is compatible with the communication channel that is being employed. Drawing upon digital photography, the sensor in a digital camera does not only capture light, as in the days of film; it also transforms the light into digital signals (bits of data) that can be read by the camera's computational system.
1 Shannon's ontological grounding suggests that there is in fact an absolute reality out there that could be perfectly captured and communicated if there were a technology effective enough to do so.


(3) The third component is the channel: the method or medium through which the signal is delivered from the transmitter to the receiver. (4) Having arrived through the channel at the fourth component, the receiver, the signal is once again translated: the receiver performs the inverse function of the transmitter, reconstructing the original signal (message) from that which was sent through the channel. What is important to note here is that the reconstruction of the original signal does not necessarily imply an exact replica, but is instead contingent upon the formal and material qualities of the original message, its destination and medium of (re)presentation. This will be discussed further below and becomes important insofar as these constraints determine whether a system is successful or not, and, most importantly in this case, whether or not disturbances in reception are the result of noise. This point is not explained in Shannon's work, as noise is repeatedly figured as a result of technological limitation, but will become important in future discussions regarding the ontological constitution of noise. (5) The final component of the communication system is the signal's destination, which Shannon states is the person (or thing) for whom the message is intended (2). While the destination, or non-destination as it were, is frequently the area of interest within philosophical discussions of communication, semantics and felicity, this appears to be of little concern to Shannon, who is more interested in the status of the message and the level of its comparative formal accuracy than in what happens when the message reaches its destination.

Within Shannon's consideration of communication, noise is a chance variable that intrudes upon the communication system, diminishing its capacity to accurately transmit and reproduce a particular signal (message). Noise, whether it is noticeable or not, is present within every communication channel (1983) and may interfere at any moment throughout the communicative process, from the moment of transmission through to the process of inversion carried out by the receiver. In each case, its appearance and distortion of the original/intended message is chalked up to either technological limitation (as in, the technology at hand was incapable of properly transmitting the selected information/signal) or mechanical error (as in, the technology did not work properly). What becomes important to Shannon, then, is determining the statistical (stochastic) nature of noise in an effort to begin building appropriate corrective devices that would function in collaboration with the communication system in order to negate noisy error and fill in the correct or intended data. While redundancy and efficiency/effectiveness become Shannon's concern in this case (he is, after all, concerned with noise as an engineering problem), he fails to delineate the nature or ontological value of noise; apart from its negative impact within the processes of communication it is, in and of itself (or as such), a non-element.
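By way of illustration, Shannon's schema can be reduced to a toy model. The following sketch (in Python) is illustrative only and is not Shannon's own formalism; the function name, the binary message and the flip probability are all invented. It simulates a channel in which each transmitted bit is independently corrupted by a chance variable:

    import random

    def channel(bits, flip_probability=0.05):
        # Each bit is independently flipped with some small probability;
        # this chance variable is what Shannon's model treats as noise.
        return [bit ^ 1 if random.random() < flip_probability else bit
                for bit in bits]

    message = [1, 0, 1, 1, 0, 0, 1, 0]   # signal emitted by the source
    received = channel(message)           # signal arriving at the receiver

    # Noise, in this framing, is simply the event that the two sequences differ.
    errors = [i for i, (sent, got) in enumerate(zip(message, received))
              if sent != got]
    print(received, errors)

On Shannon's account, redundancy (error-correcting codes) is then layered over such a channel so that the receiver can detect and repair precisely these discrepancies.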
This sentiment of non-existence is arguably corroborated (or better explained) by Dretske (1983), through his consideration of information and non- or mis-information. According to Dretske, information refers to the non-semantic and causally determined (or inter-relational) truth/reality that lies behind a known state of affairs (115). While he repeatedly refers to the semantically based purposes of information as the fashion in which it is accessed and comes to be known (as such), Dretske clearly states that information is not held within the semantics of the signifier, but that it is instead information, and the series of probability relations that give rise to a unique bit of information (rather than other bits of information), that precedes the (interpretive) signification process; informative stimuli exist regardless of the semantical meanings that are extracted and attributed to them. In this sense, Dretske's perspective draws upon a school of materialist thought that recognizes the discursive and articulatory importance of language, insofar as it enables being to be social, but, unlike the post-structuralists, he is grounded in a perspective that does not take language and traceable semantics as the core of being; semantical language does not facilitate the greater, or lesser, realness of objects. 2

According to Dretske, and largely as a result of his conceptualization of information as a matter of probabilistic becoming rather than meaning, there can be no such thing as misinformation. If information is the result of a series of emergences, then it cannot be falsely articulated, as this falseness would require that it not be articulated at all. Either it exists or it doesn't. It should quickly be noted that what is generally perceived to be misinformation, such as a faulty rumour or a scientific truth that is proven wrong, is not the result of incorrect information per se, but is instead the result of faulty and often discursively inclined semantics; the information was either attributed meaning falsely, or it was claimed to exist when it didn't. This constitution of information and the impossibility of non-information has significant theoretical and correlational consequences for the conceptualization and evaluation of noise.

Begging the semantic question of information (a task that he sees as being necessary if we are to cut to the chase and face ontological philosophy head-on), Dretske defines noise as an objective commodity that exists independently from the epistemic activities of meaning-making agents. Commoditized (or perhaps tokenized would be more fitting) noise agents interrupt the transformational processes that translate the emergent and material phenomena of raw information into the legible and mediated informational output of receivers. This interruption typically involves a corruption of the original or intended information, replacing it with non-correlational signals and signifiers. While noise might have a referential form, such as the discoloured pixels that frequently take over mobile phone images or the crackle of the snowy television screen, according to Shannon via Dretske, it is in fact indicative of (or referring to) an absence of information. Noise-inducing elements enter into the communication system and either corrupt the system's capacity to accurately capture and transmit the communicative information or they stand in as affectively disruptive markers for a theoretical space (in the data set) that lacks information. (This will become clearer through the upcoming discussion of imaging noise and the transformation of spatiotemporally complex processes into a two-dimensional, visual plane.) Alan Liu's assertion that noise is a non-hermeneutical element appears to corroborate this assertion.

And yet, despite an array of arguments claiming that noise signals a technological failure and loss of information (or informational potential), it is also, in many cases and in a sense paradoxically, an additive element. While noise signifies a loss of the original or intended information, it provides information about alternate, imperceptible or technologically induced 3 phenomena, such as subatomic particles left over from the big bang and thermal heat. In other words, while noise misconstrues or obscures the idealized and desired signal, it is arguably still information (a material and probabilistically derived result), just information of a different order. In this case, noise is not entropic, nor is it devoid of value; it is instead an affective entity that, if anything, marks a point of resistance within hegemonically inclined articulations of technology, communication and reality.
While it is in some ways a product of the system, in the sense that some forms of noise are born out of technological error while others are incorporated into the signal in an effort to complete the receiver's data set, noise also marks a point of rupture from the expected, as it demonstrates the presence of intentionally and unintentionally overlooked actors within the field of communication.
2 Realness in this case refers to the notion in STS that articulatory incorporation into the world of language does not make things real in a constructivist fashion (things exist outside of language), but it does make things more real as they come to exist upon multiple, increasing planes.

3 As will become clearer in upcoming sections, a significant amount of noise is the result of stray subatomic or photon particles that are not visible to the eye but are made visible through their interaction with photographic sensors. In addition to this, a number of noisy pixels are the result of chemical reactions in the sensor, such as heat transfer. In this case, while the initial signal is disrupted, the end result is still indicative of naturally occurring phenomena... and signals such things as the presence of technology... Is this a failure of the technology, or the technology being indicative of a world that would otherwise be ignored/forcefully erased?

The irony in this case is that while noise relies on a sense of realism that dictates the reality that is not accurately communicated, it also challenges this notion of realism, as noise exposes elements of reality that would otherwise go unnoticed. As Latour (2005) might argue, in addition to noise serving as an articulatory device for largely invisible actors, its appearance and disruptions might also have the capacity to spur the deconstruction (and destruction) of technological black boxes. Black boxes are frequently established around technologies under the guise of added ease and efficiency whilst simultaneously shrouding their inner functions and discursive tendencies. The break-down associated with noise necessitates that technologies be pried apart, examined and considered in order for their corruptions to be rectified. Whilst the status of noise, as a positive, negative or otherwise oriented force, is currently difficult to ascertain, it does serve to draw the reflexive attention of professional and lay-users toward technological functions and worldly elements that frequently go unnoticed.

Regardless of how one decides to conceptualize the linguistic definition of noise, as a technological failing, an informational loss or an affective entity, what is clear is that it is an evasive trope to assess definitively. The remaining sections of this paper will explore how the classical conceptualization of noise proposed by Shannon and Dretske is or is not expressed within imaging technologies and modes of communication. While the paper will try to focus upon the tangible and material practices that surround noise, one of its foci will be to draw specific elements out of these practices and demonstrate how, despite the ease with which noise is located and acted upon, its ontological and epistemological grounding remains evasive at best and problematically ignored at worst. While this paper is intended to serve as a means of laying a sturdy foundation in the informational (analytical), material and mathematical constitutions of noise (and will therefore not delve deeply into the ontological and epistemological implications of noise within these settings), this section will function to begin drawing out and exposing some of the areas of conceptual weakness, confusion and contradiction.

:: NOISE AND THE IMAGE ::


As mentioned above, noise permeates all communication systems; it impacts and degrades all modes of signal exchange, such as audio, visual and electronic, presenting significant challenges to engineers, professionals and lay-users alike. The following section will explore noise and the impact of noise within the realm of imaging technologies and practices. Imaging does not only present an accessible means of beginning to delineate the specificities of how noise functions and is conceptualized; it will also serve as a theoretical laboratory through which to begin exploring a number of philosophical and aesthetic aporia that have emerged as a result of noisy communication.

DISRUPTING NATURE'S PENCIL

Photography, right from the moment of its messy inception, has always been inextricably linked to discourses of scientific realism and technological remediation (Rosenblum 1997). Hinging upon the direct trace of worldly light, and a desire (born out of the Renaissance) for increasingly accurate representations of reality, photography served to empirically erase the handiwork of the author and, as a result, immediately became the preferred idiom for truthful, transparent and immediate documentations of the real.

This conceptualization of photography, and purpose for its use, is apparent in most photographic practices, as the image has been employed not only as a tool of science but also as a reliable means of identity formation (for better or worse), memory capture, meaning-making and education. In addition to this realist discourse, and as Muybridge's early experiments regarding the camera's ability to capture realities invisible to the naked eye demonstrated, photography has also repeatedly been figured as a prosthetic tool, whose relationship to nature is so close that it surpasses the human eye (the authoritative figure of empirical realism) in its ability to objectively observe and assess the wonders of the natural world. According to Charles Sanders Peirce, the reason for this is that photographs (or more generally photographically derived images) belong to a class of indices, as their mode of production forces them to adhere, point by point, to nature, without exception (Rose 2005). While elements of this relationship have certainly been challenged, particularly in light of digitization, the purportedly direct or mnemonic relationship between photograph and nature has permeated the history of the photographic image, serving repeatedly as the quality that makes photography ontologically and epistemologically distinct; even with early challenges posed by digitization, presumed photographic realism has remained largely intact, as the image continues to be used as a means of documenting and disseminating discursively derived visions of reality.

Paired with this notion of realism is the historically established project of remediation. According to Bolter and Grusin (2000), remediation refers to a historical trajectory of competing and corresponding logics, which they term immediacy and hypermediacy. Immediacy refers to a historical program and desire within visual culture to erase the visibility of mediation (making the image transparent) and create a sense of presence... as close as possible to our daily visual experiences (3). Hypermediacy, or the hyper-mediated origins of photography, is harder to delineate within the early days of the medium but, as Bolter (2009) acknowledges, has become a more prominent rhetoric within the digital era. The logic of hypermediacy counterbalances the erasure facilitated by immediacy by acknowledging the occurrence of representation, and making it explicitly visible. Hypermediacy multiplies the signs of mediation and in this way tries to reproduce the rich sensorium of human experience (15). It reminds the viewer both of the presence of media and of the desire for immediacy, propelling a deep-rooted need to develop increasingly accurate methods of representation. While hypermediacy might not be a crucial, or even necessarily a relevant, notion within discussions of noise, the concept of remediation, and the documented desire for ever-increasing immediacy within visual culture, does potentially provide a way of beginning to explain the affective, socio-cultural and commercial disturbances that arise in response to noise.

***

Like in most, if not all, communication systems, noise is a frequent and obtrusive actor within various instances of photographic practice (whether visibly so, or not). Imaging noise typically takes the form of a series or wash of inaccurate, grainy speckles. The tonal value, or visual appearance, of these speckles spans the full tonal spectrum and is only limited by the type of tonal imaging-process being used.
For example, black and white imaging technologies would exhibit noise in black and white, whereas colour photography would exhibit grayscale noise as well as coloured noise. While varying forms of noise appear within a broad range of imaging systems, from amateur-level digital photography to medical and diagnostic imaging technologies such as MRIs and X-rays, the grainy and speckled aesthetic that is so frequently seen within images is common to all of these systems and, likely as a result, is also the most widely explored form of visual(ized) noise (Bovik 2005). The fundamental trouble with photographic noise is that it frequently obfuscates the camera's ability to produce an accurate (life-like and therefore transparent) and reliable image of reality. 4

In other words, and perhaps quite obviously, noise inscribes the film or digital negative with a series of disturbances that diminish the quality of the resulting, visual, image. This noisy decrease in quality has many causes and implications (not least of which is a discursively and commercially established sense of desirable photo-realistic aesthetics), and yet it is repeatedly explained away, in superficial terms, as simply an instance of blatant technological failure or limitation. Drawing upon the early conceptualizations of photography as a tool of realism and project of remediation, concerns regarding noise in the image should come as no surprise. While at times merely an aesthetic and technological disturbance, noise within the photographic channel and image does have the capacity to significantly diminish the truth claims of the photographic image (Beutel 2000). This has become acutely apparent within medical and diagnostic imaging, where interference and fluctuations caused by noise have singlehandedly facilitated the repeated misdiagnosis of a number of serious medical conditions. As will likely come as no surprise, this precarious phenomenon has spurred significant research into the conceptualization, causes and means of correcting imaging-noise (Beutel 2000, Bovik 2005, Sonka 2005). While much of this research is focused within the fields of medicine and medical diagnostics, it has also had a significant impact within the realm of amateur and professional photography, as noise is an increasingly important factor within commercial areas of technological development and marketing (Gizmodo).

An interesting phenomenon that appears within the research and literature on imaging noise is a fairly rigid divide between the foci that the commercial and medical sectors have adopted in terms of noise (and its extinction). Within the commercial sector (perhaps as a result of a drive to maintain or accelerate sales), resources are largely employed in an effort to develop (superior) digital technologies that reduce noise intake within the capture-to-storage phases, while the medical imaging field has focused largely upon mapping out a series of mathematically determined types of noise in an effort to build algorithms that can counter/mask their visual impact. Drawing upon these two alternate fields of research into noise, the following section of this paper will serve to explore how noise comes to be, or infiltrates, imaging systems in the first place and how it is conceptualized (within both engineering and mathematical terms).

:: NOISY APPEARANCES ::
Photographic noise is not an inherently discrete phenomenon, nor is it a product of digitization. While, as will be discussed further below, the majority of current visual-noise rhetoric revolves around delinquent pixels, noise was also a significant actor within film-based, or analogue, photography. While film damage and light leakage are two fairly common (though under-recognized) causes of noise in analogue photography, the most frequent cause is shooting with fast (800+ ISO) film (Freeman 2008). Film's ISO level effectively determines how much light, or how many photon particles, are required in order to adequately expose the film. The higher the ISO, the less light is required, making the film faster than its lower counterparts. The ability to expose film adequately despite minimal natural light is made possible by increasing the size, or amount, of the granules of silver-halide (the chemical elements that react to photons and, as a result, chemically inscribe the film with the image) within the emulsion that covers unexposed film. An increase in the size, or area, of the silver-halide granules facilitates a greater chemical reaction without the need for extra light.
4 The irony here is that the presence, or increased visibility, of imaging noise is regularly a result of the drive to produce an accurate image despite being in a non-ideal environment (i.e. low light); this will be discussed further below.

The trouble is that the more sensitive the granule, the more likely it is to respond to photons or other particles in an unreliable and inaccurate fashion. Unlike in cases where the film grain is small and therefore goes largely unnoticed despite its corruption, the larger granules are more visible and therefore play a larger role within the aesthetic of the developed print. Concisely put, film with a high ISO value is employed in low-light settings and essentially sacrifices depth, exactitude and accuracy in favour of capturing an image, any image, of at least approximate likeness.

ISO and sensitivity levels are also a common cause of noise within digital photosystems. This being said, the communication system that digital imaging is implicated in is significantly different from that of film photography, despite their similar end result (an image), and therefore enables the presentation of noise through a variety of different (though metaphorically similar) technological means. What is interesting is that noise has become an increasingly relevant concern within the digital age, despite the fact that digital images are purportedly less noisy than film images (Freeman 2008). One explanation for this is that the number of technologically mediated interactions with the original, photo-genic, data/information has drastically increased. As will become clear in the section below, not only is there a more complicated series of cascading technological transformations within the original capture phase of digital (rather than film) photography, but there are also more opportunities for the practitioner to readily interact with or alter the technological elements of the photographic process, opening it to greater risk of noisy interferences. Regarding the latter of these two points, this increase in human//data interaction is the result not only of an increase in the number of functional options offered by individual cameras (such as landscape, portrait and leica), but also of the arguable democratization of post-processing options. While post-processing software (offered, for example, by the programs Photoshop, Lightroom or iPhoto) frequently offers tools to reduce the effects of noise (e.g. Noise Ninja), it also, depending on what types of transformations are employed, has the capacity to reveal and exacerbate noisy effects.

RAW DATA NOISE VISUALIZATION

A point of ambiguity that is introduced, and becomes increasingly significant as photography is further incorporated into digitally mediated realms, is whether post-processing or post-capture noise is imaging or computational noise.

The technological convergence of camera and computer does not only problematize the process of locating the fundamental source of the noise; it also exposes how messy a phenomenon the communication system is. Communication mediums do not give rise to discrete, finite systems but, as the history of photography suggests, are instead comprised of a complex constellation of shifting social, economic, political and technological relations. Photography can no longer mindfully be considered a matter of camera and picture, as photographic practices, from the traditional practice of capturing memories to the billions of images that are newly uploaded and circulated online every month, have arguably become a ubiquitous actor within the widest stretches and greatest depths of people's everyday lives, on and offline (Thrift 2005).

While this might be the case, according to Charles Boncelet (2005), one way to quickly bypass this problem is to deal with images (and their corresponding noise) in their RAW informational form, prior to their being translated into .jpgs or visual images. Although this works for Boncelet, and enables him to isolate particular causes, types and distributions of noise within image files, his evasive move makes an underhanded ontological claim that suggests that noise (as such) exists autonomously, outside of its unique enactment within particular, modally specific, communication systems. A trouble here is that to look at noise, isolated from the system in which it occurs, overlooks both its (co)articulatory structuration as such (prior to its enactment within a particular technology or system, noise typically exists as something else, such as a photon) as well as the reason why it exists, as a problem, at all: its disruptive role within the performative, and cascading, throes of communication. While the RAW format might expose definitive lines around which elements of the image should count, strictly speaking, as imaging noise, and while RAW files might facilitate an easier grafting of the rote mathematical equations that have come to determine noise onto the imaging file, these files have next to no relevance within the communicative role of photographic imaging. Even if the form of communication that is employed in this case is simply a matter of engineering, the RAW format is so rarely used that taking it as a grounding for further theorizing seems irrelevant to how photography is generally practised. In addition to this, while Boncelet's ontological assumption is troubling in and of itself, 5 what is particularly unnerving about it is that by disassociating noise from its performative body, Boncelet is able to employ mathematically definable and trusted models in order to set descriptive parameters and make authoritative claims about the noise-at-hand without any acknowledgement of how it is actually experienced. The concern here is that by abstracting noise from its medium of emergence, Boncelet (and, as will become apparent below, most sources on medical imaging) is able to set a series of random mathematical parameters around noise, and then, potentially as a result, works to demonstrate how noise exhibits qualities within these parameters; the argumentation lacks reflexivity as a result of inherently tautological reasoning.

***

Before delineating the various technological instances where noise infiltrates the digital system, it is important to highlight two key points that have yet to be discussed.
Firstly, noise in digital photography is generally used to refer to the appearance of stray pixels within the visual frame. Stray in this case means that the tonal value of a particular pixel (or small group of pixels) differs greatly from those surrounding it, without good reason (such as when representing fine detailing or object boundaries). What this value discrepancy amounts to, in the case of digital noise, is, for example, a number of red and blue flecks on an otherwise naturally black background. While there are a number of different forms that noise can take throughout the digitization and uploading process, noisy pixels are the most frequently encountered and are therefore at the centre of most discussions of digital imaging noise (Bovik 2005).
5 Did Heidegger not warn us of failing to beg the ontological question of being? Any being?

Secondly, while noise is visible within nearly all photo(n)-derived images, whether or not it is deemed statistically significant, and therefore problematic, is typically determined through a signal-to-noise ratio calculation. In other words, if there is a low ratio of (actual) image data to noise elements (SNR), the image has likely been (negatively) obscured as a result of corrupt or non-information (Hendee 1997). It should be noted that the parameters set around these types of ratio measurements are context/discipline specific; an individual measuring SNR for the purpose of developing commercial cameras for lay-users will likely have a different set of basic parameters than an individual who is measuring the noise present within a diagnostic imaging system. In this sense, while the appearance of noise within the photographic system is an inescapable phenomenon, the role and relevance that it plays within the visual frame resides upon shifting, frequently context- or discipline-specific, grounds.

Imaging noise is not an absolute or universal science, despite Shannon's assertion that by definition it should be; the semantical parameters set around whether noise is worth considering (overcomes the affective threshold) must, perhaps only at times thanks to routine, occur simultaneously with the engineering problem, if not prior to it. 6 This is somewhat reminiscent of Latour's notion that while worldly phenomena are real despite their not being linguistically articulated (such as in the famous case of the yeast), they are made more real through their repeated incorporation into the discursively 7 derived and semantically assessed systems of meaning. Noise might exist, the particles behind and performative articulation of noise might exist, be real, but noise as noise is a semantically grounded metaphor and is therefore more than an engineering problem; it relies upon contextually derived meaning and parameters to establish its momentary boundaries. In other words, Shannon might be missing the point, or at least a point. More on this later...
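To make the context-dependence of these ratio measurements concrete, the following sketch computes one common SNR estimate (mean signal over the standard deviation of the fluctuations) and judges it against two discipline-specific thresholds; the frame, the numbers and the thresholds are all hypothetical, invented purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    # A hypothetical flat grey frame corrupted by Gaussian sensor noise.
    frame = 120 + rng.normal(0, 8, size=(256, 256))

    snr = frame.mean() / frame.std()   # one common, context-bound definition
    print(round(snr, 1))               # roughly 15

    # The same number passes or fails depending on the discipline's parameters.
    for practice, threshold in (("consumer photography", 10),
                                ("diagnostic imaging", 40)):
        verdict = "acceptable" if snr >= threshold else "too noisy"
        print(practice + ":", verdict)

The calculation itself is trivial; what counts as a "low" ratio is, as suggested above, settled semantically and institutionally before the engineering ever begins.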

:: NOISE & TECHNICS ::


As the rudimentary pinhole camera suggests, the basic technological premises behind taking a photograph are simple. The camera shutter is deployed; photon light particles bounce off of worldly objects and are captured by a lens that funnels them into the hidden, light-less depths of the camera, where they hit a light-sensitive surface and cause a chemical reaction that produces an image of, hopefully, exact likeness. While an earlier section detailed how noise infiltrates analogue photography systems, the articulation of noise within digital systems is significantly different.

There are a number of different points at which noise might enter the photographic system. While a number of these are technologically derived, there is also imaging noise that reflects environmental actors and limitations. Much of the environmentally derived noise is the result of the random arrival of photon particles and is called shot noise (Bilissi & Langford 2011). When the camera shutter is released, photons enter into the light-sensitive cavity through the lens. While the lens will have a large effect on the amount of light that is directed into the cavity, another variable is the arrival of the original photons. Photons arrive at irregular intervals through time, and as a result, not all sensor pixels will catch the available, or sufficient, photons during the period of time that the shutter is open. While these arrival fluctuations tend to have a negligible effect when there is a high level of light intensity (insofar as there is a lot of light funnelling towards the sensor regardless of the timing of individual photons), they become extremely problematic in low-light settings, where there are minimal photons to begin with.
6 In addition to this, and as will be touched upon further in the upcoming discussion of medical imaging, the parameters that come to be set around the perceived type of noise determine what does and does not count as noise.

7 Discourse in the Foucauldian sense of the word.

In addition to shot noise, there is also dark current noise, which effectively accounts for invisible stray electrons, many of which are purportedly leftovers from the big bang, that hit the sensor and set off a heat-based reaction. What is important, particularly in the case of shot noise, is that the initial number of photons that strike and are recorded by the sensor will determine the signal-to-noise ratio of the image file. According to Bilissi & Langford (2011), the best possible SNR that can be achieved by a detector is the square root of the average number of photons collected. These averages are determined, described and acted upon through an understanding of Poisson distributions.

A point of ambiguity with regard to shot noise is whether or not it is noise at all. Shot noise reflects the natural composition of the scene, at that moment in time and from the perspective of the camera, despite the fact that the resulting image presents a vision of reality that does not match up with how the world is typically perceived with the naked eye. Returning to Shannon's and Dretske's discussions of information loss, and to photographic discourses of remediation: while shot noise might present an aesthetically undesirable or uncanny scene (as it fails to live up to remediating desires), and while it might obfuscate the clarity of the actual scene, it does arguably communicate an accurate image of worldly phenomena. In addition to this, while shot noise does articulate the limits of a particular technology, suggesting that it might be a problem of engineering, it emerges as a result of a natural chemical reaction. This introduces a secondary area of concern and consideration, namely: what is the status of noise in nature? Regardless of this, it is difficult to ascertain what exactly is noisy about shot noise, aside from the aesthetic bristles of discoloured (and sometimes hot) pixels.
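The square-root rule that Bilissi & Langford cite falls directly out of Poisson statistics: for a mean of N collected photons the fluctuation is √N, so the best achievable SNR is N/√N = √N, which is why low-light (low-N) frames are the noisiest. A short simulation makes the point; the photon counts below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    # Photon counts per pixel follow a Poisson distribution: for a mean of
    # N photons the standard deviation is sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
    for mean_photons in (10, 100, 10000):          # dim scene -> bright scene
        counts = rng.poisson(mean_photons, size=100000)
        snr = counts.mean() / counts.std()
        print(mean_photons, round(snr, 1), round(np.sqrt(mean_photons), 1))

At ten photons per pixel the SNR is barely three; at ten thousand it is a hundred, which is the statistical reason shot noise recedes into invisibility in bright scenes.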

Once the light enters into the digital camera, it is captured and elicits a chemically fuelled analogue-to-digital conversion within the camera sensor. There are a number of different means through which this conversion, and the further transformations carried out by the sensor, facilitate the emergence of noise. Before delving into this, it is important to understand how sensors are constructed to work. Please see the figure above (sourced from Gizmodo) for a visualization of sensor design.

One of the earlier and more frequent causes of noise (given that sensor size and capacity had yet to become an area of commercial focus) was the space left open in between the microscopic lenses that lined the sensors (Buchanan 2010). As a result of these spaces, a significant amount of photons would enter into the camera but would not be captured and transmitted by the sensors.

As a result, this information would be lost and the photon quotient diminished. As the last section asserted, a reduction in the number of original photons results in lower signal-to-noise ratios, making noise more apparent within the image (file and visualization). While the type of noise that this generates is conceptualized as shot noise, as a result of its statistical similarities, this noisy appearance was in fact the result of poor technological design and sensorial limitations, not photon arrival fluctuations. By drawing the micro-lenses closer together, this form of sensor-induced noise has been all but extinguished.

An interesting point of ambiguity introduced by the shrinking space in between micro-lenses is what role the operational parameters of a technology, set initially by engineers, have on whether or not the aesthetic appearance of noisy elements is in fact noise. While the sensor design limited the SNR, and induced the appearance of a noisy aesthetic within the photo-frame, the output signal may very well reflect the original, and intended, input signal. This challenges the notion that noise is a disruptive element within the communication system, as its appearance in this case is not the result of technological failure but is instead accounted for, initially and knowingly, by the design parameters. Noise is an intended element of the photographic process when employing cameras with these types of sensors. This, once again, brings the basic (perhaps ontological) status of noise into question. Not only does it call the relationship between noise and realism into question, but it also demands that one consider whether noise is in fact a disruptive variable within the communication process, or an autonomous actor, with its own aesthetic and mode of articulation.

Despite this ontological critique, the reality of the situation is that regardless of whether noise is an autonomous actor or a marker of technological failure, the visual output in this case is experienced as noisy and therefore has effectively the same obfuscatory impact as noise, regardless of whether it is noise in a technical sense or not. This introduces the possibility that noise is in fact, at its most basic level, a phenomenological or affective disruption rather than an engineering problem. This shift towards the experiential marks an important area for further inquiry, as it might be better able to account for the shifting use of noise as both a marker of technological weakness and an aesthetic experience. An affective phenomenology of noise would enable the metaphor of noise to reflect both its propensity towards adjectival and noun-based existence, simultaneously. While there won't be sufficient space or time to delve into this possibility within this paper, this area of inquiry is key not only because it might offer a better philosophical exploration of the term, but also because noise is used within many domains, including computer-based art, and the view that it is merely a disruptive variable that enters into the communication system does not account for these many deployments of its aesthetic.

***

While moving the sensorial micro-lenses together functioned to increase the number of photons captured by the sensor (the SNR level), and therefore diminish the noise that once entered the system through the lenses' in-betweens, the new proximity of the micro-lenses, and their affiliated photodiode (sensel) sites, has led to an increase in a different form of noise: thermal noise.
By moving the lenses and sensors together, a certain amount of energy bleeds and travels into additional receptor sites, causing miscalculations within the analogue-to-digital converters that transform photon-based information into camera-legible code. Looking back to Shannon's description of communication systems, this type of noise is caused at the receptor site. Interestingly enough, this type of thermal bleed has become particularly problematic in the case of small sensors, such as those incorporated into mobile devices. Given that this is an increasingly popular way to capture and disseminate images, building micro-sensors that produce less noise despite their small size is an increasingly significant area of inquiry (Okabe & Ito 2008).

Thermal noise, originating in the sensor, also emerges as a result of other, sub-particle, interactions within the sensor. According to Vaseghi (2009), regardless of the proximity of photodiodes or sensels, the heat produced within the chemical process always results in the release of at least a couple of thermal electrons; these thermal electrons are indistinguishable from the electrons freed by photon (light) absorption, and can therefore cause a distortion of the photon count at other receptor sites, causing errors within the raw data. Thermal electrons are freed at a relatively constant rate over time, which means that thermal noise increases as exposure time is increased.

As was highlighted when initially introducing digital, rather than analogue, photography, one of the key sources of photographic noise in digital systems is sensorial sensitivity, or ISO level. While ISO and sensitivity get used to describe the process that will be discussed briefly below, it should be noted that sensitivity is not an amendable variable within digital photography. (ISO and sensitivity represent yet another metaphorical hold-over from when photography was film-based.) The sensor doesn't actually become more sensitive; raising the ISO level just boosts the signal from the sensor. While this facilitates the production of an image despite little light, boosting the signal also boosts the noise values within the data, exacerbating noisy effects in what might otherwise have been a visually clear (though dark) scene. Upping ISO performance and sensor capability has become the latest trend within the commercial pursuit of the perfect, and most marketable, camera (Buchanan 2010). While mega-pixels used to limit the amount of data that could be effectively captured and represented, the number of receptor sites that can now fit onto a sensor is so high that mega-pixel values are fairly negligible. This being said, while many sensors are capable of processing and representing dozens of mega-pixels, many of them are limited by the ISO capacity of the sensor; or, in other words, by their sensor's ability to capture and communicate as much information as possible without also transmitting an increased amount of noise data (a high SNR).
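A toy model, with invented numbers and no claim to describe any particular camera's pipeline, illustrates why raising the ISO cannot rescue a dark scene: the gain multiplies the already-captured noise along with the signal, leaving the signal-to-noise ratio untouched:

    import numpy as np

    rng = np.random.default_rng(2)

    # A dim scene: few photons per pixel (Poisson), plus sensor read noise.
    photons = rng.poisson(9, size=100000).astype(float)
    raw = photons + rng.normal(0, 3, size=photons.shape)

    # Raising the ISO does not make the sensor more sensitive; it applies a
    # gain to whatever was captured -- noise included.
    gain = 8                                   # e.g. ISO 100 -> ISO 800
    amplified = raw * gain

    for label, data in (("raw", raw), ("amplified", amplified)):
        print(label, round(data.mean() / data.std(), 2))   # SNR is identical

The amplified frame is brighter, but every stray thermal electron has been brightened with it, which is precisely the exacerbation described above.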
A final form of imaging noise in digital cameras emerges as a result of quantization error. While this is precisely the type of noise that Boncelet (2005) evades by restricting his study to the RAW image, quantization noise is a significant actor once the RAW image is transformed into a .jpeg and is transferred to the computer. This form of noise arises, effectively, as a result of errors in the estimated pixel placement within the final visualization of the file. While the original (RAW) information may be present, its communication is flawed insofar as it is not properly legible. Unlike the grainy speckles discussed above, quantization noise frequently results in either the absence of large swaths of information within the actual image (frequently this space is filled by neutral grey) or jumbled and indecipherable pixels. While quantization noise exists within the ambiguous space of camera-computer, its effects take place precisely when the image is transformed from the type of information delineated by Dretske (as a series of probabilities reduced into one actual event) into the type of information that becomes semantically relevant and manipulable. What comes into question in this case is: is there a significant difference between noise that is naturally derived and noise that appears as a result of computational, software-based, injustices?

A key concern that arises as a result of tracing the phenomenon of noise through the photographic process is that while it is easy to locate (perhaps as a result of its photographic visualization), point a finger at, and trace back to a technological cause, there is very little discussion of what noise actually is. In fact, as the last section was intended to demonstrate, while there are a number of sources responsible for noise, the status of whether or not their effect within the photographic system is actually noise is both ambiguous and contentious. While the practitioners, developers and marketers of everyday photographic practices are effective at determining the cause of noise and ways to rectify it, through the development of new technologies and new cameras, there is very little time spent, within this discourse, trying to understand what noise is and what about it is ontologically specific or distinct.


:: MEDICALIZING NOISE ::

While the latter ontological point remains largely overlooked, the medical imaging industry has taken a significantly different approach to the conceptualization of, and counter-response to, visual noise. Unlike in photographic imaging, where light bounces off of worldly objects and causes its effect by travelling through a lens and imprinting a light-sensitive surface, medical imaging relies largely upon visually capturing the effects of magnetic fields, proton (dis)equilibrium, and electromagnetic radiation (rays) as they interact with the body. The physics and chemistry of medical imaging technologies and their interactions with noisy elements are too complex to describe in brief here. Suffice it to say that medical images produce visual representations, typically of the body, that exhibit similar forms of grainy, obfuscating noise as photographs. Not only does this form of noise present in a visually similar way to photographic noise, but it also arises as a result of similar interactions between the communication system, unruly worldly elements and technological short-comings. For instance, shot noise is a significant problem within radiology, as x-rays demonstrate the same type of arrival fluctuations as photons. 8

While the cause and aesthetic properties are similar, there is arguably more at stake in medical imaging than in lay-photography. As was mentioned above, noisy intervention within the medical imaging process threatens faulty diagnostic work. Not only does the uniform presence of noise lower the overall contrast of the medical image, reducing the capacity to delineate individual objects within the body, but in some cases it is also arranged in such a way that suggests the presence of something where there is nothing. While noise is rarely cause for misdiagnosis in extreme cases, it can be the determining factor between early and late diagnosis. While the commercial side of photography has paid a significant amount of attention to locating the source of noise within photographic systems, and to assessing how better technologies might be developed for future commercial consumption, within the medical imaging field a significant amount of attention has been paid to determining what types of mathematical distributions best characterize the various types of noise. By characterizing noise in this fashion, medical imaging technicians are not only able to get a better grasp on how to take an effective image with as little exposure risk to the subject as possible, but these mathematical descriptors are also used, in turn, to develop algorithms that can counter the effects of noise, digitally, after the image has been captured.

The two standard points of departure for this type of mathematical exploration of noise are the assertions that noise within the image typically follows both Poisson and Gaussian forms of distribution (Beutel 2000). Within this formulation of noise, data (signal and noise) input follows a Poisson model, such that the information provided to the transmitter is a random but temporally contingent variable that is independent of what is happening around it (as in, it is not correlational).
A useful analogy for the Poisson distribution is that if an individual were to put three buckets of water out in the rain, the number of drops that were to land in each bucket would follow a Poisson distribution; none of the buckets correlate to each other, and the amount of rain that they catch is random, despite being contingent upon where the bucket is located and the particular time (and time period) at which it is put out. Within medical imaging both signals and noise are captured in accordance with a Poissonian form of distribution. Once the information is gathered, the values of its disparate parts follow a Gaussian (or normal bell curve) form of distribution. There will be information around a mean and information that exists within the polar ends; the information at the polar or extreme ends is typically construed as noise (Sonka 2004).
8 (Where the two differ is that when shooting with a regular camera, the photographer has the option to shoot again if the first image is not ideal, whereas in radiography shooting again has the potential to put the subject at great health risk; electromagnetic radiation is not something that can be safely directed towards individuals in a repeated fashion, such that the perfect image can be attained.)
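Concretely, this two-stage model can be sketched as follows; the dose, the dimensions and the three-standard-deviation cut-off are hypothetical values chosen for illustration, not parameters drawn from any actual imaging protocol:

    import numpy as np

    rng = np.random.default_rng(3)

    # Stage 1 (capture): counts at each detector site are Poisson
    # distributed -- random, yet contingent on dose and position,
    # like the rain buckets above.
    dose = 50                                    # hypothetical mean count
    counts = rng.poisson(dose, size=(128, 128)).astype(float)

    # Stage 2 (description): the gathered values scatter around a mean in
    # an approximately Gaussian fashion; values far out in the tails are
    # the ones construed as noise (Sonka 2004).
    mean, std = counts.mean(), counts.std()
    stray = np.abs(counts - mean) > 3 * std      # beyond three std deviations
    print(round(mean, 1), round(std, 1), int(stray.sum()))

It is exactly this kind of parameterization that allows correction algorithms to single out the "polar ends" for treatment, a move whose costs are discussed below.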


Having established that information and noise demonstrate these distributive factors, and knowing the various physical and mathematical properties of the imaging technologies and their components or actors, individuals doing research within the field of medical imaging noise are able to develop mathematical equations and algorithms with which to uniformly define and interact with various instances of noise within their communicative channel. While these equations are unable to account for the unique visual materializations or representations of noise within specific medical images (the third shot of my knee, for example), the mathematical properties or tendencies exhibited by the type of noise in question enable practitioners to mathematically isolate, through the aid of a computer, and alter these unique displays of noise such that their obfuscating, visual elements are no longer as affectively apparent. 9

One of the troubles with locating, assessing and diminishing noise is that noise diminishment involves the neutralization of pixels that exist outside of the normal region of a Gaussian distribution. This results in a softening of the image. The reason for this is that noise, boundaries, and sharp detail within photographs are all established through the presentation of highly contrasting pixels. The irony in this case is that while the reduction of noise is intended to increase the quality of the image, this move also has the counter-effect of erasing fine detail (Bovik 2005).
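A minimal, hypothetical version of such a correction, not any clinical system's actual algorithm, makes the trade-off visible in code: the same thresholding that normalizes stray pixels also rewrites any genuine high-contrast detail that lands in the tails:

    import numpy as np
    from scipy.ndimage import median_filter

    def normalize_outliers(image, k=3.0):
        # Flag pixels more than k standard deviations from the mean...
        mean, std = image.mean(), image.std()
        stray = np.abs(image - mean) > k * std
        # ...and replace them with a local median, pulling them back
        # towards their neighbourhood. Sharp boundaries and fine detail
        # are also high-contrast outliers, so they are softened too.
        return np.where(stray, median_filter(image, size=3), image)

    # e.g.: cleaned = normalize_outliers(noisy_scan_array)

Once a stray pixel has been given a normalized, neighbourly value, nothing in the file distinguishes it from a "natural" one, which is precisely the presence-that-is-an-absence discussed below.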
One of the things that remains unclear throughout this process, particularly in light of the mathematical parameters set around noise, and the computer's capacity to locate the outlying, or stray, pixels and provide them with new, normalized (and therefore visually imperceptible) values, is what happens to noise once it is remedied. Once the stray pixels are centralized, do they still exist as noise or not? While the altered pixels mark a point of missing information and potential technological failings, they are frequently so accurately matched as to appear naturally derived. In this case, while the original information (or lack thereof) still stands, at least as a digital element, its material presentation does not reveal its lineage or insides. In this sense, the fixed pixel is conceptually transformed into a presence that is always necessarily an absence; this is the same type of theoretical rhetoric that has been employed to describe photography since the late 19th century.

In addition to this, and as was introduced through the discussion of RAW image data, while a number of mathematical distributions and algorithms have been developed to describe the tendencies of noise, there is still very little understanding of noise itself and its particularities. While this might not seem problematic, given the theoretical ability to isolate and reduce the visual appearance of noise, failing to understand the discursive forces and articulatory fluctuations of its presentation could prove detrimental should a situation arise in which noise, or the intervention of random variables, fails to adhere to its calculated tendencies. While in imaging there seems to be very little at stake should this break-down happen (photos might be lost or diagnostics redone), there are other areas of communication (particularly computer-mediated forms of communication) in which a breakdown of the system as a result of noisy interventions could lead to significant and meaningful, negative (or maybe positive) consequences. In addition to this, and returning to the notion that noise marks the limits of technology, or perhaps an articulation of the technology itself (and its interaction/articulation within the non-human world), a directed exploration of the ontological and epistemological elements of noise might not simply serve to conceptually guard against potential future melt-downs but might also facilitate a greater understanding of technologies as meaningful actors unto themselves, as opposed to human-directed tools or prosthetics.
9 To explain this a bit further, medical images are translated into digital files, and the Gaussian distribution principle is used to locate the pixellated bits of information that are x number of standard deviations from the norm, in order to diminish their difference from the norm and effectively reduce their noisy, deviant impact.

Drawing upon Introna (2008), this sort of study or perspective has the potential to facilitate the development and implementation of an object-oriented ethics.

:: MOVING UPWARDS AND ONWARDS :: CLOSING REMARKS AND AREAS FOR FUTURE STUDY

While its presence and visual form are easy to locate and describe, it is difficult to conclusively ascertain the ontological and epistemological groundings of noise, as it is conceptualized within information theory, contemporary photography and medical imaging. As the previous section demonstrated, while it might be easy to point a finger at noise and even trace it back to its technologically mediated point of origin, much of the literature and work on imaging noise to date frames it as an ontologically and epistemologically indistinct phenomenon. Alan Liu's assertion that noise is non-hermeneutical corroborates this sentiment. The trouble arises, as the last two sections demonstrated, when the various different types of noise are pried apart and examined both in practice and in relation to their medial and conceptual lineage (Shannon and Dretske). The quick and easy ability to point and blame falls away as it becomes clear that what actually counts as noise is not clear at all. At its most basic level, is noise: a late-arriving photon? A stray subatomic particle? Thermal electrons? Is it the signal that doesn't adhere to the originally determined, desirable signal? (This last point sounds uncannily similar to the line of philosophical questioning that seeks to ascertain the felicitousness of semantical exchange; precisely what is ignored through all of the discussion of noise introduced throughout this paper.) Is it the visually disruptive flecks on the screen?

With these points of ambiguity and confusion in mind, the glaring question becomes: What exactly is noise? Is it an adjective, a word that describes the effects of undesired (though likely) infiltration and obfuscation? Or is it a noun, the thing doing the infiltration? Regardless of all the talk about the technical, analytical and engineering elements of noise, what it is is never established, leading to a number of basic ambiguities, oppositions in reasoning, and points of confusion. While noise is construed as a problem of engineering, a review of the literature on noise suggests that engineering does not provide an adequate means of actually assessing the meaning and fundamental properties of noise itself. Although engineering might be able to define the parameters of a particular problem, and react to that problem in such a way that it is diminished, the actual nature of the problem is never clear. While there are arguably a number of different methods for approaching this conceptual problem, there are a number of philosophical routes that I plan on taking from here in an effort to tackle the ontological, epistemological and aesthetic abyss of noise.

1. Noise is frequently framed as an indicator of technological or communicative failure; the intended signal does not arrive. When not qualified within these terms, it is described, statistically, as a variable that lies outside of the normal range. In light of this, to begin understanding the epistemological implications of noise it will be important to delineate the meaning of failure and abnormal. If noisy pixels are indicative of invisible subatomic particles, where is the failure in their visual representation?

2. What is the relationship between noise and Agamben's notion of exception? How does this relate to the notion that noise articulates the limits of technology, drawing attention to this co-constitutional role (within the event)?
3. If noise is a deviation, an exception, what happens when its inadvertent aesthetic is reappropriated as art? Is this still noise? If not, how do the aesthetic and ontological conceptions of noise relate to one another?


WORKS CITED

Beutel, J. (2000) Handbook of Medical Imaging: Physics and Psychophysics, Volume 1. Bellingham: SPIE.
Bilissi, E. & Langford, M. (2011) Langford's Advanced Photography. Oxford: Focal Press.
Bolter, J. (2009) Writing Space: Computers, Hypertext, and the Remediation of Print. Taylor & Francis e-Library: Routledge.
Bolter, J. & Grusin, R. (2000) Remediation. Cambridge, MA: MIT Press.
Bovik, A. (2005) Handbook of Image and Video Processing. Burlington: Elsevier Academic Press.
Deleuze, G. (1979) Cinema 2: The Time-Image. London: The Athlone Press.
Dretske, F. (1983) Précis of Knowledge and the Flow of Information. Behavioral and Brain Sciences, 6.
Foucault, M. (1978) The History of Sexuality Vol. 1: An Introduction. New York: Random House, Inc.
Freeman, M. (2008) The Complete Guide to Night and Lowlight Photography. Cambridge: Sterling Press.
Introna, L. (2009) Ethics and the Speaking of Things. Theory, Culture & Society, 26, 25.
Kogan, S. (2008) Electronic Noise and Fluctuations. Cambridge: Cambridge University Press.
Latour, B. (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Latour, B. (1990) Pandora's Hope: Essays on the Reality of Science Studies. Cambridge: Harvard University Press.
Okabe, D. & Ito, M. (2003) Camera Phones Changing the Definition of Picture-Worthy. Japan Media Review.
Rose, G. (2005) Visual Methodologies. London: Sage.
Rosenblum, N. (1997) A World History of Photography. Abbeville Press.
Shannon, C.E. (1948) A Mathematical Theory of Communication. The Bell System Technical Journal, Vol. 27, pp. 379-423, 623-656, July, October, 1948.
Sonka, M. (2004) The Handbook of Medical Imaging: Medical Image Processing and Analysis. Bellingham: SPIE.

Thrift, N. (2005) Knowing Capitalism. London: Sage.
Vaseghi, S. (2008) Advanced Digital Signal Processing and Noise Reduction. West Sussex: John Wiley & Sons Ltd.
Zalevsky, Z. & Mendlovic, D. (2004) Optical Superresolution. New York: Springer-Verlag.
