The Philosophy of Digital Art

The philosophy of digital art is the philosophical study of art that crucially relies on computer processing in its production or presentation. There are many kinds of digital art, including digital cinema and video, digital photography and painting, electronic music, literary works generated by so-called “chatbots”, NFT art, net art, and video games. For the full range of digital art kinds, the aim is to identify their essential features, ground their proper appreciation, and situate our understanding of them in relation to pre-existing debates in aesthetics. This first-order inquiry cannot proceed without acknowledgment of the enormous interdisciplinary and popular interest in digital media. Claims are frequently made about fundamental shifts in the way we classify, evaluate, and engage with art now that computers seem to be involved in every kind of cultural production. The so-called “digital condition” (Kittler 1999) is characterized by a loss of trust in the image, a new way of experiencing the world as indeterminate and fragmentary, and a breakdown of traditional boundaries between artist and audience, artwork and artistic process. If we are looking for evidence of the digital condition, we need to understand its conceptual structure. Here’s where the philosopher comes in.

Although technology-based art is viewed as the “final avant-garde of the twentieth-century” (Rush 2005), and digital art has been part of the mainstream art world since the late 1990s (Paul 2008), the philosophy of digital art is still an emerging subfield. Three seminal monographs, one on videogames (Tavinor 2009), one on digital cinema (Gaut 2010), and one on computer art (Lopes 2010), have been invaluable in laying the groundwork concerning philosophical questions about art and computer technology. Since these publications, further philosophical attention has been given to the digital arts, including the first published volume to focus on the aesthetics of videogames (see Robson & Tavinor, eds., 2018). It can be challenging for philosophers to keep up with the rapid rate at which digital technology develops. But a number of recent articles on Artificial Intelligence (AI) in the arts show that philosophers are well aware of and ready to meet this challenge (Atencia-Linares and Artiga 2022; Millière 2022; Moruzzi 2022; Roberts and Krueger 2022). The body of philosophical work on AI art will no doubt continue to grow, as will bodies of work on virtual reality in art and Internet art. With this growth, we can expect to learn a great deal more about the extent and character of the digital cultural revolution.

  • 1. What is Digital Art?
  • 1.1 The Digital Art World
  • 1.2 The Analog-Digital Distinction
  • 1.3 Digital Art: Production
  • 1.4 Digital Art: Presentation
  • 2. Digital Images
  • 3. Appreciating Artworks in Digital Media
  • 4.1 Defining Interactive Works
  • 4.2 Display Variability
  • 4.3 Interactivity and Creativity
  • 5. Locative Art
  • Other Internet Resources
  • Related Entries

1. What is Digital Art?

In its broadest extant sense, “digital art” refers to art that relies on computer-based digital encoding, or on the electronic storage and processing of information in different formats—text, numbers, images, sounds—in a common binary code. The ways in which art-making can incorporate computer-based digital encoding are extremely diverse. A digital photograph may be the product of a manipulated sample of visual information captured with a digital camera from a “live” scene or captured with a scanner from a traditional celluloid photograph. Music can be recorded and then manipulated digitally or created digitally with specialized computer software. And a film is now the product of an extremely complex sequence of choices between analog and digital processes at the stages of image and sound capture or composition, image and sound editing, color correction or sound mastering, special effects production, and display or projection.

The complexity of the digital cinema workflow draws attention to a further difference concerning whether reliance on the digital is restricted to the way an artwork is made or extends to the display of the work. A work may be made on a computer—say, a musical work composed with Sibelius or a play written in Microsoft Word—and yet meant for apprehension in a non-digital format—say, performance on traditional musical instruments or enactment on stage. Similarly, a film could be captured and edited digitally before being printed on traditional 35mm photochemical film for projection in theaters. By contrast, works that are purely digital include a film made and projected digitally—for example, Dune (2021); a piece of music composed and played back electronically—for example, the electroacoustic works of Gottfried Michael Koenig (see Other Internet Resources); and a work of ASCII art—an image made up from the 95 printable characters defined by the ASCII standard of 1963 and displayed on a computer monitor.

An example of ASCII art:

   |\_/|
  ( o.o )
   > ^ <

More recent kinds of purely digital art include Instagram art and Chatbot fiction. An example of the former is Land of Arca (2023), which is made up of narrative images created by AI and then curated by the Instagram account’s owner, IRK. An example of the latter is any of the myriad science fiction short stories with which several literary magazines were recently deluged.

Each of the examples above incorporates a computational process, to some degree, in the presentation of the work. In many ways, works belonging to digital media stand in stark contrast to those made by completely analog means.

The classical account of the analog-digital distinction is found in Nelson Goodman’s Languages of Art (1976). In fact Goodman’s account remains practically the only general account of the distinction. While David Lewis (1971) raises a series of objections to Goodman, Lewis’ alternative account applies only to the representation of numbers. And while John Haugeland (1981) returns to the general distinction, he effectively qualifies and re-frames Goodman’s account in order to overcome Lewis’s and other potential objections. A few philosophers interested in clarifying the concepts employed by cognitive scientists have recognized the need for a general account of the analog-digital distinction (e.g., Dretske 1981; Blachowicz 1997; Katz 2008; Maley 2011). But in this context, as well, Goodman’s account is the essential point of reference. In some ways, this is surprising or at least striking: As Haugeland points out, the digital is a “mundane engineering notion” (1981: 217). Yet the philosophical context in which the notion receives its fullest analysis is that of aesthetics. As is well-known, Goodman’s interests in this context center on the role of musical notation in fixing the identity of musical works. But a musical notation is also a standard example of a digital system.

On Goodman’s broad, structuralist way of thinking, representational systems in general consist of sets of possible physical objects that count as token representations. Objects are grouped under syntactic and semantic types, and interesting differences between kinds of representational system track differences in the way syntactic and semantic types relate to one another. Digital systems are distinguished by being differentiated as opposed to dense. The condition of syntactic differentiation is met when the differences between classes of token representations are limited such that it is possible for users of the system always to tell that a token belongs to at most one class. The condition of semantic differentiation is met when the extension of each type, or the class of referents corresponding to a class of token representations, differs in limited ways from the extension of any other type; so that users of the system can always tell that a referent belongs to at most one extension. Goodman provides the following example of a simple digital computer, a system that meets the conditions of both syntactic and semantic differentiation: Say we have an instrument reporting on the number of dimes dropped into a toy bank with a capacity for holding 50 dimes, where the count is reported by an Arabic numeral on a small display (Goodman 1976: 159). In this system, the syntactic types are just the numbers 0–50, which have as their instances the discrete displays, at different times, of the corresponding Arabic numerals. Both the conditions of syntactic and semantic differentiation are met because the relevant differences between instances of different numbers are both highly circumscribed and conspicuous. This means that users of the system can be expected to be able to read the display, or determine which number is instantiated on the display (syntactic differentiation) and which numerical value, or how many coins, is thereby being indicated (semantic differentiation).
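Goodman’s toy-bank example can be rendered as a short program. The following is a hypothetical sketch (Goodman of course provides no code; the function names `display` and `read` are our own): the syntactic types are the numerals 0–50, each display state is a token of exactly one type, and a user can always recover both the type and its referent.

```python
# A hypothetical sketch of Goodman's dime-counting "simple digital computer":
# the syntactic types are the numerals 0-50; each display state is a token
# of exactly one type, so type membership is always decidable.

def display(count: int) -> str:
    """Report a dime count as an Arabic-numeral display (a token)."""
    if not 0 <= count <= 50:
        raise ValueError("the toy bank holds at most 50 dimes")
    return str(count)

def read(numeral: str) -> int:
    """Syntactic differentiation: the token belongs to at most one type.
    Semantic differentiation: that type has a unique referent (a count)."""
    value = int(numeral)
    if display(value) != numeral:
        raise ValueError("not a well-formed display token")
    return value
```

Because the differences between tokens of different types are "highly circumscribed and conspicuous", reading the display back always yields the unique number it instantiates.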

Analog representation fails to be differentiated because it is dense. In a scheme whose types are so ordered that between any two types there is a third, it is impossible to determine that a token instantiates at most one type. Not every case involving a failure of finite differentiation is a case of density but, in practice, most are. With a traditional thermometer, for example, heights of mercury that differ to any degree count as distinct syntactic types, and these types likewise differ without limit in what they signify. Similarly, for pictures distinguished according to regions of color, for any two pictures, no matter how closely similar, one can always find a third more similar to each of them than they are to each other. Density is a feature of any system that measures continuously varying values, so long as the system in question is designed so that any difference in magnitude indicates a difference in type.

Returning to the digital, some commentators have questioned whether Goodman’s condition of (syntactic and semantic) finite differentiation is sufficient to distinguish the kind of representation in question (Haugeland 1981; Lewis 1971). John Haugeland, for example, argues that there can be differentiated schemes without the “copyability” feature that defines the practical significance of digital systems. Haugeland’s solution is to require the practical and not just the theoretical possibility of a system’s users determining type membership. In fact, however, Goodman himself would likely accept this modification. In a later work, Goodman explicitly states that finite differentiation must make it possible to determine type membership “by means available and appropriate to the given user of the given scheme” (Goodman and Elgin 1988: 125).

Whether or not a work of digital art is a work of representational art, and even with the most abstract works of digital art, there are layers of representation involved in the complex processes of their production and presentation. Most of these layers, and arguably the most important ones, are digital. Where there are analog systems involved, digital translation makes possible the realization of the values of the final work. This is perhaps best seen with paradigmatic cases of digital art. Consider the following two relatively early works:

  • Craig Kalpakjian, Corridor, 1995. Computer-generated animation on laser video disc, in the collection of the San Francisco Museum of Modern Art. The video leads us slowly down an empty office hallway that is slightly curved and evenly lit, with pale, blank walls and opaque glass windows.
  • Cory Arcangel and Paul B. Davis, Landscape Study #4, 2002. Installation. A “reverse-engineered” video game that aims to transpose our everyday surroundings onto a video game platform. The work “plays” on a Nintendo gaming system and displays a continuously scrolling landscape with the blocky, minimalist graphics of the Mario Bros. game.

The first of these works involves digital moving imagery that is entirely generated by a computer program. At the same time, the video looks like it was or could have been recorded in an actual office setting. The particular significance of the work depends on the viewer being aware of its digital composition while at the same time being struck by its photorealistic familiarity. According to the San Francisco Museum of Modern Art (SF MoMA),

Kalpakjian thus reveals the complete artificiality of the built environments we inhabit, and their aesthetic distance from more humanistic forms. (SF MoMA n.d.)

The second work involves imagery that was initially captured digitally. Arcangel & Davis began by taking 360-degree photographs of Buffalo, New York. They scanned and modified the photographs on their computer so that the images could be coded according to the graphics capabilities of the Nintendo gaming system, and in order to give the images the distinctive look and feel of the Mario Bros. game. Arcangel & Davis then programmed the landscape imagery to scroll continuously across a TV screen, as in the Mario Bros. game. Finally, Arcangel & Davis melted the chips in a Super Mario cartridge, replacing them with their self-manufactured chips so that their landscape “game” could be run on any Nintendo system. As well as all the ways in which Arcangel & Davis’s work relies on both the technology and aesthetics of videogames, there are clearly ways in which it deliberately removes or blocks certain key features or capacities of videogames, perhaps most notably their robust interactivity. Playing a videogame essentially involves the prescribed creation of new display instances of a work. But we do not “play” Landscape Study #4, and its imagery is fixed by the artist. The kind of interactivity typical of videogames can also be found in artworks made without computers (see Lopes 2010: 49). But this type of interactivity is most closely associated with digital art because complex interactivity is so much easier to achieve with the use of computers. This suggests a high degree of self-consciousness in Arcangel & Davis’s decision to block the interactivity of their reverse-engineered videogame. From the perspective of the philosophy of digital art, such a decision highlights the need for further discussion of the link between the nature of the digital and the nature of interactivity.

What is it about the ways in which the works by Arcangel & Davis, and by Kalpakjian, are produced that makes them digital in an appreciatively relevant sense? Computer imaging depends on the inherent programmability and automation of digital computers. Digital image capture depends on sampling and subsequently on the near-instantaneous processes of discrete coding. None of this would be possible without a series of linked systems each with finitely differentiated settings.

At the most basic level, the myriad transistors in a computer are essentially tiny digital schemes, each with two types: the “on” and “off” settings of the transistor-capacitor switch. The settings are discrete and distinguishable, as are their compliance classes, of 1s and 0s. The ubiquity of binary code in computer processing is a consequence of the fact that a digital computer is essentially a vast collection of on-off switches. A particular sequence of 1s and 0s realized at a particular time in virtue of the requisite arrangement of transistors is a binary instance of a particular number, interchangeable with all other instances of the same number and not interchangeable with any instances of different numbers. The difference between instances of one number and instances of other numbers is strictly limited to the difference in the ordering of 1s and 0s. In other words, Goodman’s condition of finite differentiation is clearly met. In turn, the numbers can refer to other values, including the light-intensity values of an image. A computation simply involves the generation of output strings of binary digits from input strings, in accordance with a general rule that depends on the properties of the strings (Piccinini 2008). The modern (digital) computer encodes both input data and assembly languages as sequences of binary digits, or bits, and allows for the internal storage of instructions. This makes the computer essentially programmable in the sense that it can be modified to compute new functions simply by being fed an appropriate arrangement of bits.
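The point about binary instances of numbers can be made concrete with a small illustrative sketch (the function name `encode` is ours, not drawn from any source): a value is realized as a fixed-width string of 1s and 0s, and two realizations are instances of the same number exactly when their orderings of digits coincide.

```python
# Illustrative sketch: a value realized as a fixed-width binary string.
# Instances of the same number are interchangeable; instances of different
# numbers differ only in the ordering of their 1s and 0s -- Goodman's
# finite differentiation is clearly met.

def encode(value: int, width: int = 8) -> str:
    """Realize a number as a fixed-width string of binary digits."""
    if not 0 <= value < 2 ** width:
        raise ValueError("value out of range for this width")
    return format(value, f"0{width}b")

a = encode(200)   # one instance of the number 200
b = encode(200)   # another instance, fully interchangeable with the first
c = encode(201)   # differs from a and b only in its ordering of digits
```

Here the numbers can stand for further values, such as the light-intensity of a pixel, as the text describes.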

A program is a list of instructions, and instructions are strings of digits. The modern digital computer has components that serve to copy and store programs inside the machine, and to supply instructions to the computer’s processing units for implementation in the appropriate order. The outputs of a system can be made to depend on specific inputs by means of conditional if-then statements. This is what is involved in a computer executing conditional branching instructions such that it can monitor and respond to its own intermediate computational states and even modify instructions based on its own processes. Such modifications are dictated by an algorithm—the program’s set of rules and operations. It is the digital computer’s capacity for branching, due to its digital programmability, that allows for the kinds of higher-level automation involved in the use of imaging applications and sequential image-generation. Our artists, Kalpakjian, and Arcangel & Davis, do not have to enter the strings of digits for every basic operation of the computer that underlies the complex operations involved in describing and manipulating images. If they did have to do this, they would never finish making their artworks. Rather, artists can rely on open-source code, libraries, or commercial software that automatically and instantaneously supplies the lines of code required for the execution of their artistic decisions.
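Conditional branching of the kind described can be sketched in a few lines (a hypothetical, minimal example of ours; real imaging software involves vastly more of the same): the program inspects each intermediate value and selects different instructions accordingly.

```python
# A hypothetical, minimal example of conditional branching in an imaging
# operation: each pixel value is an intermediate computational state, and
# the program selects different instructions depending on that state.
# Artists never write such loops by hand for every pixel; imaging software
# supplies the equivalent code automatically.

def threshold(pixels, cutoff=128):
    """Render each pixel black or white depending on its intensity."""
    out = []
    for p in pixels:
        if p >= cutoff:        # branch on an intermediate state
            out.append(255)    # bright enough: render as white
        else:
            out.append(0)      # otherwise: render as black
    return out
```

Higher-level automation consists in stacking many such conditional operations so that a single artistic decision triggers them all at once.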

The imaging software with which Kalpakjian works allows him to generate architectural interiors in rich detail. Arcangel & Davis do not require as much from their imaging software given that they are manipulating previously captured and scanned images. The process of scanning the photographs, just like the process involved in digital photography, involves sampling and quantization of a visual source; assigning an integer, from a finite range, to the average light-intensity measured across each small area of source-space corresponding to a cell in a grid. This process involves averaging and rounding up values, and it involves measurement, or sampling, of light intensities at (spatially and temporally) discrete intervals. Some, indeed many, of the differences in light intensity across the source image or scene (and at different times, in the case of moving imagery) are thereby dropped by the process of digital image-capture. Among some media theorists, this fact has led to deep suspicion of the digitally recorded image, prompting the feeling that the digital image is always a poor substitute for the analog. Current digital technologies for image-capture and display have such high rates of sampling frequency and resolution that the values dropped in quantization are well below the threshold of human perception. At the same time, Arcangel & Davis’s Landscape Study #4 reminds us that digital artists may choose to exploit visible pixellation for particular artistic ends.
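The sampling-and-quantization process just described can be sketched schematically (the helper names `sample` and `quantize` are ours; real capture pipelines are far more involved): intensities are averaged over grid cells, and each average is rounded to one of a finite set of integer levels, so that sub-level differences are dropped.

```python
# Schematic sketch of digital image capture as described above: sample the
# average light intensity over each grid cell, then quantize it to one of
# a finite number of integer levels. Differences smaller than one level
# are dropped -- the "loss" that worries some media theorists.

def sample(cell):
    """Average the light intensity over one grid cell (sampling)."""
    return sum(cell) / len(cell)

def quantize(intensity, levels=256, max_intensity=1.0):
    """Round a sampled intensity to the nearest of `levels` values."""
    step = max_intensity / (levels - 1)
    return round(intensity / step)

# Two cells of a source image, each holding continuously varying intensities:
codes = [quantize(sample(cell)) for cell in [[0.10, 0.12], [0.90, 0.88]]]
```

With 256 or more levels at current sampling frequencies, the values dropped here fall below the threshold of human perception, as the text notes.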

A digitally recorded image need not appear any less richly detailed or varied in color than an analog image. All the same, in the terms of D. N. Rodowick, whereas the analog photograph is an “isomorphic transcription” of its subject, a digital photograph is a “data-output”, with a symbolically-mediated link to its subject (Rodowick 2007: 117–8). This ontological divide—described by William J. Mitchell as a “sudden and decisive rupture” in the history of photography (1994: 59)—is then assumed to have aesthetic implications: Rodowick insists that the “discontinuities” in digital information “produce perceptual or aesthetic effects”. Despite this insistence, however, Rodowick goes on to acknowledge that, with enough resolution, “a digital photograph can simulate the look of a continuously produced analogical image”. This concession would seem to work against any attempt to identify the aesthetic effects of pixellation, even if “the pixel grid remains in the logical structure of the image” (Rodowick 2007: 119). But if we are to interpret Rodowick charitably, he could be implying that ontology at least partly determines appropriate appreciation; even if a digital photograph can look just like an analog photograph, its (known) digital status affects which of its perceptible features are aesthetically relevant and how we appropriately engage with them.

The media theorists’ worry about the impoverished digital image primarily refers to the production of digital images with its reliance on sampling and quantization. But there are also analogous worries about the digital presentation of images, worries about deep structural changes to analog images once they are displayed digitally—for example, on a liquid crystal display (LCD) screen or when projected digitally on a flat surface. Of course one could simply be interested in investigating these structural changes without being particularly worried about them. This shall be our approach.

The traditional method of film reel projection has been a remarkably stable and entrenched technology, remaining largely unchanged for over a century. But digital projection has almost taken over, particularly in conjunction with the networked distribution of films. Although films’ audiences may not be able to see the difference on screen between analog and digital projection, their expectations are changing—for example, about what can go wrong in the presentation of a film. A deeper assumption that has not changed, one that is almost universal among film scholars, is that films fundamentally depend on an illusion. Cinema is the art of moving images and thus its very existence depends on our being tricked into seeing a rapid succession of static images as a persistent moving image. In the philosophy of film, there is a small debate about the status of cinematic motion—whether it really is an illusion as commonly assumed. An analysis of digital projection technology reveals new complexities in this debate but ultimately provides additional reasons to stick with the popular illusionist view.

Traditional and digital projection methods could not seem more different: the former involves running a flexible film strip through a mechanical projector; the latter involves a complex array of micromirrors on semiconductor chips, which, in combination with a prism and a lamp, generate projectable images from binary code. Nevertheless, both are methods for generating the impression of a continuously illuminated, persistent moving image from a sequence of static images. Compared with traditional projection, however, digital projection includes an extra step, whereby the images in the static sequence are generated from flashes of light. In order to generate each image in the digital projector, a light beam from a high-powered lamp is separated by a prism into its color components of red, blue, and green. Each color beam then hits a different Digital Micromirror Device (DMD), which is a semiconductor chip covered with more than a million tiny, hinged mirrors. Based on the information encoded in the video signal, the DMDs selectively turn over some of the tiny mirrors to reflect the colored lights. Most of the tiny mirrors are flipped thousands of times a second in order to create the gradations of light and dark making up a monochromatic, pixellated image—a mirror that is flipped on a greater proportion of the time will reflect more light and so will form a brighter pixel than a mirror that is not flipped on for so long. Each DMD reflects a monochromatic image back to the prism, which then recombines the colors to form the projected, full-color image. This image—if it were held for long enough on the screen—would be perceived as static. In order then to produce the impression of motion in the projected, full-color image, the underlying memory array of the DMDs has to update rapidly so that all the micromirrors are released simultaneously and allowed to move into a new “address state”, providing new patterns of light modulation for successive images.
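The relation between flip-rate and brightness described here is, in effect, a duty-cycle principle, and can be sketched as follows (a simplified illustration of ours; the function name and the frame timing are hypothetical):

```python
# Simplified sketch of the duty-cycle principle behind DMD projection:
# a micromirror flipped "on" for a greater proportion of the frame time
# reflects more light and so forms a brighter pixel. The 8333-microsecond
# frame below is an illustrative figure, not a specification.

def pixel_level(on_time_us, frame_time_us, max_level=255):
    """Map a mirror's on-time fraction to a perceived brightness level."""
    duty = on_time_us / frame_time_us    # fraction of the frame spent "on"
    return round(duty * max_level)
```

A mirror that is never flipped on yields a black pixel (level 0); one that stays on for the whole frame yields the brightest pixel (level 255); intermediate on-times yield the gradations of light and dark that make up the monochromatic image.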

The two-stage process of digital projection, by which the moving image is created from a succession of static images that are themselves created by motion, draws attention to the metaphysical complexity of the question of how movies move. In particular, one is unlikely to determine the status of the impression of motion that makes possible the art of cinema unless one can determine the status of the imagery that is seen to move. Given that motion involves an object occupying contiguous spatial locations in successive moments of time, a moving object must be re-identifiable over time. A moving image in a film, arising as it does out of the rapid display of a succession of still images, is not obviously a persistent object that can be seen to move. Then again, perhaps it is enough that ordinary viewers identify an image—say of a moving train—as the same image, for the moving image to persist (Currie 1996). Alternatively, the moving image could be thought to persist as a second-order physical entity constituted by a sequence of flashing lights (Ponech 2006).

The second proposal immediately runs into trouble with digital projection. If the traditionally projected moving image exists as a series of flashes of light, in digital projection, other “intermediate” objects must be granted existence—for example, the stable point of light consisting of the rate of flashes, and gaps between them, of a single micromirror on the DMD. At the same time, the moving image itself must be stripped of its existence since it does not consist of flashes of light. This is due to the fact that, in digital projection, there are no gaps between frames and so no underlying, imperceptible alternation of light and dark. This leaves the realist in the awkward position of claiming that the moving image goes in and out of existence with the switch between analog and digital projection technologies.

The first proposal, on which cinematic motion is a secondary quality, threatens to destroy the distinction between the apparent and the illusory. It suggests a way of reinterpreting any case of perceptual illusion as a case involving the ascription of secondary qualities. That is, unless it can be shown that there are independent means of checking that we are mistaken about genuine illusions. But even if this can be shown, a problem remains: While there may not be an independent check for the motion of an image, there is likewise no independent check for a genuine illusion of color. Given the contrived conditions of film viewing, there is more reason to think of cinematic motion as akin to an illusory, than to a genuine, experience of color. With the introduction of digital projection, the conditions are arguably even more contrived. For it is not just movement in the image but the image itself that is constituted by rapid flashes of light. And the technology involved is far less accessible than that of a traditional mechanical projector in the sense that one cannot, just by looking at the projection device, see (roughly) how it works. In this way, an analysis of digital movie projection serves to reinforce the traditional assumption that cinema is an art of illusion. In addition, however, the analysis suggests that the illusion at the heart of cinema is particularly impenetrable—akin to an illusion of color, and thus an illusion of a mere appearance that cannot be checked (Thomson-Jones 2013).

With digital movie projection, we begin to see the importance of understanding the technology of display for understanding the nature of digital art. Another way we see its importance is in relation to images displayed on LCD screens. According to Goodman, images are essentially analog. Nevertheless, there seems to be a way for engineers to circumvent the essential analogicity of pictorial schemes by using digital technologies for encoded subphenomenal discrimination. Arguably, finite differentiation can be imposed on the scheme of all possible images displayed on high-resolution LCD screens. As we shall see, this has far-reaching implications for the ways in which we think about and properly appreciate image-based art.

Both in his earlier and in his later work in aesthetics, Goodman commits to “a special relation” between the analog and the pictorial, one that is seen when we compare “the presystematic notions of description and picture in a given culture”. Given two schemes, S and S′, where S consists of all descriptions or predicates in a language such as English, and S′ consists of all pictures, if we were told only of the structures of S and S′, we could distinguish the pictorial scheme by its being analog (Goodman and Elgin 1988: 130). The special relation remains, Goodman claims, despite the possibility of a digital sub-scheme made up of black and white grid patterns all of which happen to be pictures. In such a scheme, the differences between patterned types that matter for the scheme’s being digital do not include all of the differences that matter for distinguishing pictorial types. Pictures are distinguished by color, shape, and size, which vary continuously; any variation in color, shape, or size potentially results in a different picture. When we impose limits on the differences that matter for distinguishing one grid pattern in the scheme from another, we are not interpreting the grid patterns as pictures; if we were to do so, we would have to treat them as members of a syntactically dense, or analog, scheme.

Goodman’s insight about grid patterns and pictures suggests an immediate difficulty for explaining the digital status of images displayed on LCD screens: Clearly it will not be sufficient to point out that such images are pixellated, and therefore made up of small identical building blocks that impose a lower limit on the differences between display-instances. Remember that pictures are defined by color, shape, and size, which vary continuously. This means there is going to be vagueness at the limits of types—even though the physical pixels of an LCD screen ensure both that there are gaps between the possible shapes, sizes, and colors the screen can instantiate and that these possibilities are finite in number. Any means of discretely carving up the property spaces of color, shape, and size has to involve grouping into types what are in fact (subphenomenally) distinct shapes, sizes, and colors, some of which may differ less from adjacent properties grouped into other types than from properties within their own type. This makes it impossible always to determine unique class membership; hence, finite differentiation fails.

Pixellation alone, no matter the resolution, cannot account for images displayed on LCD screens belonging to a digital scheme; digital images qua images thus remain stubbornly analog. But perhaps a closer analysis of digital imaging technology can show that finite differentiation is met after all. Current technologies for sampling and instantiating light intensities group objective colors well below the level of phenomenal discrimination. For example, in the standard “Truecolor” system, a display pixel has three 8-bit subpixels, each of which emits a different visible wavelength with an intensity from a range of 256 values, yielding over 16 million objective colors. Such a large number of available colors gives the impression of a color continuum when, in fact, digital sampling technology has been used to carve up the objective color space into a disjoint series of wavelength intensities. On the one hand, from the fact that display pixels can be lit at intensities between and indiscriminable from adjacent discriminable intensities, it seems to follow that finite differentiation fails. On the other hand, precisely because digital technology involves microtechnology and metrology for subphenomenal discrimination between colors, the light intensity groupings that are expressed numerically as red-blue-green triplets (in, say, the Truecolor system) can be narrower than the objective color types that contribute to the resultant image scheme. The key is keeping the variations in the essentially analog properties of color, shape, and size small enough so that they cannot accumulate to the point of making a difference to image perception (Zeimbekis 2012). The types in the scheme of digital images are technologically segmented, transitive groupings of the same color-, shape-, and size-experiences. The carving out of a transitive sub-set of magnitudes has to occur relative to the needs of the users of the system. 
In the case of digital color, the types are classes of light intensities sufficient to cause the same color experience for normal human perceivers. The replicability of digital images is made possible by the gap between the discriminatory limits of the human visual system and the discriminatory limits of digital sampling technology.
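The arithmetic behind the Truecolor scheme, and the kind of discrete grouping just described, can be sketched as follows. This is an illustrative sketch only: the `quantize` helper is hypothetical, not drawn from any of the cited sources, and stands in for whatever sampling process a real display system uses.

```python
# Truecolor ("24-bit") color: each display pixel has three 8-bit
# subpixels (red, green, blue), each lit at one of 256 intensity levels.
BITS_PER_SUBPIXEL = 8
LEVELS = 2 ** BITS_PER_SUBPIXEL     # 256 values per subpixel
TOTAL_COLORS = LEVELS ** 3          # 16,777,216 objective colors

def quantize(intensity: float, levels: int = LEVELS) -> int:
    """Map a continuous intensity in [0.0, 1.0] to one of `levels`
    discrete values -- the disjoint grouping that makes the scheme
    finitely differentiated."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must lie in [0.0, 1.0]")
    return min(int(intensity * levels), levels - 1)

# Two physically distinct (subphenomenally different) intensities
# fall into the same discrete type:
assert quantize(0.5001) == quantize(0.5002)
```

The final assertion illustrates the philosophical point: intensities that differ objectively are grouped into a single type well below the threshold of human discrimination.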

Digital images can be replicated insofar as they are digital and thus finitely differentiated. They are finitely differentiated because they rely on subphenomenal sampling and display technology. In practical terms, replication depends on the use of binary code, even though this is not in fact what makes images qua images digital. Of course binary code representations are themselves part of a digital scheme. But the role of binary code in image-instantiation is just one of consistent preservation: preservation long enough to permit reproduction. Despite the inherent replicability of digital images, it does not appear to follow automatically that artworks involving these images are multiples.

The SF MoMA is in possession of the original of Kalpakjian’s work, Corridor; they control access to the video imagery. At present, the work is not available to be viewed: it cannot be viewed on-line as part of a digital archive or collection, nor is it currently on view in the physical space of the museum. The image sequence comprising the work could be multiply instantiated and widely distributed, but in fact it is not, nor is it meant to be. Similarly with Arcangel & Davis’s work, Landscape Study #4: This work is described as an installation, meant to be exhibited in a physical gallery alongside an arrangement of printed stills, with a television connected to a Nintendo Entertainment System. Again, the image sequence displayed on the television could be multiply instantiated and widely distributed, but it is not, nor is it meant to be. Clips and copies of the landscape imagery are available on-line, but these do not instantiate parts of the work itself. By contrast, works of net art are instantiated whenever they are accessed by someone on-line.

There are many kinds of net art, including various forms of experimental on-line literature, conceptual browser art, and works drawing on software and computer gaming conventions. Extensive on-line collections of visual and audiovisual net art are rigorously curated and at the same time immediately accessible to ordinary Internet users. When it comes to the conventions of access and presentation, the contrast is striking between works of net art and works like those by Kalpakjian and by Arcangel & Davis. Perhaps a digital artwork comprising multiply instantiable images need not itself be multiply instantiable. At this point, the philosophy of digital art joins an ongoing debate about the ontology of art.

On the question of whether artworks are all the same kind of thing or many different kinds of things, ontological pluralism is often taken to be implied by the primary role of the artist in “sanctioning” features of their work (Irvin 2005, 2008; Thomasson 2010). A sanction can consist simply in, say, the painting of a canvas by a self-professed artist and the subsequent display of the work in a gallery. The artist has sanctioned those features of the work that make it a traditional painting. But what was once largely implicit is now often explicit: many contemporary works of art are defined by a set of instructions for their presentation (e.g., aspect ratio, resolution). We can find plenty of examples of non-digital works that are defined by a set of instructions, such as Felix Gonzalez-Torres’ Untitled (Portrait of Ross in L.A.) (1991). This work is given to a gallery to display by way of nothing more than a set of instructions for constructing and maintaining a pile of candies. Whether non-digital or digital, the instructions determine what is part of the work and what is not, and whether the work is singular or multiply instantiable. As a result, the instructions guide appropriate interpretation of the work. On this view, ontology precedes interpretation: we cannot properly and fully appreciate a work, for the work that it is, without a prior determination of what it comprises. This is a matter of contention, however. On another way of thinking, artworks just are objects of interpretation, and there is no artwork whose boundaries can be identified before we begin interpretation (Davies 2004).

The issue of the relation between ontology and interpretation is a complex and difficult one, but progress can be made on it through an examination of digital art practices. This is particularly so in light of the high degree of self-consciousness with which many digital artists and digital art curators specify the features of digital artworks. It is a common practice, for example, when archiving net art, to have artists fill out a questionnaire in order to specify which features of a work are crucial for its preservation: whether features of appearance, timing and motion, interactivity potentials and methods, linking to other sites, or hardware and software. When a work of net art is individuated by its imagery, say, the artist has chosen to make the inherent replicability of digital imagery part of the work. That this is a choice is suggested by the existence of singular works of digital visual art, like the examples discussed above. The question of whether the works by Kalpakjian and by Arcangel & Davis can function allographically requires further investigation (see D’Cruz and Magnus 2014). But if they can so function, the artist’s presentation instructions have a primary role to play in fixing not just the art form (installation, movie, conceptual work, etc.) but the basic structure of the work: for example, in determining whether the work is singular, and thus identical with a certain kind of physical display, or multiple, with no original display. Where interactive digital works are concerned, individuation is determined by a set of algorithms. An algorithmic account of interactive digital art suggests that, although the code is important for adequate instantiation of the work, it is the algorithm that specifies the crucial features of the work (Lopes 2010; Tavinor 2011; Moser 2018). Since the code is, ontologically speaking, less relevant than the algorithm, this account makes allowances for the variability that may be found in the code when an instance of a program is run on different kinds of devices.

Reflection on the kinds and significance of choices available to an artist contributes to a full appreciation of the artist’s work. For any artwork, appreciation begins with recognition of its status as a work , the product of artistic activity of some kind, and thus something to be appreciated as the achievement of an artist or group of artists. Most commonly, this achievement is understood in terms of the aesthetically significant effects achieved by an artist with certain kinds of tools and materials and in light of certain appreciative conventions. In other words, the achievement is always relative to an artistic medium. Returning to the case of an artist choosing what to do about the inherent replicability of digital imagery, another way of thinking about this choice is in terms of the artist recognizing the limits and capacities of their chosen medium. Images conveyed digitally are always replicable and so when an artist aims to convey artistic content through digital imagery, they either have to accept the inevitable multiplicity of their works or resist the tendency of the medium and somehow specify the work’s singularity in presentation. At a more fine-grained level, our appreciation of particular effects—of color and composition, expression, narrative structure, and so on—depends on the effects themselves but also on background acknowledgment of their degree of difficulty or innovation in the relevant medium. The production of digital art relies on the computer automation of many of the tasks, both manual and cognitive, traditionally involved in making art. The effects achieved by computer automation cannot be assessed in the same way as those achieved by traditional “hands-on” artistic methods. The terms of our appreciation, therefore, need to be adjusted in the digital age. This is certainly compatible with the continued relevance of medium-based appreciation, as long as we can make sense of digital media as artistic media (Binkley 1998). 
But there is a strong tendency in film and media studies to assume that the medium has absolutely no role to play in the appreciation of digital art.

On this view, it supposedly follows from the fact that modern (digital) computers encode every kind of information in the same way, i.e., as a sequence of binary digits, that a digital artwork is no longer defined by its mode of presentation, whether in images, moving images, sound patterns, or text. A work’s display is rendered merely contingent by the fact that it is generated from a common code. By adding a particular instruction to the code sequence specifying a work, imagery associated with that work could be instantaneously converted into sounds or text, or just into different imagery. This possibility alone supposedly renders meaningless all talk of an artwork being in a particular medium and being properly appreciated in terms of that medium (Kittler 1999; Doane 2007).
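The media theorist’s premise, that a single binary code underlies text, image, and sound alike, can be made concrete. The snippet below is a toy illustration, not an account drawn from Kittler or Doane.

```python
# One and the same byte sequence can be read as text, as integers, or
# as raw bits -- the sense in which a "common code" underlies every
# digital format.
data = "art".encode("utf-8")                  # the bytes b'art'
as_ints = list(data)                          # [97, 114, 116]
as_bits = "".join(f"{b:08b}" for b in data)   # 24 binary digits

print(as_ints)   # [97, 114, 116]
print(as_bits)   # 011000010111001001110100
```

Nothing in the bit string itself marks it as text rather than, say, three grayscale pixel values; that is the observation the argument trades on.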

Given the considerable effects of digital technology on artistic production, it is perhaps understandable that some commentators are inclined toward a radical overhauling of art theoretical concepts. But their arguments in support of such an overhaul are, at best, incomplete. We see this once we note some important continuities between ways of making and thinking about art in the analog age and in the digital age. It has always been the case, for example, that “any medium can be translated into any other” (Kittler 1999: 1): Without using a computer, someone could manually devise a set of rules (an algorithm) for the translation of image values, say, into sounds or text. Moreover, a common storage and transmission means for (moving) imagery and sound is not unique to digital technology: As Doron Galili points out (2011), electronic image transmission going back to the late nineteenth century—in other words, precursors of the TV—relies on the conversion of both images and sound into electronic pulses.
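The point that such a translation rule can be devised by hand, with no computer required in principle, can be illustrated with a toy mapping from pixel brightness to musical pitch. The rule and its MIDI pitch range are invented here for illustration; they are not drawn from Kittler or Galili.

```python
# Toy translation rule: map each 8-bit brightness value (0-255) to a
# MIDI note number, linearly interpolated between two endpoints.
LOW_NOTE, HIGH_NOTE = 36, 96   # arbitrary illustrative pitch range

def brightness_to_midi(brightness: int) -> int:
    """Translate an image value into a sound value by a fixed rule."""
    if not 0 <= brightness <= 255:
        raise ValueError("brightness must be an 8-bit value")
    return LOW_NOTE + round(brightness * (HIGH_NOTE - LOW_NOTE) / 255)

# A row of image values becomes a melodic sequence.
melody = [brightness_to_midi(b) for b in [0, 64, 128, 255]]
print(melody)   # [36, 51, 66, 96]
```

The rule could be applied with pencil and paper just as well as by a program, which is why translatability between media predates digital computing.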

Apart from these important continuities, the media theorist’s inference from translatability to medium-free art simply does not hold. That we could set about “translating” the imagery of Seven Samurai into a symphony does not mean that the original artwork lacks a medium; it is a film, after all, and as such, it has to be in the medium of moving images. The symphonic translation of Seven Samurai is not the same work as the 1954 film by Akira Kurosawa. This reminds us that, in deciding whether there is a digital medium, we must not reduce the medium to the artist’s materials, for it also matters how the artist uses those materials. Nor must we limit the constitutive materials of a medium to physical materials. The case of literature shows that neither the materials of an art form, nor their modes of manipulation, need be physical. The medium of literature is neither paper and ink nor abstract lexical symbols, but letters and words used in certain ways. There are, of course, many different ways of physically storing and transmitting literary works, including by the printed page, in audio recordings, and by memory (human or computer). But from the fact that David Copperfield can be preserved in many different formats, it does not follow that this novel is any less decisively a novel and, as such, in the medium of literature.

Just as with a literary work, the preservation and transmission of digital works in different formats depends on the use of a common code, but a binary numeric code rather than a lexical one. As we have seen, words and their literary uses constitute the medium of literature. In the same way, binary code, along with the information it implements, and its artistic uses constitute the medium of digital art. This allows for the possibility that the digital medium contains various sub-media, or “nested” media (Gaut 2010). For instance, within the medium of digital art, the medium of digital visual art comprises artistic uses of computer code specifically to create images. In technical terms, such uses can be referred to as (artistic) “bitmapping”, given that a computer ultimately stores all images (2D and 3D vector) as bitmaps, which are code sequences specifying the integers assigned to light intensity measurements in a pixel grid. The medium of bitmapping is thus distinguished by a kind of digital technology, but the kind used to produce just those items belonging to the traditional medium of images.
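The notion of a bitmap invoked here, a code sequence specifying the integers assigned to positions in a pixel grid, can be sketched minimally. The particular grid and values below are illustrative only.

```python
# A bitmap in the minimal sense used above: a pixel grid in which each
# pixel is an (R, G, B) triple of 8-bit integers.
bitmap = [
    [(255, 0, 0), (0, 255, 0)],          # red pixel, green pixel
    [(0, 0, 255), (255, 255, 255)],      # blue pixel, white pixel
]

# Flattening the grid yields the linear code sequence that a computer
# actually stores and transmits.
flat = [ch for row in bitmap for pixel in row for ch in pixel]

print(flat)   # 12 integers: 4 pixels x 3 channels each
```

Artistic “bitmapping”, on the view sketched in the text, is the artistic use of exactly this kind of code to create images.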

Once the notion of digital media is revealed to be no more confused or mysterious than the familiar notion of literary media, its irreducible role in appreciation becomes apparent. To take just one example, proper appreciation of films in the digital age depends on recognizing that digital filmmaking tools do not just make traditional filmmaking easier; they also present new creative possibilities and challenges. Given the maturity and mass-art status of the cinematic art form, it is easy to take for granted the medium of moving imagery; we may think we know exactly what its limits are, and we may even think we have seen everything that can be done with it. The digital medium is different, however, and digital cinema is in both the medium of moving imagery and the digital medium.

At first glance, it might seem odd to speak of “challenges” or “limits” in relation to digital processes, which allow for instantaneous and endless modification with increasingly user-friendly applications and devices. The high degree of automation in the process of capturing an image with a digital video camera, along with increasingly high image resolution and memory capacity, could make it seem as though digital images are too easily achieved to be interesting. Then there are the practically endless possibilities for “correcting” the captured image with applications like Photoshop. When we take a photo or video on our smartphones, an AI program automatically optimizes focus, contrast, and detail. Digital sound recording is likewise increasingly automated, increasingly fine-grained, and reliant on ever-larger computer memory capacities. Modifying and mastering recorded sound with digital editing software allows for an unlimited testing of options. In digital film editing, sequence changes are instantaneous and entirely reversible—quite unlike when the editing process involved the physical cutting and splicing of a film (image or sound) strip. Digital tools thus allow filmmakers to focus (almost) purely on the look and sound of the movie without having to worry about the technical difficulty or finality of implementation.

Rather than dismissing all digital works as too easily achieved to be interesting, medium-based appreciation requires that we consider the digital on its own terms. This means we must allow for the possibility that certain kinds of increased technical efficiency can bring new creative risks. For example, even though committing to certain editorial decisions does not entail irreversible alterations to a filmstrip, arriving at those decisions involves sifting through and eliminating far more options, a process which can easily become overwhelming and therefore more error-ridden. When we properly appreciate a digital film, part of what we need to appreciate is the significance of any scene or sequence looking just the way it does when it could have, so easily, looked many other ways. Similarly, when we properly appreciate an interactive digital installation or videogame, we are, in part, appreciating certain representations, functions, and capabilities of the input-output system, made possible by digital media. This is undeniably a form of medium-based appreciation and the medium to which we appeal is digital. It is only when we think of a digital film as in a digital medium that we can appreciate it as a particular response to the creative problem, introduced by coding, of finalizing selections from a vast array of equally and instantly available options.

The case of digital cinema is perhaps a useful starting point for work in the philosophy of digital art. Digital cinema is a multi-media art form, after all, involving 2D and 3D moving images as well as sound. It also has the potential for robust interactivity, whereby audiences select story events or otherwise modify a film screening in prescribed ways (Gaut 2010: 224–43). Many of the digital tools developed by the film and video game industries are now available more widely to artists interested in making other forms of digital art, including net art, digital sound installations, and virtual reality art (Grau 2003; Chalmers 2017; Tavinor 2019). In terms of how the use of these tools affects proper appreciation, there are important continuities between the filmmaking context and the wider digital art world. In addition, the philosophy of film is a well-established subfield in aesthetics, one that engages with both film theory and cognitive science in order to explicate the nature of film as a mass art (Thomson-Jones 2014, Other Internet Resources). For many of the standard topics in the philosophy of film, interesting and important questions arise when we extend the discussion from analog to digital cinema. There is a question, for example, about the kinds and significance of realism that can be achieved with traditional celluloid film as compared with manipulated digital imagery (Gaut 2010: 60–97). The philosophy of film can provide some of the initial terms of analysis for artworks in a broad range of digital media. At the same time, it is important to approach each of the digital arts on their own terms under the assumption that the digital is an artistically significant category.

4. Interactivity

More and more, contemporary artists are taking advantage of the dynamic and responsive capabilities of digital media to make art interactive. The experimental on-line literature, conceptual browser art, and videogames mentioned above all involve user interactivity, though to varying degrees. If interactivity plays a distinctive role in the digital arts, then there is good reason to analyze the nature of these works more closely.

Not all digital works are interactive, and not all interactive works are digital. However, since computers are inherently interactive, much of the early philosophical literature on interactivity arose from the emergence of computer art (also see Smuts 2009; Lopes 2001; Saltz 1997). The distinctive character of interactive digital art is best considered in tandem with the work’s ontology.

Before analyzing interactivity further, consider the following description of the digital installation “Universe of Water Particles on a Rock where People Gather” (henceforth, “Rock where People Gather”) by TeamLab:

“Rock where People Gather” is reproduced in a virtual three-dimensional space. Water is simulated to fall onto the rock, and the flow of the water draws the shape of the waterfall. The water is represented by a continuum of numerous water particles and the interaction between the particles is then calculated. Lines are drawn in relation to the behavior of the water particles. The lines are then “flattened” using what TeamLab considers to be “ultrasubjective” space. When a person stands on the rock or touches the waterfall, they too become like a rock that changes the flow of water. The flow of water continues to transform in real time due to the interaction of people. Previous visual states can never be replicated, and will never reoccur (TeamLab 2018).

“Rock where People Gather” illustrates that interactive works permit us to appreciate both the work and the properties brought about by the interactions. To define these characteristics of interactive art, Dominic Lopes states, “A work of art is interactive just in case it prescribes that the actions of its users help generate its display” (Lopes 2010: 36, original emphasis). The display is anything that is instanced in a work, or the perceptual properties that come about via interactivity. Users help generate these features, which is what makes interactive works distinctive. However, at this point, one could imagine reading the chapters of, say, a digitized copy of The Brothers Karamazov in random order, thereby changing what properties get instanced from the original work. Does this example qualify as interactive art in the Lopesian sense? Although some stories, such as choose-your-own-adventure books, allow readers to shuffle the narrative arc, most traditional stories do not; if the randomized Karamazov example is interactive, it is only so in the weakest sense of the term, because users are not prescribed to change the properties in this way. Another way to think about these differences returns us to a work’s structure. Readers who decide to roguishly randomize a story merely change how they access a work’s structure, simply because the medium does not prohibit it, whereas readers of choose-your-own-adventure books and other interactive works can change the work’s structure in a prescribed manner (Lopes 2001: 68).

That users are responsible for generating certain features of an interactive work means that their displays, unlike those of non-interactive works, can occur in a couple of different ways (Lopes 2010: 37-38). The less standard of the two occurs when the displays of an interactive work are generated in a succession of states over a period of time, but where none of the displays can be revisited. One such example is Telegarden, a temporary work of computer art that users accessed from a networked computer. The work consisted of a table with an attached mechanical arm that dispensed water and food for the plants via the users’ inputs. As one may imagine, the garden took shape in a variety of ways over the span of its exhibition, but each state of the garden, or its succession of display states, could not be repeated. Although not common, videogames can also exhibit this kind of display variability. Consider the experimental game Cube. For a limited time, players could explore a large cube and its nested smaller cubes while racing to be the first to reach the center. As with Telegarden, players generated different properties of the game displays by interacting with it, but once a new display was generated, the previous ones were gone.

The more standard of the two variable structures for interactive works involves displays that can be repeated: most net art and videogames can be accessed many times, from multiple locations, to generate different displays. Although repeatable works are more common (at least with videogames if not museum-housed works), more needs to be said about the changing properties of these works and how the repeatability trait distinguishes interactive digital works from non-interactive digital images.

The display properties of digital images can vary from instance to instance due to even slightly different settings on different devices (e.g., brightness, resolution, intensity), and these aesthetic and structural differences between instances could be misconstrued as a form of interactivity. Since such variation does not make a work interactive, it is worth looking more closely at what is going on with non-interactive repeatable works versus interactive repeatable ones. Consider traditional performance works such as works of theater and music. Each performance might differ to a slight degree due to different performers and other varying conditions of the environment, and these may certainly affect our aesthetic experiences each time. However, those changes, in principle, do not reshape the structure of the performed play or song. In the same way, the subtle changes made with a digitally displayed image do not change the structure of the image-based work. Compare those slight artistic or aesthetic variations to the display variability of interactive works. For example, many videogames permit players to choose which route to take, quests to accept, characters to kill or save, personalities to adopt, and the like. These sorts of in-game player choices are not merely generating features such as varying the brightness or resolution, nor are they as straightforwardly interactive as a game of chess that ends in a win or a loss. Rather, the degree of variability permits multiple endings. Again for comparison, while traditional tragedies will always end on a tragic note, some highly variable works can end either on a tragic note or on one that is not at all tragic.

To articulate the above more clearly, Dominic Preston says,

for any given artwork, each possible set of structural and aesthetic properties F is a display type of that artwork. (Preston 2014: 271, original emphasis).

From the above, we can distinguish the following scenarios: works like digital photographs are ontologically similar to plays and music because they consist of one prescribed display type. While the display type might permit multiple displays (duplicates, performances, instances, etc.) with subtle variances between the particular tokens, there is still a single correct display that should be maintained or achieved. Works that instance a succession of states, such as Telegarden and Cube, consist of multiple potential display types where only one display type is instantiated at any given time. Now, compare such works with those like videogames that present us with the strong degrees of display variability mentioned earlier. Because some repeatable works can end drastically differently from one “playthrough” to the next, there is no singular, correct display. Instead, these sorts of works consist of both multiple display types and multiple displays, which means users will generate one of the possible display types (and their displays) each time they repeat the work.

According to Katherine Thomson-Jones (2021), there is a problem with Preston’s claim that interactive artworks — at least ones that are digital — have multiple display types, as well as multiple displays. This is because the digital is inherently replicable and replicability requires a transmissible display — a single display type that can have multiple, interchangeable instances. This seems to introduce a problem of incompatibility: How can we have an image whose instances still count as instances of the same image-based work when those instances, in virtue of users’ actions, look very different from one another? There are various ways one might overcome this problem — for example, by distinguishing between the display of an image and the display of an artwork that incorporates the image in question. Preston’s distinction between display and display type can continue to play a role here. While the concept of interactivity with high variability is mostly applicable to videogames, one can imagine interactive digital installations, net art, and table-top roleplaying games to which it also applies.

It is important to reiterate that the strong interactivity just described is not restricted to the digital. Whilst interactivity is a standard feature of many contemporary digital works, and the responsiveness of such works is remarkable, non-digital appreciative categories can also exploit a similar degree of display variability. For example, literature, theater, and tabletop role-playing games can be strongly interactive in ways similar to many digital artworks. What is distinctive about all such works is the way in which things like immersion, agency, identity, and fiction, to list a few features, are impacted by interactivity, due to the user’s role in the work (for more, see Robson & Tavinor 2018; Patridge 2017; Meskin & Robson 2016).

A noteworthy point about the concepts presented above is that they are all conceived with the norms of traditional art and art practices in mind, that is, works of art that are predominantly object-based. However, as described, interactive digital works can be said to have a “behavior” based on prescribed interactions. While traditional artworks typically emphasize a work that is complete prior to audience engagement, interactive works emphasize the moment-to-moment unfolding of the work by way of audience engagement. One recent approach to the ontology of art suggests that interactive works (among other kinds) are better conceived on a process-based, rather than on an object-based, model (Nguyen 2020). A key difference between the two models is that the former focuses on the user’s actions and experiences or “inward aesthetics”, while the latter focuses on the “outward” aesthetic features of the self-contained object (Nguyen 2020: 25). While a process-based account prioritizes the internalizability of a user’s actions, it does not completely give up the idea of an artistic object. Rather, it renders the object’s significance secondary insofar as any appreciation of a work’s objecthood is in service to the activities it permits (Nguyen 2020: 20).

Just as the case of digital art can enrich our understanding of interactivity, so it can enrich our understanding of artistic creativity and creative attribution. This is in part because the issues of interactivity and creativity are closely related. In making a work that is strongly interactive, an artist cedes to the user a certain amount of control in determining the look, sound, or structure of the work’s display(s). This raises a question about the user’s opportunities to be creative — to engage in creative interactions with a work — and how these opportunities can be supported or constrained by the artist’s design. In order to answer this question, we need to unpack the notion of creative control. In addition, we need to consider how creativity can be attributed to multiple agents involved at different stages of a work’s production. As it turns out, the case of AI art is particularly useful when considering the conditions of creativity and creative collaboration.

There is already a robust debate about the nature of creativity as it occurs in many different areas of human activity (see, e.g., Gaut & Kieran, eds., 2018; Paul & Kaufman, eds., 2014). There is a case to be made, however, that this debate can be enriched by drawing on work in the philosophy of AI. In everyday life, sophisticated AI systems are now being used for all kinds of purposes: We rely on these systems when we use Internet search engines, play strategic videogames, accept purchasing recommendations from online retailers, and check for viruses on our computers. In addition, we can use AI to generate new images and text, some of which can be incorporated into works of art. One can type a series of prompts into a chatbot like ChatGPT in order to generate a new screenplay in a certain genre. Similarly, one can assemble a set of images on which to train an “artbot” like MidJourney in order to generate paintings, drawings, or what look like photographs. It is common to attribute creative agency to the person who purposively initiates, and then monitors, the program used to generate the images or text that make up a work of AI art. A question remains, however, as to whether this should be the sole creative attribution when AI is involved. Some artists describe the AI system with which they work as a creative “partner”, and not just a sophisticated artistic tool. This is particularly the case with recent artworks that rely on “Deep Learning” (DL) to generate images or text in a remarkably independent way. Both chatbots and artbots rely on deep learning to categorize huge datasets (images or text) according to previously undetected patterns. To do this, a DL system must give itself new instructions, ones that depart from the initial instructions of the program, for the kind of image or text segment, and for the particular image or text segment, to be produced.

Among AI artists, the ones who write their own DL programs are the most likely to describe AI as a creative partner rather than just a creative tool. This seems particularly fitting when there is a synchronous collaboration between the artist and their DL system. For example, the works of Sougwen Chung are the result of Chung drawing alongside and in response to the actions of a mechanical drawing arm that is controlled by Chung’s custom-made program, D.O.U.G. (short for “Drawing Operations Unit Generation X”). Chung and D.O.U.G. take turns adding to a single drawing. During this process, it is harder than one might think to say what makes the difference such that Chung is a creative agent, engaged in a creative drawing process, and D.O.U.G. is not. This is the case even though it is Chung alone who comes up with the idea for the work and initiates the drawing program.

As Margaret Boden has argued (2014), when philosophers deny the possibility of a computer ever being creative, they tend to underestimate the capacities of computers to produce items that are genuinely new as well as valuable in some domain. Still, many creativity theorists argue that novelty and value are insufficient for creativity; a third condition is needed, one concerning the manner of production. In order for something novel and valuable to count as creative, it must have been produced in a way that is agent-driven. The agency condition can be filled out, or broken down, in a variety of ways. At its core, however, the notion of agency is generally assumed to be opposed to the notion of having been “programmed”, or simply instructed, to perform certain tasks in a “mechanical”, or blind, fashion. Thus, the most basic objection to the possibility of creative computers is that they can only do what they are programmed to do, according to the intentions of a human agent. This objection needs clarification, however, since no set of rules completely determines the actions involved in following that set of rules. Since the advent of DL and other kinds of generative algorithms, it is possible for a computer program to change itself, to detect phenomena other than the ones that it was designed to detect, and to mimic spontaneity through randomness. Is this an instance of creative rule-following? If it is, we might expand our notion of creative collaboration. Perhaps creative collaboration is found not just between human beings, but also between human beings and machines. In so far as the machines in question support strong interactivity, a work of digital art might involve three-part collaborations between human artists, human users, and AI.

Digital media can also be used to connect physical locations with virtual ones by means of locative media. The phrase “locative art” is traced to Karlis Kalnins, who applied it to experimental projects coming from the Locative Media Lab, a collective of international researchers and practitioners working with locative media (Galloway & Ward 2005). Since the term “locative” is connected to location, site-specificity plays a significant role in our appreciation of locative works. Unlike in non-digital cases, however, site-specificity in locative art is both physical and virtual. The general concept is this: locative media connect physical spaces with virtual ones, the perceptual features of which are generated on a digital device through human interaction. We often experience locative media in our ordinary, daily lives through navigation systems like Google Maps or Waze, and also creatively through augmented reality apps like Pokémon Go, AR graffiti (an app that allows users to visualize what their graffiti will look like in situ), architectural simulators, and more. These are location-based works, often called locative projects or locative art, that use locative media. While locative technology has been around for decades, locative art is a relatively new subcategory to be recognized within the arts, even though it is more broadly acknowledged in the fields of technology, games, and sociology.

To understand the many ways locative media might be used for artistic practice, consider the following three examples.

  • What Was There (2010–2023) was a website that allowed users to type in the coordinates of their current location (via their digital device) to see how particular geographical locations have looked throughout history. By typing a specific address into the application, the user could experience the physical location in front of them alongside images and facts about that location as it appeared in the past, from the same vantage point. The site relied on ordinary citizens to upload historical images and connect them to specific locations, making them accessible to others; the greater the participation, the richer the experience of navigating a particular place. Not only could users gain a deeper connection to specific localized places, but such applications also motivate people to explore and appreciate locations outside of their normal destinations.
  • The Transborder Immigrant Tool, devised by Electronic Disturbance Theater 2.0/b.a.n.g. lab, was a mobile phone application intended to guide individuals making their way to the United States through the deserts of the U.S./Mexico borderlands toward water sources. The application delivered poetry to its users in an effort to support their emotional and mental well-being while offering information about survival during the dangerous journey. The creators of The Transborder Immigrant Tool (TBT) considered it to be a performance intervention that included the app itself, its API, public reactions, and an ensuing government investigation. By the time TBT was ready for distribution in 2011, the border crossing had become more dangerous, presenting the risk that carrying a TBT phone might itself endanger users. While the project was never distributed to its intended users, it still succeeded in confounding systems of political control, creating a call to action that resonated internationally, and using poetry to “dissolve” the US–Mexico border (Electronic Disturbance Theater 2.0 et al. 2007). The tool has since figured in a number of artworks and exhibitions.
  • KlingKlangKlong is played with smartphones that translate the players’ locations into sound. This is achieved in a straightforward manner: By moving through the physical space, the players simultaneously move on the surface of a virtual sequencer. One dimension (usually the latitude of the player) corresponds to pitch, the other dimension (longitude) is mapped to the time-position on the sequencer. The interface allows a manual switch to other audio parameters, although this feature is experimental. Each device receives the location and state changes of the currently active players almost instantly. The devices are connected by a central server, which also sets the boundaries of the playing field and manages the virtual players.
KlingKlangKlong serves as an experimental arrangement to explore the idea of playful (social) systems, a construction where human and virtual subsystems play with each other. In addition to the human participants, it therefore deploys a number of virtual players (»Virtuals«). Humans and Virtuals meet each other in mixed reality, a concept that was realized in early locative games. The commonly created soundtrack is the primary medium of communication between the players. Musical structures may temporarily arise through synchronized movements or be destroyed by any player’s intervention (Straeubig & Quack 2016).
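The latitude-to-pitch and longitude-to-step mapping described for KlingKlangKlong can be sketched in a few lines of code. This is a hypothetical illustration only: the field boundaries, sequencer length, and MIDI pitch range below are assumptions chosen for the example, not details taken from the actual KlingKlangKlong implementation.

```python
# Hypothetical sketch of mapping a GPS position onto a virtual step sequencer:
# one coordinate axis (latitude) controls pitch, the other (longitude)
# controls the time-position (step) on the sequencer.

def map_player_to_sequencer(lat, lon, field, steps=16, pitches=(48, 72)):
    """Map a position inside the playing field to a (step, midi_pitch) cell."""
    lat_min, lat_max, lon_min, lon_max = field
    # Normalize each coordinate to [0, 1] within the field boundaries.
    u = (lat - lat_min) / (lat_max - lat_min)   # latitude  -> pitch axis
    v = (lon - lon_min) / (lon_max - lon_min)   # longitude -> time axis
    # Clamp so a player just outside the field still maps onto the edge.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    low, high = pitches
    pitch = low + round(u * (high - low))       # MIDI note number
    step = min(int(v * steps), steps - 1)       # sequencer step index
    return step, pitch

# Example: a field spanning a small urban area (lat_min, lat_max, lon_min, lon_max).
field = (52.50, 52.51, 13.40, 13.42)
print(map_player_to_sequencer(52.505, 13.41, field))  # center of the field
```

A central server would broadcast each player’s (step, pitch) cell to the other devices, which is consistent with the description above of the server managing the playing field and the virtual players.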

In each of these examples, users synchronize geographical locations with virtual ones, which affords users mobility while further challenging the paradigm of museum-driven works and art appreciation. The interactivity described in section 4 factors significantly into each of these projects, but also important are the social, economic, environmental, and political implications that factor into the design, development, and use of such works. Most of these projects are intended to bring about positive change, one way or another, by using locative media and by borrowing certain gaming mechanics for the purpose of engagement and interactivity. In fact, much of what is written about locative art comes from fields within digital media studies and concerns the media’s “playable” qualities (avoiding the idea that artists are merely trying to gamify public spaces).

On the playable nature of locative art, Miguel Sicart says,

the data produced and used in smart cities should not necessarily be presented as a utility for citizens. It should be presented as a prop for play, as games but also as the source for toys and playgrounds. Data-rich cities can become playable cities, and, by becoming such, they can become more human, more inclusive spaces (Ackermann et al. 2016: 27).

Once again, the process-oriented and performative nature of locative works is central to appreciating the spaces they create. In connection with the playability of locative media, locative projects are also modifiable and typically intended to be hackable by the public in the communities where they are displayed. While the terms “hacker” and “hacking” may carry negative connotations, in its original conception the practice of hacking was meant to improve upon existing computer programs, acknowledging the achievements of both the original creator and subsequent revisionists (Levy 1984). It is in this vein that locative projects are usually intended to change and improve over time given how users “play” with the data and inputs. In other words, local communities can play with these projects and even modify them, making locative works as relevant and reflective of the community as possible.

The above suggests that locative projects often have broader goals than mere entertainment. Although often intended to motivate play and playful attitudes, which, as indicated in the examples above, can occur to varying degrees, the goal-oriented nature of locative works often makes the kind of play involved a “serious” kind. Serious play occurs when the enjoyment and pleasure that come from play do not just comprise entertainment but also serve some meritorious, real-world goal. A work can be said to support serious play even when users are not aware of the social or political goals of the work.

The customizable nature of locative media lends itself well to the altruistic aims of many locative artists. However, inclusivity, climate protection, social change, and any other desired effects of locative projects require a given community to have sufficient funds for, access to, and user knowledge of digital locative technology. This concern can also be understood, not just in terms of community requirements, but also in terms of technological requirements, for successful locative art. One way of putting it is as follows:

For any technological device to be “aware” of its context—physical or otherwise—it has to be able to locate, classify, collect, store, and use “relevant” information, as well as to identify and discard or ignore “irrelevant” information (Galloway & Ward 2005).

There are numerous political and economic factors affecting access to local and global spaces. With locative art, these factors are concerning for both ethical and artistic reasons.

  • Ackermann, Judith, Andreas Rauscher, and Daniel Stein, 2016, “Introduction: Playin’ the City. Artistic and Scientific Approaches to Playful Urban Arts”, Navigationen – Zeitschrift für Medien- und Kulturwissenschaften, 16(1): 7–23.
  • Atencia-Linares, Paloma and Marc Artiga, 2022, “Deepfakes, shallow graves: On the epistemic robustness of photography and videos in the era of deepfakes”, Synthese , 200, article 518.
  • Binkley, Timothy, 1998, “Digital Media”, in Encyclopedia of Aesthetics , M. Kelly (ed.), New York: Oxford University Press.
  • Blachowicz, James, 1997, “Analog Representation Beyond Mental Imagery”, Journal of Philosophy , 94(2): 55–84.
  • Boden, Margaret A., 2010, Creativity and Art: Three Roads to Surprise, Oxford: Oxford University Press.
  • Chalmers, David, 2017, “The Virtual and the Real”, Disputatio , 9(46): 309–352.
  • Currie, Gregory, 1996, “Film, Reality, and Illusion”, in Post Theory: Reconstructing Film Studies , D. Bordwell and N. Carroll (eds.), Madison: University of Wisconsin Press, pp. 325–44.
  • Davies, David, 2004, Art as Performance , Malden, MA: Blackwell.
  • D’Cruz, Jason and P.D. Magnus, 2014, “Are Digital Images Allographic?” Journal of Aesthetics and Art Criticism , 72(4): 417–27.
  • Doane, Mary Ann, 2007, “Indexicality: Trace and Sign”, differences: A Journal of Feminist Cultural Studies , 18(1): 1–6.
  • Dretske, Fred I., 1981, “Sensation and Perception”, in Knowledge and the Flow of Information , Cambridge, MA: MIT Press.
  • Electronic Disturbance Theater 2.0/b.a.n.g. lab, 2007-ongoing, “Transborder Immigrant Tool”, in Net Art Anthology [available online ].
  • Galili, Doron, 2011, “The Post Medium Condition, circa 1895”, presented at the Second International Colloquium of the Permanent Seminar on the History of Film Theory: The Impact of Technological Innovations on the Theory and Historiography of Cinema, Montréal.
  • Galloway, Anne and Matt Ward, 2005, “Locative Media as Socialising and Spatialising Practices: Learning from Archaeology”, Leonardo Electronic Almanac .
  • Gaut, Berys, 2009, “Digital Cinema”, in The Routledge Companion to Philosophy and Film , P. Livingston and C. Plantinga (eds.), New York: Routledge, pp. 75–85.
  • –––, 2010, A Philosophy of Cinematic Art , Cambridge: Cambridge University Press.
  • Gaut, Berys and Matthew Kieran (eds.), 2018, Creativity and Philosophy , New York: Oxford University Press.
  • Goodman, Nelson, 1976, Languages of Art: An Approach to a Theory of Symbols , Indianapolis/Cambridge: Hackett Publishing Co.
  • Goodman, Nelson and Catherine Z. Elgin, 1988, Reconceptions in Philosophy , London: Routledge.
  • Grau, Oliver, 2003, Virtual Art: From Illusion to Immersion , Cambridge, MA: MIT Press.
  • Haugeland, John, 1981, “Analog and Analog”, Philosophical Topics , 12: 213–26.
  • Heim, Michael, 1998, “Virtual Reality”, in Encyclopedia of Aesthetics , M. Kelly (ed.), New York: Oxford University Press.
  • Irvin, Sherri, 2005, “The Artist’s Sanction in Contemporary Art”, Journal of Aesthetics and Art Criticism , 63(4): 315–26.
  • –––, 2008, “The Ontological Diversity of Visual Artworks”, in New Waves in Aesthetics , K. Stock, and K. Thomson-Jones (eds.), Basingstoke: Palgrave Macmillan, pp. 1–19.
  • Kania, Andrew, 2018, “Why Gamers Are Not Performers”, The Journal of Aesthetics and Art Criticism , 76(2): 187–99.
  • Katz, Matthew, 2008, “Analog and Digital Representation”, Minds and Machines , 18: 403–08.
  • Kittler, Friedrich, 1999, Gramophone, Film, Typewriter , Stanford, CA: Stanford University Press.
  • Levy, Steven, 1984, Hackers: Heroes of the Computer Revolution, Garden City, NY: Anchor Press/Doubleday.
  • Lewis, David, 1971, “Analog and Digital”, Noûs , 5(3): 321–27.
  • Lopes, Dominic McIver, 2001, “The Ontology of Interactive Art”, Journal of Aesthetic Education , 35(4): 65–81.
  • –––, 2010, A Philosophy of Computer Art , London/New York: Routledge.
  • Maley, Corey J., 2011, “Analog and Digital, Continuous and Discrete”, Philosophical Studies , 155(1): 117–31.
  • Millière, Raphaël, 2022, “Deep Learning and Synthetic Media”, Synthese, 200, article 231.
  • Moruzzi, Caterina, 2022, “Creative Agents: Rethinking Agency and Creativity in Human and Artificial Systems”, Journal of Aesthetics and Phenomenology, 9(2): 245–68.
  • Moser, Shelby, 2018, “Videogame Ontology, Constitutive Rules, and Algorithms”, in The Aesthetics of Videogames , London: Routledge, pp. 42–59.
  • Mitchell, William J., 1994, The Reconfigured Eye: Visual Truth in the Post-Photographic Era , Cambridge, MA: MIT Press.
  • Nguyen, C. Thi, 2020, “The Arts of Action”, Philosophers’ Imprint, 20(14): 1–27.
  • Patridge, Stephanie, 2017, “Video Games and Imaginative Identification”, The Journal of Aesthetics and Art Criticism , 75(2): 181–84.
  • Paul, Christiane, 2008, Digital Art , 2nd edition, London/New York: Thames & Hudson.
  • Paul, Elliot Samuel and Scott Barry Kaufman (eds.), 2014, The Philosophy of Creativity: New Essays, New York: Oxford University Press.
  • Piccinini, Gualtiero, 2008, “Computers”, Pacific Philosophical Quarterly , 89: 32–73.
  • Ponech, Trevor, 2006, “External Realism about Cinematic Motion”, British Journal of Aesthetics , 46(4): 349–68.
  • Preston, Dominic, 2014, “Some Ontology of Interactive Art”, Philosophy and Technology , 27(2): 267–78.
  • Roberts, T. and Joel Krueger, 2022, “Musical Agency and Collaboration in the Digital Age”, in Collaborative Embodied Performance: Ecologies of Skill, K. Bicknell and J. Sutton (eds.), New York: Bloomsbury, pp. 125–40.
  • Robson, Jonathan and Aaron Meskin, 2016, “Video Games as Self-Involving Interactive Fictions”, The Journal of Aesthetics and Art Criticism , 74(2): 165–77.
  • Robson, Jonathan and Grant Tavinor (eds.), 2018, The Aesthetics of Videogames , London: Routledge.
  • Rodowick, D.N., 2007, The Virtual Life of Film , Cambridge, MA: Harvard University Press.
  • Rush, Michael, 2005, New Media in Art , London: Thames & Hudson.
  • Saltz, David S., 1997, “The Art of Interaction: Interactivity, Performativity, and Computers”, Journal of Aesthetics and Art Criticism , 55(2): 117–27.
  • San Francisco Museum of Modern Art (SF MoMA), n.d., “Craig Kalpakjian: Corridor (1995)—About the Artwork” [Museum Exhibit Label]. San Francisco, CA [ available online ].
  • Smuts, Aaron, 2009, “What is Interactivity?”, Journal of Aesthetic Education , 43(4): 53–73.
  • Straeubig, Michael and Sebastian Quack, 2016, “Playful Locative Ensembles in the Urban Soundscapes”, Navigationen – Zeitschrift für Medien- und Kulturwissenschaften, 16(1): 85–100.
  • Tavinor, Grant, 2009, The Art of Videogames , Malden, MA: Wiley-Blackwell.
  • –––, 2019, “On Virtual Transparency”, The Journal of Aesthetics and Art Criticism, 77(2): 145–56.
  • TeamLab, 2008, “Universe of Water Particles on a Rock where People Gather” [website description], Tokyo: Japan. [ available online ].
  • Thomasson, Amie L., 2010, “Ontological Innovation in Art”, Journal of Aesthetics and Art Criticism , 68(2): 119–30.
  • Thomson-Jones, Katherine, 2013, “Sensing Motion in Movies”, in Psychocinematics: Exploring Cognition at the Movies , A.P. Shimamura (ed.), Oxford: Oxford University Press, pp. 115–32.
  • Thomson-Jones, Katherine, 2021, Image in the Making: Digital Innovation and the Visual Arts , New York: Oxford University Press.
  • Wilken, Rowan, 2019, Cultural Economies of Locative Media . Oxford University Press.
  • Zeimbekis, John, 2012, “Digital Pictures, Sampling, and Vagueness: The Ontology of Digital Pictures”, Journal of Aesthetics and Art Criticism , 70(1): 43–53.
  • Thomson-Jones, Katherine, 2014, Philosophy of Film , in Oxford Bibliographies Online .
  • Enlighten, “WhatWasThere” (2010–2023) [ WhatWasThere available online ] [ archive link ]
  • Gottfried Michael Koenig Project
  • Austin Museum of Digital Art
  • Digital Art Museum
  • Rhizome’s Artbase , the largest on-line archive of new media art
  • Digital Art & Design – Victoria and Albert Museum , information on the history and practice of computer art and design.
  • The Whitney Museum’s Art Port , providing access to the museum’s collection of digital art and an exhibition space for commissioned works of net art.

computing: modern history of | creativity | film, philosophy of | Goodman, Nelson: aesthetics | ontology of art, history of

Copyright © 2023 by Katherine Thomson-Jones <Katherine.Thomson-Jones@oberlin.edu> and Shelby Moser <shelby.moser@gmail.com>




Programmed: Rules, Codes, and Choreographies in Art, 1965–2018

Histories of the digital now

By Christiane Paul, adjunct curator of digital art


Installation view of Programmed: Rules, Codes, and Choreographies in Art, 1965–2018

Walk into any given gallery or museum today, and one will presumably encounter work that used digital technologies as a tool at some point in its production, whether videos that were filmed and edited using digital cameras and post-production software, sculptures designed using computer-aided design, or photographs as digital prints, to name just a few examples. Yet these works are not typically understood as digital art per se, since they use digital technologies as a production tool rather than a medium. Instead, digital art might be defined as art that explores digital technologies as a medium by making use of its medium’s key features, such as its real-time, interactive, participatory, generative, and variable characteristics, or by reflecting upon the nature and impact of digital technologies. 

The definition of digital art has continuously evolved since the art form emerged. In the 1960s and ’70s, digital art consisted mostly of algorithmic drawings, in which the results of artist-written code were drawn on paper by pen plotters, and computer-generated films that also involved artistic use of programming languages. While these works take the form of what seem to be more traditional drawings or films, their creators did not simply use digital technologies as a production tool but deeply engaged with the digital medium and the potential of its underlying code. From the 1980s to the early 2000s, digital art was predominantly understood as digital-born art; that is, art created, stored, and distributed via digital technologies. Digital artworks increasingly began to involve the features that we understand as characteristic of the art form today, consisting of software and installations or Internet art that are real-time, interactive, process-oriented, and performative. While not all of these characteristics are exclusive to digital art, since they can also feature in different types of performative events or video art, they are not intrinsic elements of objects such as digital photographs or prints.

As digital technologies have infiltrated almost all aspects of art-making today, many artists, curators, and theorists have pronounced an age of “post-digital” and “post-internet” art. These terms attempt to describe a condition of artworks that are conceptually and practically shaped by the internet and digital processes—taking their language for granted—yet often manifest in the material form of objects such as paintings, sculptures, or photographs that speak about the digital medium and would not be possible without it. 

As the language of the digital has become increasingly pervasive, we tend to associate the word “programmed” with the use of digital technologies. Yet throughout the history of art, artists have used programs—rule sets and abstract concepts—to create their work, employing mathematical principles to drive forms and ideas or establishing rules to explore structures and colors. Programmed: Rules, Codes, and Choreographies in Art, 1965–2018 traces some of these practices over the past fifty years, exploring the effects, creative potential, and limits of instruction- and rule-based art-making. The exhibition draws historical connections between conceptual, video, and contemporary digital art that may not be immediately obvious. The show covers a broad range of works, including paintings, weavings, drawings, dance scores, and software, as well as early light and TV sculptures from the 1960s and large-scale video and immersive installations. While not all the works are technological, they are still informed by the histories of art, science, and technology. In a world that has increasingly become algorithmically coded—from our conversations with our smart devices to our financial markets—it seems important to look at the aesthetic and social impact of these codes and ask, what kinds of programs do we create to express ourselves or to govern the world we live in?

The exhibition’s two sections examine different understandings of a program through an exploration of the historical trajectories that have informed today’s digital art. “Rule, Instruction, Algorithm” connects digital art to the rule-based conceptual artistic practices that predate digital technologies, with their emphasis on instructions and ideas as a driving force behind the work. The other section, “Signal, Sequence, Resolution,” focuses on the coding and manipulation of image sequences, the television, signals, and image resolution, thereby pointing to digital art’s origins in the history of moving images and the apparatuses of illusion and immersion.

Whether analog or digital, the works in Programmed have been shaped as much by the history of science and technology as by art-historical movements and influences. A brief survey of that technological history and some of the ways that artists engaged with emerging technologies will help to situate the specific historical trajectories that are the focus of the exhibition.

A Brief History of (Digital) Art and Technology 

The years following World War II were formative in the evolution of digital media, marked by major theoretical and technological developments. In July 1945, the Atlantic Monthly published the essay “As We May Think” by Vannevar Bush, the American engineer and scientist who, during the war, headed the federal Office of Scientific Research and Development, which oversaw the Manhattan Project. Envisioning a future in which the focus of science and technology would be on serving, rather than destroying, humankind, Bush describes a device he calls “memex,” a desk with translucent screens that would allow users to browse documents and to create their own trail through a body of documentation. Bush imagines that the content—books, periodicals, images—could be purchased on microfilm, ready for insertion. The user could also input direct data. Bush’s specific device was never built, but his conception of it was profoundly influential in shaping the history of computing. The memex is now commonly acknowledged as a conceptual forebear to the electronic linking of materials and, ultimately, to the internet as a huge, linked, globally accessible database. In 1946, the University of Pennsylvania presented the world’s first general-purpose electronic digital computer, known as ENIAC (Electronic Numerical Integrator and Computer), which filled an entire room; five years later, the first commercially available digital computer, UNIVAC, which was capable of processing numerical as well as textual data, was patented. By then, American mathematician Norbert Wiener had coined the term cybernetics, from the Greek kybernetes (“governor” or “steersman”), to describe the emerging field of science devoted to the comparative study of different communication and control systems, such as the computer and the human brain.
Artists immediately saw the creative potential of electronic control systems and began to experiment with cybernetic art, such as using feedback loops in responsive sculptures.

The 1960s—the decade in which the earliest works exhibited in Programmed were created—turned out to be particularly important for the history of digital technologies, a time when the groundwork was laid for much of today’s technology and its artistic exploration. Bush’s basic ideas were developed further by Theodor Nelson, who in 1961 coined the words hypertext and hypermedia to describe a space of writing and reading where texts, images, and sounds could be interconnected electronically and linked by anyone contributing to a networked docuverse. Nelson’s hyperlinked environment was branching and nonlinear, allowing readers and writers to create and choose their own paths through the information. His concepts obviously anticipated the networked transfer of files and messages over the internet, which originated around the same time. The Soviet Union’s launch of Sputnik in 1957 had prompted the United States to create the Advanced Research Projects Agency (ARPA) within the Department of Defense in order to maintain a leading position in technological innovation. In 1964, the RAND Corporation, the foremost think tank of the Cold War era, developed a proposal for ARPA that conceptualized the internet as a communication network without central authority. By 1969, the infant network—named ARPANET, after its Pentagon sponsor—was formed by four of the “supercomputers” of the time: at the University of California, Los Angeles; the University of California, Santa Barbara; the Stanford Research Institute; and the University of Utah. 

The end of the 1960s saw the birth of yet another important concept in computer technology and culture: the precursors of today’s information space and graphical user interface. In late 1968, Douglas Engelbart of the Stanford Research Institute introduced the ideas of bitmapping, windows, and direct manipulation through a mouse. His concept of bitmapping was groundbreaking in that it established a connection between the electrons flowing through a computer’s processor and an image on the computer screen. A computer processes information as pulses of electricity, each of which is in either an “on” or “off” state, commonly referred to as the binaries “one” and “zero.” In bitmapping, each pixel of the computer screen is assigned small units of the computer’s memory (bits), which can also manifest themselves as on or off and be described as one or zero. The computer screen can thus be imagined as a grid of pixels that are either lit or dark, creating a two-dimensional space. The direct manipulation of this space by the user’s hand was made possible through Engelbart’s invention of the mouse. The basic concepts of Engelbart and of Ivan Sutherland, whose interactive display graphics program Sketchpad of 1963 was crucial in enabling computer graphics, were further developed by Alan Kay and a team of researchers at Xerox PARC in Palo Alto, California. Their work resulted in the creation of the graphical user interface and the “desktop” metaphor with its layered “windows” on the screen, which was ultimately popularized by Apple’s introduction of its Macintosh computer. While computers and digital technologies were by no means ubiquitous in the 1960s and ’70s, there was a sense that they would change society. It is not surprising that systems theory—encompassing ideas from fields as diverse as the philosophy of science, biology, and engineering—became increasingly important during these decades.
The systems approach during the late 1960s and into the ’70s was broad in scope but deeply inspired by technological systems, and addressed issues ranging from social conditions to notions of the art object.
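The bit-to-pixel correspondence at the heart of bitmapping can be illustrated with a toy framebuffer. The 8×8 “screen” and the rendering characters below are hypothetical, chosen only to make visible the idea that each pixel is backed by one bit of memory that is either on (lit) or off (dark).

```python
# Toy illustration of bitmapping: each "pixel" of a small screen corresponds
# to one bit of memory (1 = lit, 0 = dark), stored here in a flat list.

WIDTH, HEIGHT = 8, 8
framebuffer = [0] * (WIDTH * HEIGHT)  # one bit per pixel

def set_pixel(x, y, on=True):
    """Turn the bit backing pixel (x, y) on or off."""
    framebuffer[y * WIDTH + x] = 1 if on else 0

def render():
    """Render the framebuffer as text: '#' for a lit pixel, '.' for a dark one."""
    rows = []
    for y in range(HEIGHT):
        row = framebuffer[y * WIDTH:(y + 1) * WIDTH]
        rows.append("".join("#" if bit else "." for bit in row))
    return "\n".join(rows)

# Draw a diagonal line by lighting the pixels where x == y.
for i in range(8):
    set_pixel(i, i)
print(render())  # prints an 8x8 grid with a lit diagonal
```

The point of the sketch is Engelbart’s insight described above: an image on screen is nothing over and above a region of memory whose bits are set on or off.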

Artists have always adopted and reflected upon the technologies of their time, and they quickly became interested in exploring the theories and concepts behind the advances in digital computing. The 1950s and ’60s saw a surge of participatory and/or technological art, created by artists such as Ben Laposky and John Whitney Sr.; John Cage, Allan Kaprow, and the Fluxus movement; and groups such as the Independent Group / IG (1952–54: Eduardo Paolozzi, Richard Hamilton, William Turnbull et al.); ZERO (1958: Otto Piene, Heinz Mack et al.); GRAV/Groupe de Recherche d’Art Visuel (1960–68: François Morellet, Julio Le Parc et al.); and The Systems Group (1969: Jeffrey Steele, Peter Lowe et al.).

The fact that the relationship between art and computer technology at the time was often conceptual was largely due to the inaccessibility of the technology. Some artists were able to use discarded military computers while others gained access to computer technology through the universities where they worked. Wanting to forge what he described as an effective collaboration between engineers and artists, electrical engineer Billy Klüver founded Experiments in Art and Technology (EAT) in 1966 at Bell Labs, where he worked. Klüver developed joint projects with artists such as Andy Warhol, Robert Rauschenberg, Jean Tinguely, John Cage, and Jasper Johns that were first seen in performances in New York and ultimately at the Pepsi-Cola pavilion at the World Expo ’70 in Osaka, Japan. EAT was an early instance of the complex collaboration between artists, engineers, programmers, researchers, and scientists that would become a hallmark of digital art. Other artists who produced groundbreaking art under the auspices of Bell Labs at the time included Kenneth C. Knowlton, A. Michael Noll, Max Mathews, and Lillian Schwartz.

The 1960s also saw important exhibitions centered on art’s relationship to emerging technologies. A series of five international exhibitions organized in Zagreb between 1961 and 1973 under the banner Nove tendencije (New Tendencies) advanced new concepts of art for the postwar era. They included the program “Computer and Visual Research” as part of the fourth exhibition, Tendencije 4 (1968–69), which highlighted the computer as a medium for artistic creation. The first two exhibitions of computer art were held in 1965: Generative Computergrafik, showing work by Georg Nees, at the Technische Hochschule in Stuttgart, Germany, in February, followed in April by Computer-Generated Pictures, featuring work by Bela Julesz and A. Michael Noll, at the Howard Wise Gallery in New York. Although their works resembled abstract drawings and seemed to replicate aesthetic forms familiar from traditional media, these artists captured essential aesthetics of the digital medium in outlining the basic mathematical functions that drive any process of “digital drawing.” In 1968, the exhibition Cybernetic Serendipity at the Institute of Contemporary Arts in London presented works—ranging from plotter graphics to light-and-sound environments and sensing “robots”—that today may seem clunky and overly technical but which nonetheless anticipated many of the important characteristics of the digital medium. Some works focused on the aesthetics of machines and transformation, such as painting machines and pattern or poetry generators. Others were dynamic and process-oriented, exploring possibilities of interaction and “open” systems. In 1970, American art historian and critic Jack Burnham organized the exhibition Software – Information Technology: Its New Meaning for Art at the Jewish Museum in New York.
In addition to featuring the work of artists such as Agnes Denes, Joseph Kosuth, Nam June Paik, and Lawrence Weiner—all of whom are included in Programmed—the show also exhibited the prototype of Theodor Nelson’s hypertext system, Xanadu.

Using the new technology of the time, such as video and satellites, artists in the 1970s also began to experiment with live performance and networks that anticipated the interactions now taking place on the internet and through the online streaming of video and audio. The focus of these artists’ projects ranged from the application of satellites for the mass dissemination of a television broadcast to the aesthetic potential of video teleconferencing and the exploration of a real-time virtual space that collapsed geographic boundaries. At Documenta 6 in Kassel, Germany, in 1977, Douglas Davis organized a satellite telecast to more than twenty-five countries, which included performances by Davis himself, Nam June Paik, Fluxus artist and musician Charlotte Moorman, and German artist Joseph Beuys. In the same year, a collaboration between artists in New York and San Francisco resulted in the Send/Receive Satellite Network, a fifteen-hour, two-way interactive satellite transmission between the two cities. Also in 1977, what became known as the world’s first interactive satellite dance performance—a three-location, live-feed composite performance involving performers on the East and West coasts of the United States—was organized by Kit Galloway and Sherrie Rabinowitz, in conjunction with NASA and the Educational Television Center in Menlo Park, California. The project established what the creators called “an image as place,” a composite reality that immersed performers in remote places into a new form of “virtual” space.

Throughout the 1970s and ’80s, painters, sculptors, architects, printmakers, photographers, and video and performance artists increasingly began to experiment with new computer-imaging techniques. During this period, digital art evolved into the multiple strands of practice that would continue to diversify in the 1990s and 2000s, ranging from more object-oriented works to pieces that incorporated dynamic and interactive aspects. With the advent of the World Wide Web in the mid-1990s, digital art found a new form of expression in net art, which became an umbrella term for numerous forms of artistic explorations of the internet. In the early 2000s, net art entered a new phase when artists began to critically engage with the platforms associated with Web 2.0 and social media, producing work on social networking sites such as Facebook, YouTube, Twitter, and Instagram. As digital technologies became part of the objects surrounding us—forming the so-called internet of things—and familiarity with the language of the digital continued to grow, artists began to engage in practices that are now referred to as post-digital, creating works across a range of media that are deeply dependent on digital technologies in their material form.

Rule, Instruction, Algorithm

The works exhibited in Programmed have to be seen in the context of the technological, scientific, and art-historical developments briefly outlined above. What is now understood as digital art is embedded in complex and multifaceted histories that interweave several strands of artistic practice. One of these art-historical lineages is explored in the “Rule, Instruction, Algorithm” section of Programmed, which connects early rule- and instruction-based art forms, such as conceptual art, to algorithmic art and art practices that set up open technological systems.

Among the earliest works in the show are screen prints by Josef Albers, made long after the artist emigrated to the United States from Germany, where he had been an instructor at the Bauhaus, the art school shuttered by the Nazis in 1933 whose innovative approach to design would nevertheless inspire calls for a Digital Bauhaus—unifying art, design, and technology—in arts and academia from the 1990s onwards. Albers was particularly interested in color theory and investigating the perceptual changes in hue caused by placing different colors next to each other. In the works from his Homage to the Square and Variant series exhibited in the show, he developed rules for nesting colored squares and rectangles to emphasize how our perception of a single color—its hue, saturation, and transparency—varies depending on its proximity to and interaction with adjacent colors. Albers’s work is paired with John F. Simon Jr.’s Color Panel v1.0 (1999), a work of software art based on the Bauhaus experiments with color and displayed on a laptop modified by the artist. Dividing the screen into five rectangles, the software written by Simon encodes variations of transparency and of color coding and mixing. One of the rectangles is a programmed version of the “transparency problem” that Albers posed to his students, asking them to mix intermediate colors to make it appear that one shape overlaid another. In Color Panel v1.0, it is the algorithm that mixes the colors to simulate transparency.

Instruction- and rule-based practice, as one of the historical lineages of digital art, featured prominently in art movements such as Dada (which peaked from 1916 to 1920), Fluxus (named and loosely organized in 1962), and conceptual art (1960s and ’70s), each of which incorporated a focus on concept, event, and audience participation, as well as variations of formal instruction. The idea of rules as a process for creating art also has a clear connection with the algorithms that form the basis of all software and every computer operation: a procedure of formal instructions that accomplishes a result in a finite number of steps. Just as with the combinatorial and rule-based processes of Dada poetry or Fluxus performances, every form of computer art uses the instruction as a conceptual element.

A large group of works within “Rule, Instruction, Algorithm” explores the connection between programming and the practices of conceptual artists, who saw the idea as the driving force behind their work. Among the most prominent conceptual artists, Sol LeWitt created extensive bodies of idea- and instruction-based work in mediums ranging from drawings, photographs, and prints to sculptures that he referred to as “structures.” His wall drawings consist of instructions written in natural language that are executed as drawings at the specific exhibition site. Leaving the execution of the work to someone other than the artist was central to LeWitt’s notion of conceptual art: “In Conceptual art the idea or concept is the most important aspect of the work. When an artist uses a conceptual form of art, it means that all of the planning and decisions are made beforehand and the execution is a perfunctory affair. The idea becomes a machine that makes the art.” Sol LeWitt, “Paragraphs on Conceptual Art,” Artforum, June 1967, p. 80.

In Programmed, one of the four walls of LeWitt’s Wall Drawing #289 (1976), which implements the instruction “24 lines from the center, 12 lines from the midpoint of each of the sides, 12 lines from each corner,” is juxtaposed with Casey Reas’s {Software} Structure #003 A and #003 B (2004/2016). Reas’s software executes the instructions, “A surface filled with one hundred medium to small circles. Each circle has a different size and direction, but moves at the same slow rate. Display: A. The instantaneous intersections of the circles; B. The aggregate intersections of the circles.” Reas’s {Software} Structures explicitly reference the work of LeWitt and explore the relevance of conceptual art to the idea of software art. The artist starts his process by writing textual descriptions that outline dynamic relations between visual elements and then implements them as software. Reas is also well known for having created, together with Ben Fry, the programming language Processing, an open-source development environment built on the Java language that strives to teach nonprogrammers the fundamentals of computer programming in a visual context.
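In the spirit of Reas’s translation of natural-language instructions into software, the quoted instruction for Wall Drawing #289 can itself be rendered as a short program. This is a hypothetical sketch, not LeWitt’s or Reas’s code: the function names and the choice to space the radiating lines at even angles are assumptions made for illustration; only the counts come from the quoted instruction.

```python
import math

def lines_from(origin, count, length=1.0):
    """Generate `count` line segments radiating from `origin` at evenly spaced angles."""
    ox, oy = origin
    segments = []
    for i in range(count):
        angle = 2 * math.pi * i / count
        end = (ox + length * math.cos(angle), oy + length * math.sin(angle))
        segments.append(((ox, oy), end))
    return segments

def wall_drawing_289(w=2.0, h=2.0):
    """One wall: 24 lines from the center, 12 from each side midpoint, 12 from each corner."""
    center = (w / 2, h / 2)
    midpoints = [(w / 2, 0), (w, h / 2), (w / 2, h), (0, h / 2)]
    corners = [(0, 0), (w, 0), (w, h), (0, h)]
    segments = lines_from(center, 24)
    for point in midpoints + corners:
        segments += lines_from(point, 12)
    return segments

print(len(wall_drawing_289()))  # 24 + 8 points × 12 lines = 120 segments; prints 120
```

The instruction, like LeWitt’s written one, fixes only the counts and starting points; everything else is left to the executor, whether human drafter or program.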

It is no coincidence that instruction-based practices of the 1960s such as conceptual art developed at the same time as artists began to use computers to create early algorithmic art. The pioneers of this art form wrote code that was stored on punch cards and then run through a computer to drive pen plotters that would create “digital drawings” on paper. Early practitioners became known as Algorists and included artists Harold Cohen, Herbert Franke, Manfred Mohr, Vera Molnar, Frieder Nake, and Roman Verostko, as well as Chuck Csuri, Frederick Hammersley, and Joan Truckenbrod, each of whose work is on view in Programmed. Csuri created Sine Curve Man (1967), known as the first figurative computer drawing done in the United States, at Ohio State University in collaboration with programmer James Shaffer using an IBM 7094, considered one of the most powerful computers of the early 1960s. The 7094 was employed by NASA in both the Gemini and Apollo space programs, and it was used in early missile defense systems as well. The output of the 7094 consisted of 4-×-7-inch “punch cards” that stored information to drive a drum plotter, specifying when to pick the pen up, move it, and put it down, as well as when the end of a line had been reached.
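The punch-card-driven plotting described above—pick the pen up, move it, put it down, end of line—can be modeled as a tiny instruction interpreter. The command names and tuple format below are invented for illustration; actual card encodings varied by machine and installation.

```python
# A toy interpreter for drum-plotter-style instructions: the drawing is
# whatever segments the pen traces while it is down.

def run_plotter(commands):
    """Execute a list of (op, args...) commands, collecting the drawn segments."""
    pen_down = False
    pos = (0, 0)
    segments = []
    for op, *args in commands:
        if op == "PEN_UP":
            pen_down = False
        elif op == "PEN_DOWN":
            pen_down = True
        elif op == "MOVE":
            new_pos = (args[0], args[1])
            if pen_down:
                segments.append((pos, new_pos))
            pos = new_pos
        elif op == "END":
            break
    return segments

# Draw one stroke from (0, 0) to (5, 5), then reposition without drawing.
program = [
    ("PEN_DOWN",),
    ("MOVE", 5, 5),
    ("PEN_UP",),
    ("MOVE", 10, 0),
    ("END",),
]
print(run_plotter(program))  # [((0, 0), (5, 5))]
```

The artist’s algorithm, in this scheme, is simply the program that emits such commands—which is why the same code could be rerun to redraw a work, much as LeWitt’s instructions could be re-executed on a new wall.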

Other instruction-based practices originating in the 1960s were the events and “happenings” of the international Fluxus group of artists, musicians, and performers, which were also often based on the execution of precise instructions. Their fusion of audience participation and event can be seen as a precursor of the interactive, event-based nature of many of today’s computer artworks. The concepts of the found element and instructions in relation to randomness also formed the basis of the musical compositions of vanguard American composer John Cage, whose work in the 1950s and ’60s is most relevant to the history of digital art, anticipating numerous experiments in interactive art. Cage described structure in music as its divisibility into successive parts, and often filled the pre-defined structural parts of his compositions with found, preexisting sounds. In Programmed , the connection between musical scores and instruction-based art is evinced in the work Dance (1979), a collaboration between dancer and choreographer Lucinda Childs and Sol LeWitt, with a score by Philip Glass. Projected onto the gallery floor below video documentation of Dance are Childs’s diagrams that outline the movements of the dancers performing to Glass’s music, which—similar to that of Cage—uses repetitive structures. 

Programmed reveals the use of instructions and language as material in a group of works in which instruction and form collapse into one another and become one and the same. Both Lawrence Weiner’s HERE THERE & EVERYWHERE (1989)—composed of four text segments that invite viewers to imagine places—and Joseph Kosuth’s Five Words in Green Neon (1965)—literally spelling out its title in green neon—use language as material and medium, highlighting its potential to generate art. The two works are juxtaposed with W. Bradford Paley’s CodeProfiles (2002), a work of software art that displays the very code that renders it on the screen. As in Kosuth’s piece, viewers are looking at the language that creates the work. Paley also shows the role of the reader, writer, and computer in the construction of the piece: an amber line indicates how viewers can read the code line by line, a white line traces how Paley wrote the code, and a green line shows how the computer executes the code to render it visible on the screen. CodeProfiles thereby underscores that digital art, like most conceptual art, is based on rules and instructions.

The potential of instructions and algorithms is further explored in a group of works by Ian Cheng, Alex Dodge, and Cheyney Thompson that engage with the generative qualities of rule-based systems. Theorist, artist, and curator Philip Galanter defines generative art as any art practice in which the artist uses a system—such as a set of natural-language rules, biological processes, mathematical operations, a computer program, a machine, or other procedural invention—that is set into motion with some degree of autonomy, thereby contributing to or resulting in a completed work of art. Philip Galanter, “What Is Generative Art? Complexity Theory as a Context for Art Theory,” Proceedings of the International Conference on Generative Art, Milan, Italy (Milan: Generative Design Lab, Milan Polytechnic University, 2003). Generative art practices are used in a range of communities, among them computerized music, computer graphics and animation, VJ culture, and glitch art. They can be traced back to ancient forms such as symmetric composition rules for generating patterns, for example those underlying the pattern masterpieces of the Islamic world, one of the cradles of mathematical innovation. Not coincidentally, the word algorithm has its roots in Arabic. In Programmed, the generative patterns of the artworks on view are set in motion by systems such as a conversation between artificial intelligences or the Drunken Walk algorithm, which is used in fields ranging from economics to chemistry and physics to map unpredictability. These algorithmic generative processes for creating patterns also played a crucial role in one of the most important inventions in the history of computing, the Jacquard loom. Created by Joseph Marie Jacquard in 1804, the loom revolutionized the process of weaving through the use of programs stored as punched cards to automate the generative creation of fabrics.
Charles Babbage and Herman Hollerith would later use programming with punch cards in their conceptualization of computers. Thus, one could argue that generative art made the invention of computers possible, a connection implied by the Jacquard weavings of both Rafaël Rozendaal and Mika Tajima in the exhibition.
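As a minimal example of a generative system in Galanter’s sense, the random (“drunken”) walk mentioned above can be set in motion in a few lines: a fixed rule, a degree of autonomy, and a pattern that emerges rather than being drawn by hand. The grid, the four-direction step set, and the seed are arbitrary choices for this sketch, not taken from any artwork in the exhibition.

```python
import random

def drunken_walk(steps, seed=None):
    """Take `steps` random unit steps on a grid, returning the visited points."""
    rng = random.Random(seed)  # seeding makes the "autonomous" process repeatable
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# The artist sets the system in motion; the system produces the pattern.
path = drunken_walk(1000, seed=42)
print(len(path))  # 1001 points, including the starting point
```

Plotted as a line, such a path yields exactly the kind of unpredictable yet rule-bound pattern the exhibition’s generative works exploit.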

Signal, Sequence, Resolution

While the works brought together in this section of the exhibition also employ rules and instructions in their creation, they trace a historical lineage that is distinct from conceptual practices, focusing instead on concepts of light and the moving image, a trajectory that originates with early kinetic art and continues through to new digital forms of cinema and interactive notions of television. Embedded in this trajectory is the evolution of different types of optical environments, which have been researched by scholars of art and technology such as Oliver Grau and Erkki Huhtamo. See Oliver Grau, Virtual Art: From Illusion to Immersion (Cambridge, MA: MIT Press, 2003); and Erkki Huhtamo, Illusions in Motion: Media Archaeology of the Moving Panorama and Related Spectacles (Cambridge, MA: MIT Press, 2013).

A group of pre-digital works from the 1960s plays with the concept of the electronic signal and draws attention to its function as a carrier of instructions and visual information. For his work Magnet TV (1965), Nam June Paik, a pioneer of video art, placed an industrial-size magnet on top of a TV so that the magnetic interference with the television’s reception of electronic signals distorts the picture into abstract forms. In Thrust (1969), Earl Reiback similarly draws attention to the creation of images through electronic signals by emptying the cathode-ray tube monitor of a TV and replacing it with sculptural elements. A different engagement with light and signal unfolds in Jim Seawright’s Searcher (1966), an early kinetic sculpture whose searchlight both generates light and reacts to it. In scientific terms, kinetic energy is the energy possessed by a body by virtue of its motion, and kinetic art, which peaked from the mid-1960s to the mid-1970s and was often inspired by ideas of cybernetic control systems, frequently produced movement through machines activated by the viewer.

The concept of the digital moving image and digital cinema has been shaped by several strands of media histories and practices, ranging from animation and the live-action movie to immersive environments and the spatialization of the image. We now associate cinema mostly with live action, which is only one of the many trajectories of the moving image’s history. Another trajectory has its roots in the early moving images of the nineteenth century that were based on hand-drawn images and viewed through pre-cinematic devices, such as the Zoetrope and the Kinetoscope. This strand of the history of cinema would develop into animation, which has also gained new momentum and popularity through the possibilities of the digital medium. Continuing the pioneering work in computer-based graphics and filmmaking by figures such as John Whitney Sr. and Chuck Csuri in the 1960s, Bell Labs artist-in-residence Lillian Schwartz created three groundbreaking films on view in Programmed . John Whitney Sr., often called “the father of computer graphics,” used old analog military computing equipment to create his short film Catalog (1961), a catalogue of effects on which he had been working for years. Whitney’s later films Permutations (1967) and Arabesque (1975) secured his reputation as a pioneer of computer filmmaking. Chuck Csuri's film Hummingbird (1967), for which more than 30,000 individual images generated by a computer were drawn on film by means of a microfilm plotter, also is considered a landmark of computer-generated “animation.” For her film Enigma (1972), Schwartz used a programming macro language that divides the screen into a grid of pixels and generates images as patterns of dots. Through rapid shifts between rectangular forms, she creates the perception of strobing color. In Newtonian I and Newtonian II  (both 1978), Schwartz draws upon mathematical systems to create the illusion of three-dimensional images.

In his seminal book of 1970, Gene Youngblood chronicled artists’ experimentation with spatializing moving images in a physical environment as part of his broader theorization of a new “expanded cinema.” Gene Youngblood, Expanded Cinema (New York: Dutton, 1970). Both Nam June Paik’s Fin de Siècle II (1989), one of the centerpieces of the current exhibition, and Mynd (2000) by the Icelandic artist Steina, another pioneer of video art and co-founder of The Kitchen in New York, explore this creation of visual spaces in their resequencing of images. Consisting of a wall of more than two hundred video monitors, Fin de Siècle II  choreographs sequences from music and art videos, anticipating the multi-channel “remix” of videos on the web, a version of which Paik could be said to have presciently envisioned as early as 1974 in what he called an “electronic super highway.” In his report “Media Planning for the Post-Industrial Society,” which Paik submitted to the Art Program of the Rockefeller Foundation, itself an early supporter of artists working in “new media” such as video and television, the artist explicitly outlines his vision for an “electronic super highway.” Steina’s Mynd creates a different kind of image space by surrounding viewers with images of the Icelandic landscape that are subjected to different kinds of processing through the video-editing software Image/ine, thereby contrasting analog and digital image creation and processing. The spatialization of images finds yet another expression in Jim Campbell’s Tilted Plane (2011), a room-size installation of hundreds of hanging LED lights that form a grid of “pixels,” which in turn functions as a low-resolution video display. A two-dimensional video of birds taking off and landing displayed on the three-dimensional tilted plane of lights becomes a flickering abstraction as the viewer moves into the room. 
Campbell’s exploration of resolution and pixelation can be traced to Pointillism, the painting technique developed by Georges Seurat and Paul Signac in 1886 in which distinct tiny dots of color, mostly the same size, are applied to canvas to form an image.

In more conventional cinema, digital technologies are playing an increasingly important role as a production tool; even if a movie is not a special-effects extravaganza, images that appear entirely realistic have often been constructed through digital manipulation. However, the use of digital technologies as a tool in the production of a linear film does not fundamentally challenge the language of film. Yet the digital medium has the potential to redefine the very identity of cinema and the moving image in many ways. From the possibilities of instant copying and remixing to the seamless blending of disparate visual elements into a simulated form of reality, the medium challenges traditional notions of realism and questions qualities of representation. In addition to altering the status of representation and expanding the possibilities for creating moving images, be they live action or animation, the digital medium has also profoundly affected narrative and non-narrative film through its inherent potential for interaction. 

The element of interaction in film and video is not unique to the digital medium; it had already been employed by artists and performers who experimented with light in their projections, for example by incorporating the audience into the artwork through shadow play, an ancient form of storytelling and entertainment. Closed-circuit television and live video captures that made the audience the “content” of the projected image continued these explorations. What is considered to be the world’s first interactive movie, KinoAutomat by Radúz Činčera, was first shown in the Czech pavilion at the 1967 World Expo in Montreal. Shot with alternative versions of its image sequences, the film required the audience to vote on how the plot would unfold. Yet digital media have inarguably exploded the potential for interaction, taking these earlier experiments to new levels and leading to new forms of artistic exploration. Featured in Programmed is Lynn Hershman Leeson’s Lorna (1979–84), the first artwork done on interactive LaserDisc. The work unfolds on a television placed within a room-size installation that mirrors the space shown on screen, and its branching narrative is navigated by viewers via a remote control. The television is both the system of interaction and the only means of mediation for the video’s protagonist, Lorna, an agoraphobic isolated in her apartment. The disruptions in the non-linear narrative mirror the instabilities of Lorna’s psychological state.

As digital technologies have impacted nearly all aspects of daily life, artists have not only used them for the creation of new forms and image spaces but also for a critical engagement with the social, cultural, and political impacts of these technologies. A group of works in Programmed explores our increasingly encoded realities and the biases that may be inscribed in them. Keith and Mendi Obadike’s The Interaction of Coloreds (2002), for example, uses satire to highlight how customers’ skin color factors into online commerce, questioning assumptions about the internet as supposedly color-blind. Jonah Brucker-Cohen and Katherine Moriwaki’s America’s Got No Talent (2012) creates an interactive data visualization of the Twitter feeds surrounding popular network-TV talent shows, drawing attention to the ways in which opinion and sentiment are affected by reality television’s use of social media. The artworks exhibited in this group point to the profound changes that technologies have brought about in gathering, processing, and classifying data, thereby altering the frameworks for communication and the fabric of society as a whole.

Digital technologies and interactive media have expanded the range of artistic practices, from advancing concepts originally explored in conceptual art to generating new possibilities for moving-image production and the creation of immersive visual spaces. In the process they have challenged traditional notions of the artwork, audience, and artist. The artwork is often transformed into an open structure in a process that relies on a constant flux of information and engages the viewer in the way a performance might. The public or audience becomes a participant in the work, reassembling the textual, visual, and aural components of the project. Rather than being the sole creator of a work of art, the artist often plays the role of a mediator or facilitator for audiences’ interactions with and contributions to the artwork. The creation process of digital art itself frequently relies on complex collaborations between the artist and a team of programmers, engineers, scientists, and designers. As such, digital art has brought about work that often defies easy categorization, collapsing boundaries between the disciplines of art, science, technology, and design. Presenting examples of the ways in which artistic practice has engaged with rules, codes, and choreographies for the past fifty years, Programmed points to the rich and complex histories of art, science, and technology, and to the ways in which these fields have driven and nurtured one another’s evolution—and, in the process, have changed how we construct and perceive our societies and cultures at large.


Programmed: Rules, Codes, and Choreographies in Art, 1965–2018

Sept 28, 2018–Apr 14, 2019

Tech as Art: Commissioned Essays from Arts Practitioners

The following essays were commissioned as a companion to the 2021 report,  Tech as Art: Supporting Artists Who Use Technology as a Creative Medium . Focused to raise visibility of current “Tech As Art” discourse occurring within the larger landscape of contemporary arts, each essay offers compelling provocations uplifting the idea that an equitable, resilient, and thriving arts and cultural ecosystem includes deepened support for artistic practices focused on technology as a creative medium. Alongside providing further context for the arts and cultural sector, these essays offer new perspectives and considerations for intersecting industry leaders in the fields of science, technology, engineering, and math (STEM), to discover shared goals and priorities with tech-focused artists and related practices. These essays are part of an Arts & Technology Field Scan conducted by the National Endowment for the Arts, in partnership with the Ford Foundation and the Knight Foundation.

"Recoding the Master’s Tools: Artists Remake Systems of Oppression and Extraction in Technology" by Vanessa Chang

"How Artists Can Bridge the Digital Divide and Reimagine Humanity" by Agnes Chavez

"Teaching the Co-Creators of a New Economy from Lincoln, Nebraska" by Megan Elliott

"Artist Perspective: Building Afrocentric Technoculture and Community" by Ari Melenciano

"Space, Time, Sustainability, Community: Giving Digital Artists What They Need" by Stephanie Pereira

"How the Arts Sector Can Support Transformational Technology" by Omari Rush

"Funder Perspective: Broadening Support for Arts and Technology" by Eleanor Savage

"Future World-building Depends on Artists and Collaborative Networks" by Kamal Sinclair

"Where Is the Public Discourse Around Art and Technology?" by Hrag Vartanian

"A Call-to-Action in STEAM Education" by S. Craig Watkins


Humanities Futures: Franklin Humanities Institute


Against Digital Art History

Claire Bishop, CUNY Graduate Center

First, let me clarify that I am not talking about digitized art history (i.e., the use of online image collections) but rather digital art history, that is, the use of computational methodologies and analytical techniques enabled by new technology: visualization, network analysis, topic modeling, simulation, pattern recognition, aggregation of materials from disparate geographical locations, etc. Some of these techniques have been around for several decades and have proven useful, especially for scholars working on periods where there is little surviving visual evidence (e.g., reconstructing ancient sites). Yet the visual theorist Johanna Drucker, writing in 2013, states that so far none of art history’s "fundamental approaches, tenets of belief, or methods are altered by digital work"—unlike in the 1980s, when "traditional art history" was upended by the incursion of semiotics, psychoanalysis, Marxism, feminism, post-colonial theory, and post-structuralism (Drucker 2013). 1 Drucker nevertheless imagines that future digital databases will permit new questions to be asked of canonical works; she imagines, for example, a database containing the provenance history of different sources of pigments used in Western manuscript illumination and Renaissance painting, which would situate a work like Van Eyck’s Arnolfini Wedding (1434) in relation to global systems of trade and economic value. Her vision of digital art history thus stands as a combination of digital technologies, network analysis, and connoisseurship.

I am not talking about digitized art history…but rather digital art history, that is, the use of computational methodologies and analytical techniques enabled by new technology.

Rather than thinking in terms of theoretical changes, however, we should compare the incursion of digital reproduction into art history to previous technological innovations. Prior to the late nineteenth century, art historians employed originals, casts, prints, sketches, and verbal descriptions to support and disseminate their research (Nelson 2000). The introduction of photographic reproduction enabled wholly new methodological approaches in art history—from the formalism of Heinrich Wölfflin, who introduced the slide comparison to the art history lecture in the 1880s, to the iconographical approach of Aby Warburg in the 1920s, who drew upon a vast archive of photographic reproductions from antiquity to advertising to advance his theory of nachleben . The change wrought by the digitization of slide collections since 2000 is therefore not only one of size and speed (an increased quantity of images for analysis and faster search returns), but also one of method, opening the door to "distant viewing." Already well known in Comparative Literature as "distant reading," this method proceeds by subjecting vast numbers of cultural artifacts to quantitative computational analysis.

A troubling introduction to this method can be found in the first issue of the International Journal for Digital Art History, launched in June 2015. In the first of six articles, new media theorist Lev Manovich introduces five key terms from data science that he believes to be useful to art historians: object, features, data, feature space, and dimension reduction (Manovich 2015). His text is illustrated with examples of his own research projects that draw upon Big Data, including Selfiecity (visualizations of thousands of Instagram selfies in different cities around the globe, assessing the images in terms of age, gender, position, frequency of smiling, etc.) and a principal component analysis (PCA) of over six thousand Impressionist paintings, calculating visual similarities in content and coloration. 2 Another paper, by K. Bender, analyzes 1,840 works of art from the thirteenth to the twentieth centuries showing the figure of Aphrodite or Venus, revealing that on average, artists turned to this theme 2.8 times in their lives (Bender 2015). A third article reports the results of feeding 120,000 portraits from the thirteenth to the twentieth centuries through facial-recognition software in order to establish whether the "canon of beauty" had changed over time (de la Rosa and Suárez 2015). Unsurprisingly, it had—the study concludes that there is a conspicuous decrease of "beauty" in the twentieth century. Only to someone entirely unfamiliar with modernism would this come as a surprise.
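To make the jargon concrete: in this workflow each artwork becomes a row of numeric features, and principal component analysis projects the resulting feature space down to two dimensions for plotting. The sketch below is a toy illustration only, not Manovich’s actual pipeline; the "paintings" are random stand-ins, and the feature names in the comments are hypothetical.

```python
import numpy as np

def pca_2d(features):
    """Project an (n_objects x n_features) matrix onto its two
    principal components -- the "dimension reduction" step."""
    X = features - features.mean(axis=0)      # center each feature
    cov = np.cov(X, rowvar=False)             # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    top2 = eigvecs[:, np.argsort(eigvals)[::-1][:2]]
    return X @ top2                           # 2-D coordinates per object

# Hypothetical stand-ins: 100 "paintings", each reduced to 5 numeric
# features (e.g., mean hue, saturation, brightness) -- invented data.
rng = np.random.default_rng(0)
paintings = rng.normal(size=(100, 5))
coords = pca_2d(paintings)
print(coords.shape)  # (100, 2)
```

Everything that makes the images art—facture, context, meaning—has of course already been discarded at the feature-extraction step, which is precisely the complaint pursued below.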

I admit that most academic papers, when boiled down to one line, risk sounding simplistic, but in this case the fatuity is extreme. Basic terms like beauty (and even portraiture ) remain uninterrogated; instead, the authors observe that the "more average and symmetrical, the more beautiful a face is usually ranked," noting with approval that this criterion turns "a subjective opinion such as what face is beautiful into something measurable and objective" (ibid.). A complex human evaluation is reduced to statistical calculation. Equally blunt is the claim, found in almost every essay in this journal’s inaugural issue, that "this empirical finding has never before been highlighted in art history"—as if novelty were a sufficient measure of interest and substance. Further, the data set affirms the art historical canon ("Impressionist paintings," "figures of Aphrodite or Venus") rather than challenging it or even addressing it critically. Who decides what is understood as the canon? What is left out? On the evidence of these articles, practitioners of digital art history have a limited awareness of critical debates within art history (such as the long-standing, and some would say long-dead, question of "beauty"), but also a limited grasp on how to frame a meaningful research question. Theoretical problems are steamrollered flat by the weight of data.

Who decides what is understood as the canon? What is left out?

This silence, however, seems to be to digital art history’s advantage. This new approach is already finding its way into museums, and not just conservation departments that have long had a relationship to scientific research. Consider the network map produced by the Museum of Modern Art, New York, for the exhibition "Inventing Abstraction 1910–1925" (2012–13), created by the curators in collaboration with a professor and a doctoral student at Columbia University’s business school. 3 The map, an update of Alfred Barr’s well-known diagram for the catalogue Cubism and Abstract Art (1936), covered a wall at the entrance to the exhibition. On the exhibition website, the map allows users to click on various names, mapped geospatially from the West to the East, in order to see which artists were in contact with whom during this period. One positive outcome of this mapping was that several female artists, usually relegated to the sidelines, were repositioned as key players: Sonia Delaunay and Natalia Goncharova were ranked as the "most connected" alongside Jean Arp, Guillaume Apollinaire, Pablo Picasso, Tristan Tzara, and Alfred Stieglitz. But what does it really mean to be "connected"? As art history doctoral students Jonathan Patkowski and Nicole Reiner argue in their critique of the exhibition, this map recodes the early twentieth-century artist as a contemporary networked entrepreneur whose importance is now gauged in terms of number of social connections (i.e., documentable acquaintances) rather than artistic innovations (Patkowski and Reiner 2013). Carefully reasoned historical narrative is replaced by a social network (the avant-garde equivalent of LinkedIn) that has no room for non-human agents that elude quantification—such as African artifacts, which were crucial to the development of abstraction, or the imperial powers that mobilized their circulation in Europe.
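"Most connected," in a map like this, typically means nothing more than highest degree centrality: a raw count of documented links. A minimal sketch makes the thinness of the measure visible; the edges below are invented for illustration and are not the "Inventing Abstraction" dataset.

```python
from collections import Counter

# Hypothetical documented contacts between artists (invented edges).
contacts = [
    ("Delaunay", "Apollinaire"), ("Delaunay", "Arp"),
    ("Delaunay", "Goncharova"), ("Goncharova", "Larionov"),
    ("Picasso", "Apollinaire"),
]

degree = Counter()
for a, b in contacts:         # each contact raises both artists' counts
    degree[a] += 1
    degree[b] += 1

# "Most connected" = largest number of acquaintances, nothing more.
print(degree.most_common(1))  # [('Delaunay', 3)]
```

The metric records that a connection existed, but nothing about what passed through it or why it mattered—which is exactly the ambiguity in "connected" queried above.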

My point is that subordinating art history to computational analysis might well reveal "empirical findings never before highlighted in art history," but this method also perpetuates uncritical assumptions about the intrinsic value of statistics.

My point is that subordinating art history—whether the invention of abstraction, Impressionist painting, or the new genre of the selfie—to computational analysis might well reveal "empirical findings never before highlighted in art history," but this method also perpetuates uncritical assumptions about the intrinsic value of statistics. In Undoing the Demos (2015), Wendy Brown argues that neoliberalism should be regarded less as a political formation than as a form of reason, a system of governance in which "all spheres of existence are framed and measured by economic terms and metrics, even when those spheres are not directly monetized" (Brown 2015, 10). Her examples include any online activity that measures output by the number of "likes" or "followers," from Facebook and Instagram to online dating. Digital art history is just such a subordination of human activity to metric evaluation. It is inextricably linked to the ascendancy of the digital humanities, which has flourished despite financial cuts to the "analog humanities", and which is seen as a way to make humanities’ outputs "useful"—like science, technology, engineering, and mathematics (i.e., industry-preferred STEM subjects). 4 In the words of new media scholar Richard Grusin, "It is no coincidence that the digital humanities has emerged as ‘the next big thing’ at the same moment that the neoliberalization and corporatization of higher education has intensified in the first decades of the twenty-first century" (Grusin 2013). 
This is not to say that the digital humanities are doomed to be the unwitting handmaidens of neoliberal imperatives, but it is important to note how their technopositivist rationality is disturbingly synchronous with the marketization of education: the promotion of MOOCs as value-for-money content delivery; the precarious position of adjunct professors; the tyranny of academic rankings; and the remaking of the university away from "quaint concerns with developing the person and citizen" and toward a model of the student as self-investing human capital (Brown 2015, 23). 5 Any study that mobilizes Big Data needs to reflect critically on the mechanisms by which this data is gathered: corporate data mining, state surveillance, and algorithmic governance techniques. 6

Digital art history, as the belated tail end of the digital humanities, signals a change in the character of knowledge and learning.

Digital art history, as the belated tail end of the digital humanities, signals a change in the character of knowledge and learning. Ideals like public service, citizenship, knowledge as an end in itself, and questions of what is just, right, and true have decreasing validity because they resist quantitative measurement, and moreover do not easily translate into information that optimizes the performance of society (i.e., generates profit). Instead, research and knowledge are understood in terms of data and its exteriorization in computational analyses. This raises the question of whether there is a basic incompatibility between the humanities and computational metrics. Is it possible to enhance the theoretical interpretations characteristic of the humanities with positivist, empirical methods—or are they incommensurable?

We have to be careful how we phrase this dilemma. Drucker floats the possibility—although she eventually rejects the idea—that visual art might be fundamentally resistant to computational processing and analysis because it is so emphatically tied up in narratives of singularity, individuality, and exceptionality. These valorizing terms are of course not exclusive to art history and play an important role in canon formation across all of the humanities. We know from Franco Moretti’s controversial method of "distant reading"—analyzing literature not by studying particular texts, but by aggregating massive amounts of data—that singular genius is one of the first concepts to fall by the wayside when dealing with literature as an integrated system of global publishing. On the one hand, this is appealing: who among us could really argue that the canon isn’t too white, male, and European? And Moretti is right to observe that close readings can become a "theological exercise—very solemn treatment of very few texts taken very seriously" (Moretti 2000). 7 When you glance at Moretti’s work—such as Graphs, Maps, Trees (2007)—it is conspicuous that paradigmatic examples and block quotes have been replaced with diagrams, models, and schemas, but at least these graphs trigger interpretation: a social history supported by statistics rather than text mining the number of times a given word appears in Proust. 8 Moretti’s earlier work, prior to setting up the Stanford Literary Lab in 2010, is especially interesting for trying to analyze all literature from a given period, both canonical and noncanonical; questions of historical causality remain central for him, in part because they are the blind spot of distant reading, the argument that statistics cannot supply.
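The contrast drawn here is between graphs that prompt interpretation and the rawest form of text mining: counting tokens. The latter takes a few lines of standard-library code, which is rather the point; the sample sentence below is merely a stand-in, not a corpus.

```python
import re
from collections import Counter

def word_frequencies(text):
    """Tally word occurrences -- the most basic distant-reading metric."""
    return Counter(re.findall(r"[a-zà-ÿ']+", text.lower()))

# Stand-in sample (Proust's famous opening, repeated once);
# a real study would ingest an entire corpus.
sample = "Longtemps, je me suis couché de bonne heure. Longtemps."
freqs = word_frequencies(sample)
print(freqs["longtemps"])  # 2
```

The count is produced instantly and objectively; what the repetition of a word means in a novel is, of course, a question the tally cannot answer.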

Yet, increasingly, Moretti—like Lev Manovich—proceeds with the data set in advance of a research question, or what digital humanist Alan Liu calls "tabula rasa interpretation—the initiation of interpretation through the hypothesis-free discovery of phenomena" (Liu 2013). 9 In this model, topics are generated without an initial concept or question from an interpreter looking to confirm a theme or pattern; computers read texts/images algorithmically, with minimal human intervention. In the case of Manovich’s Cultural Analytics (a new hybrid interdiscipline), data are aestheticized into patterns, but the task of interpreting these patterns is left up to others. 10 As a result, digital art history has a fraught relationship to history and interpretation. Does the data set exist in history before being sequenced digitally or is it only actualized once it has been laid out via the digital archive? Are the assembled historical "facts" found or produced? What’s the relation between what’s empirically observable and what’s true? Technology is presumed to provide objective access to reality in a way that subjective interpretation cannot. The result is an avoidance of argumentation and interpretation, as exemplified by the articles in the International Journal for Digital Art History. 11 Computational metrics can help aggregate data and indicate patterns, but they struggle to explain causality, which in the humanities is always a question of interpretation. In effect, a post-historical position is assumed: the data is out there, gathered and complete; all that remains is for scholars to sequence it at will. Here, computational methods become another manifestation of the drive for mastery over history and the archive. 
The analog humanities, by contrast, remain outside the logic of tidy deliverable answers; their importance, as media theorist Gary Hall notes, lies in their ability to hold open a space for "much-needed elements of dissensus, dysfunction, ambiguity, conflict, unpredictability, inaccessibility, and inefficiency" (Hall 2013, 798).

Contemporary art, perhaps more than any other art form, is entirely embroiled in digital technology: it permeates the production of work, its consumption and circulation. It is noticeable that artists are increasingly turning to cut-and-paste methods to create work across a wide variety of media. Pre-existing cultural artifacts are remixed and reformatted, generating a mise-en-abyme of references to previous historical eras. As part of this historical orientation, obsolete technologies have acquired a new auratic currency (8 and 16mm film, slide projectors, fax machines, even VCRs), as has the trope of the archive. We are currently in a hybrid moment where non- or pre-digital materiality is sustained alongside a digital way of thinking: an approach to information in which sources are decontextualized, remixed, reorganized, and archived. This hybridized interpenetration of digital and non-digital extends to the distribution and consumption of art. Today, most exhibitions reach their audiences as jpgs: artists increasingly mount their shows with the installation shot in mind, and gallery lighting has become brighter so that photographs ‘pop’ on a back-lit plasma screen. Works of art are bought and sold as jpgs, without collectors ever having seen the original in person.

My current project, "Déjà Vu: Reformatting Modernist Architecture," has engaged in a type of distant reading—one that could only have been realized with the assistance of digital technology, but which is steered by a critical human eye. In the slideshow that accompanies the lecture version of "Déjà Vu," I replace the singular, paradigmatic example with hundreds of case studies—works of art gathered from North and South America and Eastern and Western Europe since 1989. Over three hundred images scroll before viewers, in different combinations; the aim is to move beyond the traditional illustrative slide comparison to a scenario in which the images begin to create an argument in their own right, bolstering (but also at moments contesting) my interpretation. Over the course of an hour, the audience experiences a number of déjà vus: works of art, all of which take as their starting point a pre-existing work of modernist architecture or design (including iconic structures by Le Corbusier, Oscar Niemeyer, and Vladimir Tatlin), also recur in different sequences. 12 The title refers to Paolo Virno’s theory of déjà vu as a distanciation from agency: he describes it as a pathological condition of watching ourselves live and feeling that the future has been fatalistically prescribed for us, and connects this condition to the post-political consensus after 1989. 13 Something of this fatalism is conveyed in the relentlessness of my PowerPoint, which generates the feeling of scrolling through a tide of images (as when searching online), and yet each work appears before us, rather than being aggregated into a single graphic visualization. The PowerPoint partly repeats the numbing effect of the online image world, but also becomes a tool to make this available to interpretation.

My target is the mainstream, the mediocre, the déjà vu: the work we feel like we’ve seen before, the highlights of modernism already witnessed, the projects by artists that are unquotable because they are themselves so reliant upon quotation.

Given that the rise of this artistic trend is a convergence of ideological narratives about a geopolitical condition ("the end of history") encountering the proliferation of digital media, this flow of images generates an argument about repetition and banality without me having to spell it out verbally. The slideshow has occasionally infuriated audiences, who see it as leveling the specificity of artists’ practices in different parts of the world, and ignoring attempts to chart gender or race through the quotation of modernist forebears (even though my text draws out these historical and ideological differences). My reason for presenting images in this "distant," non-hierarchical way is that I believe there are no paradigmatic examples of this trend, and that the differences between these works are less significant than their similarities. My target is the mainstream, the mediocre, the déjà vu: the work we feel like we’ve seen before, the highlights of modernism already witnessed, the projects by artists that are unquotable because they are themselves so reliant upon quotation.

Distant reading serves as a critique of the system in which these works thrive: not just the rapidity of image circulation online, but also the New York art world, with its thousands of commercial galleries and their disproportionate impact upon museum practice, all of which creates an increasingly off-putting haze of hype and high finance around contemporary art. This condition is rarely resisted by artists here, who leave art schools with huge debts and need to get on the career ladder as soon as possible in order to start repaying loans. The MFA-debt/gallery-profit cycle has made it increasingly difficult to write about contemporary art without also wanting to run a mile from it. Distant viewing is my expression of this distance. The disjunctive simultaneity of proximity and distance is also the condition of consuming images in the twenty-first century and thus the subject of my paper as much as its method. As such, I hope that my project functions as a critical intervention both into a contemporary art history that seems always to bolster singular figures for the market, and into a digital art history that privileges computational over ideological analyses.

1 Drucker draws the useful distinction between digitized and digital art history on page 5.

2 Selfiecity can be found online at www.selfiecity.net . The main findings include the following: more women take selfies than men and strike more extreme poses; the average age of selfie photographers is 23.7; people in Moscow smile less than people in São Paulo and Bangkok. The project used Amazon’s Mechanical Turk workers to classify 640 selfies from each city, taken from a random sample of 120,000 images from Instagram.

3 Paul Ingram and Mitali Banerjee, www.moma.org/interactives/exhibitions/2012/inventingabstraction/?page=connections

4 The term analog humanities is taken from Sterne 2015, 18.

5 The Washington Post recently reported that Purdue University (Indiana) has partnered with businesses as an alternative to student loans: investors front students the money to pay for education in exchange for a share in future earnings (Douglas-Gabriel 2015).

6 This problem is not confined to digital art history. As English/Comp Lit scholar Brian Lennon notes, ". . .the digital humanities has displayed almost no specifically political interest in the world outside the university and too little explicit interest of any kind in the broader interinstitutional politics of the world within the university in its imbrication with the institutions of security and military intelligence" (Lennon 2014, 140–41).

7 For a concise response, see Schulz (2011).

8 Influenced by historian Fernand Braudel’s theory of the longue durée, Moretti argues that the novel developed as a system of its genres (in other words, we cannot speak of "the novel" but only of a whole set of forty-four genres). Looking at the publication rates for novels over periods of decades, he moves from quantitative facts to speculation and interpretation; for example, he suggests that the rise and fall of the various genres of the novel in the United Kingdom correlate to twenty-five- to thirty-five-year cycles (i.e., to generations of readers) (Moretti 2007). Earlier work, such as "Conjectures on World Literature," provocatively concludes that the modern novel first arises "not as an autonomous development but as a compromise between a western formal influence (usually French or English) and local materials"; in other words, the Western European novel is an exception, not the rule (Moretti 2000).

9 This can be seen, for example, in Moretti’s quantification of the plot of Shakespeare’s Hamlet (Moretti 2011).

10 See Gary Hall’s incisive critique of Manovich (Hall 2013).

11 Likewise, the authors of the paper on beauty and portraiture conclude that "any approach to the culturomics of art history and beauty also takes into account cultural evolution and cultural history as forces that shape the results we find in the data"—without feeling any obligation to supply this (de la Rosa and Suárez 2015, 125).

12 This type of work is near unsearchable on the Internet because search engines cannot cope with self-reflexivity (contemporary art quoting modern art). My examples were therefore amassed slowly, via exhibition catalogues, artists’ websites, press releases, Tumblrs, and blogs.

13 Post-politics is a term used by political philosophers—including Jacques Rancière, Chantal Mouffe, Slavoj Žižek, and Jodi Dean—to describe the post-ideological consensus that dominated global politics after the Cold War.

Bender, K. 2015. "Distant Viewing in Art History: A Case Study of Artistic Productivity." International Journal for Digital Art History 1: 100–110.

Brown, Wendy. 2015. Undoing the Demos: Neoliberalism’s Stealth Revolution . New York: Zone Books.

Douglas-Gabriel, Danielle. 2015. "Investors Buying Shares in College Students." Washington Post , November 27.

Drucker, Johanna. 2013. "Is There a ‘Digital’ Art History?" Visual Resources 29 (March): 7.

Grusin, Richard. 2013. "The Dark Side of the Digital Humanities." Thinking C21 blog. http://www.c21uwm.com/2013/01/09/dark-side-of-the-digital-humanities-part-2

Hall, Gary. 2013. "Toward a Postdigital Humanities." American Literature 85 (4): 781–809.

Lennon, Brian. 2014. "The Digital Humanities and National Security." differences 25 (1): 132–55.

Liu, Alan. 2013. "The Meaning of the Digital Humanities." PMLA 128 (2): 414.

Manovich, Lev. 2015. "Data Science and Digital Art History." International Journal for Digital Art History 1: 12–35. www.dah-journal.org

Moretti, Franco. 2000. "Conjectures on World Literature." New Left Review 1: Jan–Feb. http://www.newleftreview.org/II/1/franco-moretti-conjectures-on-world-literature

———. 2007. Graphs, Maps, Trees: Abstract Models for a Literary History . London: Verso.

———. 2011. "Network Theory, Plot Analysis." New Left Review 68: March–April. http://www.newleftreview.org/II/68/franco-moretti-network-theory-plot-analysis

Nelson, Robert. 2000. "The Slide Lecture, or the Work of Art History in the Age of Mechanical Reproduction." Critical Inquiry 26: 414–34.

Patkowski, Jonathan, and Nicole Reiner. 2013. "Inventing Abstraction, Reinventing Our Selves." Material World blog, February 23. www.materialworldblog.com/2013/02/inventing-abstraction-reinventing-our-selves

de la Rosa, Javier, and Juan-Luis Suárez. 2015. "A Quantitative Approach to Beauty: Perceived Attractiveness of Human Faces in World Painting." International Journal for Digital Art History 1: 112–29.

Schulz, Kathryn. 2011. "What Is Distant Reading?" New York Times , June 24. http://www.nytimes.com/2011/06/26/books/review/the-mechanic-muse-what-is-distant-reading.html

Sterne, Jonathan. 2015. "The Example: Some Historical Considerations." In Between Humanities and the Digital , edited by Patrik Svensson and David Theo Goldberg. Cambridge, MA: MIT Press.

Virno, Paolo. 2015. Déjà Vu and the End of History . New York: Verso. Previously published in 1999.

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License

  • Curated Group: Digital Futures
  • Department of Art, Art History & Visual Studies

Franklin Humanities Institute


Digital Art Exhibition Review Essay


Introduction

Contents: Digital Exhibition, Digital Art Criteria, Digital Virgin, Digital Dancer, Reference List

Digital, or new media, art has greatly transformed the world of art. Digital art emerged in the 1970s and has gained immense popularity with advances in digital technology. Before the digital era, modern art was more prevalent; although digital art is sometimes grouped as a form of modern art, comparing it with most modern artworks makes the differences evident.

New media art, or digital art, therefore refers to a variety of artistic works prepared using digital technology (Miller 2008, p. 31). Other phrases, such as multimedia art and computer art, are also used to refer to digital art, and works sharing these common features are collectively referred to as new media art. Uri Dotan is one of the most successful digital artists.

Based in New York, the artist has many works to his name. Uri Dotan’s works appear in many exhibitions, not only in New York but also in other parts of the world. With advances in internet technology, many artists have turned to online exhibits to display their work. In this paper, I review Uri Dotan’s online digital art exhibit.

Digital art is widely used in advertising and filmmaking. It plays a significant role in the modern world, where graphic representation has found wide use in marketing. The widespread use of the internet has also created fertile ground for the growth of digital art.

Digital art can be created in various ways. Some works are purely computer-generated, while others are derived from other sources, produced through scanning, vector graphics, and other technologies (Christiane 2006, p. 46). Digital art can therefore also refer to works created by other means and later transformed using a computer program.

Uri Dotan’s online art exhibit is one of the most fantastic online exhibitions to visit. Unlike a conventional exhibit, an online exhibit can be accessed from anywhere in the world, as long as one has an internet connection, and is therefore open to large audiences. For the many upcoming artists, online exhibition provides an efficient way of displaying their work to the public. Uri Dotan is a skilled artist, as his works demonstrate; the common theme across them is the influence of technology.

In his works, the artist is able to take his audience many years ahead of their time. Using his skills in digital art, the artist captures his imagination of the future. Digital art is first created in the mind of the artist before being projected electronically (Churchill 2007, para. 3). Uri Dotan demonstrates this imagination by taking us fifty years ahead of our time, creating artwork that shows the influence of technology on society (The Williams Gallery 2008, para. 2).

Uri Dotan’s extraordinary sculptures and paintings exist in a digital world. The works are appealing to the eye right from the first pages of the online exhibition.

To review an exhibition successfully, one needs criteria. Each style of art has its expected criteria; although they are rarely explicit, an artist working in a particular style knows the standards his or her work should meet. Digital art is one of the most challenging styles to analyze.

This is above all due to the dynamic nature of the style, in which digital artists keep coming up with new things (Wands 2007, p. 79). Digital art mainly comprises paintings, sculptures, and other works that have been transformed using digital technology. For a digital work to be considered successful, it should demonstrate a relationship with the work of art from which it was derived. Uri Dotan achieves this: looking at his work, one knows one is looking at an artistic work.

The second criterion Uri Dotan demonstrates in his work is that he uses the medium as a tool to communicate his message rather than as an end in itself. The third criterion his work demonstrates is originality: the works in the exhibit are not derived from other people’s works but are original. The last important criterion of good art demonstrated in Uri Dotan’s work is life. All his works have life in themselves and are therefore appealing to the eye.

The first work encountered is named Virgin. The beautiful digital painting depicts an abstract sculpture. The digital Venus is set in an architectural space, giving it an outstanding pose (The Williams Gallery 2008, para. 4). The painting is composed in a vertical rectangle with the figure centered. The figure is movable and recognizable; it is as if the artist is welcoming his audience to view his remaining works.

The second work in the online exhibit is called Digital Dance. The artwork is composed of bright strips of metal, poised vertically by the use of architectural space (The Williams Gallery 2008, para. 6). Although these are features of a conventional work, there is no doubt, looking at the piece, that one is viewing a virtual sculpture. Though virtual, the artwork draws out the emotions of creativity associated with the world of art rather than with a technological environment.

Uri Dotan’s proficiency in digital art continues to unfold as one encounters his other works. The second phase of his exhibit begins with a more mural, Navado-like figure. Tree leaves and sticks are used to symbolize the hair and the hand. A three-dimensional pedestal sits in front of the mural. Beneath this figure, a keen observer can make out what looks like human feet.

Uri Dotan’s works demonstrate his command of digital art, and the artist explores various styles within them. For his fifth piece (labeled Hummingbird), Uri Dotan has decided to make use of an angle and walls, as opposed to the all-too-familiar rectangle (The Williams Gallery 2008, para. 7). The image is not completely enclosed; some of its parts come out of the case.

The image seems complicated, with a shadow falling on the wall. In total, there are eight of Uri Dotan’s works in the online exhibit. All eight digital images are wonderful, and they demonstrate Dotan’s proficiency as a digital artist. He is able to create spaces and fill them with the desired figures.

Digital art is a major revolution in the world of art. Advances in digital technology have provided artists with a tool not only to create new images but also to transform old works. Uri Dotan has demonstrated his skills as a digital artist; in the online exhibit, his work comes across as high-quality and skillful.

Christiane, P., 2008, Digital art . New York: Thames & Hudson.

Churchill, S., 2007. The Art of Digital Show. Web.

Miller, R., 2008, Digital Art: Painting With Pixels . New York: Twenty-First Century Books.

The Williams Gallery, 2008. Uri Dotan . Web.

Wands, B., 2007, Art of the digital age . New York: Thames & Hudson.


IvyPanda. (2019, February 7). Digital Art Exhibition Review. https://ivypanda.com/essays/digital-art-exhibition-review/


Content Marketing Institute

How To Turn Old Content Into a New Work of Art With an AI Assist


  • by Lieu Pham
  • | Published: August 7, 2024
  • | Content Operations

Have you heard any of the countless remixes of New Order’s instant classic Blue Monday? Each one reimagines the synth-laden ’80s staple, a derivative of the original work, yet brings something new to the table.

Content remixing operates exactly like that. It’s a form of “cultural remixing” in the digital realm that uses existing ideas and assets to craft something new and unexpected.

Natural language processing models such as ChatGPT or other generative AI tools like Claude have made the art of content remixing easier and more accessible, allowing you to breathe new life into — and get more mileage out of — your greatest hits with a relatively low lift.

The following are four AI-assisted remix strategies for building on existing ideas. I use ChatGPT because it’s my go-to tool, but you can try variants of these on your NLP platform of choice.

Note: It’s worth emphasizing that all remixed content should be original or used with express permission from the original creators. Later in the article, see the do’s and don’ts list about using AI responsibly.

Remix 1: The Best of Album — How to expand articles into white papers

Difficulty grade: 7/10

Use this AI expansion method to transform individual blog articles into a cohesive long-form piece by weaving together insights from handpicked articles with a common thread. Similar to David Bowie’s Legacy album and other best-of smashes, this approach transforms existing content into an elevated and comprehensive resource, like a white paper.

The process takes just a few steps:

Step 1: Develop a unified story

This technique employs a bottom-up storytelling approach, beginning with smaller components and gradually building them into a larger, more comprehensive narrative. You use generative AI to seamlessly blend blog posts into a cohesive storyline to achieve a smooth flow that connects diverse areas of a singular topic.

You can do this in a few ways. First, you can paste your brand’s blog headlines into the prompt or add the themes from the headlines to generate an outline for the white paper.
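As a minimal sketch of the headline-pasting step (the function name, headlines, and template wording below are hypothetical illustrations, not from this article), the prompt can be assembled programmatically from a list of blog headlines:

```python
# Sketch: assemble an outline-generation prompt from existing blog
# headlines. All names and template wording here are hypothetical.

def build_outline_prompt(topic: str, headlines: list[str]) -> str:
    """Combine blog headlines into one white-paper outline prompt."""
    headline_block = "\n".join(f"- {h}" for h in headlines)
    return (
        f"Create a cohesive white paper outline on '{topic}' by "
        f"synthesizing these blog articles:\n{headline_block}\n"
        "Craft a narrative arc that connects their ideas."
    )

headlines = [
    "How Shifting Customer Values Reshape Supply Chains",
    "Case Studies in Carbon-Neutral Logistics",
]
print(build_outline_prompt("supply chain sustainability", headlines))
```

The assembled string is then pasted (or sent via an API) to the generative AI tool of your choice, as in the worked example that follows.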

Example prompt:

As a content strategist for a supply chain technology provider committed to advancing sustainable practices, your objective is to create a cohesive white paper by synthesizing multiple blog articles. Targeted at global companies with complex supply chains spanning multiple countries, your task is to prompt AI to seamlessly connect ideas and findings, crafting a narrative arc that guides readers through a clear journey of discovery and insight.

Your goal is to transform disparate blog articles into a unified white paper that engages readers with a compelling and logical storyline. Your deliverables should be a white paper outline based on our blog articles focused on “supply chain sustainability.” Explore how shifting customer values are reshaping traditional business models, highlight successful case studies, and discuss emerging trends in sustainable practices, environmental stewardship, and carbon neutrality goals.

Example prompt response:

Whitepaper Outline: Advancing Sustainable Supply Chains in a Global Context

I. Introduction

Purpose of the Whitepaper

  • Introduce the significance of supply chain sustainability in today’s global business landscape.
  • Highlight the growing importance of sustainable practices driven by shifting customer values.

II. Understanding Supply Chain Sustainability

Defining Supply Chain Sustainability

  • Define what constitutes a sustainable supply chain.
  • Discuss the triple bottom line approach: economic, environmental, and social aspects.

III. The Shift in Customer Values and Business Models

Impact of Shifting Customer Values

  • Explore how changing consumer preferences are influencing businesses to adopt sustainable practices.
  • Case studies of companies successfully aligning with customer values.

IV. Key Components of Sustainable Practices

Environmental Stewardship

  • Strategies and technologies for reducing carbon footprint and environmental impact across the supply chain.
  • Highlight innovations in renewable energy adoption and resource efficiency.

Social Responsibility

  • Addressing labor standards, human rights, and community engagement in supply chain operations.
  • Case studies showcasing best practices in social sustainability.

Economic Viability

  • Balancing sustainability goals with economic objectives.
  • Cost-benefit analysis of sustainable investments and their long-term financial impacts.

V. Case Studies of Successful Implementation

Leading Companies in Supply Chain Sustainability

  • Highlight specific examples of companies achieving significant milestones in sustainability.
  • Analyze their strategies, challenges faced, and lessons learned.

VI. Emerging Trends in Sustainable Practices

Technological Innovations

  • Role of AI, IoT, and blockchain in enhancing supply chain transparency and sustainability.
  • Predictive analytics for sustainable decision-making.

Regulatory Landscape

  • Overview of global regulations impacting supply chain sustainability.
  • Compliance challenges and opportunities for businesses.

VII. Future Outlook: Towards Carbon Neutrality Goals

Setting and Achieving Carbon Neutrality

  • Strategies for setting ambitious carbon neutrality goals.
  • Pathways to achieve carbon neutrality across supply chain operations.

VIII. Conclusion

Summary of Key Findings

  • Recap the importance of supply chain sustainability for global companies.
  • Encourage continued innovation and collaboration towards a sustainable future.

IX. Recommendations for Global Companies

Guidance for Implementing Sustainable Practices

  • Practical steps and recommendations for companies looking to integrate sustainability into their supply chain strategies.
  • Call to action for industry leaders to drive change and set new standards.

Once you’ve woven a solid outline of your narrative with umbrella storytelling, it’s time to take things up a notch with semantic expansion.

Step 2: Explore semantic expansion 

Rather than rehash blog posts in a white paper, harness the power of generative AI to go into the nuances and implications of the key concepts detailed in the outline. Use prompts to explore related topics, dive into underlying principles, and explore real-world applications.

This approach turns your white paper into a powerhouse of insights.

Example prompt:

Conduct a comprehensive semantic expansion for Chapter 4 of the whitepaper on Environmental Stewardship and regulatory compliance. Focus on global regulations, exploring diverse regulatory landscapes across major markets, and analyzing the impact of various initiatives. Discuss the complex challenges and opportunities faced by businesses in achieving compliance and upholding sustainability standards amidst evolving regulatory frameworks.

Tailor this exploration to resonate with our audience—global companies with complex supply chains spanning multiple countries and regions. Highlight our expertise as a supply chain technology provider dedicated to enhancing sustainable practices, ensuring the topic aligns with our authority and relevance in the industry.

Example prompt response:

Chapter 4: Environmental Stewardship and Regulatory Compliance

Introduction

In the context of global supply chains, environmental stewardship has emerged as a critical component of corporate responsibility and operational strategy. This chapter explores the intricate interplay between sustainability goals and regulatory frameworks across major markets worldwide. By delving into these dynamics, we aim to illuminate the complex challenges and opportunities faced by multinational companies in achieving regulatory compliance while upholding stringent sustainability standards.

Global Regulatory Landscape: A Comprehensive Analysis

North America

In the United States, environmental regulations enforced by the Environmental Protection Agency (EPA) such as the Clean Air Act and the Resource Conservation and Recovery Act (RCRA) dictate stringent standards for emissions and hazardous waste management. These regulations impact supply chain operations significantly, influencing compliance strategies and operational efficiency. Similarly, Canada’s regulatory landscape under the Canadian Environmental Protection Act (CEPA) and provincial laws introduces complexities that require tailored approaches to environmental compliance across diverse regions.

Europe

Within the European Union (EU), directives like REACH (Registration, Evaluation, Authorization, and Restriction of Chemicals) and the Waste Framework Directive set rigorous standards for chemical safety, waste management, and sustainable production. These directives not only shape product compliance requirements but also mandate transparency and accountability throughout supply chain networks. Post-Brexit, the United Kingdom continues to align with EU environmental standards while developing its own regulatory frameworks to ensure environmental sustainability remains a priority.

Asia-Pacific

In China, environmental regulations overseen by the Ministry of Ecology and Environment (MEE) encompass stringent measures on air and water quality, emissions control, and sustainable manufacturing practices. These regulations are pivotal for companies operating within China’s manufacturing hubs, influencing operational strategies and technological investments. Japan, under the Basic Environmental Law and Industrial Standardization Act, similarly enforces strict environmental standards that impact supply chain operations and sustainability practices across industries.

Complex Challenges in Achieving Compliance

Navigating the diverse regulatory landscapes across North America, Europe, and Asia-Pacific poses significant challenges for global companies with complex supply chains. Variations in environmental standards, compliance requirements, and reporting protocols demand robust governance frameworks and adaptive strategies. The complexity increases operational costs and risks, necessitating comprehensive risk management and compliance monitoring strategies to maintain regulatory alignment while achieving sustainable business practices.

Opportunities for Innovation and Leadership

Advancements in supply chain technologies such as artificial intelligence (AI), Internet of Things (IoT), and blockchain present opportunities to enhance environmental data management, traceability, and compliance reporting. Leveraging these technologies enables proactive compliance strategies and real-time monitoring capabilities, thereby strengthening supply chain resilience and sustainability performance. Strategic partnerships with regulatory bodies, industry associations, and technology providers foster collaborative initiatives that drive innovation and leadership in sustainable supply chain management.

As a supply chain technology provider committed to enhancing sustainable practices, we recognize the critical importance of regulatory compliance in achieving environmental stewardship goals. By navigating the complexities of global regulatory frameworks and leveraging advanced technologies, global companies can mitigate compliance risks, drive operational efficiencies, and position themselves as leaders in sustainability. Our expertise and solutions empower businesses to integrate environmental stewardship seamlessly into their global supply chain strategies, ensuring long-term sustainability and competitive advantage in a dynamic regulatory landscape.

Repeat this semantic expansion prompt for each chapter of the white paper.
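The repeat-per-chapter step can be sketched as a small loop (chapter titles and template wording below are illustrative assumptions, not part of the article's method):

```python
# Sketch: build one semantic-expansion prompt per chapter of the outline.
# Chapter titles and wording are hypothetical placeholders.

CHAPTERS = [
    "Understanding Supply Chain Sustainability",
    "The Shift in Customer Values and Business Models",
    "Environmental Stewardship and Regulatory Compliance",
]

def expansion_prompt(chapter: str) -> str:
    """Build a semantic-expansion prompt for a single chapter."""
    return (
        f"Conduct a comprehensive semantic expansion for the chapter "
        f"'{chapter}'. Explore related topics, underlying principles, "
        "and real-world applications for our target audience."
    )

prompts = [expansion_prompt(c) for c in CHAPTERS]  # one prompt per chapter
```

Each generated prompt is then run separately, and the responses become the raw material for the corresponding chapter.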

From here, you should:

  • Fact check. Fact check. Fact check.
  • Develop original content based on the responses given.
  • Integrate exclusive case studies and insights from expert voices, flexing your perspectives on table-stakes topics. (Don’t forget to cross-link to the blog articles that inspired this piece.)
  • Design and add relevant images and data visualizations, such as charts, graphs, and infographics.
  • Edit, fact-check, proofread, and publish.

Remix 2: The Concept Album — How to create an integrated campaign from one idea

Difficulty grade: 9/10

Creating high-performing campaigns starts with understanding your audience journey, catering to diverse decision-makers, developing a compelling idea, and using different distribution channels to promote it.

Think of Gorillaz’s game-changing album Demon Days. Just as Gorillaz crafted a novel virtual band to execute a concept album and used multimedia storytelling to engage its audience, successful campaigns take a novel central theme and add a lens to all elements and outputs seamlessly. It’s all about integration — think of it as the gold standard for marketing success. Use performance analytics to pinpoint top-performing campaigns, guiding your strategy with data-backed insights.

Here’s how to generate your AI-developed integrated campaign:

Step 1: Craft a clear campaign message

The crux of this approach is simple: Develop a fresh and catchy hook to revolve your campaign around. Then, bring it to life via tailored content designed to resonate across channels and customer stages.
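One way to keep that hook consistent across channels is to parameterize the message request itself. The sketch below is a hypothetical helper (names and wording are mine, not the article's) that fills the article title and target audience into a reusable prompt template:

```python
# Sketch: parameterize the campaign-message prompt so the same hook
# request can target different audiences. Names are hypothetical.

def campaign_prompt(article: str, audience: str, n: int = 5) -> str:
    """Build a prompt asking for n short campaign message variations."""
    return (
        f"Drawing inspiration from the article '{article}', craft {n} "
        f"campaign message variations, each up to two sentences long, "
        f"platform agnostic, and appealing to {audience}."
    )

prompt = campaign_prompt(
    "How AI-Powered Personalization is Changing the Face of Retail",
    "B2B decision makers in retail",
)
```

Swapping in a different audience string yields a parallel set of messages without rewriting the whole prompt.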

To land on your all-important campaign message, try this style of prompt:

As a content strategist for an AI retail software company serving enterprise retailers, draw inspiration from your top-performing article “How AI-Powered Personalization is Changing the Face of Retail.” Craft five campaign message variations that are educational or inspirational, each up to two sentences long. Ensure they are platform agnostic and strategically appeal to the specific pain points or interests of B2B decision makers in retail.

Example prompt response:

Certainly! Here are five campaign message variations that blend education and inspiration, aimed at B2B decision makers in retail:

  • “In the age of hyper-personalization, your retail enterprise can thrive like never before. Harness AI-powered insights to understand each customer’s unique preferences and behaviors, delivering tailored experiences that drive loyalty and revenue. Embrace the future of retail with technology that not only meets but anticipates consumer expectations, positioning your brand at the forefront of innovation and customer satisfaction.”
  • “Imagine a retail ecosystem where every interaction feels uniquely crafted for each customer. With AI-driven personalization, this vision becomes reality. Empower your team with tools that analyze vast data sets in real-time, enabling smarter decision-making and personalized marketing strategies. From predictive analytics to personalized recommendations, elevate your retail strategy to exceed customer expectations and stay ahead in a competitive marketplace.”
  • “AI isn’t just transforming retail—it’s redefining it. Embrace the power of AI to decipher complex consumer insights, predict trends, and optimize operations seamlessly. Equip your enterprise with the tools to deliver personalized experiences at scale, driving increased conversions and customer retention. Stay agile and responsive in a rapidly evolving market landscape, where AI isn’t just a tool but a strategic advantage that propels your business towards sustained growth and profitability.”
  • “The future of retail isn’t about guesswork—it’s about precision. AI-powered personalization offers your enterprise the ability to tailor every customer interaction with unparalleled accuracy and relevance. From automating personalized product recommendations to optimizing inventory management, leverage AI to streamline operations and enhance customer satisfaction. Join the leaders in retail innovation by embracing AI-driven solutions that empower your team to exceed customer expectations and drive sustainable business growth.”
  • “In a world where customer expectations are higher than ever, AI serves as your competitive edge. Revolutionize your retail strategy with AI solutions that transform data into actionable insights, enabling you to anticipate customer needs and preferences before they do. Empower your team to innovate fearlessly, knowing that AI-driven personalization not only enhances customer experiences but also drives operational efficiencies. Lead the charge towards a more personalized, efficient, and profitable future in retail with AI as your trusted partner.”
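If you run this kind of campaign repeatedly, the Step 1 prompt can be assembled from a reusable template rather than rewritten each time. Below is a minimal Python sketch; the function name and parameters are illustrative assumptions, not part of any tool mentioned in this article:

```python
# Minimal sketch of a reusable Step 1 prompt template.
# The function and all parameter names are illustrative assumptions.

def build_message_prompt(role, company, article_title, audience,
                         variations=5, max_sentences=2):
    """Assemble a campaign-message prompt from reusable parts."""
    return (
        f"As a {role} for {company}, draw inspiration from your "
        f"top-performing article \"{article_title}\". "
        f"Craft {variations} campaign message variations that are "
        f"educational or inspirational, each up to {max_sentences} "
        f"sentences long. Ensure they are platform agnostic and "
        f"strategically appeal to the specific pain points or "
        f"interests of {audience}."
    )

prompt = build_message_prompt(
    role="content strategist",
    company="an AI retail software company serving enterprise retailers",
    article_title="How AI-Powered Personalization is Changing the Face of Retail",
    audience="B2B decision makers in retail",
)
print(prompt)
```

Swapping in a different article, audience, or variation count regenerates the prompt without retyping the boilerplate.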

Step 2: Make it personal

Choose your best message variation and tweak it to match two persona profiles. Create content that speaks directly to their needs and interests.

Example prompt:

Develop a comprehensive campaign focused on Message Variation 3, “AI isn’t just transforming retail—it’s redefining it,” designed to engage two key personas across four stages of their journey: awareness, consideration, evaluation, and retention.

Persona 1: Business Decision Maker (End User Persona): This persona is responsible for leveraging technology to enhance organizational operations in Retail.

Persona 2: Tech Influencer (ROI and Security Evaluation Persona in Retail): This persona evaluates ROI, security implications, and integration of new technology into existing systems.

Ensure the campaign addresses each persona’s specific needs and concerns at each stage, leveraging Message Variation 3 to resonate effectively throughout the journey.

Campaign: Redefining Retail with AI

Message Variation 3: “AI isn’t just transforming retail—it’s redefining it.”

Persona 1: Business Decision Maker (End User Persona)

Awareness Stage:

  • Campaign Theme: “Discover the Future of Retail with AI”
  • Content Focus: Highlight the transformative potential of AI in retail operations.
  • Blog posts and articles on industry-leading platforms discussing AI trends in retail.
  • Webinars featuring case studies of successful AI implementations in retail.
  • Infographics and social media posts showcasing statistics on AI’s impact on customer experience and operational efficiency.

Consideration Stage:

  • Campaign Theme: “Empowering Your Retail Strategy with AI”
  • Content Focus: Provide deeper insights into AI applications and benefits specific to retail.
  • Whitepapers detailing AI technologies and their relevance to retail operations.
  • Customer testimonials and success stories illustrating tangible business outcomes from AI adoption.
  • Interactive online demos showcasing AI-powered tools and their ease of integration into existing systems.

Evaluation Stage:

  • Campaign Theme: “AI-Powered Solutions for Your Retail Success”
  • Content Focus: Address concerns about implementation challenges and ROI.
  • ROI calculators demonstrating potential cost savings and revenue increases with AI.
  • Comparison guides highlighting how AI solutions stack up against traditional methods.
  • Expert-led Q&A sessions and live chats to address specific queries about AI integration and security.

Retention Stage:

  • Campaign Theme: “Staying Ahead with AI in Retail”
  • Content Focus: Foster ongoing engagement and support for AI adoption.
  • Continuous education through monthly newsletters with AI insights and updates.
  • Exclusive access to new AI features and upgrades for current users.
  • Customer loyalty programs recognizing early adopters and providing incentives for advocacy.

Persona 2: Tech Influencer (ROI and Security Evaluation Persona)

Awareness Stage:

  • Campaign Theme: “AI’s Role in Secure and ROI-Driven Retail Innovation”
  • Content Focus: Highlight AI’s security features and potential ROI.
  • Thought leadership articles on AI security measures in retail environments.
  • Webinars featuring cybersecurity experts discussing AI integration best practices.
  • Infographics and case studies illustrating ROI metrics from AI implementations in retail.

Consideration Stage:

  • Campaign Theme: “Evaluating AI Solutions for Retail Security and ROI”
  • Content Focus: Provide detailed comparisons and analyses of AI solutions.
  • Technical whitepapers exploring AI algorithms and their impact on security.
  • Security certifications and compliance documents for AI platforms.
  • Virtual workshops on integrating AI securely within existing IT infrastructures.

Evaluation Stage:

  • Campaign Theme: “Maximizing ROI with AI in Retail”
  • Content Focus: Address concerns about scalability and long-term ROI.
  • Case studies demonstrating scalability of AI solutions in large retail enterprises.
  • ROI calculators tailored to different retail scenarios and business models.
  • Live demonstrations of AI platforms showcasing scalability and integration capabilities.

Retention Stage:

  • Campaign Theme: “Partnering for Secure and Sustainable AI Growth in Retail”
  • Content Focus: Foster ongoing collaboration and support for secure AI implementations.
  • Access to a dedicated support portal with resources on AI security updates.
  • Quarterly webinars on emerging AI trends and their security implications.
  • Early access to beta testing for new AI security features and enhancements.

Cross-Persona Integration:

Throughout all stages, emphasize the collaborative benefits of AI adoption in retail, showcasing how business decision makers and tech influencers can work together to achieve strategic goals. Use consistent messaging from Message Variation 3 to reinforce AI’s transformative potential in redefining retail operations, addressing specific concerns and highlighting tailored benefits for each persona. This integrated approach ensures a cohesive campaign that resonates with both personas, driving awareness, consideration, evaluation, and retention effectively.

As you can see, these are pretty “meh” titles, but what’s offered is a starting point for further development. From here, you would:

  • Develop an integrated marketing plan mapped to channels, personas, and stages of the funnel.
  • Create original content to ensure complete campaigns for each persona.
  • Integrate expert voices and insights. (Don’t forget to link to the blog articles that inspired this piece.)
  • Design and add images and data visualizations, such as charts, graphs, infographics.
  • Edit, fact-check, proofread, and publish.
  • Implement a structured content calendar with cross-channel promotion for maximum impact.
  • Embrace dynamic personalization and automation to elevate customer engagement and drive conversions, strengthening your brand’s competitive edge.
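One lightweight way to track that integrated plan is a persona-by-stage map you can check for coverage gaps before launch. The sketch below is a minimal Python example; the persona names and content items come from the sample campaign above, while the helper function is an illustrative assumption:

```python
# Illustrative persona-by-stage campaign map, useful as a coverage
# checklist for the integrated marketing plan. Names are examples only.

STAGES = ["awareness", "consideration", "evaluation", "retention"]

campaign_map = {
    "Business Decision Maker": {stage: [] for stage in STAGES},
    "Tech Influencer": {stage: [] for stage in STAGES},
}

# Fill in a couple of cells from the example campaign.
campaign_map["Business Decision Maker"]["awareness"] += [
    "Blog posts on AI trends in retail",
    "Webinars with AI implementation case studies",
]
campaign_map["Tech Influencer"]["evaluation"] += [
    "ROI calculators tailored to retail scenarios",
    "Live platform demos showing integration",
]

def coverage_gaps(campaign_map):
    """Return (persona, stage) pairs that still have no planned content."""
    return [
        (persona, stage)
        for persona, stages in campaign_map.items()
        for stage, items in stages.items()
        if not items
    ]

print(len(coverage_gaps(campaign_map)))  # prints 6: six of eight cells unplanned
```

Any cell the function flags is a persona/stage combination your campaign hasn’t addressed yet.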

Remix 3: The Collaboration Album — How to create an SEO-rich knowledge base

Difficulty grade: 6/10

Knowledge bases that combine diverse insights in one central place are critical for supporting customers and establishing industry authority. Think of this remix style as a collaborative album. In Ella Fitzgerald’s Ella Sings with Her Friends, Ella stays the main character in the narrative, yet the album brilliantly showcases the synergy of a host of top-notch singers.

This type of content is more product-focused in nature, but the idea is to augment it with expertise and knowledge to add value. By tapping into internal and external insights, you can easily curate comprehensive knowledge bases on your company’s products and services. This collaborative approach helps to extract a myriad of viewpoints enriched with expert insights and quotes.

Step 1: Create your Knowledge Base

Use this style of prompt to generate a list of common questions about a company’s products and solutions.

Example prompt: As a content strategist tasked with developing a comprehensive knowledge base for a leading provider of marketing automation solutions for agencies and brands, outline the essential categories and questions that should be prominently featured in our Knowledge Base. Focus on addressing common queries and providing clear, informative answers to support our users effectively.

Creating a comprehensive knowledge base for a leading provider of marketing automation solutions involves organizing information into clear categories and addressing common user queries effectively. Here’s an outline of essential categories and sample questions for your FAQ knowledge base:

1. General Information

  • What is [Your Company Name] and what do you offer?
  • How can marketing automation benefit my agency/brand?
  • Is your platform suitable for agencies, brands, or both?

2. Getting Started

  • How do I sign up for [Your Company Name]?
  • What are the initial setup steps for new users?
  • Where can I find tutorials or guides to help me get started?

3. Features and Functionality

  • What are the key features of your marketing automation platform?
  • How can I create and manage email campaigns?
  • Can I automate social media posting and scheduling?

4. Integration

  • Which third-party applications and platforms does [Your Company Name] integrate with?
  • How do I integrate [Your Company Name] with my existing CRM system?
  • Is there an API available for custom integrations?

5. Account Management

  • How do I update my account information?
  • What are the different subscription plans available?
  • How can I add or remove users from my account?

6. Troubleshooting and Support

  • What should I do if I encounter issues logging in?
  • How do I troubleshoot common problems with campaign delivery?
  • Where can I find technical support or contact customer service?

7. Security and Privacy

  • How does [Your Company Name] ensure the security of my data?
  • What measures are in place to protect my privacy?
  • Do you comply with data protection regulations (e.g., GDPR)?

8. Billing and Payments

  • How do I update my billing information?
  • What are the accepted payment methods?
  • Can I view my billing history and invoices online?

9. Training and Resources

  • Are there training sessions available for new users?
  • Where can I find video tutorials or webinars on advanced features?
  • Is there a knowledge base or community forum for users to share tips?

10. Product Updates and Roadmap

  • How often do you release updates to your platform?
  • Where can I find information about upcoming features or enhancements?
  • Can users provide feedback or suggest new features?

Tips for Effective FAQ Management:

  • Keep answers concise and clear: Avoid technical jargon and provide step-by-step instructions where necessary.
  • Update regularly: As your platform evolves, ensure FAQs reflect the latest features and improvements.
  • Use multimedia: Include screenshots, videos, or links to relevant resources to enhance understanding.
  • Promote self-service: Encourage users to explore the FAQ before contacting support to reduce repetitive inquiries.
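Once the outline is approved, the categories and questions can be rendered into a consistent skeleton for writers to fill in. Here is a hedged Python sketch; the dict is abbreviated to two of the ten categories, and the function name is an illustrative assumption:

```python
# Sketch: turn the approved category outline into a markdown
# knowledge-base skeleton with an answer slot under each question.

faq = {
    "General Information": [
        "What is [Your Company Name] and what do you offer?",
        "How can marketing automation benefit my agency/brand?",
    ],
    "Getting Started": [
        "How do I sign up for [Your Company Name]?",
        "What are the initial setup steps for new users?",
    ],
}

def render_knowledge_base(faq):
    """Render a numbered markdown skeleton with an answer slot per question."""
    lines = []
    for i, (category, questions) in enumerate(faq.items(), start=1):
        lines.append(f"## {i}. {category}")
        for q in questions:
            lines.append(f"**Q: {q}**")
            lines.append("A: _(answer pending review by a product owner)_")
        lines.append("")  # blank line between categories
    return "\n".join(lines)

print(render_knowledge_base(faq))
```

Every question gets an explicit pending-answer slot, which makes it easy to see what still needs an expert’s input in Step 2.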

Step 2: Crowdsource and scale

Now that you have a list of questions, answer them by tapping into the collective genius of people across your organization — all while keeping a sharp eye on quality. If you don’t have people internally who could answer a question, don’t hesitate to tap into external subject matter experts.

Once you’ve got your answers locked in, fire them off to your product owners and domain experts for a final accuracy check.

This prompt can help you create email templates that make crowdsourcing a cinch.

Example prompt:

Develop a concise email template to solicit contributions for our Knowledge Base from both internal and external stakeholders. Clearly communicate that their submissions will be publicly accessible and used for educational purposes. Ensure their name, position title, and organization are prominently displayed with their answers. Organize prompts according to the predefined chapters. Ensure these prompts encourage stakeholders to offer practical tips or strategies that benefit end-users of our product.

Example prompt response:

Subject: Help Us Build a Knowledge Base: Share Your Expertise!

Dear [Stakeholder’s Name],

We are reaching out to invite you to contribute your expertise to our Knowledge Base, a valuable resource designed to educate and empower users of [Your Company’s] marketing automation solutions.

Your insights will help fellow professionals navigate and optimize their experience with our platform, benefiting from real-world strategies and practical tips shared by industry experts like yourself.

Key Information:

  • Your Contribution: Please share practical tips or strategies that enhance the use of [Your Company’s] marketing automation solutions.
  • Format: Responses will be organized into predefined chapters within our Knowledge Base, ensuring easy navigation and relevance.
  • Visibility: Your contributions will be publicly accessible and attributed to you, prominently displaying your name, position title, and organization.

Prompts for Contributions:

  • Getting Started: What advice would you give to new users setting up their accounts?
  • Campaign Management: Share a strategy or tip for optimizing email campaigns or social media automation.
  • Integration: How have you successfully integrated [Your Company’s] platform with other tools or systems?
  • Troubleshooting Tips: What common issues have you encountered, and how did you resolve them?
  • Advanced Features: Share a practical example of how you’ve used advanced features to achieve specific goals.

Your contributions will directly benefit our community of users by offering practical insights and enhancing their overall experience with [Your Company’s] solutions.

Please submit your responses directly to [Email Address] by [Deadline Date]. Should you have any questions or need further clarification, feel free to reach out.

Thank you for your ongoing support and collaboration in building a robust Knowledge Base that serves as a valuable resource for all users.

Warm regards,

[Your Name] [Your Position] [Your Company] [Contact Information]

It’s a no-brainer, but it bears repeating: Once your expert insights are pulled together, closely fact-check, edit, and proofread your Knowledge Base before releasing it to the world.
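If you’re sending that outreach email to many stakeholders, a simple mail merge keeps each copy personal. Below is a minimal Python sketch using the standard library’s string.Template; the names, address, and deadline are made up for illustration:

```python
# Sketch: mail-merge the outreach email with string.Template so each
# stakeholder gets a personalized copy. All field values are examples.

from string import Template

email = Template(
    "Subject: Help Us Build a Knowledge Base: Share Your Expertise!\n\n"
    "Dear $stakeholder,\n\n"
    "Please submit your responses to $reply_to by $deadline.\n\n"
    "Warm regards,\n$sender"
)

stakeholders = [
    {"stakeholder": "Dana Lee", "reply_to": "kb@example.com",
     "deadline": "July 12", "sender": "Sam Ortiz"},
    {"stakeholder": "Priya Shah", "reply_to": "kb@example.com",
     "deadline": "July 12", "sender": "Sam Ortiz"},
]

for fields in stakeholders:
    print(email.substitute(fields))
    print("---")
```

Template.substitute raises a KeyError if a placeholder is left unfilled, which catches half-personalized emails before they go out.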

Remix 4: The Forward-Thinking Album — How to create timely, responsive content

Difficulty grade: 7/10

Many brands draw inspiration from traditional newsrooms to develop topical content, incorporating timely updates like breaking news or emerging trends. They even use it as intel to set their own trends. This strategy enables them to craft progressive content that builds trust and authority. Just as Beyoncé’s pioneering album Cowboy Carter positioned Queen Bey as a trendsetter squarely in the zeitgeist, yet ahead of her time, all brands should similarly strive to lead the conversation.

Step 1: Zero in on the nut graph

Use ChatGPT to swiftly dissect current events and extract insightful angles for op-eds or in-depth analyses.

For instance, start with this piece from Marketing Week about brand building being the most important factor in achieving business objectives. Seize its core discovery or nut graph (newsroom jargon for the central theme or key insight) and use it as your launchpad. Always attribute properly by crediting the original source and authors. Your goal isn’t to parrot the news but to offer a unique take and expert response.


Example prompt: As a content marketer at a marketing agency, your task is to create content in response to recent insights revealing that 33% of marketers prioritize brand advertising as their primary tactic, despite concerns over excessive focus on performance metrics. Develop three strategic story angles that explore the implications of these findings on B2B strategies. Each angle should be suitable for a thought leadership piece, opinion editorial, or critical analysis aimed at our audience of marketing leaders. Ensure to attribute and cite the original study titled “Brand building ‘most important’ factor in achieving business objectives” from Marketing Week.

Certainly! Here are three strategic story angles that explore the implications of prioritizing brand advertising in B2B strategies, tailored for a thought leadership piece or opinion editorial aimed at marketing leaders:

1. “Strategic Shift: Embracing Brand Building as a Core B2B Marketing Strategy”

Angle: Despite the prevalent focus on performance metrics such as ROI and lead generation, a significant segment of marketers (33%, according to Marketing Week) are prioritizing brand advertising as their primary tactic. This angle examines why B2B marketers are increasingly recognizing the strategic value of brand building in achieving long-term business objectives.

Key Points to Explore:

  • Building Market Resilience: Discuss how investing in brand advertising enhances brand resilience and market positioning, especially in competitive B2B landscapes.
  • Long-term vs. Short-term Goals: Analyze the balance between immediate performance metrics and the strategic benefits of building enduring brand equity.
  • Impact on Customer Relationships: Explore how a strong brand presence influences B2B customer relationships, trust, and loyalty over time.

Citation: According to Marketing Week’s study on brand building as the most important factor in achieving business objectives, this shift underscores a growing acknowledgment among marketers of the enduring impact and value of strong brand equity in B2B markets.

2. “Navigating Complexity: Integrating Brand Advertising with Measurable Outcomes in B2B Marketing”

Angle: In the era of data-driven decision-making, balancing brand advertising with measurable outcomes presents a challenge for B2B marketers. This angle delves into strategies for effectively integrating brand advertising into B2B marketing campaigns while demonstrating tangible ROI and performance metrics.

  • Strategic Alignment: Discuss methodologies and technologies for aligning brand advertising efforts with measurable performance indicators such as conversion rates and customer acquisition costs.
  • Metrics for Brand Impact: Propose innovative metrics for quantifying the impact of brand advertising on brand awareness, perception, and consideration in B2B purchase cycles.
  • Case Studies and Best Practices: Highlight successful B2B brands that have effectively balanced brand advertising with performance metrics to achieve strategic marketing goals.

Citation: According to Marketing Week’s insights, B2B marketers are increasingly leveraging brand advertising not only to enhance brand perception but also to drive measurable business outcomes, reflecting a nuanced approach to modern marketing strategies.

3. “Beyond the Buzz: Redefining Success in B2B Marketing Through Brand Advertising”

Angle: Beyond immediate metrics, the strategic emphasis on brand advertising signifies a broader shift towards redefining success in B2B marketing. This angle explores how prioritizing brand advertising can redefine success metrics and position companies for sustained growth and market leadership.

  • Creating Competitive Advantage: Discuss how a strong brand narrative and consistent advertising can differentiate B2B companies in crowded markets and attract high-value clients.
  • Long-term Value Proposition: Analyze the role of brand equity in enhancing the overall value proposition of B2B offerings and influencing customer lifetime value.
  • Innovation and Adaptation: Explore strategies for innovating within brand advertising to stay relevant and adaptable to evolving market dynamics and customer preferences.

Citation: As highlighted by Marketing Week’s study, the strategic prioritization of brand advertising underscores its pivotal role in driving sustainable growth and market leadership in B2B sectors, emphasizing the importance of long-term brand building efforts.

These story angles aim to provoke thoughtful discussion and provide actionable insights for marketing leaders navigating the evolving landscape of B2B marketing. Each angle is grounded in the latest insights from Marketing Week to provide credibility and relevance to the discussion around brand advertising in B2B strategies.

Step 2: Assemble your A team

Choose your favorite angle and team up with three subject matter experts to blend diverse perspectives in an in-depth analysis. Merge AI-generated insights with SME voices to craft compelling content that speaks directly to your audience’s interests and concerns.

For this exercise, let’s pick the second angle, “Navigating Complexity,” and use generative AI to create email templates that make sourcing that expertise a breeze.

Example prompt: Utilize “Navigating Complexity: Integrating Brand Advertising with Measurable Outcomes in B2B Marketing” as a unique perspective to pitch an opinion piece integrating insights from three experts within your organization. Develop a concise email requesting quick quotes from each expert, tailored to their expertise in strategies, advice, or tips for our customers. Highlight the topical urgency with a 48-hour deadline, emphasizing the timeliness of the topic.

Your experts—VP of Product, Head of Insights, and VP of Strategy—are pivotal. Ensure your pitch addresses the insights from the Marketing Week article titled “Brand building ‘most important’ factor in achieving business objectives,” specifically citing: “While 33% of marketers prioritize brand advertising, 49.8% acknowledge an excessive focus on performance marketing.” Properly attribute these findings for credibility.

Subject: Quick Quotes Needed: Expert Insights on Integrating Brand Advertising with Measurable Outcomes

Dear [Expert’s Name],

I hope this message finds you well.

We are currently preparing an opinion piece on “Navigating Complexity: Integrating Brand Advertising with Measurable Outcomes in B2B Marketing” and would greatly value your unique perspective. This topic is of critical importance, particularly in light of recent insights from Marketing Week, which highlight that while 33% of marketers prioritize brand advertising, 49.8% recognize an overemphasis on performance marketing. These findings underscore the need for balanced strategies that merge brand building with tangible results.

Given your expertise in [specific area related to their role], we are seeking your quick input on this subject. Specifically, we would appreciate a brief quote (2-3 sentences) on:

  • Strategies: What key strategies should businesses adopt to effectively integrate brand advertising with measurable outcomes?
  • Advice: What actionable advice can you offer to marketers struggling to balance brand and performance marketing?
  • Tips: Any practical tips you have for ensuring brand advertising efforts contribute to clear business objectives?

Could you please provide your insights within the next 48 hours? Your timely response will greatly enhance the depth and relevance of our piece.

Thank you in advance for your contribution.

Best regards,

[Your Full Name] [Your Position] [Your Contact Information] [Your Company]

Once you have the angles and the expert intel, creatively bring it all to life in the formats of your choice. Getting your news-focused perspectives and trend-setting concepts into the world before they grow stale is essential, so act fast.

Remix responsibly

Content remixing with generative AI mirrors the music industry’s remix culture — reinventing the familiar to captivate audiences anew with due regard for the original work and its essence.

Given that ethics and intellectual property are especially murky waters in the realm of generative AI, err on the side of caution. Only use prompts with content that your brand owns. In rare instances when using someone else’s data point or finding as a launchpad (as seen in the fourth remix above), always credit the originator. With AI’s tendency to hallucinate, triple-checking content for accuracy and unexpected elements is paramount.

Because AI relies on formulaic language, you also need to be wary of its tendency to produce lackluster, robotic-sounding copy. A mere glance at the outputs generated above reveals how generic they are. And some tell-tale words and phrases are a dead giveaway that AI has generated the copy: “I hope this email finds you well,” “in an evolving landscape,” “unleash,” and “delve” are a few examples among many.
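One pragmatic guardrail is a quick screen for those giveaway phrases before a human edit. The sketch below is a blunt checklist, not a reliable AI detector; the phrase list and function name are illustrative:

```python
# A quick, admittedly blunt screen for tell-tale AI phrases.
# It flags phrases for a human editor to rework; it does not
# reliably detect AI-generated text.

TELL_TALES = [
    "i hope this email finds you well",
    "in an evolving landscape",
    "unleash",
    "delve",
]

def flag_ai_phrases(copy):
    """Return the tell-tale phrases found in the copy (case-insensitive)."""
    lowered = copy.lower()
    return [phrase for phrase in TELL_TALES if phrase in lowered]

draft = ("I hope this email finds you well. Let's delve into how brands "
         "can thrive in an evolving landscape.")
print(flag_ai_phrases(draft))
# → ['i hope this email finds you well', 'in an evolving landscape', 'delve']
```

Extend the list with whatever clichés you notice recurring in your own AI drafts, and treat every hit as a prompt to rewrite in your brand’s voice.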

As a result, flexing your own writing skills and ensuring strict editorial oversight will be more important than ever in the age of AI-assisted content creation. The generated prompts in this article are only part of the process. Treat them as thought starters rather than a proxy for this type of original thinking.

This list of do’s and don’ts should sound familiar to any ethical editorial professional:

  • Do respect copyright and intellectual property laws. When in doubt, ask.
  • Do review, fact-check, and edit AI outputs before finalizing to ensure quality control.
  • Don’t feed confidential or sensitive data to generative AI tools, as these inputs might enter public datasets.
  • Don’t replace expert writers with generic AI responses. Instead, use AI to conjure ideas and complement their expertise.

Employ meticulous oversight and uphold high ethical standards, and your creative remixes will produce more smash hits.

Grab your generative AI and get remixing

These four remix ideas should inspire you to go beyond single, often dead-end prompts. I’m not entirely sold on the idea that generative AI is all about efficiency when it comes to creativity. Sure, it can speed things up, like a turbo boost for brainstorming sessions. But let’s face it, real creativity is more than just churning out ideas — it’s about those light-bulb moments, those gut feelings, and the kind of spark that AI hasn’t quite figured out how to replicate. AI can crunch numbers and spit out options, but when it comes to genuine, off-the-wall creativity, humans still have the edge (and responsibility).

Please note: All tools mentioned in this article were suggested by the author. If you’d like to suggest a tool, share the article on social media with a comment.

Register to attend Content Marketing World in San Diego. Use the code BLOG100 to save $100. Can't attend in person this year? Check out the Digital Pass for access to on-demand session recordings from the live event through the end of the year.

HANDPICKED RELATED CONTENT:

  • How To Reboot Your Old Content (and Create Future Treasures)
  • 5 Steps You Can Take Now To Make Future Content Updates Easier
  • 7 Generative AI Prompts To Help Your Content Marketing Workflows
  • How To Work AI Into Content Marketing (in a Way That Works for You)
  • Harness AI To Harmonize Your Brand Voice: A Step-by-Step Guide

Cover image by Joseph Kalinowski/Content Marketing Institute

Lieu Pham


  15. Development of Art in The Digital Era

    Art is modernizing day by day which leads us to practices of art in this culture. Modern techniques like computers are basically having a huge impact on visualizing the art. Even though, digitalization has turned the whole world of art. The thesis behind the essay is positive attitude towards digital art and it's exploration in digital world.

  16. Why Digital Art is Better than Traditional Art

    The second reason digital art is better than traditional art is because of how easy it is to master it. With new advances in technology, applications are able to show videos and real-time tutorials on how to draw. For example, the 3DS game Disney Art Academy shows tutorials on how to draw Disney characters. With the new advances in digital art ...

  17. The Digital U-Turn in Art History

    Summary Over the past decade humanities researchers have increasingly come to embrace digital methods. Art historians, however, have often resisted engaging with these developments. In this article, we explore the driving factors behind art history's reticence toward the digital turn in the humanities. Reflecting on the historiographic trajectory of the emerging field of digital art history ...

  18. 6182 PDFs

    Explore the latest full-text research PDFs, articles, conference papers, preprints and more on DIGITAL ART. Find methods information, sources, references or conduct a literature review on DIGITAL ART

  19. Impact of Digital Technologies on Contemporary Art Essay

    This paper looks at the impact digital technologies have had on ideas about and uses of drawing in contemporary art.

  20. Digital Art Essay

    Digital Art Essay. Digital art is a general term for a range of works by the artists who use digital technologies as a major part of the processes of creation and representation (Paul, 2006). Specifically, digital technology refers to the application of computers as media and partners for the artists in creating art works.

  21. Digital Art Exhibition Review

    Digital or new media art has greatly transformed the world or art. Digital art emerged in the 1970s and has gained immense popularity with advancement in digital technology.

  22. Digital Art Essay

    Digital Art Essay. Introduction to artistic software. Digital art is an artistic work that artist uses different technology software to produce creative art. All digital art need a context that is defined by specific software and hardware. However, Digital art must be evaluated from its contents and aesthetical point of view, in order to be ...

  23. Digital Art (300 Words)

    Digital art is a general term for a range of artistic works and practices that use digital technology as an essential part of the creative and/or presentation process. Since the 1970s, various names have been used to describe the process including computer art and multimedia art, and digital art is itself placed under the larger umbrella term ...

  24. Persuasive Essay On Digital Art Theft

    Persuasive Essay On Digital Art Theft. Throughout the last year, I 've been seeing more and more posts warning artists about art thieves and stolen art. As the age of modern technology comes upon us, modern art forgery appears in the form of digital art theft. As a non-physical product, digital art is incredibly easy to duplicate, steal and sell.

  25. How To Turn Old Content Into a New Work of Art With an AI Assist

    Ready to put out a new album filled with remixes that will prompt your audience to groove? These step-by-step processes lets you remix four types of albums using your original work, a generative AI assist, and some creativity.