I think that cinema is changing in profound ways in the digital era. This is due in the first place, of course, to digital technologies, which have revolutionized every aspect of film production, distribution, and exhibition. But it is also due to larger social, political, and economic changes -- to the processes that go under the name of neoliberal globalization.
This claim is not a novel proposition -- it is something that everyone is aware of, and that everyone is responding to -- even when the responses take the form of resistance or denial. And yet, despite our collective awareness that things are radically changing, I don't think that we have succeeded, as yet, in taking the full measure of all this change.
In this respect, it is hard to escape a certain shortsightedness. Some of the characteristics of the new digital technologies that seem at first glance to be crucial may turn out to be less important in the longer term. For instance, in the past I made much of the fact that digital recording devices are no longer indexical, in the way that analog cameras used to be. The direct causal connection between things in the world and the photographic image of these things is broken, once photosensitive film is replaced by an array of digital sensors. This might seem to trouble, or to place into crisis, André Bazin's axiomatic insistence that photography does not merely represent external reality, but rather directly «transfers reality from the object depicted to its reproduction».
Like many other film theorists, I spent years worrying about the consequences of this shift. But today, I realize that I was wrong. Digital moving images are no less "real" than analog ones. It is true that CGI makes it possible to replace the Bazinian real with an entirely simulated environment. But it is equally true that smaller cameras make it possible to get much closer to actuality: to photograph real things with a proximity, and an attention to detail, that were never possible before. Whatever the merits and defects of Bazin's realism, these are not changed when images are captured through digital sampling rather than through photochemical reactions, or when sound waves are reproduced through digital calculation rather than analogical modeling.
Let there be no mistake. Digital technologies do indeed offer us new and different affordances for cinematic representation and expression than the older technologies did. But it is impossible to deduce these new affordances from the technical details alone. As Stanley Cavell puts it in his writing on film, «the aesthetic possibilities of a medium are not givens... Only the art itself can discover its possibilities, and the discovery of a new possibility is the discovery of a new medium». In other words, an art has no fixed set of essential formal potentialities. The expressive uses of a medium are not a closed set. They cannot be known in advance. They can only be discovered or invented, and then elaborated, in the course of actual audiovisual production.
In other words, we need to follow the artists, and see where their uses of new procedures lead. There is no such thing as a one-to-one correspondence between technologies and results, of course. Many excellent filmmakers still make films that look and sound like pre-digital works, despite their use of new digital tools. There are also some filmmakers -- Michel Gondry and Tarsem Singh particularly come to mind -- who exhibit a laudable streak of stubborn perversity: they go out of their way to produce works in which digital-seeming effects are in fact created through older, analog means.
Nonetheless, I do think that a new sensibility, or a new aesthetics, is in the process of emerging in filmmaking today. The changes are sometimes subtle, and difficult to discern at first glance. At other times, these changes are garish and ostentatious, which often leads to their being dismissed as mere gimmicks, or as vulgar and inartistic. I think we need to look more closely at the new digital effects, without overhastily dismissing them. I cannot pretend to give a thorough and complete list, but I would like to consider at least a few examples of the new tendencies cropping up in recent digital cinema.
One obvious tendency is ultrakinetic editing, especially in the action sequences of what Matthias Stork has dubbed "chaos cinema". It is generally accepted that, starting in the 1970s, Hollywood moved from traditional continuity editing to the style that David Bordwell calls intensified continuity, involving «more rapid editing... bipolar extremes of lens lengths... more close framings in dialogue scenes... [and] a free-ranging camera». But after 2000, with the help of new digital tools, directors like Michael Bay, Christopher Nolan, and the late Tony Scott have pushed these tendencies to the breaking point -- and beyond. They have developed a disjunctive editing style of which even Eisenstein and Godard never dreamed. Gunfights, martial arts battles, and car chases are rendered in sequences involving shaky handheld cameras and extreme or even impossible camera angles, together with composited digital material. These are all stitched together with rapid cuts, frequently involving deliberately mismatched shots. The sequence becomes a jagged collage of fragments of explosions, crashes, physical lunges, and violently accelerated motions. There is no sense of spatiotemporal continuity any longer; all that matters is delivering a continual series of shocks to the audience.
Many established critics reject and despise this style. But I think they are missing a crucial point, which is that this new stylistic opens up a new form of spatial and temporal relations. In post-continuity films, we learn to understand the world around us in a new way. We leave Newtonian space and time behind, and enter instead into the spacetime of modern physics, and of digital devices that operate beneath the threshold of ordinary human perception. Today, multiple communications and computing devices, working almost at the speed of light, reshape every aspect of our lived experience. We access data, and monitor the physical and social environment, with our phones. We are also the targets of data collection, which goes on at speeds and on scales that we cannot directly perceive. Meanwhile, financial supercomputers manipulate markets on a time scale of microseconds, producing new states of affairs over which we have no control, and which nonetheless shape our lives retroactively. Post-continuity editing styles remediate all these developments, by simulating for our eyes and ears, and making us feel, a kind of machine perception that affects us all the time, but that is otherwise phenomenologically unavailable to us. In this respect, even the crassest productions by the likes of Michael Bay are doing what cinema has done ever since its invention: showing us the world in new ways, giving us new perceptions, reproducing the world «in its own image» (as Bazin said), rather than in ours.
If digital tools enable fragmented editing on a scale beyond anything seen before, they also enable continuous single takes to an extent that was never previously possible. Directors like Welles used to display their virtuosity by means of dense and difficult sequence shots. And Hitchcock tried to hide his editing, in order to give the impression of a single long take, in Rope (1948). But today, not only do digital cameras allow for much longer continuous takes than were possible with film, but such technologies as motion control, together with compositing and digital intermediate in post-production, allow sound and image to flow far more seamlessly than ever before.
Many film theorists have expressed discomfort with these new possibilities. David Rodowick, for instance, argues that, even though Sokurov's Russian Ark (2002) was in fact shot in a single 93-minute take, the extensive use of digital compositing in post-production disqualifies it as a long-take film. There is no unity of events in space and time in Russian Ark, Rodowick claims, and therefore no true sense of cinematic duration. Rodowick's complaint is echoed by the many champions of "slow cinema," who praise the use of old-fashioned, non-digitally-enhanced long takes by such directors as Béla Tarr and Tsai Ming-liang. The "slow cinema" style is often presented by its champions as a noble form of resistance to new digital technologies.
As in the case of post-continuity editing, so here I would also like to suggest that these criticisms are based upon a fundamental misapprehension of digital technology. No less than post-continuity editing, composited digital long takes work to express new spacetime relations, in line with the ways that spacetime is reconfigured by new technologies and new socio-economic arrangements.
Consider the recent music video for the song Let It Be, by the British soul singer Labrinth (Timothy McKenzie). The video, directed by the production duo known as Us (Christopher Barrett and Luke Taylor), consists of an apparent single take, which moves through a single warehouse space. The camera glides and stops and zooms in and circles around and twists and turns and swoops, as it moves through this space. In different parts of the space, we see different skeletal groupings of fixtures and furniture, like the decors of various rooms in a home and in a recording studio -- but all incomplete and without walls or ceiling. In each of these spaces, we see Labrinth, sometimes alone, and sometimes with bandmates and friends, engaged in various activities, ranging from composing the song, to recording it, to having a business pitch meeting, to buying a car, to shooting a music video that features him getting out of the car, to hanging out in the living room watching TV, to standing alone in the kitchen drinking coffee, with the sink filled to the brim with dirty cups.
In the video, the camera moves smoothly from one of these events to another, placing them in the same warehouse space, and without regard for the temporal order in which they would have originally occurred. Usually the camera just contemplates one scenario at a time, but sometimes (and especially when the camera is gliding between them) we see several scenes on the screen at once, or other scenes in the background when one is in the foreground. A whole history -- the singer's life, on the one hand, and his specific experience of composing, pitching, recording, producing, and making a video for the song, on the other -- is compressed (or better, composited) within the confines of a single camera movement. The camera never holds still for very long; it is usually gliding, but it is always steady and never jerky or agitated. Presumably the videomakers laid down the elaborate camera movement first, and then used motion control technology to reproduce it, so that all the scenes shot separately would fit together into one seamless apparent take.
The camera movements follow the structure of the song, which we hear in completed form, even though the action shows it still in process. The camera follows all of the music's articulations: its repetitions (verse and chorus) as well as its build-up to a culminating crescendo. In effect, the incidents of the singer's life have been retrospectively reordered by the song itself. One form of temporality replaces another, or better, multiple others; the events are pressed into a unifying framework (that of the song, and the continuous camera movement), even as they are left as separate and discontinuous incidents (since they are each displayed separately, in schematized form). This suggests that digital compositing retains certain of the powers both of mise en scène and of montage. Instead of being opposed to one another (think of Bazin versus Eisenstein), these two formal procedures now interpenetrate and exchange characteristics.
Alongside ultrarapid non-continuity editing, and smooth yet heterogeneous composited long takes, consider such additional digital techniques as the following (with noteworthy examples from music videos):
Superimposed images (Anthony Mandler's video for Rihanna's Disturbia and Jake Nava's video for Lana Del Rey's Shades of Cool);
GIF-like loops of recurring images (Tom Beard and FKA twigs' video for FKA twigs' Papi Pacify);
Multiple temporalities on the screen at once (Jonathan Glazer's video for Radiohead's Street Spirit);
Use of the SnorriCam to alter relations between foreground and background (Motellet's video for Tove Lo's Habits).
In all of these cases, the traditional language of cinematic formal analysis no longer makes sense. None of these videos can be accurately described through categories like mise en scène (what exists in front of the camera in each individual shot), cinematography (what the camera itself does in the course of each shot), and editing (what is subsequently assembled out of the material recorded by the camera). Such divisions are no longer accurate for describing either the operations performed in the course of making the film, or the formal aspects of the completed film as it is experienced by its audience. We need new ways of understanding how cinematic processes work, because contemporary audiovisual works articulate images and sounds in new ways, both reflecting and creating new modes of sensorial experience for the 21st century. Rather than clinging to older technological and aesthetic modes, film critics need to find new categories and new language that are more adequate to the new modes of experience that we see and hear today.