Clara Ferrão Tavares
GERFLINT – Groupe d’études et de recherches pour le français langue internationale
ORCID iD: 0000-0002-7959-0757
Excerpt from an article published in Portuguese
2. “See,” what? Multimodal Communication
This was the question that arose in the very first years of my career as a teacher and pedagogical supervisor, which led me to my specialization in the didactics of languages and cultures, and brought me to the issue of pedagogical communication as the core of my research. An apparently insignificant, even anecdotal, reason led me to prioritize nonverbal communication in articulation with the verbal dimension within the dynamics of the foreign language classroom. It stemmed from the following comment by my internship supervisor: “you made too many gestures and put your hands in your pockets.” Nonverbal communication was the subject of two theses defended at the University of Paris 3 – Sorbonne Nouvelle.
Later, throughout my research path, “zones of proximity” between school and the media emerged from the observation and analysis of classes and other learning spaces, especially media-based ones or those generated by technological devices such as learning platforms, blogs, and social networks… Expanding my corpus to include pedagogical situations in media contexts, not always explicitly pedagogical ones, led me to question other domains of knowledge and ultimately brought me to what I now call multimodal communication.
Briefly, I will attempt to present the theoretical framework which, over the years, has integrated different theories and models. Thus, before arriving at the concept of multimodal communication or modality, it is necessary to return to pragmatics and the “orchestral” conception of communication, developed in the 1950s in the context of research conducted by scholars of the so-called Palo Alto School. For P. Watzlawick, a researcher of this American school, “one cannot not communicate,” since “one cannot not have behavior” (Watzlawick, 1967, in Winkin). This total or “orchestral” conception of communication challenged the linear models of communication that emerged from telecommunications engineering and were adopted in applied linguistics (often imported uncritically into Portuguese classes at the end of the last century), which postulated a dissociation of functions between sender and receiver. In Watzlawick’s logic, a distinction emerges with implications for the didactics of languages and cultures: the distinction between content and relationship. When we produce a verbal utterance, we simultaneously give our interlocutor “instructions” on how to process that verbal content, often through nonverbal behavior. When the content and the relationship contradict each other, we speak of paradoxical behavior or of a “double bind.”
The studies of this school had a major impact on the analysis of communication in various contexts such as political, professional, and pedagogical communication. Interest in the study of “multichannel” communication, as it was then called, gave rise to methodological tools for analyzing verbal, paraverbal, nonverbal, and iconic interactions. For example, classroom observations in my corpus revealed frequent occurrences of “double bind” behaviors by teachers towards certain students. Without the possibility of metacommunicating—of telling the teacher whether they should interpret the positive message conveyed by the verbal appreciation “very good” or the negative nonverbal evaluation conveyed by a posture of withdrawal, quick hand gestures, the absence of a smile, or even lack of eye contact—students often find themselves in a “double bind” situation, eventually distancing themselves from the teacher and the subject of study.
The distinction between relationship and content brought Palo Alto researchers closer to the concept of “multimodality.” However, I do not believe I encountered the term in the texts of this school, although Watzlawick does refer to “digital and analog modes.”
The term multimodality began to appear, at the end of the last century, in the field of transportation; from the 1980s onward it was adopted in nanotechnology, the language sciences, the didactics of languages and cultures, and the educational sciences, helping to illuminate many dimensions of communication in pedagogical contexts (Bruner, 1985; Jacquinot, 1997, cited in Ferrão Tavares, 2013).
The definition I propose was developed from research conducted in media and pedagogical contexts, and it differs from definitions that, as I mentioned, reduce multimodality to a mere superimposition of supports or languages (multichannelity). It relies essentially on J. Cosnier, who emphasizes that “[t]he body is not only an essential support of mental activity, as shown by its role in enunciative activity, but also an essential instrument of relational activity with the world and with others” (Cosnier, 2007: 20). Following the definition proposed by Cosnier (2008), I thus define multimodality as a cognitive, relational, and empathic process that encompasses a set of interacting modes, involves the body (the embodiment of verbalization and thought), and manifests itself in the actions of those involved in the communication situation (space and time). Multichannelity is one component of multimodality, but it is limited to the level of languages and channels, whereas multimodality is constructed in relation to the subjects who share a communication space and time; it involves complex dimensions, particularly neurological ones, that determine or are determined by the body and its interaction with various communicative devices.
Given this definition, it is difficult, even paradoxical, to distinguish dimensions that are evidently interconnected and that operate through accumulation or compensation; nevertheless, for expository reasons, in an article centered on “seeing” and “learning to see,” I must focus on the different planes separately. Space is the first mode to be “seen” in a classroom, since it plays a functional role, either facilitating (sociopetal) or hindering (sociofugal) communication. For example, in traditionally arranged classrooms, the back-corner seats are usually chosen by students who do not want to participate. Unconsciously, they perceive that the teacher’s gaze extends like a funnel, “excluding” them from the class: among students in the back rows, only those in the center will be observed. Likewise, students sitting too close to the teacher (for example, those seated next to the teacher in a seminar setting) struggle to capture his or her gaze. Since eye contact is fundamental to yielding the floor or taking the initiative to speak (as highlighted by C. Goodwin and R. Sommer, cited in Ferrão Tavares, 1999), students located outside that angle will have fewer opportunities to speak (Ferrão Tavares, 1999).
Space is used differently depending on the activities or even on the type of discourse emphasized in various sequences. Normally, acts of explanation or instruction involve a static position of the teacher near the podium or the board—spaces of authority but also functional ones. Reading aloud, summarizing a work by the teacher or by a student, explaining a diagram on the board or a multimedia presentation, or using an interactive whiteboard all require the “explainer” to face the audience, in a panoptic position. In contrast, group facilitation requires movement and proximity.
One might have expected that with technology, classroom spaces would have changed. Yet not only have physical spaces remained the same, but the presence of computers and even tablets has eliminated the possibility for teachers and students to look each other in the eye, a condition which, as mentioned, is essential for carrying out certain activities. Focusing the gaze on the projection screen often harms interactive dynamics, and individual computers confine students to their own space. Paradoxically, the co-construction space of learning, the traditional blackboard, where teacher and students erased and rewrote texts and diagrams in response to interactions, has been replaced by the fixity of “PowerPoint”; it is no surprise, then, that this tool is often seen as an instrument of “deadly boredom.”
Today, the classroom should have broken down its walls. The world should enter it, with its doors left open. Yet paradoxically, one does not see many extensions of the classroom on the Internet. And when discussions arise about abolishing textbooks and adopting Internet-centered practices, there is in fact a reinforcement of traditional dimensions in many virtual pedagogical materials and devices. Moreover, opening the classroom to “research” requires time and preparation on the part of both teachers and students. The lack of such preparation can contribute to reinforcing inequalities. Much has been said about the “multitasking” abilities of so-called “digital natives.” Yet today, various studies show not only that this ability is limited from a neurological perspective, but that it is often accompanied by difficulty concentrating on school tasks, on the teacher’s discourse, or on that of peers. Digital natives do not want to remain seated in class, yet paradoxically they have spent the past decades sitting in front of computers. With mobile Internet, the situation has worsened: the bedroom space has shrunk to the mobile phone screen. Students are often in the same room without seeing one another. To think, to listen… students also need to sit, and they need to learn to look, to see.
Linked to space is time. The time frame shaped by technologies has contracted and temporal distances have blurred; the school must, on the one hand, adapt to the pace of technologies and, on the other, ensure the time necessary for reflection, using tools and instruments to collect information and to compare, analyze, and evaluate it — conditions required for students to adapt to the future.