This project lies at the gesture-sign interface. Co-participants in dialogues
organize their conduct through a wide array of meaningful resources,
including manual and gaze practices, some of which act as interactive
means that regulate social interaction. Signers perform similar
movements. However, the interactional components of sign language (SL)
conversation, and their relation to aging, remain unexplored. This
research provides a quantitative
and qualitative description of palm-ups, index finger extended gestures,
holds, (self-) adaptors, and gaze shifts in the context of signed interaction,
and how they compare to those used in spoken interaction among older
adults. The objectives are: (1) to study the distribution and frequency of
these five units in LSFB and BF as well as their semantico-pragmatic
functions; and
(2) to investigate their interactive roles in LSFB and BF to compare
contrastively how they participate in the management of social interaction in
each language under the same methodology. Annotating the data in ELAN,
we hypothesize that meaning is not only conceptual but also
interactionally designed. Rather than opposing gesture and sign, this study
argues for their integration as part of a continuum at the interactive level.
Our claim is that both signers and speakers use the full range of these
five markers of meaning. This conception shifts the focus away from
viewing the manual channel as purely linguistic in SL interaction and as
nonlinguistic in spoken interaction. The project draws on data from three
multimodal corpora:
(1) the LSFB corpus, made up of dyadic conversations between 2 pairs of
signers, 2 male and 2 female (≥ 66 years old); (2) the CorpAGEst corpus,
with 4 female speakers (≥ 75 years old) recorded in semi-directed
interviews; and (3) the FRAPé corpus, with 2 pairs of female speakers
(≥ 66 years old). The FRAPé corpus enables the first multimodal
cross-linguistic analysis between LSFB and BF.