Wim Gombert

CHAPTER 6

CODING CHUNKS

The student L2 French essays written in Word were first cleaned by removing anything personally related to the author (such as names) and messages intended for the teacher. Subsequently, each essay was coded for chunks independently by two raters: the researcher and an MA student who wrote her Master's thesis on chunks (Vandendorpe, 2020). All differences in coding were discussed and resolved. Coding was done using a Visual Basic for Applications (VBA) script (see Appendices D and E), previously created by van der Ploeg (2017) as part of Piggott's (2019) research project, with some slight modifications.

QUANTIFYING CHUNKS

The problem with quantifying chunks is that longer chunks may reduce the total number of chunks in a text; it is also difficult to count embedded chunks. Gustafsson and Verspoor (2017) considered a variety of ways to measure chunk usage and found that simply counting the number of chunks did not reflect the actual amount of authentic target language use, but "chunk coverage" did. Chunk coverage is operationalized as the number of words occurring in chunks divided by the total number of words in the text. Therefore, the current paper follows Gustafsson and Verspoor (2017) and Hou et al. (2018) in considering chunk coverage a valid measure reflecting chunk language use.

CAF MEASURES

For English, there are many tools to automatically ascertain complexity, accuracy, and fluency (CAF) measures. For French, such tools are not so readily available, but Granfeldt et al. (2006) developed a tool called Direkt Profil, originally designed to ascertain differences in language use between proficiency levels. The tool provides a great deal of detailed information at various morpho-syntactic levels, but we chose a few specific measures to operationalize complexity and accuracy. For complexity, we used Tense Use, the Guiraud Index, and Sentence Length.
Beginners use the present tense almost exclusively, but as they become more advanced, other tenses appear (Granfeldt & Ågren, 2014). Tense Use is therefore operationalized as the number of tenses used other than the Present Tense. The Guiraud Index has proven to be a reliable measure of lexical complexity for texts containing more than 200 words (Van Hout & Vermeer, 2007). Sentence Length is considered an excellent measure of syntactic complexity (Norris & Ortega, 2009; Oh, 2006; Yoon, 2017). For accuracy, we used Subject-Verb agreement and Determiner-Noun agreement, as Ågren et al. (2012) found that these agreement measures contribute significantly to accuracy in L2 French. For fluency, we used Text Length, following Chenoweth and Hayes (2001); it is operationalized as the total number of tokens in the text.
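As an illustration, the chunk coverage and text-based measures described above can be sketched in a few lines of code. This is a simplified sketch, not the VBA script or Direkt Profil output used in the study: the tokenizer, the sentence splitter, and the `(start, end)` chunk-span representation are all assumptions introduced here for the example.

```python
import math
import re

def tokenize(text):
    """Naive word tokenizer (a simplifying assumption, not the study's tool)."""
    return re.findall(r"[\w'-]+", text.lower())

def chunk_coverage(tokens, chunk_spans):
    """Chunk coverage: words occurring in chunks / total words in the text.
    `chunk_spans` is a hypothetical list of (start, end) token-index pairs."""
    words_in_chunks = sum(end - start for start, end in chunk_spans)
    return words_in_chunks / len(tokens)

def guiraud_index(tokens):
    """Guiraud Index: number of types divided by the square root of tokens."""
    return len(set(tokens)) / math.sqrt(len(tokens))

def mean_sentence_length(text):
    """Mean tokens per sentence, splitting on ., ! and ? (an assumption)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(tokenize(s)) for s in sentences) / len(sentences)

# Toy example (invented, not taken from the study's data):
text = "Je vais au cinéma. Il y a beaucoup de films."
tokens = tokenize(text)
print(len(tokens))                                # 10  (Text Length / fluency)
print(round(guiraud_index(tokens), 2))            # 3.16
print(mean_sentence_length(text))                 # 5.0
print(chunk_coverage(tokens, [(1, 3), (5, 8)]))   # 0.5, e.g. "vais au", "y a beaucoup"
```

In this sketch a text of 10 tokens with 5 tokens inside coded chunks yields a chunk coverage of 0.5; because all 10 tokens are distinct types, the Guiraud Index equals 10/√10 ≈ 3.16.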