
group effects could not be calculated as they were assumed to change every year over the course of 6 years. Therefore, simple t-tests were considered sufficient.

STUDY 1: COMPARING READING AND LISTENING SKILLS

At the end of six years, all students were tested on reading and listening skills. Measuring reading and listening skills is relatively straightforward because valid and objective tests are available from Cito. The scores of the groups were also compared to the national average. As these receptive skills are tested on the central final exams, they are considered extremely important by policymakers, school boards and teachers. Thus, L2 teaching programs usually spend vast amounts of time in the final three years training specifically for these exams.

STUDY 2: COMPARING WRITING SKILLS

At the end of the six years, all students were tested on their writing skills. The students were asked to write on a pre-assigned topic. All writing samples were rated by a group of trained experts by means of holistic scores using a detailed rubric, and by an automated analysis of morphosyntax (cf. Bartning & Schlyter, 2004), complemented by analyses of Complexity, Accuracy and Fluency measures such as text length, sentence length and the Guiraud index (Granfeldt & Ågren, 2014; sketched at the end of this section).

STUDY 3: COMPARING CHUNK USE IN WRITING

Nowadays, chunks are considered a crucial aspect of L2 development; they contribute to the fluency and authenticity of L2 use and may also speed up L2 development (Gustafsson & Verspoor, 2017). L2 studies have also shown that chunks are good indicators of proficiency level (Forsberg, 2010; Hou et al., 2018; Verspoor et al., 2012). For that reason, a third study was conducted to examine the frequency and use of different types of chunks in the writing of the SB and DUB groups.

STUDY 4: COMPARING SPEAKING SKILLS

At the end of the six years, all students were tested on their speaking skills. To be able to compare these skills between the two programs, a valid and reliable oral proficiency test had to be developed. The test was based on a test developed and validated by the Center for Applied Linguistics (CAL) in Washington: the Student Oral Proficiency Assessment (SOPA). The SOPA in turn is based on the Proficiency Guidelines (ACTFL, 2012) of the American Council on the Teaching of Foreign Languages (ACTFL), the American equivalent of the Common European Framework of Reference for Languages (CEFR), and makes use of a rating scale comprising 4 dimensions and 9 levels (Thompson et al., 2002). Because this test was designed specifically for children in elementary school foreign language programs, the content of the test was adapted to contain topics that are
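Two of the quantitative tools mentioned above can be stated compactly. The following is a minimal sketch, assuming the group comparisons are independent two-sample t-tests in the unpooled (Welch) form (the exact variant is not specified above), with sample means, variances and sizes for the SB and DUB groups; Guiraud's index is given in its standard definition, with V for the number of distinct word types and N for the total number of word tokens:

\[
t \;=\; \frac{\bar{x}_{\mathrm{SB}} - \bar{x}_{\mathrm{DUB}}}{\sqrt{s_{\mathrm{SB}}^{2}/n_{\mathrm{SB}} \;+\; s_{\mathrm{DUB}}^{2}/n_{\mathrm{DUB}}}}
\qquad\qquad
G \;=\; \frac{V}{\sqrt{N}}
\]

For example, a 300-token essay containing 120 distinct word types scores G = 120/√300 ≈ 6.9. Unlike the raw type-token ratio, dividing by √N makes the measure less sensitive to text length, which is why it is often preferred when essays differ in length.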
