Instructional approaches: differences between kindergarten and primary school teachers
1 Introduction
Improving children’s literacy is a focus of education policy in many countries, because literacy is an important life skill in its own right, and also essential for accessing the wider curriculum at school. It is therefore important to identify approaches to teaching literacy that work well, especially for those who struggle with reading and writing. This identification is not easy, and there are many unevidenced claims made by developers and advocates of particular approaches.
The last two decades have seen a proliferation of instructional practices available to schools, and promotion of the use of technology, software, and scripted curricula to aid teaching, all of which promise to improve pupils’ literacy. Many have not been robustly or independently evaluated. Under pressure to produce results, schools may be misled by anything that promises success. It is important that schools know which approaches are not evidence-informed, as these may actually do more harm than good, and which are more promising.
In England, the Education Endowment Foundation (EEF) Teaching and Learning Toolkit provides a potentially useful resource for schools on evidence-led approaches. The Toolkit summarises the evidence for some of the commonly known and tested approaches based on meta-analyses of prior research. Such meta-analyses average the ‘effects’ across all studies included, and these studies may vary considerably in terms of quality (e.g. level of attrition), phase of education and outcome measures (both type and quality). These quality factors can affect the apparent ‘effect’ size of an intervention. For example, larger studies are more likely to produce smaller effect sizes than smaller studies (Slavin & Smith, 2009), and studies that use measures related to the intervention tend to show bigger effect sizes than those using treatment-independent measures (Slavin & Madden, 2011). Therefore, averaging effect sizes across studies can mask many issues relating to quality. The authors of the EEF Toolkit are aware of such difficulties, and are taking steps to address them (EEF, 2018).
Evaluating single studies from scratch takes considerably longer than simply aggregating effect sizes. Consequently, the evidence for a number of widespread classroom practices remains unclear. This paper addresses this problem by considering the evidence from individual studies that evaluate common approaches used in the primary classroom for literacy (notably reading and writing skills), including those meta-analysed without quality control for the Toolkit, to provide a best-evidence summary for teachers.
2 Improving primary literacy
Improving children’s literacy, especially that of children from poorer backgrounds, has been a concern of successive UK governments. This concern stems in part from children’s relatively poor performance in international comparisons. In England, only 75% of children reached the expected standard in reading, and 78% in writing, by the end of primary school (Department for Education [DfE], 2018). The figures are lower for disadvantaged children eligible for free school meals. This is a problem because literacy is a fundamental gateway to further learning. Pupils who struggle to reach the ‘expected’ level of reading in primary school generally find it hard to access the full secondary curriculum, with implications for their subsequent learning (Wolf & Katzir-Cohen, 2001; Pikulski & Chard, 2005) and their later lives (Kuczera, Field, & Windisch, 2016).

Previous reviews have identified a range of strategies for improving reading and writing. Wanzek and Vaughn (2007), Wanzek, Wexler, Vaughn, and Ciullo (2010), and Wanzek et al. (2013) recommend that interventions for pupils with reading difficulties and disabilities should be provided as early as possible, and via small groups. Marulis and Neuman (2010)