Issue 2, July 2022

Editorial Note

by Tan Heng Kiat, Kelvin (NIE)

Despite our best efforts, assessment in the form of tests and examinations still dominates teaching and learning in our schools. We term our large-scale national examinations in Singapore ‘high stakes assessment’, which perpetuates the impression that only national examinations are important and have ‘stakes’. Assessment that is not described as ‘high stakes’, such as assessment for learning practices in schools, is implied not to have anything at stake and to be unimportant.

What, then, is at stake when it comes to assessment for learning? And what is at stake for our learners in assessment beyond performing in tests and examinations? Perhaps we should ask less what our learners should be doing for assessment, and ask more what assessment is doing for all our learners. This is the recurring focus of the articles in this AFAL issue – What should assessment, in particular assessment feedback, be doing for all our learners?

We begin by exploring the alignment between teachers’ assessment literacy and students’ learning outcomes, and identify concrete steps for directing what teachers learn about feedback literacy to systematically and eventually lead to students’ learning.

In order for assessment feedback to focus on students’ learning, it is vital to change mindsets from assessment feedback as (mere) provision of advice, to feedback as a systematic pedagogy. In that context, Professional Learning Teams from Edgefield Secondary School and Riverside Secondary School collaborated with NIE assessment faculty to develop a Feedback Pedagogy for their learners (PG 02/21 THKK). Four feedback pedagogies in the context of English Language, Malay Language, Humanities, and Chinese Language are featured. Such feedback pedagogy may be unpacked in three common phases – (a) preparing students to receive feedback, and in some instances receiving their requests for feedback, (b) engaging students with feedback advice and dialogue, and finally (c) supporting students in responding to feedback.

In the context of primary education, we feature two examples from Chua Chu Kang Primary School of supporting students in assessment to optimise their learning. “Initiate to Differentiate” describes the initiative of an entire MTL department to differentiate assessment for different learners. The school-wide implementation of Growth Mindset provides an example of supporting mindset change for teachers and students, and the emphasis on growth in turn keeps minds focused and receptive to optimising learning from assessment.

Finally, the H.E.A.L. approach to assisting students with assessment stress originated with students with learning difficulties in mind, but its systematic and affirming practices would be useful for any student who would encounter undue and unhelpful stress from assessment. That is probably something we need as educators ourselves in our strenuous assessment literacy efforts!      

On behalf of the editorial team, let me thank you, our readers, for the pleasure of supporting you in your assessment and learning efforts. As a community, the editorial team periodically shares with each other what we are doing and learning in assessment, and how (and how well) our various efforts in our different contexts help all our learners. The AFAL bulletin has certainly helped us to remain focused on our own learning, and clarified our identity and role as learners in assessment.

In that context, we are pleased to share conversation pieces by Ms Devi Durka and Mr Tan Ken Jin. We hope that the contents of this issue will spark your learning, provoke new thoughts on assessment that ensures learning for all learners, and lead to opportunities for meaningful and thoughtful assessment conversations that you can enjoy as well!

The Four Boxes of Assessment Feedback Literacy

Aligning Assessment Literacy to Learning Outcomes

by Tan Heng Kiat, Kelvin (NIE)

Assessment feedback efficacy should be understood as a vital component of Assessment for Learning (Tan, 2013). According to the Ministry of Education, Assessment for Learning “is primarily used for ensuring that the intended learning outcomes are achieved by students”. Hence, how well assessment is understood and used for learning should be directed towards ensuring that students achieve their intended learning outcomes. There is therefore a need to make connections between what teachers know about assessment, and how that eventually and systematically leads to students’ attainment of learning outcomes.

Four boxes can be used to illustrate how assessment literacy may lead to learning outcomes. This four-box theory of aligning teachers’ assessment literacy to students’ learning outcomes may be represented in the context of assessment feedback as follows: 


Given that assessment and feedback is one of the four teaching processes of the Singapore Teaching Practice, it is common for teachers in Singapore to spend a great deal of time attending professional development workshops and courses in assessment feedback literacy. It is in the interests of teachers and students for new knowledge of assessment feedback theories and practices to be translated into enhancing or even ensuring learning for students.

However, teachers who attend such training but fail to apply their assessment literacy knowledge to their classrooms and school contexts may be said to remain only in Box 1. Application of and reflection on assessment feedback theory is therefore required to ensure that teachers are able to move to Box 2 to adjust or improve their assessment feedback practices. This requires intentionality on the part of teachers, in particular to be clear whether feedback for students is intended at the task, process, and/or self-regulation levels (Hattie & Timperley, 2007).

However, what teachers intend for their students to derive from teachers’ feedback comments would not be actualised if students do not read or act on the feedback in the first place. Hence, to ensure that teachers’ feedback practices are not stuck in Box 2, how students understand and are inclined to act on feedback needs to be considered. This therefore requires teachers to ensure that feedback is actionable for students, and to persuade students of the utility and benefits of acting on feedback. Such agentic engagement with feedback is termed proactive recipience by Winstone et al. (2017), and is defined as ‘a state or activity of engaging actively with feedback processes, thus emphasizing the fundamental contribution and responsibility of the learner’ (p. 17).

Whilst Boxes 1 and 2 represent what teachers need to know and do, Box 3 represents the reality that feedback efficacy ultimately depends on what students actually do with feedback, rather than on what kinds of feedback are provided by teachers. But proactive recipience of feedback in itself only brings assessment feedback efficacy to Box 3. To ensure that Box 4 is reached as a goal, i.e., for students to actually benefit from acting on feedback, the focus needs to shift from students’ activity with feedback to students learning from feedback.

The journey from Box 1 to Box 4 represents a chronology that begins with assessment knowledge, moves to application by teachers, then to action by students, and finally to the learning benefits for students. However, the thought process of a curriculum leader starts with the end in mind, and begins with the vital curriculum question of what students need to learn (Box 4). This then indicates what students need to do (Box 3) in order to process, enhance, and evidence their learning. The question(s) that Box 3 poses about learning in turn prods the teacher to consider the feedback advice and actions (Box 2) that enable such learning to take place. And finally this raises questions of what else teachers need to know about assessment feedback theory (Box 1) that may improve their feedback advice and enactment.

Given the vast amounts of time and effort we expend on feedback, as well as on attending various assessment courses and workshops, it would be timely and useful to consider whether and how we can ensure that feedback achieves its ultimate purpose of enhancing or ensuring learning. Obviously, we should look beyond knowledge telling in our assessment literacy efforts, and make concrete efforts to translate what we have learnt into improved practices that students can actively use for their learning. More importantly, we should also remember our educative roles as curriculum leaders, and ensure that clarity about what students should learn, and about the forms of evidence and actions of such learning, is the starting point of our assessment endeavours.

References 

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Tan, K. H. K. (2013). A framework for Assessment for Learning: Implications for feedback practices within and beyond the gap. ISRN Education, 1–6.

Winstone, N., Nash, R., Parker, M., & Rowntree, J. (2017). Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52(1), 17–37. https://doi.org/10.1080/00461520.2016.1207538

English Language Feedback Pedagogy

Student Engagement with Feedback: 

Feeding Up instead of Getting “Fed Up” with Giving your Students Feedback

by Koh-Ng Yui Yun
     Brennan Kwa
     Shirley Yeo-Tham
     (Edgefield Secondary School)

The search for effective feedback strategies has been the focus of the English Language teachers in Edgefield Secondary School since 2020. During department meetings, discussions on marking effectiveness had focused on the use of band descriptors, detailed comments and marking symbols to improve student achievement. The opportunity to work with NIE to examine our feedback pedagogy proved helpful and insightful. We learnt from local research that EL teachers were giving copious amounts of feedback on students’ work; however, there was limited uptake of the feedback by students, who may have had difficulty digesting the comments and been overwhelmed by their many grammatical errors.

The transferability of feedback became the focus of discussion for the English Language department during the Professional Learning Team (PLT) sessions. In 2021, a group of teachers looked into the factors that influence students’ transference of learning and their motivation to apply teacher feedback to revise their work. The study examined the affective, behavioural and cognitive dimensions of the students’ feedback engagement. EL teachers’ beliefs about student engagement with feedback, and how these beliefs impact the transference of learning, were also considered. Teachers opined that encouraging student voices, views and metacognition was essential in the feedback process:

Feedback as a pedagogy needs to be 

  • underpinned by positive teacher-student relationships of trust and patience;
  • routinised, multifaceted, and manageable for better uptake of feedback; 
  • formative, iterative, concise and specific before it can impact the transference of learning.

Students need to
  • see feedback as a tool for improvement; 
  • be motivated to engage with feedback and work on their revisions; 
  • appreciate feedback.

Teachers from the PLT set out to explore student engagement with feedback at the Secondary 1 level with essay writing, and continued to enhance feedback pedagogy at the Secondary 2 level with Situational Writing and at the Secondary 4 level with Literature essays, improving on feedback practices applied since 2020.

Rethinking Student Engagement with Feedback: Secondary 1 Express English Language Essay writing

Teachers involved in the PLT in 2021 experimented with the use of rubrics as a starting point to surface anticipated learning gaps in two Sec 1 classes. It was hoped that the rubrics would help students realise their language gaps in language accuracy, verb forms and punctuation. This realisation would lead to a desire to self-regulate and an undertaking to improve their writing skills. Ultimately, students would be motivated to receive feedback from their teachers, respond to the feedback by revising their work, and apply their learning to future pieces of written work.

With the Sec 1 classes, students’ engagement with feedback began with their understanding that good writing involves being mindful of organisation, paragraphing, content and language accuracy across the different parts of speech in the English language. To guide students towards the success criteria for continuous writing, a set of rubrics was created by adapting the UCLES assessment rubric. Teachers focused on content and relevance, together with paragraphing, to help students write coherent argumentative essays.

Watch this video to learn about the enhanced feedback practice designed to encourage students’ uptake of feedback in the Sec 1 EL classroom.

The Situation with Situational Writing: Secondary 2 Express English Language

In our Secondary 2 English Language classes, for Situational Writing, students often grapple with understanding the situational writing rubrics and with making use of the comments and feedback provided on their scripts.

Oftentimes, even after copious annotations and markings on the scripts, most students would, when submitting another piece of situational writing, repeat the same mistakes that had been pointed out earlier.
 
Recognising that gap, we created a checklist of the marking codes and an explanation of the comments and rubrics. When students received their marked situational writing pieces, they would go through the checklist and rubrics and make sense of the comments given. This would then culminate in a self-evaluation form on their scripts to identify areas for improvement in their work.
 
The gratifying aspect of this particular project was that teachers were actually able to respond to students’ feedback in their self-evaluation, making a previously monological process a dialogical one.

Watch this video to learn about how the teacher promoted dialogic feedback in the Sec 2 EL classroom. 

Making Meaning of Literature: Self-Evaluation and “No Marks” Essay Scripts

It is not atypical for our Secondary 4 Literature students (or really any student regardless of level or stream) to loudly exclaim upon receipt of their scripts, “I don’t know what I’m doing wrong” or (I’d argue even worse) “I don’t know what the teacher is saying in the comments!”. 

For Literature, this is an especially worrisome trend because the marking of their scripts is done via rubrics, so how they get their marks can at times seem subjective and opaque to them. 

One of the big reasons behind their difficulty in managing and understanding the comments and rubrics in their scripts is a lack of self-evaluation of, and reflection on, their writing and their essays. 

Self-Evaluation of Student Work

In the context of this project, self-evaluation is how students make a concerted effort to look through their work, judge what is present and what is missing, and translate that judgement into placing their work on the scoring rubrics.

Self-Reflection and how to improve

After the initial stage of evaluating their work (looking through and taking note of what is present and what is missing), the next step we wanted our students to take was to reflect on their work and determine how it could be understood, revised and improved in the next iteration. We would argue that students can only improve if they are first able to evaluate their own mistakes before they move on to the deeper and more demanding stage of reflecting on how to improve. 

Example of the self-evaluation and reflection spreadsheet:


Note: The self-evaluation component would be under the column Rationale for Marks while the reflection component would be under the column Improvement to essay.

To that end, we put in place the following measures for our Sec 4 Literature students when returning their scripts to them: 
  • Self-evaluation & Reflection on their Scripts (Annex B Tab 1-Self-Evaluation): Students, after getting back their respective assignments or exam scripts, would have to access a spreadsheet in their Google Classroom and evaluate their scripts based on the marks awarded. They would have to explain the reason for their marks and how they would improve in the next assignment. 
  • “No Mark” Scripts (Annex B Tab 2-No Mark Scripts): Students’ scripts would not be given a mark; instead, the teacher would record the mark on a separate list. Students would only be able to look at the comments and annotations on their scripts and use these to guide them in determining the mark they would potentially have scored. After students completed this self-marking process, the teacher would reveal the mark that had been allocated to them. 

Watch this video to learn how the teacher designed processes that made students pay attention to the written feedback, self-assess their performance and receive feedback to calibrate their self-assessment.

Future Directions

Several implications for the sustainability of the enhanced feedback pedagogy (establishing standards, noticing gaps and applying feedback to the next task) were discussed within the department. Teachers agreed that it was important for students to make sense of and apply feedback, as this would most likely translate into improved writing skills. The teachers also noticed that students were most encouraged to engage with the feedback when it was specific and concise. Their reflection made them more cognisant of the clarity and specificity of comments in the feedback, and of providing opportunities for students to read the feedback in class and seek clarification.
 
To make time for and facilitate some of these practices, teachers agreed that they could focus on using the enhanced feedback pedagogy when teaching students how to use the parts of speech more accurately in continuous writing. Teachers could also focus on bite-sized paragraphs instead of whole essays, thereby making writing revisions more manageable and less onerous for students. For advanced learners, teachers could move students towards greater self-regulation by engaging them in self and peer feedback when analysing their own writing before getting teachers’ input. These practices would lead to teachers engaging in more feed-up practices rather than getting fed up with students’ lack of action in feedback engagement.

Malay Language Feedback Pedagogy

The Use of Learner-Centered Feedback Design to Improve Oral Response for Conversational Questions

by Muhd Ammar Bin Abdul Aziz
     Faezah Noorahman
     Noorazlina Noordin
     Sharifah Noraini Aron
     Nurul Huda Jasmani
    (Edgefield Secondary School)

Context - Learning from Past Experiences

The team identified oral conversation as a key area for improvement among learners. Practising for this component is challenging due to the broad range of themes that are tested. Learners typically encounter problems as they are not confident about how to respond or how to begin the conversation.

Traditionally, feedback is given after the practice, which does not particularly encourage feedback uptake by students. Learners also shared that they did not know how to improve based on the feedback given by teachers. 

Prior to this, the team attempted to build learners’ feedback literacy by introducing rubrics into the feedback process. However, the team found that learners’ feedback literacy and engagement did not improve much with the mere introduction of rubrics.

The Project’s Foundations

The team took inspiration from Dr Jessica To’s (2021) study titled ‘Using learner-centered feedback design to promote students’ engagement with feedback’. Although the study was conducted with a group of adult learners, the team felt that there were key components within the study that could benefit the secondary school context. 

With this, the team embarked on a research project focusing on the use of learner-centered feedback design and learner autonomy to improve the quality of learner responses to conversational questions. As this project takes reference from a research study in higher education, the team adapted the original feedback design to the school context and sought to achieve the following objectives:

  • To understand if such a learner-centered assessment design will also be beneficial to secondary school learners.
  • To understand if learner motivation and behaviour towards taking action on feedback are improved when given autonomy in choosing the areas of feedback to work on.

Conceptualising the Project

Reflecting on the team’s various professional development courses to increase competency as oral testers, the team decided to adapt the processes found within the research study to suit a secondary school learner profile, and began conceptualising the solution. To address the challenges encountered in prior years, the team reconsidered its approach. Instead of focusing on content building and exposure, the team decided to shift the focus towards building oral response competency and increasing learner feedback literacy to help learners identify areas they can improve on. The team also sought to increase learners’ self-efficacy and foster learner autonomy. 

This was because the team had tried implementing rubrics to solve this problem in 2018-2019; however, the results were not encouraging, as the team had underestimated the importance of learner feedback literacy. Although the rubrics were seen as a positive step forward, the team realised that introducing rubrics without first increasing learners’ feedback literacy would limit the effectiveness of the feedback process. The experience from the previous attempt proved valuable in constructing an effective and meaningful action plan that would positively affect the learners. 

This new approach allows improvements to be made incrementally by incorporating feedback from peers and teachers before the actual assessment. This also encourages feedback uptake. The team believes that learners would build a strong foundation of oral response competency that will help them respond to questions regardless of theme, as they understand the success criteria for a coherent response. 

Tailoring the Approach

The team overcame the challenge of adapting adult learner feedback pedagogy to secondary learner feedback pedagogy by leveraging the professional development sessions on feedback. The team gained insightful knowledge from dialogues and sharings on feedback pedagogy by experts. Key among these are the four principles of feedback pedagogy:
  • Meaningful task design
  • Judicious use of peer feedback and/ or self-feedback
  • Students’ proactivity in feedback interaction
  • Psychologically safe feedback environment

These principles formed the basis of the approach and allowed the team to factor key considerations of feedback literacy into the action plan, making the 4-stage feedback process a more meaningful and impactful experience for the learners.

To complement our existing lessons on current affairs topics and content building, the team sought first to foster students’ understanding of the success criteria. By incorporating peer feedback and learner autonomy into the learning process, the team took an innovative step towards increasing learners’ efficacy when responding to oral conversation questions.

The innovation consists of four broad stages:

Stage 1 - Learners will learn an oral response format to understand how to construct their response and achieve coherence. This involves the introduction of discourse markers to link their responses coherently. Learners will then complete an individual recording task.

Stage 2 - Learners will be introduced to a set of rubrics for reference. Learners will access recorded exemplars of mixed abilities for reference and standardization. This will build learners’ feedback literacy by helping them understand the various success criteria. Learners will then provide feedback on their peers’ individual recording task. Below is an example of a translated peer feedback form completed by a learner:


Example A – Completed Peer Feedback Form

Stage 3 - Learners will receive the peer feedback and engage in discussion with their peers to analyse it. Based on the peer feedback received, learners will conduct a self-assessment and choose the area(s) for growth that they wish to focus their improvement on. Below is an example of a translated self-assessment form completed by a learner:

Example B – Completed Self-Assessment Form

Stage 4 - Learners will make an improved recording and submit it for an evaluative teacher assessment. The teacher will focus on the learner’s chosen area(s) for growth to determine the learner’s progress based on the self-assessment.

The Rationale of The 4-Stage Approach

In developing the tailored 4-stage feedback process, the team took the learners’ profile into consideration. The feedback process had to be one that was not over-complicated whilst maintaining its efficacy. The team also had to ensure that the feedback process would be viable for teachers to conduct in the daily school context, and was mindful not to create a process so time-consuming that it would not be viable daily. The idea of keeping the process straightforward and viable was so that the team could adapt and scale the approach to other levels, and possibly other language units, should the outcome prove effective.

The 4-stage approach empowers learners to exercise autonomy in improving themselves and increases their feedback literacy, as they are able to make meaning from the various feedback channels. Learners are encouraged to take ownership of their growth as the quality of feedback increases. Compared to traditional teacher comments and marks as feedback, this approach ensures that learners clearly understand the areas in which they require extra effort and focus. 

The Outcome of The Project

Overall, the team found that learners’ response to this approach was encouraging, and it showed more impact than the previous attempt of merely introducing rubrics. The outcome showed that learners are able to display a better understanding of feedback and are now more empowered to enact improvements to accelerate their growth in the oral conversation component. Learners have benefitted from the multiple rounds of feedback received from their peers and teachers.

The team observed that learners had a clearer understanding of the success criteria and were more conscious of their specific areas for growth. The recorded exemplars and standardization process enabled learners to draw a clear distinction between a quality response and an average response. This better grasp of the rubrics and their success criteria allowed learners to better pinpoint their weaknesses and make more targeted improvements. 

The team found that empowering learners with the autonomy to work on their areas for improvement enabled them to experience a more meaningful feedback process. 59% of learners stated that being able to indicate the aspects they required feedback on improved their learning, and 55% indicated that their performance would improve when given autonomy to choose the aspects of their work they required feedback on. The self-reflection in Stage 3 helped learners exercise their autonomy and engage in deeper learning.

Team’s Hopes & Reflection

All in all, the team is delighted with the outcome of this project and has continued the process in 2022 to obtain further insights that could help the team explore wider applications of this feedback process in feedback pedagogy for language learning, and possibly for other subjects within the secondary school curriculum. Throughout this collaborative learning process, the team gained the following key insights:
  • Understanding key principles in driving a specific pedagogical approach
  • Discussing and exchanging ideas through dialogue
  • Tailoring specific approach to target group
  • Empowering learners with learner-centered approach
  • Understanding learner uptake of feedback and how their attitudes shape their learning
  • Feeding forward effective feedback that enhances learners’ growth

Humanities Feedback Pedagogy

Please refer to the link below for the presentation on “Developing a Pedagogy of Feedback for Humanities Subjects​” by Jin Xiaoxi, Chong WenEe & Ng Kok Wah from Edgefield Secondary School

Link to PowerPoint slides: Sharing by Jin Xiaoxi, Chong WenEe & Ng Kok Wah from Edgefield Secondary School

Chinese Language Feedback Pedagogy

Rethinking Feedback for Students’ Affective, Behavioural, and Cognitive Engagement with Interactive Feedback Cover Sheets

by Lee Cheng Yen
     Sook Chiun Kew
     Chong Pey Yi
     (Riverside Secondary School)

Introduction

It is not unusual for feedback for Chinese essay writing to centre on students correcting their mistakes. Students would copy corrections blindly without realising their gaps, and did not treat the feedback given seriously. Thus, they were unable to recall what they had corrected, and tended to repeat similar mistakes. Hence, completing corrections did not translate into actual learning, and over time we observed some students losing motivation after receiving feedback. We also wanted students to address their learning gaps beyond correcting their mistakes. 

This prompted us to search and read assessment literature to address the situation. We understood that “The only way to tell if learning results from feedback is for students to make some kind of response to complete the feedback loop” (Sadler, 1989, p. 121). If students are to benefit from teacher comments, they must deliberately reflect on and process them, and have opportunities to apply what they have learnt from the feedback to subsequent tasks (Boud & Molloy, 2013; Evans, 2013; Nicol, 2013). Engagement with feedback involves receiving, perceiving, interpreting and understanding it, and using it in some way to improve learning (Handley et al., 2011; Hargreaves, 2011; Nicol, 2013). Lastly, feedback evokes achievement emotions in students. Emotions are a key component of self-regulation (Pekrun et al., 2002), and developing learners’ self-regulation in responding to strong feelings evoked by feedback is an important area of focus.

Formulation of objectives

With the above readings in mind, we formulated the following problem statement for our assessment feedback pedagogy research project in 2021, conducted in three Chinese Language classes: “After feedback is given, students do not find it important to address the learning gaps or feel motivated to act on the feedback. They tend to copy corrections blindly without realising their gaps, do not treat the feedback seriously, and tend to lose motivation after receiving feedback.” This project offers concrete ways for students to remain motivated to bridge their learning gaps by using teacher feedback as a form of introspection, engaging in reflective practices, and applying teachers’ specific feedback in subsequent tasks. We hope to achieve outcomes in three domains:

Affective: Students have positive feelings towards teachers’ feedback.

Behavioural: Students are able to understand and apply the feedback given.

Cognitive: Students are able to show improvement in subsequent tasks.

We identified three classes (Sec 3 Higher Chinese, Sec 3 Express and Sec 2 Express) as our target group.

These were the stages of our action research: 


Stage 1: We started our implementation with a baseline survey in which students rated, on a 5-point Likert scale, their emotions when receiving feedback from their teacher on their writing. The information from the survey helped us find out how students perceived feedback.

Stage 2: However, some of the survey responses were unexpected and puzzling. For example, some students indicated that they felt anger and hopelessness when the teacher provided feedback. Hence, we gave our students an essay assignment with an evaluation form to clarify why they responded in this manner.

Stage 3: The assignment evaluation form which students submitted after they had completed their essays served as a feedback cover sheet.  

There are two parts to the feedback cover sheet:

The first part is a self-checklist based on the task’s success criteria, which builds students’ capacity for self-assessment and makes them aware of their learning gaps. The second part is where students request feedback on their essays. When teachers know what students want to find out, they can better customise the feedback provided to meet students’ interests and needs, and the written feedback can better prompt learners to reflect on their learning and what can be improved.

Stage 4: We analysed students’ requests and provided targeted feedback in order to optimise students’ engagement with teacher feedback in the affective, behavioural, and cognitive dimensions.

Stages 5 & 6: After teachers give their feedback, students are required to complete the reflection form and their corrections.

The reflection form allows students to check whether teachers have provided feedback based on what they requested, to confirm that they understand the feedback given, and then to demonstrate that understanding in subsequent tasks. We designed the worksheet with the aim of promoting a two-way conversation between students and teachers. Hence, dialogue boxes were deliberately incorporated to prompt students to pen down their thoughts.

Stage 7: The last implementation step in this project is the post-feedback survey, which has questions similar to those in the baseline survey.

Results and artefacts from Stages 5 & 6


We looked at how students responded to the teacher’s feedback. On the left is the student’s original essay, on which the teacher commented that the student should give examples by providing evidence from news articles. On the right, the same student added evidence from a news article in his correction. From this, we infer that students understood the feedback and applied it in the subsequent task.


Above is a sample of Part 1 of a student’s reflection form, completed after the student received his marked essay with feedback. We asked the student, “Did the teacher respond to your request?” The student acknowledged that the teacher had responded and required him to use examples from the news article. This assured us that students were reading and acting on our feedback!


For Part 2 of the reflection form (see above), we asked students to reflect on the feedback given to them. Zooming in on questions 1 and 3, we asked students which feedback was most helpful to them and why. The student answered that he understood he needed to use a news article as an example, because he had not known where to include such examples. Next, we asked him, based on the teacher’s feedback, which part of the essay could be improved and how. He answered that he could elaborate further with examples from the news article. From this reflection, we can tell that the student was able to explain the enhancement made in the subsequent task.

Summary

Our feedback research project took place over three months, during which we used the interactive feedback cover sheet to engage our students in a feedback dialogue affectively, behaviourally, and cognitively. This can be summarised in the following table:


Affective: Reflection helps students better understand the feedback given, leading to more neutral feelings towards it; this can help students be more open-minded and receptive towards feedback. For example, one student told us that he felt less anxious and angry after the reflection, as he understood the intentions of the feedback and how to improve his work.

Behavioural: Students treated the feedback seriously, demonstrated their understanding, and were able to apply it in subsequent tasks. This gave us confidence that our feedback advice was interpreted by students as actionable, and that they were motivated to act on it.

Cognitive: Upon reflection, students were able to identify what was needed to improve their work. This gave us greater clarity on the cognitive pitching of our feedback in addressing their learning gaps. For example, in argumentative essay writing, one student reflected that he could do better in elaborating his points. This helped the teacher provide targeted feedback by guiding the student on the effective use of news articles and statistics to fine-tune his essay.

Conclusion

In this project, we learnt that instead of focusing on students’ errors, we can help students reflect on their errors. This subsequently helps students be more open-minded and receptive towards feedback, treat it seriously, and take the necessary actions to bridge their learning gaps. Ideally, they can transfer this learning to subsequent tasks and progressively learn how to close their gaps independently. Spurred by the encouraging response from our students and the improvement in their learning, our team is motivated to continue improving our feedback practices.

Mother Tongue Language – Initiate to Differentiate

Please refer to the link below for the presentation on “Initiate to Differentiate” by Yang Shu, Head of the Mother Tongue Language Department at Chua Chu Kang Primary School.

Link to video: Sharing by Yang Shu

Growth Mindset

Please refer to the link below for the presentation on “Growth Mindset” by Rasidah of Chua Chu Kang Primary School.

Link to video: Sharing by Rasidah

The Holistic Education Alternative Learning (HEAL) Approach to Assessment Stress

Towards better engagement of learners with SpLDs in assessments through HEAL

by Siti Asjamiah Asmuri, Lead Educational Therapist, Dyslexia Association of Singapore (DAS)



Let’s begin by revisiting this all-too-familiar quote that has generated much debate among educators and school leaders on assessments.  Most of us would have attempted to situate ourselves in the shoes of the animals other than the monkey or the bird, which are innately equipped to accomplish the task successfully.  While we question the term ‘fair selection’ used in this picture, a more pertinent question is how we have responded since we first came across this quote.  What has changed, and how far have we come?

Recent developments in Singapore’s educational landscape have been encouraging.  They reflect a growing interest in embracing diversity and accepting learner variability as the new norm. One example is the gradual replacement of streaming in secondary schools with the subject-based banding system, in which students have the option of taking subjects at their level of proficiency.  Another landmark agenda in the pipeline, announced by then Education Minister Ong Ye Kung in 2019, is the introduction of a new common national examination to complement the subject-based banding system, which would replace the current GCE O- and N-Level examinations (Mokhtar, 2019).  This could be a good time to reflect and re-examine assessment approaches and practices for our increasingly diverse learners, including those with diagnosed Specific Learning Differences (SpLDs).  SpLDs refer to differences or difficulties some individuals have with particular aspects of learning.  These affect the way information is learned and processed and may be diagnosed in learners where there is an observed lack of achievement at age and ability level, or a large discrepancy between achievement and intellectual ability.  Some examples of SpLDs are dyslexia, dyscalculia, dysgraphia and Attention Deficit Hyperactivity Disorder (ADHD) (DAS, 2020).

Earlier this year, a friend who is a Special Needs Officer in a secondary school shared an image of a student’s work sent by her colleague, seeking advice on what appeared to be the student’s written response to a Geography test question, parts of which had been written in complete reverse as seen below.


Excerpt 1: sample of a written response to a test question in a Geography paper by a Secondary 2 student with dyslexia

This student had been diagnosed with dyslexia, a specific learning difficulty primarily affecting the skills involved in accurate and fluent word reading and spelling (MOE, 2011).  Though reverse or mirror writing is not to be construed as a symptomatic trait of dyslexia, it is not unusual for some individuals to present it.  Some studies have attributed this to working memory deficits or visual processing issues, while at least one study mentions stress and anxiety as a possible contributor (Della Sala & Cubelli, 2009).  Fortunately for this student, her teacher was willing to try her best to decode the words and give her the marks she deserved.  At the same time, the teacher wondered how best she could support the student in future assessments, especially if this persisted.

A number of studies have reported emotional disturbances such as anxiety, fear of failure, feelings of inadequacy and low motivation as negative consequences of language learning deficits, which were found to cause learners to produce ineffective or incoherent writing (Piechurska-Kuciel, 2010; Schweiker-Marra & Marra, 2000; Ganschow et al., 1994; Ganschow & Sparks, 1996). Test anxiety and the fear of disappointing parents are indeed evident in our Singapore students. This is particularly true for students struggling with dyslexia, and its effects can sometimes be seen in their writing and test performance, as shown below.


Excerpt 2:  Response from a Secondary 2 student with dyslexia on how he felt about exams and his performance in exams


Excerpt 3: Response from a former Primary 6 student with dyslexia on how she felt about exams and exam results

In another study, by Nelson, Lindstrom and Foels (2015), college students with dyslexia reported higher test anxiety than those without when attempting tasks that were heavily language-based and required extensive reading.  We can argue that there may be other stress-inducing factors, such as culture and social expectations, and that stress and anxiety are also commonly experienced by many students with or without learning difficulties.  However, accompanying executive function deficits such as poor working memory and processing speed may gravely impact the coping mechanisms, adaptability and responses of students with learning difficulties, resulting in test performance that does not truly reflect their learning and capabilities.  This might also suggest that stress and anxiety can be attributed to test formats, conditions or methods that are not conducive to optimum performance (Elliott, Kurz & Schulte, 2015).
 
A key implication here is that emotions do have the propensity to influence learner engagement with assessments and their test performance.  Assuming that existing assessment practices and methods remain status quo, equipping these students with relevant study skills, time-management and self-monitoring strategies is one way to support them in coping with the demands and rigours of assessments.  Test accommodations such as extra time to compensate for deficits in working memory and processing speed, and integrating mindfulness activities into the curriculum, are other forms of support that schools and learning organisations have been providing.  However, the effectiveness of such adaptive measures has not been sufficiently investigated.  In fact, a study by Duncan and Purcell (2017) of undergraduate students from the English, History and Law faculties with a valid diagnosis of dyslexia, dyspraxia or dysgraphia, who received examination adjustments in the form of extra time or extra time with the use of a word processor, indicated that these adjustments did not fully place the students on a level playing field with their peers.

Educational frameworks we know today that have garnered considerable interest place emotions as a key tenet.  The Universal Design for Learning (UDL) and Design Thinking (DT) frameworks are some examples.  UDL was conceptualised based on research guiding the development of flexible learning environments to accommodate individual learning differences by providing multiple means of engagement, representation, as well as action and expression.  By giving due consideration to how emotions may affect learner engagement and performance, UDL embraces a broader and more comprehensive view of assessment beyond acquiring and understanding learners’ skills and knowledge and providing timely feedback and information to improve learning outcomes.  It seeks to gather information about the way learners interact with learning environments and assessment methods and encourages educators to explore and implement alternative options to minimise barriers and maximise opportunities for all learners to grow and succeed (Meyer, Rose, & Gordon, 2014; Rose & Gravel, 2013; CAST, 2011; Rose & Meyer, 2002).  Likewise, DT recognises empathy as the first and most important step in the design thinking process.  By encouraging educators to put themselves in the shoes of their learners, it allows them to better understand the challenges faced by them, their physical and emotional needs, how they think about the world, and what is meaningful to them in designing curriculum and assessments (Plattner, 2013).  

UDL and Design Thinking are certainly useful frameworks in guiding our instructional and assessment practices for all students.  Students with SpLDs in particular, have additional challenges requiring a systematic approach to address their learning needs.  In my experience working with these students, I realise that they are unique individuals with distinctive needs.  They would require a bespoke approach to bring out the best in them and their learning.  With this in mind and taking inspiration from the key principles of the UDL and Design Thinking frameworks, I have formulated the following HEAL approach to guide us in our instructional and assessment practices.  

H - Help these students to identify themselves as people, not problems. I wanted this approach to focus on these students appreciating themselves as unique individuals, and this calls for a humanistic approach, which will be explained in greater detail below.

E - Engaging students with SpLDs calls for educators to reframe their expectations of student engagement. What kind of behaviours would we expect from students with SpLDs to indicate that they are engaged? There is often the misconception, and the temptation, to equate being engaged with being ‘attentive’ – watching and listening to the teacher. We should, instead, address students’ desire for learning by focusing on elevating their interest through the tasks and activities organised. Students have their own unique ways of engaging.

A - Alleviate factors that may pose potential barriers for students with SpLDs to demonstrate their learning. At times, the methods or materials used in an assessment may demand additional skills or understanding not directly connected to what is being measured. Construct-irrelevant variables, such as requiring students to write proper sentences to explain the workings of a Mathematics problem, may hinder students with language difficulties from demonstrating their learning, thus calling into question the accuracy of the data derived. Regularly evaluating existing assessment measures and tools enables us to reflect on assessment validity so that students with SpLDs can be given more equitable opportunities to demonstrate their potential. Hence, the purpose of alleviating barriers inherent in standardised assessment formats is not to make assessments easier, but fairer.

L - In an age where learning is increasingly driven by technology, we should leverage technological devices and assistive tools not just to engage learners, but also to break down the barriers to learning and collaborating with others that students with SpLDs often face in accessing the school curriculum. Some examples of technological applications are Text-to-Speech (TTS) to help with reading and Speech-to-Text (STT) to help with spelling and writing.

Humanistic approach to assessments

Paulo Freire’s humanistic approach to education emphasises human liberation from oppressive systems and the importance of recognising the potential of the whole person in the learning process to facilitate growth (Freire, 2009). Adopting the humanistic approach therefore places humans, and being humane, at the heart of the curriculum.  This entails studying assessment data beyond the analysis of quantitative scores to include observations of learner behaviour and interaction with assessment methods and instruments. As shared earlier, emotions have been found to influence learner performance, and different test instruments generate different emotional effects for different learners.  Learners who work well under pressure may thrive in high-stakes examinations. Students with poor working memory, on the other hand, may perform better and have a better chance at experiencing success and progress in frequent, bite-sized assessments. The highly controlled or timed conditions of summative assessments such as End-of-Year or high-stakes examinations demand that students write quickly and accurately; these tend to impose additional cognitive load and exacerbate stress for students with SpLDs, thus placing them at a disadvantage. Emotions, and the patterns of engagement they engender, may pose problems for the accurate measurement of constructs such as knowledge of Math computations or writing proficiency. It is possible that inappropriate choices of assessment methods and instruments cause strong and differential variance on prescribed construct-relevant measures. One way to address this is to use flexible and comprehensive assessment tools that give teachers better insights into the possible reasons for such variance. Even students themselves can be a good source of such information, as evident in the suggestion given by a student with dyslexia below.


Excerpt 4: Feedback from a secondary school student with dyslexia about examinations

‘H’ therefore reminds us that our overall approach to assessments should be humane, and this also implies the need to understand our learners to be humans who need to be engaged and interested in their learning, instead of viewing them as robots that are ready to perform on demand under time constraints and rigid test conditions.

Engage and elevate interest and motivation

In providing learners with appropriate and multiple means of engagement, assessments may have the potential to support interest, drive motivation and develop persistence. Just as students learn more effectively when they are engaged and motivated, their performance on assessments can be enhanced by increasing engagement. Learners’ interaction and engagement with test tools and methods provide valuable information on their ability to demonstrate learning and strengths at their optimum level.  It is important that educators frequently observe and look out for behavioural indicators in students who may be experiencing challenges in applying the skills tested when using a specific tool.  Are there any available accommodations to address the challenge, and will the accommodation(s) change the construct being measured by the test (Kettler, 2012)?  Giving such learners access to appropriate test accommodations or tools could enhance the validity of the inferences made from their test scores. Testing accommodations are adjustments that do not alter the assessed construct and are applied to test presentation, environment, content, format (including response format), or administration conditions for particular test takers.  These may either be embedded within assessments or applied after the assessment is designed (AERA, 2014).  Some examples of common testing accommodations include varying test presentation (e.g. oral delivery, drawing and typing), test duration (e.g. extended time, delivery across multiple days or testing sessions) and the provision of reading support, among others (Sireci, Scarpati & Li, 2005).


Other than testing accommodations, learners could also be provided with a simple audio or video explanation of assessment instructions to enhance their comprehension of the assessment criteria.  Clear and explicit assessment criteria could also promote greater independence and ownership of the learning when learners are closely guided by it.  

Offering choices in formative assessment tasks such as quizzes, or encouraging independent contributions in online class discussion forums or digital notice boards, are some examples of formative assessments that can be embedded within instruction to add variety, entice learner participation, enable learners to monitor their own learning and develop a supportive community.  Summative assessment tasks could take the form of a poster design, journal writing, conducting mini TED Talks or producing a video demonstration of a skill, inspiring greater intrinsic motivation for learners to harness the best of what they have learnt and invest effort accordingly.  Hence, providing options and allowing students to make choices about assessment conditions are not only plausible ways to engage students towards more optimal performance while minimising differential threats; they may also help learners find the relevance of what they have learnt to the rich experiences they bring and have gained in the process of learning.



A valid question that may be on educators’ minds is: if learners’ emotions are so important and should be considered in assessments, how can they best be accounted for to obtain accurate estimates of learner knowledge, understanding, skills and strategies?  One way to recognise the effects of emotion on performance is to measure emotions themselves as part of a more comprehensive approach to assessment.  Learner engagement in assessments could be extended to include their written or oral feedback at the end of lessons or assessments, as a gesture of offering choices and extending a more agentic role in the learning and assessment process (Rose et al., 2018, p. 171).  Taking into account students’ interests and experiences with assessments is an important step towards more equitable instruction (Shepard, Penuel & Davidson, 2017).

Apart from giving students options as a strategy to pique their interest, it is also equally vital to address potential barriers to help them manage their emotions and sustain motivation.  
     
Alleviate potential obstacles to engagement in assessments   

Although learners’ failure to achieve desired learning outcomes, as shown in their assessment results, could be attributed to inherent learning difficulties or gaps, it could also indicate a lack of engagement.  Many factors can affect learners’ engagement with assessments.  For example, instead of presenting instructions and questions in lengthy sentences littered with sophisticated linguistic jargon, learners with SpLDs who have weak memory or processing skills would appreciate instructions and questions that are brief, simple and direct, as suggested by a student with specific language difficulties below.


Excerpt 5: Suggestion by a student on how assessment items can be made more accessible to learners with SpLDs

Students with dyslexia who struggle with decoding words, or with retrieving phonics concepts, spelling and grammatical rules when editing a passage containing spelling or grammar errors, will be less frustrated by cognitive overload if they are given options to choose from, as suggested by the student below.


Excerpt 6: Suggestion by a student on how assessment items can be made more accessible to learners with SpLDs

Consider also how a student with selective mutism would perform in the typical environment and conditions of a school or high-stakes oral examination, which requires him or her to read a passage he or she has never seen before and engage in a conversation with examiners he or she has never met before.  Such assessment methods assume that all learners are able to speak on demand.  Would the results obtained then accurately reflect the student’s ability to articulate his or her thoughts and ideas?  Alternative assessment methods should be explored in order to present more accurately what students with significant learning difficulties know and can do (Frey & Gillispie, 2018).  In this regard, the potential of assistive devices or technological tools could perhaps be harnessed to enable such learners to respond optimally in assessment environments.  Technology could provide viable means not only to help students cope, but also to leverage their interests in order to better engage them in assessments.

Leverage on the use of technological tools

Fostering better student engagement in assessments is a learning process for educators.  One way to begin is by conducting a digital survey or online poll on platforms such as Kahoot, Padlet or Mentimeter at the end of lessons to find out the kinds of tasks or activities learners have found, or would find, most appealing.  Technological tools could also be used astutely as alternative platforms for learners to demonstrate and apply knowledge.  A number of schools have explored blogging, vlogging and student podcasts as assessment options.  Tasks could also be varied according to levels of challenge or difficulty, without modifying the original construct.  Success criteria should also be made clear and explicit for the purpose of self-monitoring the achievement of learning goals and task accomplishment.


Digital tools can also be harnessed to monitor learner engagement in reading and reading comprehension activities.  One example we can take inspiration from is a web-based tool called Udio, which was designed by The Center for Applied Special Technology (CAST, 2011) to provide struggling readers with appropriate levels of reading challenge sufficient to motivate and develop reading interest and comprehension.  Potential frustrations with word-level decoding are minimised by making a Text-to-Speech (TTS) function and an online dictionary available.  Teachers are also able to observe students’ emotional engagement with the texts when they provide their affective response online.  Students can choose reading materials that interest them and are directed to an emotional response screen upon completion of the reading activity.  Their response could then provide a valuable basis for conversations or discussions about the text and the reading activity thereafter (CET, 2016; Rose et al., 2018).  The potential of this application can perhaps be explored in the Student Learning Space (SLS) platform that is accessed by students and teachers in Singapore schools.

Conclusion

Providing flexibility and options in assessments may be a mammoth task, requiring effort in terms of time, cost, logistics and manpower, as well as a major paradigm shift at the systemic level, but it can begin in small steps that are explored and implemented in schools.  The 21st Century Competencies Framework clearly places a strong emphasis on the holistic development of learners (MOE, n.d.).  Traditional, standardised examinations can evaluate only specific skill sets out of the whole range of cognitive and affective abilities underlying learner performance.  New, more flexible thinking and approaches are required to cater to students with distinctive learning needs.

Assessment is a process of discovery about our learners, and it should extend beyond evaluating what they know or do not know.  Cognitive and affective factors beyond this realm of ‘knowing or not knowing’ are just as capable of influencing school outcomes and, ultimately, growth and success beyond the classroom, yet they tend to be overlooked.  Recognising the value of these factors in determining learner engagement and performance in assessments is important in guiding instruction that aims to close achievement gaps, and in designing appropriate assessment measures that best address the learning challenges of learners with SpLDs.  Assessments should seek to enable these learners to showcase their knowledge and potential, instead of further disabling them.

References

American Educational Research Association (AERA), American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

CAST. (2011). Universal design for learning guidelines version 2.0. Wakefield, MA: Author.

DAS. (2020). Other Specific Learning Differences. Retrieved June 27, 2022 from  https://www.das.org.sg/about-dyslexia/what-is-dyslexia/other-specific-learning-differences.html

Della Sala, S., & Cubelli, R. (2007). “Directional apraxia”: A unitary account of mirror writing following brain injury or as found in normal young children. Journal of Neuropsychology, 1,3 –2

Duncan, H., & Purcell, C. (2017). Equity or Advantage? The effect of receiving access arrangements in university exams on Humanities students with Specific Learning Difficulties (SpLD). Widening Participation and Lifelong Learning, 19(2), 6-26.

Elliott, S. N., Kurz, A., & Schulte, A. (2015). Maximizing access to instruction and testing for students with disabilities: What we know and can do to improve achievement. Smarter Balanced Assessment Consortium. UCLA: McGraw Hill.

Every Student Succeeds Act (ESSA) of 2015, Pub. L. No. 114-95 (2015).

Freire, P. (2009). Chapter 2 from Pedagogy of the Oppressed. Race/Ethnicity: Multidisciplinary Global Contexts 2(2), 163-174. https://www.muse.jhu.edu/article/266914.

Frey, J. R., & Gillispie, C. M. (2018). The accessibility needs of students with disabilities: Special considerations for instruction and assessment. In Handbook of Accessible Instruction and Testing Practices (pp. 93-105). Springer, Cham.

Ganschow, L., & Sparks, R. L. (2000). Reflections on language study for students with language learning problems: Research, issues and challenges. Dyslexia, 6, 87-100.

Ganschow, L., Sparks, R. L., & Javorsky, J. (1998). Foreign language learning difficulties: An historical perspective. Journal of Learning Disabilities, 31, 248-258.

Ivcevic, Z., & Brackett, M. A. (2014). Predicting school success: Comparing conscientiousness, grit, and emotion regulation ability. Journal of Research in Personality. http://ei.yale.edu/publication/predicting-school-success-comparing-conscientiousness-grit-emotion-regulation-ability-2/

Kettler, R.  J. (2012). Testing accommodations: Theory and research to inform practice. International Journal of Disability, Development and Education, 59, 53–66.

Plattner, H. (2013). An introduction to design thinking. Institute of Design at Stanford, 1-15.

Della Sala, S., Calia, C., De Caro, M. F., & McIntosh, R. D. (2014). Transient involuntary mirror writing triggered by anxiety. Neurocase: The Neural Basis of Cognition.

Meyer, A., Rose, D. H., & Gordon, D. (2014). Universal design for learning: Theory and practice. Wakefield, MA: CAST Professional Publishing.

Ministry of Education, Singapore. (n.d.). 21st Century Competencies. Retrieved from https://www.moe.gov.sg/education-in-sg/21st-century-competencies

Ministry of Education. (2011). Psychoeducational Assessment and Placement of Students with Special Educational Needs: Professional Practice Guidelines [PDF file]. Singapore: Ministry of Education. Retrieved from https://www.moe.gov.sg/docs/default-source/document/education/specialeducation/files/professional-practiceguidelines.pdf

Mokhtar, F. (2019, March 5). GCE O- and N-Level exams to be replaced by new national common exam in 2027. Today Online. https://www.todayonline.com/singapore/gce-o-and-n-level-exams-be-replaced-new-national-common-exam-2027

Nelson, J. M., Lindstrom, W., & Foels, P. A. (2015). Test anxiety among college students with specific reading disability (dyslexia): Nonverbal ability and working memory as predictors. Journal of Learning Disabilities, 48(4), 422-432.

Piechurska-Kuciel, E. (2010). Reading anxiety and writing anxiety in dyslexia: Symptomatic and asymptomatic adolescents. Advances in Research on Language Acquisition and Teaching: Selected Papers, GALA, 375-386.

Rose, D. H., Robinson, K. H., Hall, T. E., Coyne, P., Jackson, R. M., Stahl, W. M., & Wilcauskas, S. L. (2018). Accurate and informative for all: Universal Design for Learning (UDL) and the future of assessment. In Handbook of accessible instruction and testing practices (pp. 167-180). Springer, Cham.

Rose, D. H., & Gravel, J. W. (2013). Using digital media to design student-centered curricula. In R. E. Wolfe, A. Steinberg, & N. Hoffmann (Eds.), Anytime, anywhere: Student-centered learning for students and teachers (pp. 77–101). Cambridge, MA: Harvard Education Press.

Shepard, L. A., Penuel, W. R., & Davidson, K. L. (2017). Design principles for new systems of assessment. Phi Delta Kappan, 98(6), 47-52.

Steele, C. M. (1997). A threat in the air: How stereotypes shape intellectual identity and performance. American Psychologist, 52(6), 613.

Please click on the sidebar to access the remaining articles:
  • In Conversation with Ms P. Durka Devi, Teaching Fellow, Learning Sciences and Assessment, NIE.
  • In Conversation with Mr Tan Ken Jin, School Staff Developer, Bartley Secondary School.
  • Assessment Practices with academically low progress learners in a Singapore Primary School by Mr Jerome Chong
  • Recommended Publications