When we asked a longtime computer science teacher at a large public school to describe his classroom, he said, “Mostly, what you would see is people making stuff.” In his class, students explore their personal interests and questions through self-directed projects, no two of them the same. The teacher was excited about how much his students seemed to be learning about coding and programming, but he was having trouble figuring out how to gauge the specifics of their progress: “I don’t know how to assess that — how to know that they actually learned something. That’s my challenge for creative, open-ended, project-based work.”
This teacher’s conundrum is not unique. In our research, we’ve come across many K-12 computing teachers who are excited about the powerful learning that can occur when students engage in open-ended and personally meaningful projects. Yet these same teachers are uncertain how to determine what those projects reflect about students’ content understanding and what the work means to them personally. In some cases, teachers go ahead and design project-based experiences but do not include assessment of what students know and still need to learn. Other times, teachers’ doubt and uncertainty about assessment make them reluctant to proceed with project-based learning at all (Brennan, 2015).
As a recent report from the National Academies of Sciences, Engineering, and Medicine (2021) makes clear, self-directed project-based learning is a key strategy for broadening students’ participation and deepening their engagement in computing. Indeed, our own research and teaching are informed by a tradition that views self-directed projects as a rich context for creation, expression, and learning in the field of computer science (Papert, 1980). However, like the teacher we interviewed, we’ve always found such work difficult to assess, and we’ve encountered little consensus among experts in our discipline about how best to assess student-led projects (Blikstein, 2018).
To better understand how teachers think about the complexity of assessing student-directed projects, we conducted an empirical study of 80 K-12 computing teachers who included student-directed programming projects in their curricula, examining how they assessed students’ work. At the same time, we completed an interdisciplinary review of the research literature on assessing open-ended work. We knew that other disciplines — such as visual arts and language arts — have long wrestled with such questions, and we hoped that they might offer insights into how to determine evidence of learning in computer science projects, as well as how we can provide feedback that acknowledges students’ aspirations and efforts (Ozaki, Worley, & Cherry, 2015).
Rather than pointing us toward specific assessment tools (e.g., the use of scoring rubrics), our research led us to four principles that can guide teachers as they begin thinking about how their assessments can consider learners’ current interests, knowledge, and capabilities while supporting their ongoing growth and development. In short, assessment should recognize students’ individuality, illuminate the process behind their work, engage multiple perspectives, and cultivate their capacity for personal judgment.
Each of these four principles is supported by research from across disciplines, and while we’re particularly interested in the assessment of project-based learning in computer science, we hope that they will prove equally useful to teachers in other fields.
To illustrate how these principles can be applied in practice, we describe the ways in which Erin, a grade 6-12 computing teacher, used them to assess the learning of students who designed and built apps for phones and tablets. Though we focus on Erin, we found that these principles were used, to at least some extent, by all of the 80 teachers in our study.
The standardized forms of assessment that prevail in most public schools tend to focus on “bringing every student to the same standard rather than looking at individual gains and personal bests” (Richards, 2010, p. 193). In contrast, the vast majority of the teachers in our study described assessment as a means of understanding the individual learners in their classrooms, both their current capabilities and what they intended to learn and achieve through their projects. Most said that the goal should be to determine what is personally meaningful for each student, while acknowledging the potential variability of the work that different students will produce.
We found a useful model of this sort of assessment in the research on writing instruction. In English language arts classrooms, students are often asked to produce open-ended work that is grounded in personal perspectives, interests, and experiences. And when responding to and assessing student writing, teachers are advised to begin “with the subjectivities of the students and their desire to realize (rather than simply produce) meanings” (Robinson & Ellis, 2000, p. 75). Also, teachers often involve learners in articulating their own goals and aspirations for the work and in designing assessment criteria (Taggart et al., 1999). Developing trusting relationships with students and asking them about their goals can reveal important (and potentially hidden) facets of the work (Zhang, Schunn, & Baikadi, 2017).
In her computing classroom, Erin works to build precisely these kinds of relationships with her students. At the beginning of every project-based assignment, she schedules a one-on-one check-in interview with each student, asking them about their interests, the kinds of apps they like, what they aspire to create, their current capabilities, and what skills and content knowledge they hope to learn. She then uses these goals as the basis for individualized assessment:
They give me a parameter of what they’re going to produce. How they actually do that is up to them. My assessment is: “Did you do that? Did you meet your own requirements for what you proposed you would do?”
At the end of their app development project, each student submits project documentation that summarizes the purpose, attributes, and development process of their app. Erin then compares this documentation to the student’s initial goals and plans to gauge the extent to which they achieved their vision. Of course, students’ final projects do not necessarily include every feature they had hoped to include. However, the comparison between goals and eventual designs is not meant to penalize students for changes in creative direction or challenges they could not overcome by the due date. Rather, it informs Erin’s understanding of learners’ evolving capabilities in relation to the goals they set for themselves and what they learned while completing their individual projects.
Recognizing and supporting learners as individuals requires teachers to attend to not only what students create but also how they create it. Accordingly, this principle invites teachers to view projects at multiple points in time, in contrast to most traditional forms of assessment (including standardized tests), which gauge students’ performance on a particular day or evaluate the end product of their work.
In our review of the research on assessing open-ended projects, we found that scholars and practitioners in a number of fields have long urged teachers to use multiple forms of assessment, spanning the length of the project (Earl, 2012; McGuinness & Brien, 2007; Orr, 2010; Richards, 2010). For example, professional visual artists, writers, and craftspeople often describe their work as an ongoing process, and within those fields, regular feedback and revision tend to be viewed as essential practices (Brocato, 2009; Eisner, 2004). Likewise, when teachers check in with students at regular intervals, ask questions about their work in progress, and provide feedback meant to guide them in their next steps, they communicate to learners that redrafting and revising are not signs of failure and that every open-ended project depends on an iterative process.
For example, early in the app development process, Erin asks her students to give short (two- or three-minute) elevator pitches about their projects to the class, using storyboards. Then they receive feedback from the class, guided by carefully structured questions from Erin, such as, “Is the app being presented socially useful?” Students then reflect on the feedback they have received, using it to guide what the next iteration of their proposed designs will look like. And when students finish their projects, Erin asks them to create a final document that describes the different iterations of their work, with an emphasis on the challenges they encountered and how they worked through them. Erin uses this reflection on iteration to understand students’ evolving comfort with perseverance and adaptability in response to challenges, which are necessary dispositions for engaging in open-ended projects in computer science and every other field.
The majority of the 80 computing teachers in our study described the classroom learning community as essential to the success of student-directed projects. Similarly, researchers in many other fields have argued that the teacher should not be viewed as the sole audience for or judge of student work. Rather, classroom assessment should incorporate multiple perspectives. Feedback from a range of people (teachers, peers, parents, or others) increases the opportunities for students “to reflect on their learning and their learning needs” (Earl, 2012, p. 93). Additionally, when classroom peers offer feedback, the process can benefit everyone involved: Learning how to assess others’ work can inform students about what to look for in their own creations, helping them develop their ability to critically self-assess (Cennamo & Brandt, 2012; Mendonca & Johnson, 1994). Engaging with their peers’ work can also expose students to various solutions and strategies that they might not encounter otherwise (Sadauskas et al., 2013).
For example, Erin knows that peers can serve as an authentic audience for students’ creations, so she makes sure learners have opportunities to get constructive feedback from other students. And to ensure that such exchanges are valuable, Erin developed a peer feedback guide to help learners understand both how to share their work and how to give one another useful comments. For instance, it offers advice on specific topics to consider when providing feedback (e.g., the clarity of the app’s purpose, its features, its aesthetic appeal) and how to structure that feedback (e.g., pointing out the features of the app that work well, highlighting one or two features that are confusing).
Erin asks students to keep this guide in mind while their classmates give 10-minute demonstrations and explanations of their projects. During the presentations, half the students sit with their laptops and share their projects with one person at a time. The students watching presentations listen carefully, take notes, ask clarifying questions, and then share their critiques. After rotating through several rounds of presentations, the two groups swap, and the students who have already presented now provide the feedback. Erin notes that this strategy engages everyone in the room: “There is no kid sitting in the back of the class finishing up their project, because they have a role just as much as the presenter. Throughout this whole thing, the presenter is getting continuous feedback.”
While assessment is traditionally seen as something teachers provide to students, the computing teachers in our study tended to believe that the goal of assessment should be to develop learners’ autonomy, decreasing their dependence on teachers’ evaluations and increasing their own ability to exercise meaningful personal judgment and self-direction. This aligns with research from a range of disciplines that describes the ultimate goal of student assessment as the development of self-assessment skills (Ross, 2006; Sefton-Green & Sinker, 2000).
Put another way, researchers from many fields recommend a shift in perspective, from focusing on the assessment of learning to seeing assessment as learning. In this view, assessment is not a one-time judgment of students’ knowledge and skill but a “personal, iterative, and evolving conversation” that helps students “make their own decisions about what to do next” (Earl, 2012, p. 45). When learners see assessment as part of their own growth and development, rather than as a performance for someone else to judge, they are “more likely to take risks, seek out challenges, and persevere in the face of difficulty” (Beghetto, 2005, p. 259). These qualities are essential to student-directed projects, where learners need ownership over the work and the process of creating it, exercising judgment and flexibility in navigating open-ended tasks.
For Erin, the push to promote students’ autonomy comes not only from deeply felt pedagogical values but also from practical considerations. Erin wants her students to have constant feedback as they work on their projects, but the time she can spend with each learner is limited. Thus, she creates daily opportunities for students to reflect on their work in writing. Erin notes that having students self-assess daily mimics real-world work, where they will need to judge for themselves what is and is not working and how to adjust. But beyond practical considerations, her desire to make learners what she calls “participants in the process” of assessment is grounded in core values that she wants to convey to students about who they are and the importance of their contributions. Speaking about what informs her classroom learning design, Erin explains:
When we engage them and make this a place they can succeed, it goes so far beyond the actual content. It basically says, “You belong, you have a right to be here, and you have a right to expect a lot from the world. You have great skills and ideas, and the world needs you.”
Despite continued enthusiasm about the value of learning through student-directed projects, multiple barriers have limited the widespread incorporation of such activities in K-12 classrooms. Some scholars have noted the need for new forms of teacher development (Grossman et al., 2019) and a rethinking of the underlying grammar of schooling to enable teachers to provide students with these kinds of open-ended, personally meaningful experiences (Mehta & Fine, 2019). Our conversations with teachers have shown that assessment can also be a significant challenge, but we hope these four principles — recognizing individuality, illuminating process, engaging multiple perspectives, and cultivating capacity for personal judgment — can help teachers in all disciplines to incorporate student-directed work into their classrooms.
In the context of our own field, computer science education, we acknowledge both the difficulty of assessing student-directed projects and the possibilities for such assessment. On one hand, computer programming makes aspects of the learning process visible — it’s easy for students to examine each other’s lines of code and to solicit feedback from multiple audiences by sharing work via digital platforms. On the other hand, computer science has strong cultural traditions of didactic teaching and of learning as an accumulation of predefined skills and knowledge, and these traditions work against the embrace of individuality, process, multiple perspectives, and self-direction. While each discipline will navigate these contextual issues in its own way, we benefited enormously from learning about assessment practices in other disciplines that share a commitment to supporting open-ended and personally meaningful work. Reciprocally, no matter the content area, we hope these assessment principles can inspire and support more teachers in designing for learning through student-directed projects, creating opportunities for young learners to imagine themselves and their contributions to the world in new ways.
Note: This research was supported by Google, through CS-ER Grant No. 93661905.
References
Beghetto, R.A. (2005). Does assessment kill student creativity? The Educational Forum, 69 (3), 254-263.
Blikstein, P. (2018). Pre-college computer science education: A survey of the field. Google LLC.
Brennan, K. (2015). Beyond right or wrong: Challenges of including creative design activities in the classroom. Journal of Technology and Teacher Education, 23 (3), 279-299.
Brocato, K. (2009). Studio based learning: Proposing, critiquing, iterating our way to person-centeredness for better classroom management. Theory Into Practice, 48 (2), 138-146.
Cennamo, K. & Brandt, C. (2012). The “right kind of telling”: Knowledge building in the academic design studio. Educational Technology Research and Development, 60 (5), 839-858.
Earl, L.M. (2012). Assessment as learning: Using classroom assessment to maximize student learning. Corwin Press.
Eisner, E.W. (2004). What can education learn from the arts about the practice of education? International Journal of Education & the Arts, 5 (4).
Grossman, P., Dean, C.G.P., Kavanagh, S.S., & Herrmann, Z. (2019). Preparing teachers for project-based teaching. Phi Delta Kappan, 100 (7), 43-48.
McGuinness, C. & Brien, M. (2007). Using reflective journals to assess the research process. Reference Services Review, 35 (1), 21-40.
Mehta, J. & Fine, S. (2019). In search of deeper learning: The quest to remake the American high school. Harvard University Press.
Mendonca, C.O. & Johnson, K.E. (1994). Peer review negotiations: Revision activities in ESL writing instruction. TESOL Quarterly, 28 (4), 745-769.
National Academies of Sciences, Engineering, and Medicine. (2021). Cultivating interest and competencies in computing: Authentic experiences and design factors. The National Academies Press.
Orr, S. (2010). Collaborating or fighting for the marks? Students’ experiences of group work assessment in the creative arts. Assessment & Evaluation in Higher Education, 35 (3), 301-313.
Ozaki, C.C., Worley, D., & Cherry, E. (2015). Assessing the work: An exploration of assessment in the musical theatre arts. Research & Practice in Assessment, 10 (Summer 2015), 12-29.
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books.
Richards, R. (2010). Everyday creativity: Process and way of life — Four key issues. In J.C. Kaufman & R.J. Sternberg (Eds.), The Cambridge handbook of creativity (pp. 189-215). Cambridge University Press.
Robinson, M. & Ellis, V. (2000). Writing in English and responding to writing. In J. Sefton-Green & R. Sinker (Eds.), Evaluating creativity: Making and learning by young people (pp. 79-97). Routledge.
Ross, J.A. (2006). The reliability, validity, and utility of self-assessment. Practical Assessment, Research, and Evaluation, 11 (1), Article 10.
Sadauskas, J., Tinapple, D., Olson, L., & Atkinson, R. (2013). CritViz: A network peer critique structure for large classrooms. In J. Herrington, A. Couros, & V. Irvine (Eds.), Proceedings of EdMedia 2013 — World Conference on Educational Media and Technology (pp. 1437-1445). Association for the Advancement of Computing in Education.
Sefton-Green, J. & Sinker, R. (Eds.). (2000). Evaluating creativity: Making and learning by young people. Routledge.
Taggart, G.L., Phifer, S.J., Nixon, J.A., & Wood, M. (Eds.). (1999). Rubrics: A handbook for construction and use. Rowman & Littlefield Education.
Zhang, F., Schunn, C.D., & Baikadi, A. (2017). Charting the routes to revision: An interplay of writing goals, peer comments, and self-reflections from peer reviews. Instructional Science, 45 (5), 679-707.
This article appears in the December 2021/January 2022 issue of Kappan, Vol. 103, No. 4, pp. 44-48.
KAREN BRENNAN is an associate professor of education in the Harvard Graduate School of Education, Cambridge, MA.
SARAH BLUM-SMITH is a doctoral candidate in the Harvard Graduate School of Education, Cambridge, MA.
PAULINA HADUONG is a doctoral candidate in the Harvard Graduate School of Education, Cambridge, MA.