
The Problem with Computational Thinking

Why do educators seem to like CT so much more than computer scientists?

There’s an ongoing debate in computer science education about what constitutes computational thinking (CT). Computer scientists tend to advocate for a strict understanding of the “computational” half of CT and say that CT without computers is just plain problem solving. On the other side, there are researchers who see the value of integrating the “thinking” half of CT into core curriculum. It’s likely that both the “computational” and “thinking” parts of CT are important, but a lot of research still needs to be done to confirm what exactly those parts are.

As a CT researcher who has studied the CT debate and who has contributed to a project introducing CT to K-5 educators this past year, I have seen firsthand the frustration in CS over the lack of consensus surrounding the definition of CT and, at the same time, the growing interest of elementary educators in integrating CT into the classroom. It’s an odd situation.

Is there something about the way elementary educators use CT that contributes to their satisfaction with CT? Can we even say they are satisfied with CT? My video presentation below explores these questions:

So twenty computer science education graduate students walk into a Zoom room…

Portfolio of Learning

As a final assignment, here is a portfolio of my learning on electronic assessment this past semester, fulfilling the assessment trifecta: an assessment as learning (self-assessment), an assessment for learning (formative assessment), and an assessment of learning (summative assessment). What I write here serves as a vehicle for self-reflection on what I have learned, as a way to learn how I would want my own students to construct their own portfolios, and as a demonstration of all that I have learned.

Unit Zero: I Have Doubts

Honestly, I waffled back and forth about whether there was anything in this course I needed to learn, but in the end, my curiosity was piqued by a classmate’s tweets about the course, trust in program designers, and the nagging suspicion there was much that I did not know that I did not know about assessment. Even so, I was a reluctant registrant for the course, and my pride had me believe the course and the instructor had to prove they were worth my time.

Unit One: I Learn How Little I Know

The first assignment was to write about my three beliefs about assessment. I thought of two and had to make something up for a third. By sheer chance, my three beliefs remain essentially the same, but I now actually understand what I’m espousing.

I loved learning the history of assessment, from social efficiency to behaviorism to social constructivism, which explained and challenged the basis for my own assumption that assessments were only tests. Students didn’t merely need more repetition or practice; there were other ways for students to show and develop their understanding. It was at the end of this first week that I sent this message to a former instructor:

Evidence of an attitude adjustment.

I was sold.

Unit 2: I Realize the Need for Fundamental Change in My Practice

This was an especially fruitful unit for me. My tweets during this time showed my discovery of the sore need for formative assessment in my practice as a computer science (CS) instructor. My work on my Assessment Design Checklist also had me come to grips with my lack of understanding of formative assessment, as demonstrated by my first belief about assessment.

Belief #1: Assessment can be used to measure not only student understanding, but teacher effectiveness.

This belief, though very nearly the definition of formative assessment, in which assessment is used to elicit student understanding to inform practice, represented a twisted understanding in my mind: that testing would reveal student understanding to me and thereby help me improve my practice. This was a very teacher-oriented, test-centric understanding of assessment, and oh so misdirected. The purpose of formative assessment is to help students understand the progress of their own learning and to inform instruction. The role of the teacher is to pave the way to student understanding by clarifying learning goals, designing activities that elicit student understanding, and providing opportunities to generate teacher, peer, and student feedback. The primary beneficiary of formative assessment is ultimately the student, not the teacher.

It was during this unit that I also confirmed my second belief about assessment…

Belief #2: Students can get an answer right, but still not understand the underlying concepts.

…and wrote a corresponding analysis of the typical in-class short coding project used in CS classrooms–my best and most inspired work to date in this program, an analysis which may very well become my manifesto as a CS teacher. My hope is that a focus on understanding instead of project completion will lead to student self-efficacy and retention in CS.

Unit Three: I Immediately Regret My Reflection Tweet From the End of Unit Two Disavowing My Third Belief

Belief #3: Feedback fosters more substantial learning than a grade.

Turns out, feedback is important, and my made-up belief was right on the mark. I tweeted about the need for more than feedback on the task in CS so that students would learn to focus on the process of using and developing their understanding to inform their coding. I also edited my Assessment Design Checklist and Formative Assessment Design to include time to process teacher and student feedback.

It was also at the end of this unit, when I had to look up the differences between assessment as learning, assessment for learning, and assessment of learning to complete a short digital quiz, that I realized I needed to develop a better system to keep track of my learning! After spending way too much time looking at options, I opted to use Simplenote, the simplest and fastest cross-platform way to jot down notes.

Unit Four: I Encounter an Old Nemesis

I complained to the twitterverse about my continued struggles with Universal Design for Learning (UDL), but in a later unit would finally incorporate it into my Assessment Design Checklist. Also, after reading about portfolios as a vehicle for digital formative assessment, I searched for portfolio use in CS Education (CSE) and found a great paper I ended up tweeting about. Portfolios in CSE allow students to communicate their understanding of computational concepts, as opposed to merely using them in projects. Like formative assessment in general in CSE, computational communication is under-practiced in the classroom and students will need scaffolding and time to improve.

Unit Five: I Publicly Criticize My Instructor

In my blog post about instructor and peer feedback, I ended up criticizing both of them. In retrospect, what I should have learned—and did not until this reflection, despite my instructor’s feedback on my post—is that yes, feedback of any sort is going to be uneven, and some days of giving and receiving will be better than others. That is the very nature of feedback and social constructivism. But the feedback process itself, despite its inconsistent results, is valuable either way. Feedback teaches us and reminds us that as individuals, no matter our expertise or how highly we think of ourselves, we do not and cannot know everything; that as individuals with disparate backgrounds and experiences, feedback will always have the potential to show us something of which we were previously unaware. A little humility can go a long way.

Unit Six: As a Vehicle for Equity, I Finally Accept UDL

I tweeted about bias in assessment and critical race theory, and realized that to address inequity in CS, UDL could be utilized to give students other means of representation with which to understand computational concepts and other means of expression to show computational understanding. I made UDL my sixth and final Assessment Design Checklist item and vowed to remember the need for equity on my next encounter with UDL, which will hopefully keep me from becoming overwhelmed by it again.

In an assignment to apply my Assessment Design Checklist to an assessment designed before taking the course, I discovered just how much I have learned. I wrote, “Can one be both ecstatic at progress in learning and horrified that progress needed to be made?”

Unit Seven: Final Thoughts

My final tweet was a recapitulation of my main theme that developed during the course: that CSE needs to focus on student understanding (mastery goal) rather than project completion (performance goal). My Formative Assessment Design ended up being a direct application of this theme, where I hope to teach and assess students on the computational thinking skill of decomposition to understand their code better.

It’s been an exhaustive but far from exhausting semester! This course on electronic assessment has been extremely rewarding and pertinent to my practice as a CS instructor, and I’m eager to get back into the classroom to put into practice everything that I’ve learned. Many thanks to my instructor!

A Decomposition Assessment for Scratch

I’ve had to make a lot of key additions since my previous version!

It’s taken a couple of revisions, but I’ve finally developed a solid formative assessment for fourth through sixth grade student understanding of decomposition in Scratch! Young students, or any novice coding student for that matter, typically struggle with reading and understanding code–even code that they’ve written themselves. Decomposition, the computational thinking process of breaking something large into smaller pieces, should give students practice identifying the smaller chunks of their program to use as guideposts for reading and understanding their code, allowing them to make better logical jumps from one part of the code to another, instead of the more random, less informed jumping around that novices exhibit.

I described the basic idea briefly in a previous post: I would first have students decompose a written story and then have them employ the same process with code. Since then, based on good peer feedback, I have had to develop the greater context surrounding the single assessment and identify additional applications of decomposition in coding practice to repeat, emphasize, and ensure understanding of the primary learning goal: to understand how to apply decomposition across the spectrum of coding practices.

The first application of decomposition would be in the context of adding new code to an existing project–the reading comprehension that I had already described (decomposing code). The second application would be in the context of debugging, identifying the different parts of program behavior as it runs (decomposing program behavior). The final application would be in the design of a program, identifying major sections of needed code before any code is written to guide the implementation (decomposing design). The emphasis on decomposition in all three lessons will go much farther in clearly communicating its importance as a learning goal than its use in a single assessment would.
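Since Scratch is block-based, I can’t quote a text version of a student project here, but as an illustration only, here is a minimal Python sketch of what “decomposing design” could look like in a text language. The chase game, the chunk names, and the stub functions are all hypothetical stand-ins for whatever chunks a student might identify before writing any real code.

```python
# A hypothetical "chase game," decomposed into named chunks *before* any
# real code is written. Each function is one chunk identified during
# design; the bodies are stubs to be filled in during implementation.

def setup_stage():
    """Chunk 1: place the player and the chaser at their starting spots."""
    pass  # stub: set sprite positions, reset the score

def move_player():
    """Chunk 2: read the arrow keys and move the player sprite."""
    pass  # stub

def move_chaser():
    """Chunk 3: point the chaser toward the player and step forward."""
    pass  # stub

def check_caught():
    """Chunk 4: decide whether the chaser has touched the player."""
    return False  # stub: the real version would compare sprite positions

def game_loop():
    """The decomposition itself: the whole game, read as four guideposts."""
    setup_stage()
    for _ in range(100):  # stand-in for Scratch's "forever" block
        move_player()
        move_chaser()
        if check_caught():
            break

if __name__ == "__main__":
    game_loop()
```

Reading game_loop gives exactly the guideposts described above: a reader can jump from named chunk to named chunk instead of wandering line by line.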

Also based on feedback, I have had to develop a more detailed plan for…feedback. Under the assumption that students will struggle with both the unfamiliarity of decomposition and code reading comprehension, I plan to give detailed explanations and reasons for why code should be decomposed in certain ways–showing them what makes for a good chunk, how to better summarize or explain the code behavior in a chunk, and how to identify that behavior with a short phrase or name. I shouldn’t expect students to pick this up right away; that expectation always gets me into trouble in the classroom!

The final addition to the design of my formative assessment was to figure out what tools students should use for decomposition. Although Scratch offers the ability to include comments in code, there is no way to visually circle or bracket sections of code in Scratch with digital drawing tools. There is a fairly easy solution: take screenshots of the code, and then have my students annotate the code with pen and text using either Preview on Mac or Microsoft OneNote on Windows; or for a distance learning solution, OneNote or Google Jamboard, which can be shared between teacher and student.

These three additions to my formative assessment (defining the place of the assessment in an entire unit on decomposition, considering the finer details of the feedback my students would likely need, and researching which digital tools would assist student understanding) were essential to the full development of an assessment that students can use to understand and develop both their decomposition skills and their code reading comprehension. I am excited to try this out!

My First Survey!

Why do educators and computer scientists differ in their acceptance of computational thinking?

I have been researching computational thinking (CT) professional development for K-5 educators for just over a year now, which is nearly as long as I have been a graduate student in educational technology. Anecdotally, educators seem far more excited about CT than my peers in Computer Science Education (CSE). At a recent conference for CSE graduate students, the joke was that if you put twenty computer scientists in a Zoom room together, you’ll have twenty different definitions of CT to debate. This frustration is not surprising: computer scientists have spent the last decade struggling to define CT, and that uncertainty seems to have led to skepticism in the field about its legitimacy. Educators, on the other hand, appear less concerned with an exact, complete definition of CT and more focused on identifying specific parts of it that can be useful in the classroom. It’s a classic theory vs. practice dichotomy.

But is there more to the discrepancy? My 7th grade daughter has been studying Mendelian genetics, which reminded me that genes can be expressed differently depending on environmental factors. Could it be that computational thinking—even if it is defined in a single coherent way—might end up being understood or used in different ways, depending on outside factors like context, prior experience, and personality traits?

To answer this question, I developed a preliminary survey, which was a frustrating but rewarding experience in leaving questions on the cutting room floor. To keep the survey short and quantifiable, I only asked multiple choice questions; for the purposes of improving the survey, I included short answer “Other” options. Eventually, a better survey and a better understanding of the outside influences on CT might help clarify what is inherent to CT.

Question Development

No end in sight yet, but feeling much better about the process.

In my last post, I shared how deflating it was trying to think of quality questions to ask about Computer Science Education. But, after mulling over my questions and applying the five whys root-cause analysis to one of them, I feel much more optimistic. I especially like my two “What If” questions.

Questioning my questions.
What if Computer Science (CS) were dominated by women and underrepresented minorities?

What if CS were a K-12 core subject?

Neither of the situations described in the questions will ever be realized, but in imagining the possibilities posed by those questions, much could be learned. Perhaps a contrast between CS as it is with CS as dominated by others would inform potential changes to retain others in CS. Perhaps imagining all of the content and learning progressions that could be taught in K-12 CS would shed some light on the current pedagogical needs in CS.

Root-cause analysis had me ask five successive why questions about computational thinking, based on my initial computational thinking (CT) question:

Why can't computer scientists agree on CT?

Why are there different definitions of CT?

Why do computer scientists have different criteria for what should be included in CT?

Why do computer scientists have different values and beliefs for what's important in CS?

Why do values and beliefs make a difference in computer science?

I am not sure the assumptions behind the last two questions are correct; I still need to consider other possibilities. But what would really be exciting is thinking of five What If questions for CT, questions that imagine possibilities beyond the debates CT is currently mired in. Maybe something like:

What if computer scientists collaboratively applied their powers of computational thinking to define computational thinking?

Just four more to go.

Quickfire Questions in Computer Science Education

Five minutes to think of pressing questions in CS Ed.

Is there such a thing as either too many questions or questions that are too large to answer? I ask because I just completed a five minute question brainstorming session, the goal of which was to identify problems of practice, in my case in Computer Science Education (CSE), through questioning, and the session fell flat. At the end, I felt weary and disappointed. But not to worry. These are familiar feelings, and I have developed strategies to overcome them.

In A More Beautiful Question, Warren Berger supplies anecdote after anecdote of individuals who were not afraid to ask the important questions that led to great innovation. Knowing how to question is key, yet in schools, young students are actively discouraged from asking questions and instead become conditioned to simply answer the questions posed by teachers. That pattern persists as students move into the workplace, where questions are considered disruptive to productivity. This five minute exercise was designed to overcome these ingrained barriers to questioning.

My eight uninspiring questions.

I thought of eight questions, none of them particularly inspiring, ranging from gender inequity to computational thinking, the Maker Movement to pedagogical content knowledge (Shulman, 1986). Interestingly, the problem may not be a lack of quality questions. From a neutral observer’s point of view, there are at least one or two questions here that have potential, especially the ones about the lack of support for CSE research and the mystery of the derision computational thinking faces from those in Computer Science.

Rather, the issue I am struggling with seems to be with the resulting disappointment and weariness. I was expecting to be inspired, despite Berger’s caution against over-reliance on the eureka! moment to fuel change and advice about taking a more level-headed approach to developing questions. But, it is difficult to read a book that is essentially designed to inspire change and not expect or hope to be inspired. My expectations went unmet, and my negativity took hold.

Pessimism happens to be a family trait. I am often initially disheartened by anything that requires significant change or effort, like systemic racism, Universal Design for Learning, or Thanksgiving dinner. For me, to formulate questions that might lead to change in CSE was to uncover and be overwhelmed by the many daunting systemic issues in CSE. There is just an enormous amount of work that needs to be done in CSE to make Computer Science accessible, teachable, and understandable.

I have been struggling with issues in CSE since the moment I set foot in the computer science classroom five years ago, from trying to figure out how to teach young children, to evaluating which tools and methods to use, to researching what to teach. Once I started my studies and research, I encountered the larger systemic issues with access, engagement, and racism. Writing down these questions all at once reminded me of the long journey I have made and the long journey I have ahead of me.

Berger does not warn readers that big transformative questions can be daunting. His stories about medicine, education, and business are carefully chosen, depicting individuals who struck upon a question, a long-term vision, and overcame all obstacles and failures to succeed. Their questions had the power to disrupt industries, businesses, and institutional inertia, but Berger never describes how questions have the potential to disrupt the individuals who ask them.

Luckily, this is not new ground for me. Over time, I have developed strategies to overcome this familiar negativity by taking a short break, then diving into the work that is needed to start addressing these large problems. Computational thinking in particular has been a revelation to me, specifically decomposition, which when applied to large problems can break them down into multiple smaller, bite-sized pieces. Once I have identified a subproblem, a starting point, and begin to make incremental progress, pessimism gradually gives way to focused optimism.

Brainstorming these questions was an uncomfortable process for me. But despite these feelings of disappointment and weariness, my long experience with them gives me confidence that this questioning will end up being helpful. I may not have followed Berger’s script exactly, but the next step will be the same: launching into more inquiry that will eventually lead to action.

References

Berger, W. (2014). A more beautiful question: The power of inquiry to spark breakthrough ideas. Bloomsbury USA.

Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4-14. https://doi.org/10.2307/1175860

Self-Reflection on the Assessment Design Checklist

I have completed the final version of my Assessment Design Checklist (ADC)! The most significant change I made for version 4.0 was including Universal Design for Learning (UDL). While giving peer feedback for version 3.0, I noticed UDL in a classmate’s ADC, thought it a good idea, and promptly eschewed the incorporation of it into my ADC. My feelings towards UDL are ambivalent: I recognize the excellence of its concepts, but I am overwhelmed by its broad scope. It wasn’t until my recent study of racism in assessment that I finally acknowledged the need to integrate UDL into my own ADC as an essential vehicle for equity in my classroom.

Overall, I started my ADC with just the barest of knowledge, only a surface understanding of assessment, not much more than needing to test students’ knowledge and collecting data to figure out what to do next. It amazes me that assessment itself could contain such depths to plumb. I cannot claim to know now all there is to know about assessment, but I know a hell of a lot more than when I started.

(Self-reflection is hard! I was hoping to deliver some insightful detail into the decisions that I’ve made over the past couple of months, but sadly ended up with some uninteresting generalizations and some nice platitudes. Part of the struggle is that I don’t really remember all the reasons behind the choices I made. Perhaps I should’ve commented on my ADC as I was writing it, using sidebar comments. I’ve tried to do this before while writing longer pieces, using a text editor with a split-window function: one window for the outline, one window for the writing, and one window to comment on the intention of each paragraph. For the purposes of this reflection, maybe I can focus on the main concepts that I have learned during this process...)

Overall, the ADC identifies some key concepts in assessment that I wasn’t fully aware of beforehand: the need for clear learning goals, feedback, and equity. Having those concepts in my checklist as future reminders will ensure I practice those elements in my classroom; returning to the ADC and updating it from time-to-time will also make me engage in periodic self-reflection on assessment, which intrigues me. I have reflected on individual lesson plans before, but I do not recall ever just reflecting on an aspect of practice. Also, the ADC will allow me to reflect on each individual assessment concept! The ADC has turned out to be a potential lifelong tool for self-reflection on assessment itself! Très cool!

Video-Based Modeling for Accommodating Students with Autism Spectrum Disorder

I might be the robotics and computer science teacher at the local high school next fall! Having already taught a couple of elementary students with autism spectrum disorder (ASD) during previous summer coding camps, and assuming I might encounter other students with ASD in the future, I have done some initial research to learn more about ASD. Wei et al. (2013) confirmed the popular belief that students with ASD have a high interest in coding and robotics, showing that a higher percentage of students with ASD pursue computer science at the university level than students in the general population. With research also supporting the inclusion of students with ASD in general classrooms (Kasari et al., 2011), my assumption may easily become reality.

There is no known cause or cure for autism. But over the years, a slew of evidence-based practices has been developed to support students with ASD, including the use of video-based modeling (VBM) to alleviate difficulties in learning in a typical classroom (Fleury et al., 2014). Related research by Wright et al. (2019) explored the use of VBM in teaching middle school students with ASD to code Ozobots. Three types of VBM can provide differentiated instruction for students, ranging in length from one single instructional video to a series of multiple short videos. Classroom teachers can use VBM to allow students with ASD to learn more independently and complete classroom activities.

Loom is a screen-casting tool that I have recently discovered and recommend for teachers looking to use VBM in the classroom. It’s free for educators, has multiple means of integration (mobile, desktop, Chrome extension), and uploads video as it is being recorded, saving a non-trivial amount of time. Its ease of use is somewhat offset by the limited number of editing tools, but advanced users can download and edit videos separately. In short, it’s a perfect tool for novice screen-casters.

I’ve recorded a screencast of my using Loom to…record a screencast…of a super short instructional video. Take a look:

Quick Loom tutorial for classroom VBM.

References

Fleury, V. P., Hedges, S., Hume, K., Browder, D. M., Thompson, J. L., Fallin, K., El Zein, F., Reutebuch, C. K., & Vaughn, S. (2014). Addressing the academic needs of adolescents with autism spectrum disorder in secondary education. Remedial and Special Education, 35(2), 68-79. https://doi.org/10.1177/0741932513518823

Kasari, C., Locke, J., Gulsrud, A., & Rotheram-Fuller, E. (2011). Social networks and friendships at school: Comparing children with and without ASD. Journal of Autism and Developmental Disorders, 41, 533-544. https://doi.org/10.1007/s10803-010-1076-x

Wei, X., Yu, J. W., Shattuck, P., McCracken, M., & Blackorby, J. (2013). Science, technology, engineering, and mathematics (STEM) participation among college students with an autism spectrum disorder. Journal of Autism and Developmental Disorders, 43, 1539-1546. https://doi.org/10.1007/s10803-012-1700-z

Wright, J. C., Knight, V. F., Barton, E. E., & Edwards-Bowyer, M. (2019). Video prompting to teach robotics and coding to middle school students with autism spectrum disorder. Journal of Special Education Technology. https://doi.org/10.1177/0162643419890249

Instructor Feedback and Peer Feedback: A Comparison

Did my experience of receiving both types of feedback meet expectations? In short, no.

All feedback is useful.

At least, that’s what I thought up until I actually took the time to think about it. What I immediately realized was that instructor feedback should always be more constructive than peer feedback; after all, as the designer of the assessment in question, the instructor would have strongly developed opinions about what constitutes proper fulfillment of assessment objectives. Moreover, the instructor would likely have seen multiple versions of the completed assessment, further informing the instructor of the general tendencies, misconceptions, and exceptional cases of completed assessments. As the sole expert on the assessment, it is the instructor who should easily provide the best feedback.

Peers, in comparison, are woefully unequipped to offer expert feedback, usually possessing a still-under-constructivism understanding of related content knowledge, but it is at least paired with the fresh experience of their own recent attempt at completing the assessment. At worst, peer assessment is a messy attempt at divining and comparing each other’s misunderstandings. At best, perhaps peer assessment could be an exercise in academic empathy: if I made the same early decisions as my peer on this assessment, what would have been the result?

What does research say about peer feedback? A perfunctory summary based on a single research paper: it can be both useful and useless, but it can also be improved (Gielen, Peeters, Dochy, Onghena, & Struyven, 2010). Perhaps the best takeaway from Gielen et al. is that peer feedback that provides justifications, i.e., accurate reasoning for an assessment’s qualities, is most effective.


My experience in receiving both instructor and peer feedback on my assessment of Canvas as a vehicle for assessment proved enlightening, though not for the reasons you might think. I believe my eagerness to receive the feedback raised my expectations unreasonably high, despite my own struggle to give constructive feedback to a peer earlier in the week. I am also a bad gift receiver, as my wife constantly reminds me around the holidays. I would utterly fail the avocado test that this 3-year-old passed with flying colors.

So, needless to say—hopefully, I’ve primed you enough—I was disappointed in both versions of feedback! My instructor, aside from failing me now, provided feedback about being clearer around one of the assessment objectives. This is actually good feedback, but I wasn’t particularly receptive to it for reasons that I will not divulge because I am stubborn. I suppose I believed the issue of clarity to be unimportant, as the objective was met implicitly. My peer offered a suggestion to further explore solutions for a problem I discovered but had already explored solutions for. This feedback, which I was even less receptive to, turned out to be more helpful than I initially thought it would be, after I made myself explore potential extensions to Canvas.

Gielen et al. (2010) did mention that feedback is constrained by how well it is received. I am such a curmudgeon. Has there been any research done on the correlation between age, the avocado test, and how peer feedback is received? Because I would make for quite the grumpy data point.


There is one final thing I need to mention: digital assessment can be helpful, especially during this time of pandemic. And despite the many fears of privacy invasion, digital assessment is also proving to be unenforceable among young students, who are simply opting out en masse. Take that, privacy doom naysayers! You can’t doomify digital assessments if nobody is actually doing them!

References

Gielen, S., Peeters, E., Dochy, F., Onghena, P., & Struyven, K. (2010). Improving the effectiveness of peer feedback for learning. Learning and Instruction, 20(4), 304-315. https://doi.org/10.1016/j.learninstruc.2009.08.007

Testing Assessment Through Canvas

I am in my second year teaching computational thinking as a professional development instructor for teachers in two local school districts. About three quarters of the teachers are in a district that chose to use Canvas for comprehensive distance learning (CDL), and the other district chose Google Classroom.

In exploring the potential use of a learning management system (LMS) for organizing, managing, and administering assessments, I created a three minute video of my testing an assessment in Canvas:

Using Canvas for assessment.

One drawback that I found was that students are unable to use Canvas annotations and rubric commenting on their own assignment submissions as ways to self-assess. They can, however, provide assignment comments. Having students annotate their own assignments prior to submission while referring to the rubric would be an acceptable workaround.

The other issue I ran into was that instructors and peers cannot annotate the same assignment submission. Providing two assignments for copies of the same submission, one intended for instructor review and the other intended for peer review, was a simple enough solution.
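For anyone who would rather script this two-copy workaround than click through the Canvas interface, here is a minimal sketch using the Canvas REST API’s assignment-creation endpoint. It is only a sketch under assumptions: the host, course ID, token, and assignment names below are placeholders, and the parameters should be verified against the Canvas API documentation for your own instance.

```python
# Sketch: create two parallel copies of one assignment in Canvas, one for
# instructor annotation and one with peer reviews enabled. BASE_URL,
# COURSE_ID, and TOKEN are placeholders for your own Canvas instance.
import requests

BASE_URL = "https://example.instructure.com"  # placeholder Canvas host
COURSE_ID = 12345                             # placeholder course id
TOKEN = "YOUR_API_TOKEN"                      # placeholder access token

def create_assignment(name: str, peer_reviews: bool) -> dict:
    """POST one online-upload assignment and return the created object."""
    response = requests.post(
        f"{BASE_URL}/api/v1/courses/{COURSE_ID}/assignments",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "assignment": {
                "name": name,
                "submission_types": ["online_upload"],
                "peer_reviews": peer_reviews,
                "published": True,
            }
        },
    )
    response.raise_for_status()
    return response.json()

# One copy for the instructor to annotate, one copy for classmates.
create_assignment("Project Submission (Instructor Review)", peer_reviews=False)
create_assignment("Project Submission (Peer Review)", peer_reviews=True)
```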

I suspect that districts that moved to CDL and purchased LMSs for the first time would be hard-pressed to end their contracts after the pandemic ends. The organization, affordances, and functionality of LMSs such as Canvas should enhance instruction no matter the situation: in-person, virtual, or hybrid.
