
Major Issues in Computer Science Education

For the last six years, I have been involved in Computer Science Education (CSE) as a teacher; the last two years, additionally as a researcher and a student. Here are the major issues in CSE I have stumbled upon as a relative newcomer:

In higher education, CSE research continues to be severely undervalued and underfunded, resulting in a lack of positions for CSE researchers and a lack of pedagogical content knowledge (PCK) (Shulman, 1986). CSE has been stunted through years of neglect in CS departments and is still primarily dependent on students learning through experimentation instead of instruction. What other educational field expects students to figure out fundamental concepts by themselves, using tools, demos, and persistence? This would be akin to offering a child a mathematical tool, like a calculator or a protractor, and asking them to figure out the Pythagorean theorem. Stuck? Oh well. Keep trying. Tragicomically, “Keep trying” in CSE is a relatively new mantra. The refrain used to be “Maybe you should try something else!”

The lack of PCK hampers CSE at the primary and secondary level. Teachers are left to fend for themselves to research, evaluate, and develop instructional material at all grade levels, or to attend conferences in search of the new and improved. This, too, is progress, as CSE has recently supplied all sorts of instructional tools and recommendations; but after years of research, evaluation, and experimentation, I find myself longing for a research-supported, cohesive CS instructional curriculum as I tinker with my haphazard collection of tools and methods. Common Core Math Ed never looked so inviting!

I also suspect the lack of PCK might be one of the reasons for the well-recognized problem of low overall engagement and representation. Here’s a theory: students will stay in CS if they understand it. And students will understand it if it’s taught better. And teachers will teach better if they’re equipped not just with tools, but with the pedagogical knowledge of a fully valued and funded educational field.

Oh jeez. The problem is money, isn’t it?!


Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14.

An Update on my Checklist for Assessment Design

In my Assessment Design Checklist, I’ve changed the wording for item #4:

4. Does this assessment ask students to reflect on their understanding?

And I’ve added a final checklist item:

5. Does this assessment give students the opportunity to learn from each other?

The addition of these last two checklist items, which can be grouped together as self- and peer-learning (or awkwardly as non-teacher learning), represents a fundamental shift in my understanding of how to teach computer science. My previous assumption was that students would learn best from their own coding and debugging experience; but I now understand that both student reflection on computational concepts in code and engagement with other students in analyzing and explaining each other’s code develop much-needed understanding and computational verbal skills.

Recent research by Lui, Walker, Hanna, Kafai, Fields, and Jayathirtha (2020) showed the potential that portfolios have as vehicles for self- and group-reflection on computational artifacts and programs. In the absence of specific guidelines, students’ written explanations of portfolio artifacts ranged from vague to detailed; provision of targeted questions and guidelines should result in better discourse around computational concepts.

I am eager to develop those targeted questions and guidelines. My initial ideas involve presenting examples of explanations of code and highlighting what makes for a good example so that students understand what to say and how to say it well. I would also like to model student interactions with each other as well so that my students understand how to explain their code to each other and then offer meaningful feedback to one another.

I do have some reservations about how exciting–or boring–this process might be. Anyone have ideas for how to make peer interactions around explaining code to one another fun and engaging?


Lui, D., Walker, J. T., Hanna, S., Kafai, Y. B., Fields, D., & Jayathirtha, G. (2020). Communicating computational concepts and practices within high school students’ portfolios of making electronic textiles. Interactive Learning Environments, 28(3), 284–301.

More to Add to My Checklist for Assessment Design

I’ve added a couple more questions to my Assessment Design Checklist:

3. Did I provide classroom time for students to receive and process teacher feedback?

4. Did I provide students with a way (likely a survey) to gauge their understanding of content?

Similar to questions #1 and #2 discussed in my previous checklist post, question #3 is basic teacher practice. However, it’s basic teacher practice that I have not been practicing! (Hence, my still-unlicensed status.) My previous working assumption had been that students learn enough through debugging their programs; but I realize now that the role of a CS instructor isn’t merely to provide CS experiences, but also CS education.

And that means giving students feedback not on just how to improve their programs, but how to improve their understanding of CS concepts.

#4 is interesting. Students, especially young students, are unlikely to have developed metacognitive skills surrounding their understanding of CS. More likely, they’re just overwhelmed trying to get a program to work. A survey with prepared multiple-choice answers that helps students identify areas of their own lack of understanding would go a long way to showing them what metacognition in CS is and how to do it.

The In-Class Small Coding Assignment as Formative Assessment

In computer science (CS) education, the in-class small coding assignment (SCA) is commonly underutilized, treated merely as a way for students to practice CS concepts and programming syntax immediately after instruction, similar to a practice problem set given in mathematics class. Transforming the SCA into an effective tool for formative assessment will require a fundamental change in both teacher perspective and instructional goals.

Computer lab has a long history in CS education. Functioning as both the time and space for students to complete assignments or make progress on their own projects–because prior to the personal computer, workstations were only accessible through well-funded institutions–computer lab developed into an exclusive club where those interested in computing could find each other and develop a shared identity marked by suffering: late nights with error messages and cold pizza that turned into early mornings with error messages and multiple cups of coffee. Napping at a computer or under a desk was common. Today, with the democratization of CS education through CSforAll initiatives, computer labs are no longer the domain of the select few, but entire classrooms of students, now increasingly younger and more diverse, led by an assortment of teachers, many inexperienced in CS, both struggling and persisting to learn and teach CS respectively. Computer lab as a physical space for likeminded individuals will never be the same.

However, computer lab as a time for programming still exists (though instead of a time for suffering, the current version has now been wisely renamed a time for persisting). Persistence time takes the form of the in-class small coding assignment (SCA), a post-lesson activity where students write and debug fully-functioning short programs that employ lesson content, with assistance from the teacher or from peers. Here is a brief example of functional requirements (fancy industry jargon for “directions”) for an SCA that might be given after a lesson on numeric input:

Write a program that asks a user to type in temperature highs from the past seven days, and then returns the average temperature.

A student, given those functional requirements, would then proceed to write a program line-by-line, and test it and debug it error-by-error, using the CS concepts and language syntax presented in the earlier lesson. That the entire process serves as a model for what CS professionals do makes it an appealing and practical instructional tool.
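For concreteness, here is a sketch of what a student’s finished program for that SCA might look like, assuming Python (the post doesn’t fix a language for this particular example):

```python
# A sketch of one possible student solution to the temperature SCA.
# Python is an assumption here; the SCA itself doesn't name a language.

def average_high(temps):
    # Average a list of daily high temperatures.
    return sum(temps) / len(temps)

if __name__ == "__main__":
    highs = []
    for day in range(1, 8):
        # Numeric input: the concept the preceding lesson covered.
        highs.append(float(input(f"High temperature for day {day}: ")))
    print(f"The average high was {average_high(highs):.1f} degrees.")
```

A student would typically build something like this line by line, running it after each change and fixing errors as they appear.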

As an instructional tool, there are two primary assumptions that guide the widespread usage of the SCA in CS ed:

  1. A fully-functioning program is evidence of student understanding. After all, they’ve taken lesson content and applied it successfully. Application of knowledge shows understanding.
  2. Since error messages eventually lead, via debugging, to a fully-functioning program (which, by assumption #1, is evidence of understanding), error messages can be considered formative feedback.

Unfortunately, these assumptions are wrong. Simply providing time for students to mimic what CS professionals do forgets that CS professionals already understand. Any keen CS instructor who has taught for any period of time begins to suspect a divide between CS doing and CS understanding. And now a spate of very recent research supports this suspicion.

Direct research on the divide between doing and understanding by Salac and Franklin (2020) found that young students’ (aged 7-12) use of Scratch blocks in completed projects could not be correlated with their performance on assessments of the same block usage. In other words, students could use coding blocks without understanding them; completed projects could not be used as evidence of understanding.

Research into effective CS teaching strategies, like subgoal labeled worked examples (Margulieux, Morrison, & Decker, 2020) and the TIPP&SEE method (Salac, Thomas, Butler, Sanchez, & Franklin, 2020), also revealed the need to scaffold understanding: encounters with CS concepts previously assumed to be cognitively accessible to new learners must be decomposed into even smaller pieces that teach students how, and in what order, to develop their understanding. The common thread in all this research is the need to teach CS concepts to new students at a finer grain and in a deliberate order, because students do not understand those concepts even when they have been required to use them.

Application of my Assessment Design Checklist (ADC) reveals similar shortcomings of the in-class SCA. The learning target of the SCA is often conflated with simply completing the assignment without any errors. Or, more accurately, the learning target is simply presented by the instructor as completion of the assignment without any errors, with little or no emphasis placed on being able to explain new CS concepts used in the project. The focus on fixing errors and debugging by student and teacher alike further contributes to that misplaced focus on doing rather than understanding. Even presenting the SCA as a set of “functional” requirements emphasizes the doing rather than the understanding!

The second question in my ADC asks whether the in-class small coding assignment is based on domain knowledge–specifically whether it uses current pedagogical content knowledge. It does not. Integration of a type of reverse TIPP&SEE or subgoal labeling would provide scaffolding to guide student thinking and design strategy for the assignment. This guided design is key to helping students understand details of the code they use and how a program can be planned for in advance (based on understanding), rather than developed through error-checking (based on doing).

The current state of math education serves as a model for the development of the SCA into an effective tool for formative assessment. Having helped my current 7th grader with her Common Core math homework since 3rd or 4th grade, I have been struck by how frequently she has been asked to explain her solution in writing. I have been pleasantly surprised at the conceptual understanding she has demonstrated and how much more complete her mathematics education is in comparison to my rote mathematics education in the ’80s, where doing (problem sets and exams) was assumed to be evidence of understanding. But now? Understanding is evidence of understanding!

So, just as math teachers ask for explanations of understanding, CS teachers must do the same. Some recommendations:

  1. The SCA should be presented not as a set of functional requirements, but decomposed into a set of step-wise design requirements that scaffold the decision-making process that students need to use to translate requirements to code.
  2. Teachers, when assisting students, should no longer settle for helping students debug their code. Instead, teachers need to debug errors in students’ conceptual understanding. This can be done casually during code debugging by asking students questions, but also formally by asking students to submit along with their fully functioning code a written explanation of how their code works and how it was designed.
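To make recommendation #1 concrete, here is a sketch of the temperature SCA recast as step-wise design requirements, written as a Python skeleton whose comments scaffold each decision. The step breakdown is my own illustration, not drawn from the cited research:

```python
# The temperature SCA decomposed into step-wise design requirements.
# Each step names a decision for the student to make before coding it;
# the breakdown is illustrative, not from a published curriculum.

def average_of(highs):
    # Step 4: Decide how to compute an average (sum divided by count).
    return sum(highs) / len(highs)

if __name__ == "__main__":
    # Step 1: Decide how to store seven daily highs (here, a list).
    highs = []
    # Step 2: Decide how to repeat the prompt seven times (here, a for loop).
    for day in range(1, 8):
        # Step 3: Decide how to turn typed text into a number (here, float()).
        highs.append(float(input(f"High for day {day}: ")))
    # Step 5: Decide how to report the result (here, a formatted print).
    print(f"Average high: {average_of(highs):.1f}")
```

Handing students the step comments first, and the code second, shifts the assignment from error-driven doing toward planned, understanding-driven design.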

The recommendations above should turn the in-class SCA into a formidable tool for formative assessment, while also giving additional support to students new to CS. The idea of persistence as an essential disposition in CS, an idea birthed in the crucible of the computer lab, should decrease in necessity as CS Education matures, more is discovered about CS pedagogy, and teachers teach for understanding and not for doing.


Margulieux, L. E., Morrison, B. B., & Decker, A. (2020). Reducing withdrawal and failure rates in introductory programming with subgoal labeled worked examples. International Journal of STEM Education, 7(1), Article 19.

Salac, J., & Franklin, D. (2020). If They Build It, Will They Understand It? Exploring the Relationship between Student Code and Performance. Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, 473–479. 

Salac, J., Thomas, C., Butler, C., Sanchez, A., & Franklin, D. (2020). TIPP&SEE: A learning strategy to guide students through Use→Modify Scratch activities. Proceedings of the 51st ACM Technical Symposium on Computer Science Education, 79–85.

Using Checklists for Professional Development

It’s taken just over a year, but I have just realized that I need to create a checklist for when I create a new lesson. I’ve learned so much in this past year about education, computational thinking, and computer science education, that there’s no possible way I can remember and apply all the lessons I’ve learned, unless I have all those lessons listed in a handy dandy checklist.

I wish I were so organized. Photo by Glenn Carstens-Peters on Unsplash.

What better place to start than with an assessment design checklist? This checklist is more than a list of things that I need to do when I design an assessment, but a list of questions for me to consider during the design process and a way to get me thinking about how students can show their understanding.

I have two questions on my assessment design checklist so far:

  1. Do students understand the learning target?
  2. Does the assessment leverage existing specific pedagogical knowledge about student understanding of CS concepts?

These are extremely basic educational ideas. And that’s okay. I’m a novice teacher. Moving on from “complete this project that I just demoed” and “this is the most recent thing that I just learned about CS ed” is a huge step for me, and I’m looking forward to seeing the results of this improved systematic preparation.

Assessing Assessment

I remember being confused when I first encountered the term “pedagogy.” I must have been in middle school, or maybe high school, and the dictionary definition, as usual, was painfully indecipherable to me. It wasn’t until later that I realized teachers needed to be taught how to teach–I thought they just shared what they knew–and that it was the how that was pedagogy. Which raises the silly question: if teaching is taught, how could pedagogy have ever come into existence?

This is all to say that the Formative Assessment Design that I’ve begun is just as mind-bending an experience for me. It’s an assessment for designing assessments! And it’s something that I sorely need in my professional practice. I forget whether I’ve mentioned this in this blog space before, but the very first assessment I ever gave to my students–what I thought at the time to be a very simple and modest two-question assessment–was a complete and absolute failure.

Not one student got any of the questions correct.

Okay, that’s not exactly right. The exact truth is even worse somehow.

The one person who did write down a correct answer ended up erasing it. I know because after catching a glimpse of some ghosted letters, I held her composition book up at an angle under a light, and read the correct answer in the indentations of the erased pencil marks. That one small sign that someone (anyone!) in my class understood, the one small sign that would make me a mere failure of a teacher instead of a complete failure of a teacher? Blown away by the uncertainty of a student who had been given neither decent instruction nor confidence.

I digress.

I’m excited about the assessment I’ve designed thus far: it captures how well students use decomposition to comprehend longer-than-they-are-comfortable-with blocks of code. The part that I’m most excited about is that I was able to think of a way to leverage ELA reading comprehension in the assessment to prime students for code comprehension.

In the coming weeks, I’ll get to develop the idea more and figure out how to implement it using online technologies and perhaps enhance the transfer between ELA and CS. I’ve set the bar so improbably low that making significant improvements on my past attempts at assessment will be easy. But making all the right improvements will be more difficult.

Wish me luck!

Assessment Analysis

What lessons might be gleaned from revisiting an assessment?

I created this Kahoot! assessment a year ago as a way for my students to get some practice reps in with both conditional concepts and syntax. As an assessment tool, Kahoot! is rather limited–only multiple choice questions with annoying character limits; but for a new teacher, the limits can be surprisingly beneficial.

Screenshot of Kahoot Quiz on conditionals

Assessment Theory

It’s difficult for multiple choice assessments to be anything other than vehicles for behaviorist theory, no? I clearly remember thinking that my students needed practice reading and recognizing conditional concepts, which is behaviorist through and through.

However, Kahoot! assessments can be used more formatively than one might think: after students answer each question, they are shown how many students answered it correctly. In my classroom, after each answer was revealed, I took the time to explain all the answers, both correct and incorrect, to address any misconceptions students might have had. Even better would have been to have students explain to each other why they chose the correct answer, making the assessment more of a social constructivist tool by allowing students to learn from each other.

Assessment Assumptions

I made a few assumptions in this assessment:

  1. Students are familiar with pseudocode or the use of English-language phrases as substitutes for coding syntax.
  2. Students are familiar with Python conditional syntax.
  3. Students have experience with parental limitations on student behavior at home; and in particular with TV.


The juxtaposition of those first two assumptions made me realize that understanding coding concepts through pseudocode may be more important than memorizing the coding syntax of a specific language. Pseudocode, if understood fully by students, can be a flexible way for students to mentally map coding concepts and apply them to the specific programming languages they learn! Learning coding can then be reduced to a two-step process:

  1. Learning coding concepts through pseudocode, then
  2. Understanding how to convert pseudocode into programming syntax.

This may ultimately prove to be a better but also more difficult way to teach students coding.
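As a sketch of that two-step process, using the TV example from assumption #3 (the pseudocode and the specific rule are my own illustration, not taken from the Kahoot! assessment):

```python
# Step 1: State the concept in pseudocode (English-language phrases):
#   if homework is done and it is before 9 pm, then TV is allowed;
#   otherwise, TV is not allowed.

# Step 2: Convert the pseudocode into Python conditional syntax.
def tv_allowed(homework_done, hour):
    if homework_done and hour < 21:
        return True
    else:
        return False
```

The pseudocode carries the concept (a condition controlling a decision); only step 2 requires language-specific syntax like `if`, `and`, and indentation.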

The third assumption may need to be corrected. The school district I teach in has a historically strong hippie vibe, so there’s no guarantee students have experience with parental limitations. Similarly, there is no guarantee that students have a television at home. I will have to come up with at least one other more apropos example.

Three Beliefs

Referring back to my previous post about my three beliefs about assessment, this assessment seems to reflect those beliefs:

Assessment can show teacher effectiveness as well as student understanding. To be honest, I didn’t use the assessment results to make adjustments to my practice! But, looking just now, I discovered Kahoot Reports on assessments given, and there are metrics on what questions students struggled with, who struggled the most, and what percentage of students got what right. I can definitely use those metrics to develop additional followup questions or assignments.

Students can get an answer correct, but still not understand the underlying concepts. Some of my assessment was designed to uncover this behavior. For example, question 13 about variables might have shown that a student successfully memorized variable syntax, but it is question 2 that should have revealed whether a student understood what a variable was for in the first place. The assessment could definitely be improved to systematically assess conceptual understanding of the other conditional concepts like boolean expressions and branching.

Feedback fosters more substantial learning than a grade. I believe explaining the answers as the students took the assessment provided the feedback they needed to develop understanding.


Analyzing an assessment I designed after the fact was an extremely useful exercise. During the initial design, I remember simply struggling to decide what individual questions and answers should be on the assessment and did not have the wherewithal to consider the theory and assumptions behind my assessment. The analysis above will allow me to create a greatly improved second version!

Three Beliefs About Assessment

Hello reader!

As part of my master’s in educational technology, I have just started a course on Assessment and we’ve been asked to explain three beliefs we have about assessment. My guess is that we’ll circle back to these beliefs near the end of the course to look for evidence of teacher change! Formal instruction rocks!

Belief #1: Assessment can be used to measure not only student understanding, but teacher effectiveness.

If every student gets the same question or questions incorrect on a test, perhaps it’s not the students who failed to learn, but the teacher who failed to teach. And if every student gets every question incorrect, then perhaps it was an untrained new teacher’s first month of teaching a class that met only an hour a week on a brand new topic that students had never encountered, and I had forgotten how difficult computer science concepts were to understand!

Belief #2: Students can get an answer right, but still not understand the underlying concepts.

I see this a lot in coding, where it’s possible for a student to get some code working through experimentation but without understanding why or how they achieved their result. This is not to say that experimentation is inherently flawed–much can be learned from experimenting and debugging. But, young students must be taught to stop and analyze why something worked before moving on to the next thing.

Belief #3: Feedback fosters more substantial learning than a grade.

For students who are competitive, a grade can be a great motivator. But, for those students who are not motivated by a number, or who do not know how to translate a grade into actionable learning, feedback can guide a student towards improvement and understanding in ways that a number cannot.

In the coming months, I’m looking forward to seeing if there are any theories that support or debunk my beliefs about assessment, and in particular, how theory informs how computer science understanding can be assessed. Stay tuned!

A Stay-At-Home Coronavirus Adventure

A Stay-At-Home Coronavirus Adventure (Google Drive Downloadable Version, MSU-only access)

A Stay-At-Home Coronavirus Adventure (Playable Embedded Version on, public access)

Interactive fiction and digital photography. Made with Twine 2.3.7.

Under the ever-permeating stress of exponentially increasing US coronavirus cases, can you, as me, make enough of the right choices to keep yourself mentally strong and free from infection?

The line “You didn’t make good choices, you had good choices,” from “Little Fires Everywhere,” succinctly described how intersectionality can be viewed as the limited choices one has through multiple disadvantages. The Stay-At-Home Coronavirus Adventure reveals my particular complex intersection of privilege and self-consciousness through the combination of individual choices and social interactions, respectively.

Making the Stay-At-Home Coronavirus Adventure, while not a typical making activity involving hardware, tools, or circuitry, is closely related to making because the project let me connect to learning and understanding of my context and identity via an alternate mode of expression. Making is not merely “doing,” as I previously thought, but an opportunity to learn in a way that students can find more beneficial than when offered only a single choice of expression.

I am a maker, even though I never thought of myself as one. But, all activity that creates rather than consumes and that leads to learning and understanding is making. Aside from the Stay-At-Home Coronavirus Adventure itself, the choices within the adventure like cooking, studying, and making spreadsheets(!), all activities that I have done during the pandemic, are all making activities.

I just don’t plan on introducing those particular making activities to my students. I intend to author other choices for them.

After we all make it through the pandemic.

Something Borrowed, Something Reimagined

As an elective teacher I don’t have my own classroom.

One of the classrooms I borrow is Kim’s middle school classroom:

Fig. 2. Kim’s ELA and social studies middle school classroom.

Kim’s room (Fig. 2) is nice. There’s the sofa on the far side of the room with bean bags around it. Large tables are spaced throughout the room, and there is a white board on one wall, a chalkboard on another, and a touchscreen on a third; two teaching desks, and a shelf containing student mugs and a hot water dispenser for tea. Sliding glass doors at the back lead to a grassy backyard, so there’s plenty of light. Whenever I get to the room early to set up, students are often scattered on couches, beanbags, at tables, or sitting comfortably on the floor reading. (I asked my daughter how they decide who gets to sit on the sofa. “There’s a system. It gets complicated.”) Sometimes, there are making materials stationed on the largest table near the front door, sometimes students are down by the creek in her outdoor classroom making structures for the classroom. It is a superb combination of indoor and outdoor learning spaces, providing plenty of light, choice, flexibility, and complexity to impact learning (Barrett, Zhang, Moffat, & Kobbacy, 2013).

In sitting face-to-face, students are given the physical structure necessary to support socio-constructivism through group discussion and collaborative work (Brown, Collins, & Duguid, 1989). With tables large enough to each hold different sets of maker materials, similar to other makerspaces (Sheridan, Halverson, Litts, Brahms, Jacobs-Pierre, & Owens, 2014), Kim’s classroom is also great for discourse around the constructionist creative process and for sharing final meaningful artifacts (Rob & Rob, 2018).

I have borrowed other spaces as well. Here’s my approximation of the media lab I used for youth summer coding camp administered by the local university:

Fig. 3. Media lab at the local college.

I have likely left out a row and column of computers, so there are 25 computers in the room instead of 16, but you get the gist.

The space is designed for students to sit down at their computer and not move until the end of class. There is no aisle along the windows! A student closest to the windows who wants to talk to another student has to walk all the way across a row to access the aisle on the other side. My third to fifth graders had no problems fitting in these tight spaces, but I can’t imagine how full grown college students manage! I had trouble reaching students seated by the windows to give one-on-one assistance.

There is no shared working space for students to share designs and work collaboratively. The building does have other shared spaces for collaborative endeavors, just not in this particular room. While the bank of windows lets in plenty of light, there is not enough flexibility for movement in the room, nor complexity in how the space can be used (Barrett, Zhang, Moffat, & Kobbacy, 2013).

Here is my first redesign.

Fig. 4. Media lab 1st redesign.

I have always wondered what it would be like to be in a room where all I need to do is stand in the middle of the room and spin around to see everybody’s computer screen. The benefit is more for me than for my students though. Especially for students whose screens face the large bank of windows on the one side of the room and have to deal with glare.

Here’s a 2nd attempt:

Fig. 5. Media lab 2nd redesign.

Open avenues between tables ease student movement and dialog. Computers offset towards the sides of the tables give more space for shared making materials to be on hand. A central table serves as a separate meeting area for design and discussion or for even more maker materials, giving students flexibility in where and on what they want to work (Barrett, Zhang, Moffat, & Kobbacy, 2013). None of the computer screens face the windows, so no one has to deal with glare. Much better!

Computer labs are traditionally designed for constructivist meaningful artifact creation, where the individual student constructs knowledge on their own. These are not bad spaces–I have fond memories of my time in computer lab my freshman year in college programming and chatting with Kai, who was seated at the computer to my left. But traditional computer and media labs, while very functional for constructivist activity, could be enhanced to provide opportunity for more constructionist activity.

Unfortunately, a summer camp instructor does not have the clout to initiate changes like these, but one can always dream!
