How we compute marching band grades

You may have noticed a few minor changes to our marching band coverage this year. We have done away with the SurveyMonkey grading survey and opted for a method of scoring artistic work that more closely parallels how movies are rated. Our new system is also better grounded in research.

The biggest change: At the top of each band’s page, you’ll see two new buttons: a green “A/B” button for “thumbs up” and a red “C/D” button for “thumbs down.” These are live voting buttons. If you like what a band is doing—i.e., if you would give it an “A” or a “B” in a more traditional grading system—press the “thumbs up” button to cast your vote. If, on the other hand, you don’t like what the band is doing at this point, press the “thumbs down” button.

We collect the IP address of the computer you’re voting from, and we’ll block any attempt to vote more than once for a given band within 15 minutes.
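The actual duplicate-vote check runs on our servers; as an illustration only, a minimal in-memory sketch of the rule (one vote per IP address per band per 15 minutes) might look like this. The class and method names here are hypothetical, not our production code:

```python
import time

class VoteLimiter:
    """Reject repeat votes from the same IP for the same band within 15 minutes."""

    WINDOW = 15 * 60  # 15 minutes, in seconds

    def __init__(self):
        # (ip, band_id) -> timestamp of the last accepted vote
        self._last_vote = {}

    def try_vote(self, ip, band_id, now=None):
        """Return True if the vote is accepted, False if it is blocked."""
        now = time.time() if now is None else now
        key = (ip, band_id)
        last = self._last_vote.get(key)
        if last is not None and now - last < self.WINDOW:
            return False  # this IP voted for this band too recently
        self._last_vote[key] = now
        return True
```

A second vote from the same IP for the same band inside the window is rejected, while a vote for a different band, or from a different IP, goes through.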

But assuming your vote goes through, it will figure into our calculations, which resemble the Tomatometer on Rotten Tomatoes, which rates movies based on critics’ evaluations of them. Rotten Tomatoes simply scores each critic’s review as “fresh” or “rotten.” We’re asking you to score your own review for us.

Good research also supports this method of scoring when it comes to the president’s approval rating. People are simply asked, “Do you approve of the job the president is doing?” and the overall rating is based on the percentage of people who answered the question in the affirmative.

We assert that this methodology is far more appropriate for marching bands: first, because the activity is artistic in nature and lends itself more to opinions than to numerical scoring; and second, because numerical scores, such as those given by top adjudicators at even the best marching band festivals, have been shown to be unreliable and invalid.

The overall grade for the band will be based on the percentage of votes that are “thumbs up,” provided we have at least three votes to tally.

  • A = 77 percent or more of the votes are “thumbs up”
  • B = 57–76 percent of the votes are “thumbs up”
  • B/C = 43–56 percent of the votes are “thumbs up”
  • C = 23–42 percent of the votes are “thumbs up”
  • D = less than 23 percent of the votes are “thumbs up”
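The buckets above, together with the three-vote minimum, can be sketched as a small function. The function name is ours, not part of the site’s code:

```python
def letter_grade(up_votes, down_votes):
    """Map vote counts to a letter grade using the percentage of thumbs-up votes.

    Returns None until at least three votes have been cast.
    """
    total = up_votes + down_votes
    if total < 3:
        return None  # not enough votes to report a grade
    pct = 100 * up_votes / total
    if pct >= 77:
        return "A"
    if pct >= 57:
        return "B"
    if pct >= 43:
        return "B/C"
    if pct >= 23:
        return "C"
    return "D"
```

For example, 8 thumbs up against 2 thumbs down is 80 percent, which lands in the “A” bucket, while a 5–5 split (50 percent) falls in the middle “B/C” band.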

We will compute an average grade over four reporting periods: since the beginning of the performance season, over the last seven days, from 14 to seven days in the past, and over the last 30 days. We’ll also tell you the number of votes and the number of unique IP addresses during each reporting period.

As the season ends, the reporting will switch to a single grade based on a formula where the most recent votes will get a higher weight than those cast closer to the beginning of the season.
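We haven’t published the exact weighting formula. One common way to weight recent votes more heavily is exponential decay by vote age; the sketch below assumes a hypothetical 14-day half-life, meaning a vote’s weight halves for every two weeks of age:

```python
def weighted_thumbs_up_pct(votes, half_life_days=14):
    """Recency-weighted thumbs-up percentage.

    `votes` is a list of (age_in_days, is_thumbs_up) pairs, where age is
    measured back from the end of the season. Each vote's weight halves
    every `half_life_days` days of age, so the most recent votes count most.
    Returns None if there are no votes.
    """
    weighted_up = weighted_total = 0.0
    for age_days, is_up in votes:
        weight = 0.5 ** (age_days / half_life_days)
        weighted_total += weight
        if is_up:
            weighted_up += weight
    return 100 * weighted_up / weighted_total if weighted_total else None
```

With this choice, two thumbs-up votes cast at season’s end and two thumbs-down votes cast four weeks earlier yield 80 percent thumbs up rather than the unweighted 50 percent, since the older votes each carry only a quarter of the weight.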

Because we’re trying to assess improvement over a performance season, it’s important that you vote frequently as the band improves.

Other options

I considered a few different modes of writing this feature, and perhaps we can explore some of those in the future, after we see how well this voting works and collect some feedback.

One possible model was the Danielson framework’s performance levels for evaluating teachers: instead of grades like A, B, and C, or a thumbs up/thumbs down vote, users could indicate that a band was unsatisfactory, basic, proficient, or distinguished. Such a system would still depend on the number of votes collected.

Then I thought about using a model that is sometimes used to describe student achievement in a subject: Entering, Emerging, Developing, Expanding, Bridging. These are the performance levels described by WIDA for the English Language Development standards. The majority of states, including both Illinois and Maryland, are part of the WIDA consortium.

Unfortunately, these are also “snapshots” of proficiency development, taken at a given moment in time. Plus, the ideas of developing and bridging require some application of previous levels, which voters on our system might not be aware of.

Connection to the Illinois Learning Standards

We strongly believe in supporting the learning standards adopted by the state of Illinois with this endeavor. National standards in music, now in the works at the National Association for Music Education, have not been adopted by the Illinois State Board of Education at this time.

Therefore, we turn to the adopted standards in the fine arts. Learning Standard 25.A.5 for late high school, available here, says that students should be able to “Analyze and evaluate student and professional works for how aesthetic qualities are used to convey intent, expressive ideas and/or meaning.”

Clearly, this system supports the subgoal of “evaluating student works.” Any time students have to grade a student performance, even if they’re not in the performing ensemble, it supports this goal. According to the Illinois State Board of Education, this goal is important because

Through observation, discussion, interpretation and analysis, students learn the “language” of the arts. They learn to understand how others express ideas in dance, drama, music and visual art forms. In addition to acquiring knowledge essential to performance and production, students become arts consumers (e.g., attending live performances or movies, purchasing paintings or jewelry, or visiting museums) who understand the basic elements and principles underlying artworks and are able to critique them.

As such, we encourage all student voters to post a comment to support their vote. Start with a statement like, “I gave it a thumbs up because …” Start a discussion. Again and again.

About the Author

Paul Katula
Paul Katula is the executive editor of the Voxitatis Research Foundation, which publishes this blog. For more biographical information, see the About page.