It wouldn’t be Oscar season without a few snubs and surprises. This year, however, the controversy seems to have called into question not just a few nominations, but the nomination and award selection process as a whole.
Many question the fairness and validity of the Oscars’ selection process, pointing to what they see as a visible lack of diversity in the list of nominees. Others say the awards are meaningless – an outdated and overpriced celebration of the entertainment elite. Whatever your opinion may be, you can’t deny the massive marketing power that comes with an Academy Award nomination. It can determine the financial success of a film and ultimately shape careers.
The more I learn about how nominations are collected and winners are decided, the more I realize how paradoxical the process is. To be nominated, you must be seen by enough voters; to be seen by enough voters, you need the marketing power that often comes only with a nomination. But it’s not just marketing; this honor carries an implication that the artist’s work stands above the rest. If that’s the case, does the Academy’s method of evaluation actually measure quality – or does it measure something else entirely?
In higher education, professionals regularly ask themselves: How can we measure success and quality? We consider this when reviewing programmatic efforts across campus; we consider it when assessing student performance in the classroom. There are a variety of assessment methods that can be used to measure everything from learning within the classroom to programmatic effectiveness on an institutional level. Choosing the proper method of assessment is a meaningful part of the process. So is critical reflection after the assessment, to see whether there are ways to improve in the future. With all of the controversy surrounding this year’s nominees, perhaps this is the perfect time for the Oscars to borrow from the field of education and assess the effectiveness of their own process.
To appreciate how much the Oscars could benefit from a change, it’s important to understand how inadequate the current process is. First, votes are cast by members only. Even to be considered for membership, a person must either be nominated for an award or sponsored by an existing member. The voting members are sent a blank ballot to vote within their own branch (e.g., acting, directing, cinematography); it can be filled out on paper or online. Voters simply write down the names of the individuals or works they’d like to nominate. There are no descriptions or criteria to consider.
Although voting members may list up to five people on their ballot, each member gets only one vote per nomination. If their first-choice nominee is eliminated or reaches the minimum needed for a nomination before their ballot is counted, their second choice becomes their vote, and so on. It’s possible for someone to be listed on every single ballot and still not receive a nomination if they never appear high enough in the rankings. And there’s no way to verify whether voters have actually seen the work, or whether they’re simply voting for friends, colleagues, or names they’ve seen pop up on Twitter.
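The preferential counting described above can be sketched in code. This is a deliberately simplified illustration – the candidate names, elimination rule, and tie-break below are my own assumptions, not the Academy’s actual procedure – but it shows how a name can appear on every ballot and still miss a nomination:

```python
from collections import Counter

def count_nominations(ballots, slots):
    """Simplified preferential count: each ballot counts once, for its
    highest-ranked surviving candidate; the last-place candidate is
    eliminated until only `slots` names remain."""
    candidates = {name for ballot in ballots for name in ballot}
    while len(candidates) > slots:
        tally = Counter({c: 0 for c in candidates})
        for ballot in ballots:
            for name in ballot:
                if name in candidates:
                    tally[name] += 1  # counts for top surviving choice only
                    break
        # Drop the candidate with the fewest current votes
        # (ties broken alphabetically, purely for determinism).
        loser = min(sorted(candidates), key=lambda c: tally[c])
        candidates.discard(loser)
    return candidates

# "B" is everyone's second choice -- listed on every ballot -- yet is
# eliminated first because no one ranked them highest.
ballots = [["A", "B"], ["A", "B"], ["C", "B"], ["C", "B"]]
print(sorted(count_nominations(ballots, slots=2)))  # -> ['A', 'C']
```

The sketch makes the paradox concrete: universal second-place support counts for nothing once every ballot has already been claimed by a first choice.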
Once the nominations are selected, voters are expected to watch all the nominated performances and movies. But who can guarantee that this will actually happen? In 2014, at least two voters openly admitted to not having watched that year’s Best Picture winner, 12 Years a Slave, despite voting for it. Meanwhile, the selection process for Best Actor or Best Director is as simple as selecting an option on a multiple-choice question (as illustrated in the unofficial ballot provided by the Academy). This does not require any consistent evaluation of the performance or work. Voters are essentially checking a box based on what they liked the most.
This is precisely where the Academy can learn something from higher education professionals. A subjective statement such as “I like it” reveals very little about something’s quality or effectiveness. So what could be used to help the Oscars maintain their reputation for nominating and selecting the most deserving (and not simply the most commonly named) works in entertainment?
I’d like to nominate rubrics.
What if voters had a tool to evaluate the quality of the work and let the data speak for itself? We often talk about our many methods of collecting data, including surveys and focus groups, while recognizing that different questions require different methods. Like many campus administrators, the Oscar organizers have to ask themselves: Do we want to know which performers voters like the most, or do we want to identify the performers voters believe are the best at their craft? This important distinction is the difference between a collection of names and an evaluation of skill and achievement. Both measures matter, but we should be collecting the data that answers the question we are actually asking. In this case, the Oscars may be using a data collection method that doesn’t allow them to assess what they want to assess: quality of performance.
The Academy Awards organizers have already begun talking about how to improve their nominating and voting process. My hope is they will explore data collection and evaluation as a way to refocus on great film performances instead of reacting to controversy. If they pursue this path, rubrics would be a great place to start. Rubrics allow you to identify a set of criteria to evaluate the overall performance on a project, task, or assignment. You can then define the achievement levels for each or some of these criteria. This means that each film or performer could be evaluated based on a consistent set of measures and expectations of high quality work in film.
Some may argue that you can’t standardize art, and this may be true. However, that doesn’t mean we can’t attempt to measure it. It is possible to describe an achievement level for each criterion, which is why rubrics can be especially helpful for evaluating learning or performance in the classroom. This flexibility also makes the tool an especially good fit for the Oscars. Using a half-naked rubric (as seen in the example below) – one in which some criteria are left undefined – would allow voters to worry less about differentiating between achievement levels. Such a rubric could accommodate performances that not only meet the definition of high-quality work but go above and beyond expectations, as well as those that achieve the elusive, indescribable “it factor.”
The ability to evaluate a performance while considering various dimensions of what constitutes high-quality work in film may allow voters to identify performances they wouldn’t otherwise recognize. It may also encourage evaluators to separate their impressions of the performance from aspects of the film that are unrelated to it, or that have their own category for evaluation (e.g., costume and set design). The aggregation of scores would allow Academy voters to look more objectively at how they responded to the many facets of a performance. This would help them look beyond the initial “feeling” they had when viewing the work, and reduce the influence (both positive and negative) of film promotion. Perhaps best of all, rubrics could help voters compile their ballots for both nominees and overall winners with a broader perspective – one that values the art of the performance over the familiarity of a name or title.
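As a rough sketch of what a half-defined rubric and its score aggregation might look like, here is a minimal example. The criteria, point scale, and level descriptions are hypothetical placeholders of my own invention, not anything the Academy has proposed; note that some criteria carry described levels while others are deliberately left open:

```python
# Hypothetical half-defined ("half-naked") rubric: some criteria have
# described achievement levels, others are left to the voter's judgment.
RUBRIC = {
    "emotional range": {4: "fully inhabits the role", 2: "uneven", 1: "flat"},
    "physicality": None,   # levels intentionally undefined
    "it factor": None,     # the elusive, indescribable quality
}

def aggregate(scorecards):
    """Average each criterion across all voters' scorecards,
    then sum the averages into an overall total."""
    averages = {
        criterion: sum(card[criterion] for card in scorecards) / len(scorecards)
        for criterion in RUBRIC
    }
    return averages, sum(averages.values())

cards = [
    {"emotional range": 4, "physicality": 3, "it factor": 2},
    {"emotional range": 3, "physicality": 4, "it factor": 4},
]
averages, total = aggregate(cards)
print(averages, total)  # per-criterion averages and an overall score
```

Because every voter scores the same criteria, the aggregate reflects consistent dimensions of the work rather than a single gut reaction.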
We may not all agree on the value of an awards show like the Oscars. Still, at the very least, if the Academy of Motion Picture Arts and Sciences would like to hold its reputation for recognizing great work, the organizers should make sure their evaluation processes are built to do just that. This year’s Oscar controversy comes at a time when social media and technological innovation have allowed more artists to be visible in more formats each year. We don’t watch movies the way we used to – surely our ability to evaluate and recognize this work deserves the same evolution.
Before joining Campus Labs, Siobhan May worked as a student engagement coordinator at the University of Delaware. She has also been an adjunct faculty and staff member for Adelphi University.