
Cornell University’s Center for Teaching Innovation (CTI) created the Canvas Course Spotlight program, which offers Cornell students the chance to highlight and recognize design elements of Canvas courses that have positively impacted their learning experience. The program solicits nominations from students via an online survey. Nominations are reviewed, shared with the nominated instructors, and analyzed to identify standout courses and overall trends in the cumulative data.
Objectives
- Amplify great course design
- Help faculty understand the student experience
Context
In Fall 2024, CTI surveyed Cornell faculty to learn about their experiences in Canvas, the university’s learning management system (LMS). As we analyzed the results of this survey, we realized how valuable it would be to have feedback on Cornell students’ experience in Canvas: what types of course facilitation help them learn, and where their learning may run into roadblocks because of how the platform is being used. We could then share those responses with faculty in an actionable way, helping everyone use Canvas to facilitate and enhance student learning.
We explored creating a student-centered survey program of our own, with inspiration from programs at other institutions, including our neighbor Ithaca College, the University of Notre Dame, Northwestern University, the University of Minnesota, the University of Iowa, Chapman University, and Duke University.
Implementation
Research & Development
Communications
We developed a multi-channel plan to reach students. Our most effective tool is the Canvas global announcement banner. To make this program stand out from standard announcements, we modified the banner to match the program's branding and color scheme.

Our other promotional channels included:
- A student-facing webpage explaining the program and linking to the survey.
- An image & link on our custom Canvas login page.
- Postings on the university's event calendar.
- Digital boards across campus.
- Submissions to institutional student email mailers.
You can find example communications here.
Survey
We gathered a group of instructional designers to review survey examples from other institutions and design our own. Our goal was to gather authentic, open-ended feedback, but we also wanted to help students think about the types of positive impacts a course can have.
Instead of a rigid checklist, we provided a descriptive intro with examples, followed by one primary, open-ended question.
Have you ever thought, 'I wish all my instructors used Canvas like this!'? Now's your chance to give a shoutout to those standout courses...
There are many ways a professor may use Canvas to positively impact your learning. Examples might include:
- Organizes class materials clearly and logically.
- Uses multimedia (videos, audio clips, etc.) to make learning more engaging.
- Provides opportunities to interact with peers and content.
- Communicates clearly and consistently through announcements.
- ...and others.
What aspect(s) of this course's use of Canvas would you like to highlight? Please be descriptive and specific about how it positively impacted your learning experience.
You can find the full language of the survey here.
Nomination Collection
Our campus-wide digital push to the survey did not begin until the nomination collection window opened, allowing users to take part instantly. To complement this broad digital push, we ran an in-person tabling session on the second day of survey availability. While this drove a smaller number of submissions than the Canvas banner, its real value was in personal engagement. We set up in a central, high-traffic building during lunchtime with a prize wheel and learning technology swag for survey participants. This face-to-face time gave us the opportunity to make our case directly to students. We could explain the "why" behind the program and help them understand the real impact their feedback could have on faculty and future course design.
Review and Analysis
Initial Triage
The program lead manually reviewed every single submission. The goal at this stage was not to categorize, but simply to filter for appropriateness. We were looking for comments that were positive, constructive, and actually about the Canvas course; 11 comments that did not meet these criteria were removed.
Thematic Analysis
The program lead read all remaining, qualified comments multiple times to identify naturally emerging categories. This involved:
- Tagging: Applying descriptive tags to each comment (e.g., "chronological modules," "homepage direct links," "timely grading").
- Refining: Consolidating similar tags (e.g., "homepage navigation" and "homepage direct links").
- Grouping: Combining refined tags into larger, high-level categories (e.g., Organization, Navigation, Assessment).
This qualitative process was vital. Even though our survey prompt included examples, this bottom-up analysis allowed us to see what actually mattered most to students.
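To make the tagging workflow concrete, here is a minimal sketch in Python of how the refining and grouping steps could be represented. The tag and category names are taken from the examples above, but the mapping itself is illustrative rather than our actual coding scheme.

```python
# Illustrative sketch of the refining and grouping steps; the mappings
# below are hypothetical examples, not the program's actual coding scheme.
from collections import Counter

# Refining: fold near-duplicate tags into a single canonical tag
REFINED_TAG = {
    "homepage direct links": "homepage navigation",
    "chronological modules": "chronological modules",
    "timely grading": "timely grading",
}

# Grouping: roll refined tags up into high-level categories
CATEGORY = {
    "homepage navigation": "Navigation",
    "chronological modules": "Organization",
    "timely grading": "Timeliness",
}

def categorize(comment_tags):
    """Return the set of high-level categories a comment touches."""
    refined = {REFINED_TAG.get(tag, tag) for tag in comment_tags}
    return {CATEGORY[tag] for tag in refined if tag in CATEGORY}

# Example: tally how often each category appears across all comments
comments = [
    ["chronological modules", "homepage direct links"],
    ["timely grading"],
]
counts = Counter(cat for tags in comments for cat in categorize(tags))
print(counts)  # Counter({'Organization': 1, 'Navigation': 1, 'Timeliness': 1})
```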
Weighted Scoring System
From the thematic analysis, we created a weighted scoring system based on how frequently students mentioned each category. This ensured our final selections would reflect what students valued most.
Our category weights were:
- Organization = 72 points
- Navigation = 44 points
- Assessment = 28 points
- Communication = 24 points
- Clarity = 17 points
- Timeliness = 11 points
- Flexibility = 6 points
Standout Selection
Initial Review: We assembled a team of four CTI staff (instructional designers and educational specialists) to review all qualified recognitions using a formal set of guidelines. Each nomination was reviewed independently; reviewers did not access the courses, judging only the quality of the student comments. Using the guidelines, reviewers assigned "checkmarks" for specific examples and impacts. Through this process, we identified 13 courses that received two checkmarks from at least two reviewers. An additional five courses were included after a secondary consultation, bringing our total pool to 18.
Final Selection: Courses that were praised across multiple high-weight categories (like Organization and Navigation) received the highest scores. This process revealed three clear standouts, each recognized in five or more categories.
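As a rough illustration of how this scoring can come together, the sketch below assumes a course's score is simply the sum of the weights for each category in which it was recognized. The weights match the list above; the courses and the simple-sum rule are assumptions for illustration, not our exact formula.

```python
# Illustrative sketch: score a course by summing the weights of every
# category it was recognized in. Weights match the list above; the
# courses below are hypothetical.
CATEGORY_WEIGHTS = {
    "Organization": 72,
    "Navigation": 44,
    "Assessment": 28,
    "Communication": 24,
    "Clarity": 17,
    "Timeliness": 11,
    "Flexibility": 6,
}

def course_score(recognized_categories):
    """Sum the weights for each category a course was recognized in."""
    return sum(CATEGORY_WEIGHTS[c] for c in recognized_categories)

# Hypothetical pool of reviewed courses and the categories students praised
pool = {
    "Course A": {"Organization", "Navigation", "Assessment", "Clarity", "Timeliness"},
    "Course B": {"Organization", "Communication"},
    "Course C": {"Navigation", "Flexibility"},
}

for course in sorted(pool, key=lambda c: course_score(pool[c]), reverse=True):
    print(course, course_score(pool[course]))
```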
Final Verification: As the very last step, the program lead entered these three standout courses (with the explicit permission of the associated faculty) to verify that the student praise was logical and accurate.
Challenges
Determining the right program cadence: Do you keep your survey open year-round? Should the program run annually or every semester?
For the second run of the program, we opted for Fall 2025 (since ~80% of the Spring 2025 nominations were for Spring 2025 courses, and we wanted to ensure equity between the main semesters). This time around, we only let students choose the current semester (Fall 2025) or the recent shoulder semester (Summer 2025). We received about half as many nominations as in the first run of the program and are planning to run it again in Spring 2026. After that run, we will again review future cadence. The key is maintaining enough student engagement to yield enough nominations to actually draw conclusions, without overwhelming ourselves with too much feedback to review.
Achieving a critical mass of submissions: Will you have enough nominations to draw any conclusions or select any winners? In our first run of the program, we had not publicized any sort of award ceremony or other outcomes beyond sharing nominations (what we then called “recognitions”) with the nominated faculty. This was a strategic choice, as we were unsure how successful we would be in gaining student participation.
In a preliminary review of the Fall 2025 nominations, it was clear that it would be hard to choose more than a single “winning” course and that, although new tags were trending (consistency and self-contained), there was not enough data to draw any major new conclusions.
Ensuring quality feedback from incentives: While a prize wheel is effective at garnering student attention, requiring students to complete the survey in order to take a spin led to less robust, less thoughtful nominations.
In Fall 2025, we opted not to require survey completion in order to spin the wheel. Instead, we leaned into making personal connections, ensuring students understood the value of taking part in the program.
Student confusion over program ownership: The Center for Teaching Innovation is primarily a faculty-facing organization at Cornell. As such, while tabling, it wasn’t surprising that students assumed we were from Canvas (Instructure).
Devising a “fair” way to select “winners”: Without knowing whether we would even declare any winners, and with no idea what kind of recognitions students would submit, it was hard to come up with a process for determining which courses stood out from the rest. In the end, we settled on a mix of two measures: rating the nominations themselves (whether students shared both “what” worked for them in a course and “how” it impacted them) and assigning points based on the categories that were prominent across most courses. We lucked out that three courses happened to show well on both metrics. We’ll see next time around whether this method really is fair.
How to Adapt This for Your Institution
Determine the scale and scope of your program — and remember that it can be adapted over time.
Start by defining exactly what you hope to achieve. What do you hope to learn or prove? Who is the audience for your findings? Answering these questions early will guide your decision-making down the line. Also, consider how you might "scaffold" your program over multiple years. When we first ran this, our agreed-upon scope was simply gathering recognitions and sharing them with faculty. We hoped to expand to blog posts and an awards ceremony, but we treated those as "stretch goals" rather than requirements.
Assess your existing relationship with students.
Reflect on whether your department has the recognition and trust necessary to motivate students to participate. If your team is primarily faculty-facing, students may not know who you are or trust that giving you their opinions is worthwhile. If you aren't sure they would participate out of the "kindness of their hearts," consider incentives. Some schools offer gift cards, while others (like us) use prize drawings. Reach out to your vendors to see what swag they might be able to contribute to help you promote the program.
Decide on your data strategy: Pre-defined categories or open-ended discovery?
Are there specific elements of course design you are already looking to highlight? Some institutions use pre-determined award categories (e.g., "Best use of video") or Likert scales for specific features. This makes analysis easier but narrows the feedback. Alternatively, you can give students the freedom to define what was beneficial to their learning experience. The choice depends on whether you are looking for specific validation or general discovery—and how much energy you can commit to reviewing the open-ended responses.
Plan your selection methodology early.
If you plan to declare "winners," how will you choose them? If you use Likert scales, will the winner simply be the highest score? Do courses with more nominations get extra weight? There is no "right" way to choose, and you might not be able to finalize your method until you see the actual data. However, your selection method should align with your goals. If your goal is to amplify student voices, you may want to minimize instructional designer judgment calls. If your goal is to find champions for specific tools (e.g., peer annotation), you may want to use student nominations merely as a starting point for an internal expert review.
Reach out to those who have done it before.
This blog includes a non-exhaustive list of other institutions that have run programs like Cornell’s Canvas Course Spotlight. Reach out to those folks to see what they are willing to share. Don’t have time for deep research? Use the resources provided here to get started. And don’t hesitate to post on this blog to ask questions of fellow community members as you explore!
Have you run a course recognition program at your institution, or tried something similar to learn more about the student experience in Canvas? If yes, what strategies have you used to recognize standout courses or instructors in Canvas? What lessons did you learn along the way that might help others launch a similar program? I look forward to discussing these questions with you in the comments.