9 AI Rubrics
Jessica Kahlow
This chapter applies to various contexts and has been used to create rubrics in many disciplines, including accounting, biology, communication, criminal justice, education, nursing, physics, political science, and social work. It has also been used in undergraduate and graduate courses and for assignment types ranging from short-answer quiz questions to discussion boards to research papers.
After spending hours perfecting your assignment directions, you might be glad to just be done with that step; but an important—yet often overlooked—component of assignment design is rubric creation. This chapter discusses the benefits of rubrics for both instructors and students. Then, it outlines directions and an example of how to use OpenAI’s ChatGPT version 3.5 or Microsoft Copilot to make rubrics for your existing assignments.
Why Use Rubrics
Rubrics are assessment tools that outline important criteria and expectations for specific graded assignments or discussions (Andrade, 2005; Taylor et al., 2024). Rubrics are a way for instructors to communicate to students what counts as quality work by outlining criteria specific to the assessment and providing multiple ratings for each of those criteria (Brookhart & Chen, 2015). In other words, rubrics describe both the desirable qualities expected in student work and common pitfalls, which helps students think through the assignment and produce higher-quality work (Andrade, 2005). Good rubrics help ensure consistent grading and lead to more objective evaluations of student work (Chan & Ho, 2019). In short, we know rubrics are helpful for both instructors and students.
How Rubrics Help Instructors
Instructors have long held predispositions against rubrics (Reddy & Andrade, 2010). Still, Reddy and Andrade's (2010) literature review notes that when instructors do use rubrics, they appreciate that the rubrics help them grade more consistently and efficiently.
- Rubrics help instructors give more informative feedback without spending as much time doing so (Andrade, 2005; Gezie et al., 2012).
- Rubrics help instructors provide more consistent evaluations of student work (Taylor et al., 2024), particularly when multiple instructors are grading the same assessment (Chan & Ho, 2019).
- Artificial Intelligence (AI) accelerates the rubric creation process by efficiently generating rubrics based on input information (Estrada & Fabián, 2024).
How Rubrics Help Students
The benefits of rubrics for student learning have also been well documented in the literature.
- Rubrics help students understand the goal of the assignment and focus their efforts accordingly (Andrade, 2005; Reddy & Andrade, 2010; Gezie et al., 2012), which also helps boost their confidence and lessen their anxiety about the assessment (Taylor et al., 2024).
- Students perceive assessments with rubrics as more transparent (Chan & Ho, 2019; Gezie et al., 2012; Jonsson, 2014; Reddy & Andrade, 2010).
- Students perform better on assessments and have deeper learning when there are rubrics available (Reddy & Andrade, 2010).
- Feedback received from descriptive rubrics improves students’ learning (Andrade, 2005).
How to use AI for Rubrics
Making good rubrics with specific and detailed criteria for each rating can be difficult and time-consuming. Enter ChatGPT. ChatGPT can transform existing assignment and discussion prompts into detailed and specific rubrics. ChatGPT is great at restructuring information (Estrada & Fabián, 2024), so it is a great way to make a rubric if you already have the assignment directions written. This approach expedites the rubric-making process and helps ensure that the rubric aligns with the existing assessment. Estrada and Fabián (2024) found that the AI-generated rubric in their study provided a clear and transparent framework for evaluating student work. ChatGPT thus allows instructors to efficiently generate rubrics, so long as the goal is to use the generated rubric, or the parts of it that align with the goals and outcomes of the assessment (Estrada & Fabián, 2024). While these instruments are not fully replicable, because ChatGPT produces a unique response in each interaction, its responses can serve as a great starting point for developing rubrics. Estrada and Fabián (2024) recommend developing a clear and specific prompt: the message sent to ChatGPT should include instructions for what you want ChatGPT to do, the format you want the output in, and any other restrictions you have for the rubric. As with anything generated with AI, they note the importance of reviewing the generated content for accuracy.
Step 1: Select the assignment you want to make a rubric for.
Before you can use ChatGPT to make a rubric, you need to have your assignment directions and grading criteria finalized.
The assignment directions outlined in Box 1 are from a graduate course in assessment and evaluation. The assignment is for students to write a 1-2 page reading reaction where they engage with the materials assigned in that module.
Box 1: Reading Reaction Assignment
Assignment Directions
After reading the assigned materials for the module, you will write a two-page double-spaced (or one-page single-spaced) paper to react to the assigned readings in this module. Do not simply summarize the reading. Instead, engage the content by agreeing, disagreeing, applying, questioning, and/or extending the course materials. In doing so, you may write about anything that stood out to you as valuable in the materials. I have included questions below to get you thinking about the material more, but you can reply to some or none of these.
- How do diverse assessment methods like simulations and e-portfolios enhance learning engagement and authenticity?
- What are the benefits and challenges of integrating peer assessment and multimedia in educational settings?
- Reflect on how technology can be effectively utilized to improve assessment practices in your own teaching or learning context.
- What are the key factors influencing the high-stakes nature of assessment in higher education?
- How can educators balance the pressures of quality assurance, employability needs, and diversity considerations in assessment practices?
- Discuss strategies or reforms that could potentially alleviate challenges such as grade inflation and student dissatisfaction.
- Compare and contrast formative and summative assessment approaches. How can each type contribute to enhancing student learning outcomes?
- Reflect on how assessment data can be effectively used to inform instructional decisions and program improvements.
- Discuss the importance of aligning assessment strategies with educational goals and student needs.
- Why is it important for assignments to resonate culturally and hold significance for students? How can instructors achieve this in diverse educational settings?
- Reflect on your own experiences with culturally affirming assignments. How did they impact your learning and engagement?
- Discuss practical strategies for communicating course design and assignment expectations effectively through the syllabus.
- How are you doing? Is there anything you would like me to know or share?
Guidelines
- Refer to the rubric for additional grading criteria.
- Do not simply summarize the reading. Instead, engage the content by agreeing, disagreeing, applying, questioning, and/or extending the course materials.
- Cite any additional materials used in APA format.
Step 2: Decide if you have any specific criteria and ratings in mind.
Criteria
You do not need to know exactly what your criteria will be, but it does help to have a general idea of what they should be. Ideally, the criteria for the rubric should align with the assessment and the learning outcomes associated with that assessment (Chan & Ho, 2019). Generally, you should aim for about three to nine criteria for any assessment. Unless you tell it to give you fewer, ChatGPT will often give you many criteria, so even if you do not have specific criteria in mind, you still might want to specify the number you are looking for.
Ratings
As Chan and Ho (2019, p. 542) describe, writing the descriptions for each rating is the most challenging part of creating rubrics. This is where ChatGPT really shines when you use it to help make rubrics. If you want all your rubrics to use a consistent rating scale, tell ChatGPT which one you want to use or, at the very least, the number of ratings you want to have. Some common ratings include:
- Meets expectations, approaching expectations, does not meet expectations (this is my favorite)
- Excellent, Average, Poor
- A, B, C, D, F or 5, 4, 3, 2, 1
In any case, you will want to include an explanation of the expectations for each rating, including a description of the characteristics of that level of work. This is where ChatGPT comes in.
Step 3: Write and enter your ChatGPT prompt
For this example, I used this prompt: Can you make a rubric in table format that uses the following ratings: meets expectations, approaching expectations, and does not meet expectations; and the following criteria: content, examples, and writing quality; for this assignment: [here I copied and pasted the entire assignment directions from Box 1]?
Now that you have your prompt and any criteria or ratings identified, you can enter your prompt into ChatGPT using the following template:
Box 2: Sample Template
“Can you make a rubric in table format with the following ratings [insert ratings you selected in step 2] and with these criteria [insert criteria you identified, if any, in step 2] for this assignment: [copy and paste assignment directions from Box 1]?”
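If you generate many rubrics, you may find it convenient to fill in the Box 2 template programmatically rather than by hand. The sketch below is illustrative only and is not part of the chapter's workflow; the function name and structure are my own assumptions, not an established tool.

```python
# Illustrative sketch: fill the Box 2 template with your own ratings,
# criteria, and assignment directions. build_rubric_prompt is a
# hypothetical helper, not an existing library function.

def build_rubric_prompt(ratings, criteria, directions):
    """Assemble a ChatGPT prompt from the Box 2 template."""
    return (
        f"Can you make a rubric in table format with the following ratings: "
        f"{', '.join(ratings)}; and with these criteria: {', '.join(criteria)}; "
        f"for this assignment: {directions}?"
    )

prompt = build_rubric_prompt(
    ratings=["meets expectations", "approaching expectations", "does not meet expectations"],
    criteria=["content", "examples", "writing quality"],
    directions="[paste the full assignment directions from Box 1 here]",
)
print(prompt)
```

The resulting string can then be pasted into ChatGPT (or sent via an API) exactly as you would the hand-written prompt from Step 3.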
The complete response from ChatGPT is provided in Box 3.
Box 3: ChatGPT’s Rubric
Absolutely! Here is a rubric for the Reading Reaction Assignment using the ratings “Meets Expectations,” “Approaching Expectations,” and “Does Not Meet Expectations”:
Reading Reaction Assignment Rubric
| Criteria | Meets Expectations | Approaching Expectations | Does Not Meet Expectations |
|---|---|---|---|
| Content | Thoroughly engages with the reading material. Demonstrates a deep understanding by agreeing, disagreeing, applying, questioning, and/or extending the course materials. Directly addresses relevant themes or questions. | Engages with the reading material, showing some understanding. Response includes some agreement, disagreement, application, questioning, and/or extension, but may lack depth or direct focus on relevant themes or questions. | Minimally engages with the reading material. Lacks depth, analysis, or a direct focus on relevant themes or questions. Response may be mostly summary or off-topic. |
| Examples and Reflections | Effectively uses specific examples from the readings to support analysis. Includes thoughtful reflections that connect personal insights or experiences to the readings. Demonstrates originality and depth in reflections. | Uses examples from the readings, but may lack specificity or relevance. Reflections show some connection to personal insights or experiences but are not fully developed. Demonstrates some originality in reflections. | Lacks specific examples from the readings. Reflections are superficial or unrelated to personal insights or experiences. Lacks originality or depth in reflections. |
| Writing Quality | Writing is clear, coherent, and well-organized. Free of significant grammatical or spelling errors. Follows assignment guidelines, including length, format, and citation requirements. | Writing is mostly clear but may have minor organizational issues or some grammatical and spelling errors. Mostly follows assignment guidelines, with minor deviations in length, format, or citation. | Writing is unclear, poorly organized, or difficult to follow. Contains significant grammatical or spelling errors. Does not follow assignment guidelines, with major deviations in length, format, or citation. |
This streamlined rubric should provide clear guidance on the key aspects you want to assess in the Reading Reaction Assignment.
Note: The information in this box was generated completely by ChatGPT 3.5 using the previously disclosed prompt.
Step 4: Regenerate, revise, and make your final rubric
Then, I used the parts of ChatGPT's rubric that I liked to finalize the rubric for the assignment, which is provided in the table below. Be sure to also assign point values that match those in your assignment and syllabus, since ChatGPT sometimes puts random values in there for you. Keep in mind that the point value you assign to each criterion helps signal its priority or importance (Chan & Ho, 2019).
Most of the final rubric in Box 4 looks much like the rubric ChatGPT gave me in Box 3. I only summarized or combined pieces of information to make it more workable for the assignment and my preferences.
Box 4: Final Edited Rubric
| Criteria | Meets Expectations | Approaching Expectations | Does Not Meet Expectations |
|---|---|---|---|
| Content | 10 to > 7 pts: Thoroughly engages with the reading material by agreeing, disagreeing, applying, questioning, and/or extending the course materials to address relevant themes or questions. | 7 to > 5 pts: Engages with the reading material, showing some understanding but may lack depth or focus on relevant themes or questions. | 5 to > 0 pts: Minimally engages with the reading material; response is primarily a summary of the readings. |
| Examples and Reflection | 10 to > 7 pts: Provides insightful analysis and reflection on key points. Clearly addresses at least one of the provided questions or independently raises valuable points. Effectively uses specific examples from the reading and/or personal experience to support points. | 7 to > 5 pts: Offers some analysis and reflection but may lack depth or clarity. Partially addresses at least one provided question or raises some points. Uses examples, but they may be vague or not fully support the points made. | 5 to > 0 pts: Lacks analysis and reflection. Does not address provided questions or raise independent points. Does not use examples, or the examples provided are irrelevant or insufficient. |
| Writing Quality | 5 to > 4 pts: Writing is clear, coherent, and well-organized. Free of significant grammatical or spelling errors. Follows assignment guidelines, including length, format, and citation requirements. | 4 to > 2 pts: Writing is mostly clear but may have minor organizational issues or some grammatical and spelling errors that do not hinder understanding. Mostly follows assignment guidelines, with minor deviations in length, format, or citation. | 2 to > 0 pts: Writing is unclear, poorly organized, or difficult to follow. Contains significant grammatical or spelling errors. Does not follow assignment guidelines, with major deviations in length, format, or citation. |
Note: The information in this box is based on the content generated by ChatGPT 3.5 in Box 3.
Once you have everything just the way you want it, you can copy and paste the rubric into Canvas or whatever LMS you are using. Refer to the links below for help adding a rubric to your assignments and discussions in Canvas.
Then, as always, be sure to test out the rubric. If possible, use a previous student's assignment to ensure the rubric works the way you want it to before using it in a course. After you use it in a course, consider how well students met the criteria and whether the rubric was easy to grade with; you want to make sure the rubric aligns well with the assignment, the outcomes, and the quality of work you get from students. As you use the rubric for grading, be sure to note anything you did not like or would want to change for next time.
- If the rubric is resulting in unexpected grades, you may need to rebalance the number of points for the criteria to better align with your expectations.
- If you consistently scored students lower on specific criteria, consider whether you can make the wording clearer or more objective.
- If you had to frequently adjust how you interpreted the rubric, you may need to revise your criteria to better match the outcomes associated with the assignment.
- If a criterion was consistently difficult to grade, you might need to add more criteria or split one criterion into multiple so you can describe them with a bit more detail.
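When rebalancing points, it can help to check how scores fall into the rating bands. The sketch below is illustrative only, using the Content ranges from Box 4 (10 to > 7 pts meets expectations, 7 to > 5 pts approaching, 5 to > 0 pts does not meet); the function and its default thresholds are my own assumptions, not part of the chapter's workflow.

```python
# Illustrative sketch: map a criterion score to its rating band using the
# Content point ranges from Box 4. "10 to > 7" means scores above 7 up to
# 10, so each floor is an exclusive lower bound.

def rating_for_score(score, max_points=10, meets_floor=7, approaching_floor=5):
    """Return the Box 4 rating band that a criterion score falls into."""
    if not 0 <= score <= max_points:
        raise ValueError(f"score must be between 0 and {max_points}")
    if score > meets_floor:
        return "meets expectations"
    if score > approaching_floor:
        return "approaching expectations"
    return "does not meet expectations"

# e.g., an 8/10 on Content falls in the "meets expectations" band,
# while exactly 7/10 falls in "approaching expectations"
print(rating_for_score(8), "|", rating_for_score(7))
```

Adjusting `meets_floor` and `approaching_floor` (or `max_points`) lets you preview how a rebalanced point scheme would reclassify past scores before you commit to it.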
Remember that regularly reflecting on and revising your rubrics helps ensure they do what they are supposed to do and remain helpful for both instructors and students.
Disclosures
This chapter contains AI-generated content; the example rubric was created using ChatGPT 3.5.
References
Andrade, H. G. (2005). Teaching With Rubrics: The Good, the Bad, and the Ugly. College Teaching, 53(1), 27–31. https://doi.org/10.3200/CTCH.53.1.27-31
Brookhart, S. M., & Chen, F. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343–368. https://doi.org/10.1080/00131911.2014.929565
Chan, Z., & Ho, S. (2019). Good and bad practices in rubrics: The perspectives of students and educators. Assessment & Evaluation in Higher Education, 44(4), 533–545. https://doi.org/10.1080/02602938.2018.1522528
Estrada, B. O., & Fabián, A. D. H. (2024). Generación de rúbricas con herramientas de inteligencia artificial para la evaluación de aprendizajes en educación superior. DIDAC, 84, Article 84. https://doi.org/10.48102/didac.2024..84_JUL-DIC.211
Gezie, A., Khaja, K., Chang, V. N., Adamek, M. E., & Johnsen, M. B. (2012). Rubrics as a Tool for Learning and Assessment: What Do Baccalaureate Students Think? Journal of Teaching in Social Work, 32(4), 421–437. https://doi.org/10.1080/08841233.2012.705240
Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment & Evaluation in Higher Education, 39(7), 840–852. https://doi.org/10.1080/02602938.2013.875117
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448. https://doi.org/10.1080/02602930902862859
Taylor, B., Kisby, F., & Reedy, A. (2024). Rubrics in higher education: An exploration of undergraduate students’ understanding and perspectives. Assessment & Evaluation in Higher Education, 49(6), 799–809. https://doi.org/10.1080/02602938.2023.2299330