Evidence Records

Part 4 of 4 in a series of blog posts about Standards-Based Grading

No more MarkBook

We’ve arrived at the last piece in the A&E puzzle: the evidence record. This is where you will record the levels each student received for the various expectations on each test/task. MarkBook (a marks management software) no longer meets our needs; it simply calculates an average based on the weighting of each test, which is set out ahead of time and remains the same for every student. An average is not necessarily the best representation of what a student knows in a course. Growing Success describes how to determine a student’s final grade as follows:

“Determining a report card grade will involve teachers’ professional judgement and interpretation of evidence and should reflect the student’s most consistent level of achievement, with special consideration given to more recent evidence.”

Most consistent, more recent.

The idea is that if a student performs poorly at the beginning of a course, but puts in the work needed to catch up and learn the material, they should be assigned a mark that reflects their ability and knowledge at the end of the course, despite the rough start. An average works against this student by always pulling their grade down due to the low marks early in the course.

For example, let’s look at the marks below that skydiving students received for packing parachutes:

All 3 students have the same average. But when asked “which student would you want to pack your parachute?” I believe we would all have the same answer. So clearly, an average does not always give us the most accurate picture of student achievement.
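To see the arithmetic behind that claim, here’s a minimal sketch in Python (with made-up marks, not the actual ones from the parachute example) showing how three students can share the same average while their most recent evidence tells very different stories:

```python
# Hypothetical marks (levels out of 10), invented purely for illustration:
students = {
    "Student A": [9, 8, 7, 6, 5, 4, 3, 2],   # started strong, now struggling
    "Student B": [2, 3, 4, 5, 6, 7, 8, 9],   # started weak, now excelling
    "Student C": [5, 6, 5, 6, 5, 6, 5, 6],   # consistent throughout
}

for name, marks in students.items():
    average = sum(marks) / len(marks)
    recent = sum(marks[-3:]) / 3  # most recent evidence only
    print(f"{name}: average = {average:.1f}, recent = {recent:.1f}")
```

All three averages come out to 5.5; only the recent evidence separates the student who has caught up from the one who has fallen behind.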

Evidence Record Template

Below is an example of an evidence record for a math course. It would be printed out as one page per student, which does mean a binder full of these sheets in order to track each student’s grade and progress. Some teachers are not keen on this being a paper-and-pencil tool. Electronic versions of evidence records are starting to appear: the OCDSB is piloting the online MaMa+ version this semester, and Bruce McLaurin over at Glebe C.I. has created some Excel spreadsheets that self-populate the marks into evidence records for you. However, I have found that trying evidence records for the first time with pencil and paper gives you a much better feel for how they work & what they represent. By handwriting a student’s achievement onto this paper record, I notice trends in that student’s marks much better than if I were relying on a program to create the evidence record from my class list of marks.

Math evidence record

Update (2014.12.04): This fall, OCDSB teachers have access to the MaMa (Marks Manager) software online, which allows us to keep electronic evidence records.
Here are some screenshots of what it looks like.
Inputting test marks by class list:
MaMaMarkEntry
MaMa then automatically populates each student’s evidence record with the marks from the test:
MaMaEvidenceRecord

How to use evidence records

Here’s a video clip explaining how to take the levels of achievement earned on a test and record them on your evidence record.
(skip to 2:50 for evidence records)

So a completed evidence record might look something like the following:

Evidence Record U

We can see, for example, the student’s quiz marks (code Q) and test marks (code T). Each code may appear in more than one row because a test can evaluate multiple expectations. The numbers attached to each code indicate the chronological order of the tests: T1 = Test 1 and T2 = Test 2. The exam mark has been labelled with code E in the grey row above each strand. Notice the exam receives 3 separate marks: one for each strand of the course. The summative, code ST, has been recorded in its own row below the term work.
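If it helps to picture the evidence record as data, here’s a rough sketch (hypothetical strands, codes, and levels rather than the record shown above) of the structure just described: one row of coded entries per overall expectation, with the exam marks kept per strand and the summative in its own row:

```python
# A rough, hypothetical sketch of an evidence record's structure; the strands,
# codes, and levels are invented, not taken from the record pictured above.
evidence_record = {
    "term": {
        # one row per overall expectation; the same test code (e.g. "T2") can
        # appear in several rows because one test may evaluate several OEs
        "Strand A - OE1": [("Q1", 2), ("T1", 3), ("T2", 3)],
        "Strand A - OE2": [("T1", 2), ("Q3", 3), ("T2", 4)],
        "Strand B - OE1": [("Q2", 3), ("T3", 3)],
    },
    "exam": {"Strand A": 3, "Strand B": 4},   # one exam mark (E) per strand
    "summative": [("ST", 4)],                 # summative task in its own row
}

# Reading across a row shows one expectation's story over time; the final
# grade is still a professional judgement call, not a computed average.
for expectation, entries in evidence_record["term"].items():
    trail = ", ".join(f"{code}: {level}" for code, level in entries)
    print(f"{expectation} -> {trail}")
```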

What final level of achievement would you assign to the student with the above evidence record?

Here’s a video clip of the type of discussion one has with colleagues while learning to interpret these evidence records, and later on when collaborating to interpret evidence that is inconsistent and therefore difficult to turn into a final grade. Note that they are using a different evidence record in the video than the one I showed above:

This is the type of moderation I have teachers try when I offer workshops to my colleagues about this new A&E. I find it really useful to hear my colleagues interpret the evidence presented, and comforting to see how consistent we usually are in determining a final grade.

Levels vs. Percents

Unfortunately, at this time, the Ministry of Education still requires us to convert each student’s final overall level of achievement into a percent for the report card. The OCDSB has provided the following “peg chart” in order to do so:
PegChartOCDSB

It’s not ideal to be converting the levels back to percents in the old style, but it will do until the Ministry changes report cards to match the levels of achievement from their curriculum documents.
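For anyone curious what that conversion boils down to, here’s a minimal sketch. The percent values are placeholders based only on the standard Ontario level ranges; the real numbers come from the OCDSB peg chart above:

```python
# Illustrative sketch only: the percents below are placeholders based on the
# standard Ontario level ranges (level 1 ~ 50-59%, 2 ~ 60-69%, 3 ~ 70-79%,
# 4 ~ 80-100%); the actual conversions come from the OCDSB peg chart.
PEG_CHART = {
    "4": 90,
    "3": 75,
    "2": 65,
    "1": 55,
    "R": 40,   # below level 1; the reported percent in practice varies
}

def level_to_percent(level: str) -> int:
    """Convert a final overall level of achievement into a report card percent."""
    return PEG_CHART[level]

print(level_to_percent("3"))  # a most consistent level of 3 -> 75 (placeholder)
```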

Here are a couple more completed evidence records to look at for practice determining an overall final grade. Note that MT stands for Makeup Test.

evidence record Z
. . . and another . . .

evidence record E

The more familiar we become with this template, and the more conversations we have with our colleagues, the more comfortable we become using our professional judgement to determine a student’s final grade. That judgement replaces our reliance on software-calculated averages that feel accurate but that, I would argue, are no more accurate than my own well-informed professional judgement (but that’s a conversation / debate for another blog post!).

The End! But it’s only the beginning . . . 

So here we are at the end of my blog series about the new A&E in the OCDSB. It’s really only the beginning; this process will evolve as we implement it in our classrooms. We’re learning how to best use these templates in order to inform not only our evaluation practices, but our instructional practices as well.

How are you feeling about the new A&E?

Start a conversation with your colleagues in your school or in your department.

Update (2014.12.04): Here is what the OCDSB published for parents this fall on the topic of their assessment & evaluation policy:
Parent Guide to Assessment, Evaluation and Reporting – Grades 9-12

I would love to hear your comments, questions, concerns . . . Leave a comment below or get in touch via Twitter!

– Laura Wheeler (Teacher @ Ridgemont High School, OCDSB; Ottawa, ON)


Tests/Tasks & their Rubrics

Part 3 of 4 in a series of blog posts about Standards-Based Grading.

Making a test, task, or project

The first step is to decide which overall expectation(s) [OEs] you will be evaluating with the current task. You may evaluate more than one OE at a time. Then you’ll need to design a task or a set of test questions that allow your students to demonstrate their ability or knowledge for that skill or content. I find it helpful to organize my test so that all of the questions for a given OE are together (as opposed to grouping them by the categories – K/U, T, C, A – as we used to do). For example, in the grade 9 math test below, each page has questions for one of the 4 OEs being evaluated. Additionally, I have organized each page so that the simpler problems (more K/U or level 1-2 type questions) come first, near the top of the page, while the more complex problems (more Application or level 3-4 type questions) are at the bottom of the page.

mfm1P testa

mfm1P testb

 

PDF version of above Math test: Test 3 no answers

“Is it necessary to make a new, different rubric for each and every test or task?”

No, it is not. In fact, you could simply print out the achievement chart rubric from your curriculum document and attach that to your task or test.

achievement chart

However, I find that rubric too wordy (the students don’t bother reading it) and sometimes too vague for a specific project.

“Are we simply getting rid of the categories (K/U, T, C, A)? Why include them in the assessment plan if we’re not using them to organize our tests?”

We are not getting rid of the categories. We will embed them into our test questions and tasks, and often even use them to build our rubrics. For example, for the math test above, I used three of the categories from the achievement chart to build my rubric for the test. Notice also that I repeat the rubric 4 times: once for each of the OEs being evaluated on that test:

Math rubric

I check off the appropriate level for each of the 3 categories, which then allows me to determine an overall level for that OE. I do this 4 times: once for each OE (which also happens to be once for each page of the test, since each page corresponds to a separate OE). At the end, I return the test & rubric to the student with the 4 levels. There is no “overall average” on the test; the student attempted questions for 4 different skills/expectations and so receives 4 separate levels of achievement.

How do we record these levels in our mark book? For that, we have Evidence Records.

More examples of tests/tasks w/ rubrics:

Gr. 9 geography test (using checkbricks): 3 test human geo

Do you have a test w/ a rubric you’d be willing to share here for colleagues in your subject area to see?
Get in touch: laura.wheeler@ocdsb.ca

– Laura Wheeler (Teacher @ Ridgemont High School, OCDSB; Ottawa, ON)

The Assessment Plan

Part 2 of 4 in a series of blog posts about Standards-Based Grading.

We all learn about backwards planning in teachers’ college; for a given unit, that means you start by creating the unit test (deciding which skills & topics are the most important to evaluate) and then build learning & assessment activities that lead towards that unit test appropriately. And this makes all the sense in the world.

But then the real world of teaching smacks you in the face: three new courses to prep, very little free time what with all the extra-curriculars you’re helping with . . . all of a sudden it’s all you can do to prep something for the next day’s lessons, let alone backwards plan an entire unit for each of your courses.

The Assessment Plan (AP) brings us back to this idea of backwards planning. We use the AP to plan the number of evaluation tasks & tests we will have in a semester and to outline which of the overall expectations (OEs) and achievement chart categories (K/U, App, T&I, Comm) each one covers. The intent is to have a variety of task types that cover a variety of OEs and categories. If your AP is a good one, there should be no gaps in the OEs evaluated or in the categories they incorporate.

Let’s have a look at a completed AP for a course, in this case a math course:
AP math

For a closer look, here is a pdf version: MPM2D Assessment Plan 2012 rev

The AP is broken into two sections of columns: the first section lists all of the Overall Expectations, divided into their strands, while the second section lists the category expectations (K/U, App, T&I, Comm). Each row represents an evaluation task or test. The left-most column is where you write the title of your evaluation tool (e.g. “Quiz 1 – solve by graphing”). There are checkmarks for each of the OEs being evaluated by that tool, and also for each of the category expectations built into the tool.

For example, let’s look at “Test 3” in the above AP. The title of the test is written in the first column along with the code “T3”, which will be used to record marks on each student’s evidence record. Then there are symbols, in this case *, marking which of the OEs are covered on the test. So Test 3 covered 3 of the 4 overall expectations in the first strand; we can see the 3 *s under those OEs. We can also see that this test incorporated all of the Knowledge & Understanding (K/U) and Application category expectations, as well as some of the Thinking and Communication expectations.

What we can easily see is that each of the overall expectations is evaluated more than once over the course of the semester and that all of the category expectations have been met across the span of the course as well.

Here’s a video by the OCDSB that might help illustrate how the assessment plan fits into the overall framework:
https://docs.google.com/a/ocdsb.ca/file/d/0B5hknKM3CbwyZ0NDY1BkNWVyUXc/edit

Homework: Here is my suggestion for the best way to practice using an assessment plan before next September.

  1. Pick one course that you are currently teaching and feel most comfortable in.
  2. Get a copy (electronic or printed) of the AP for that course. You can find the AP for any course you teach on OCDSB’s Desire to Learn platform:
    Go to http://ocdsb.elearningontario.ca and sign in using the same username & password you use to sign on to any OCDSB computer.
    d2L AP
    Under “Teacher Resources” on the right hand side of the page, click on “Secondary Assessment Plan templates by Subject”.
    From there you will click on the folder for your subject area and click on the AP for your course.
    The AP will open in a viewing window.
    Click on the download button (the Google Drive download icon) in the bottom right-hand corner to save the file to your own computer – from there you can open the file, edit the file, and print a copy.
    I suggest you print out a copy.
  3. Fill it out with the quizzes, tests, & tasks that you have been using so far this semester for that course; anything for which you recorded a mark in your mark-book. Start by listing the tests/tasks in chronological order down that left-hand column.
  4. Next, check off the OEs that it touched on as well as the categories that it touched on. If you need a refresher on the OEs or categories for your course, have a read through your curriculum documents. I can’t emphasize enough how important it is to be intimately familiar with your curriculum documents.
  5. Finally, have a look at your AP as it stands & take a moment to reflect:
    • Have you checked off each of the overall expectations at least once?
      • If not, which ones are missing? What sort of test/task could you add in order to evaluate them?
      • If so, have you provided multiple opportunities for each overall expectation? How can you build your tests/tasks so that students have more than one chance to provide you with evidence of their learning on a particular OE?
    • Have you checked off each of the achievement chart categories at least once?
      • If not, which ones are missing? How could you modify/amend your tests/tasks to incorporate them all? Alternatively, what sort of test/task could you add in order to touch on those that have been missed?

Don’t panic if you notice big gaps in your assessment plan’s checkmarks. This is the time to notice the gaps and work to fill them in. The AP is a handy tool to help you ensure that you have a well-balanced set of evaluation tools, one that allows your students to provide the best evidence possible of their learning across the expectations of your course.
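If you keep your AP in electronic form, the gap check from the reflection questions above can even be automated. Here’s a minimal sketch (with invented task names, OE codes, and categories, not the OCDSB template itself) that flags any OE never evaluated or evaluated only once, and any achievement chart category never touched:

```python
from collections import Counter

# Hypothetical AP data, invented for illustration: each task/test lists the
# OEs and achievement chart categories it touches.
assessment_plan = [
    {"task": "Quiz 1 - solve by graphing", "oes": ["A1"],       "cats": ["K/U", "App"]},
    {"task": "Test 1",                     "oes": ["A1", "A2"], "cats": ["K/U", "App", "T&I"]},
    {"task": "Test 2",                     "oes": ["A3", "B1"], "cats": ["K/U", "App", "Comm"]},
    {"task": "Project",                    "oes": ["B1", "B2"], "cats": ["T&I", "Comm"]},
]

ALL_OES = ["A1", "A2", "A3", "B1", "B2", "C1"]
ALL_CATS = ["K/U", "App", "T&I", "Comm"]

oe_counts = Counter(oe for row in assessment_plan for oe in row["oes"])
cat_counts = Counter(cat for row in assessment_plan for cat in row["cats"])

never_evaluated = [oe for oe in ALL_OES if oe_counts[oe] == 0]
only_once = [oe for oe in ALL_OES if oe_counts[oe] == 1]
missing_cats = [cat for cat in ALL_CATS if cat_counts[cat] == 0]

print("OEs never evaluated:", never_evaluated)    # e.g. ['C1']
print("OEs evaluated only once:", only_once)      # e.g. ['A2', 'A3', 'B2']
print("Categories never touched:", missing_cats)  # e.g. []
```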

Next up: Creating tests/tasks & the rubrics to go with them.

– Laura Wheeler (Teacher @ Ridgemont High School, OCDSB; Ottawa, ON)

Standards-Based Grading: the Framework

Part 1 of 4 in a series of blog posts about Standards-Based Grading

I have spent several years shifting into a Standards-Based Grading (SBG) model of assessment & evaluation (A&E). I have gradually adopted what I considered to be the best practices of colleagues I met within my school board, of teachers from other boards at educational conferences, and of teachers in the educational blogosphere.

Many OCDSB teachers, however, have simply heard that the school board’s A&E policies & documentation are changing, effective September 2014. I’ve heard many teachers express their anxiety over what they perceive to be a lack of training with these new documents. The school board recently put in place some online training modules, but it is also hoping that the teachers who have been implementing this type of A&E over the last few years will lead a bottom-up approach to getting their colleagues on board.

What we were doing before:
Teachers taught their courses by unit. Sometimes those units came from the curriculum documents; sometimes they came from the textbook, which divides the course up into chapters. We taught each unit and, at its end, assigned a test on that unit or chapter. Those tests were divided into 4 sections of problems/questions: Knowledge/Understanding, Application, Thinking, & Communication (the achievement chart categories, found at the beginning of each curriculum document).

achievement chart

Students received 4 different marks, one for each category. Those marks then got input, by category “bin”, into grading software like MarkBook, which would weight the categories the way we wanted and calculate an average for us.
Marks were collected & recorded by unit/chapter and by the 4 categories.
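For anyone who never used MarkBook, the calculation it performed amounted to something like the sketch below (the category weights and marks are invented for illustration; this is not MarkBook’s actual code or default settings):

```python
# Hypothetical category weights and marks, just to show the kind of weighted
# average this workflow produced.
weights = {"K/U": 0.35, "App": 0.35, "T&I": 0.15, "Comm": 0.15}

# One student's percent marks per category, averaged across the term's tests.
category_marks = {"K/U": 72, "App": 65, "T&I": 80, "Comm": 78}

final = sum(weights[cat] * category_marks[cat] for cat in weights)
print(f"Final grade: {final:.1f}%")  # 0.35*72 + 0.35*65 + 0.15*80 + 0.15*78 = 71.65
```

The same weights applied to every student, which is the one-size-fits-all calculation the new approach moves away from.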

What we’re doing now:
We use the overall expectations in our curriculum documents as a way to divide up our teaching and our evaluation. We test a student on whether or not they are proficient at a certain curriculum expectation, and we evaluate that proficiency using levels (R, 1, 2, 3, 4).
OE vs SE

The overall expectations (OEs) are what we need to evaluate or test. The specific expectations (SEs) are what we need to teach. Of course we will evaluate some of the SEs since they make up the OEs, but we do not need to test students on every single SE.
“All curriculum expectations must be accounted for in instruction, but evaluation focuses on students’ achievement of the overall expectations. A student’s achievement of the overall expectations is evaluated on the basis of his or her achievement of related specific expectations (including the process expectations). The overall expectations are broad in nature, and the specific expectations define the particular content or scope of the knowledge and skills referred to in the overall expectations. Teachers will use their professional judgement to determine which specific expectations should be used to evaluate achievement of the overall expectations, and which ones will be covered in instruction and assessment (e.g., through direct observation) but not necessarily evaluated.” – Gr. 9-10 Mathematics Curriculum in Ontario

Most curricula have anywhere from 9-12 overall expectations. These OEs are the “standards” according to which you will be evaluating your students. For example, in my mathematics curriculum I might evaluate my students’ ability to meet the expectation of “solving a linear equation”.

One thing I love about this is that it forces us, as teachers, to really become familiar with our curriculum documents. This is in contrast to a reliance on textbooks created by companies for profit but not necessarily well-matched to the curriculum (but that’s a conversation for another day).

Another thing that is so great about SBG is the ability to pinpoint the topics/skills (by OE) in which a student is strong or needs to improve. Even better is when you have the students track their progress too, so that they always know what areas they need to work on. More to come on this when we talk about Evidence Records later on.

The placemat:

The OCDSB has created what they call “the placemat” which is meant to give an overview of the documents we will be using to support this shift in our assessment & evaluation practices:

placemat

placemat (in pdf)

Get reading:

The best way to really get a good grip on the framework and philosophy behind SBG is to read a lot about it. That’s what I did to really understand what we were trying to do beyond the A&E documents the school board is focusing on (Assessment Plan & Evidence Record).

Here’s a reading list about standards-based grading … jump in!
Daniel Schneider
Frank Noschese
Sam Shah
Jason Buell
Shawn Cornally
Dan Meyer
Jim Pai
#SBGchat on Twitter

Stay tuned for part 2: The Assessment Plan.

– Laura Wheeler (Teacher @ Ridgemont High School, OCDSB; Ottawa, ON)

Assessment & Evaluation in the OCDSB

A big shift is happening in my school board, the Ottawa-Carleton District School Board (OCDSB), with respect to our assessment and evaluation policies. A lot of teachers are feeling anxious about this shift, not only about how to implement the new policies and supporting documentation, but also about the reasons behind it. The new approach is modelled (I feel) on a framework called standards-based grading.

I don’t purport to be an expert in this new approach. I have been shifting towards this new method of A&E over the last 4 or 5 years and wanted to share my understanding of it with you. My hope is to start a conversation about what this new policy looks like in each of our classrooms.

Over the course of several blog posts I will delve into 4 different aspects of the new A&E policy (I will add links here as the posts get written):

1. The framework and philosophy behind standards-based grading.

2. The Assessment Plan

3. Your assessment & evaluation tools (“what will my tests look like?”)

4. The Evidence Record

“Determining a report card grade will involve teachers’ professional judgement and interpretation of evidence and should reflect the student’s most consistent level of achievement, with special consideration given to more recent evidence.” – GROWING SUCCESS

Update (2014.12.04): Here is what the OCDSB published for parents this fall on the topic of their assessment & evaluation policy:
Parent Guide to Assessment, Evaluation and Reporting – Grades 9-12

If you have comments, questions, and/or ideas you’d like to share on the topic, please leave a comment below or connect via Twitter @wheeler_laura.

– Laura Wheeler (Teacher @ Ridgemont High School, OCDSB; Ottawa, ON)