
List of Northeastern University student organizations

From Wikipedia, the free encyclopedia

This is a list of student organizations at Northeastern University.

YouTube Encyclopedic

  • ✪ Standards-Based Grading and Assessing Student Mastery of Content
  • ✪ It's Your Time - Harrisburg - Stuff to Do
  • ✪ Giving With Purpose | LBGx on edX | Course About Video
Transcription

LESLEY HERGERT: ...from the University of Kentucky, and Bob Marzano from REL Central and the Marzano Research Lab. We'll have time for each of them to speak for about a half hour. We may have time for some Q&A in between, but if not, there's definitely time for discussion and questions at the end of both of them. So, you can type your questions into the chat room whenever you have them, and we will come back to them when we have time. So, I want to start by giving you an overview, if you don't know about the regional laboratories. There are 10 regional laboratories across the country, each serving a region. We're funded by the US Department of Education's Institute of Education Sciences. And our charge is to increase the use of evaluation data and research to identify problems, choose programs and strategies, and learn from initiatives. We are co-sponsoring this series of two events with REL Northwest, which also has a research alliance on college and career readiness. And the next of these events, which they will be hosting, is on June 6th and will focus on staying on track for college readiness. We have excellent presenters for that also, including Jenny Nagaoka from the Consortium on Chicago School Research. So, we hope you can join us for that. All of the regional labs are organized with research alliances, and REL Northeast and Islands has eight research alliances, most of which are cross-state or cross-jurisdiction. This bridge event is being sponsored by the Northeast College and Career Readiness Research Alliance. And there are two other research alliances in our region that focus on related topics. The Puerto Rico Alliance focuses on dropout prevention, and the US Virgin Islands Research Alliance focuses on college and career readiness also. We do some things together and other things separately, because our needs are different. But the Northeast Research Alliance on College and Career Readiness includes the six New England states and New York State.
The focus of our work is to provide research to support the work of those seven states as they focus on improving secondary school initiatives to increase graduation rates and ensure students' readiness for college-level work and the workforce. And we've combined both college and career readiness, and are focusing specifically on proficiency-based learning and multiple measures of readiness. So, we're very pleased to have this particular event, because it focuses on specific aspects of that work. So, today, we're going to focus on standards-based and proficiency-based grading and how those are used in practice, as well as looking at learning progressions and proficiency scales. And we're very pleased at the way those two topics blend together, complement each other, and are specific aspects of the work of ensuring that all students are competent and proficient in the knowledge and skills we ask of them. So, we want to begin with a little poll, just to ask you what state or jurisdiction you work in; the poll will be posted, and you can check off there. (pause) Oops. Sorry, I went too fast. OK. So, I know that we have quite a wide range. Thanks to our partners in the Northwest, we have a lot of people from the Northwest, from Oregon and Washington State. We have a large number of people from the Northeast, especially the northern tier of New England, but also from Rhode Island and New York State. And we've seen people from many other places as well, from Idaho, Louisiana, Iowa, the [inaudible]. So, we'd also like to ask you... oh, here we go. We got back to this. So, you can check off what state or territory you primarily work in. I'm going to skip over this and go to the next poll, which is about what level you work at. Do you primarily work with elementary, secondary, post-secondary, or none, or across levels? Good, thank you for doing this. And you can see the numbers as they come up.
It looks like most people are working at the secondary level, which is our focus here, also. But some people are working at the elementary, and then post-secondary as well. So, I know we have a number of people from regional labs and other intermediary organizations. And now, I'm going to review the agenda, so thank you for doing that, and sorry about the first one, but I think we got a sense of where people are from. After this introduction, our feature presentation is from Tom Guskey from the University of Kentucky. Most of you probably know his work on grading policies and practices, but he also does a lot of research and professional development on professional development and teacher practice. And then we'll have additional comments from Bob Marzano, who will add to the conversation by discussing learning progressions and proficiency scales. Bob conducts his work both as the director of REL Central and as the president of the Marzano Research Lab. And then we'll have time for discussion, and we'll wrap up around 2:30. So, here are our presenters, and I will turn it over to Tom. Tom?

TOM GUSKEY: Thanks, Lesley, and thanks to everybody for being a part of the session today, especially given all of the challenges that you have at this time of year. I know that most folks are either just completing their school year or in the first week of break. And to come together like this, and take on what is clearly one of the most challenging issues in all of education, says a lot for your dedication to this idea. We are, in the brief time we have available this morning, going to take on just the nastiest and dirtiest issue there is in education today: that whole idea of grading. It's the major issue that lies before you, especially as you move toward a standards-based approach.
Because we spent a lot of the 1990s and early 2000s trying to clarify what it is we wanted students to learn; that was the whole idea of standards, and then we sort of transitioned a bit to proficiencies, and now competencies. But it's really just an effort to make clear the kinds of things we hoped students would learn and be able to do as a result of their experiences in school. Having done that, we then turned to the issue of assessment: gathering information on the standards, and how we can verify students' achievement of those standards. The one that lies before us, still unresolved, though, is this notion of grading and reporting. And it's always been a troublesome issue for me, from the very first time I started teaching. I began in education, actually, as a middle school teacher. And I can recall the dilemmas I had in trying to grade my students in ways that were fair and equitable, but at the same time really communicated to parents what was going on in school. And so, over that long history, it's always been troublesome to me how we can really pull this off. So, to start our discussion of these grading issues, I have a couple of guiding questions that I'd like you to take a moment to consider for yourself, just to give us a framework for starting our discussion. What do you believe are the major reasons that we use report cards and assign grades to students' work? The second question I'd like you to consider is, ideally, what purposes do you think report cards and grades should serve? And finally, we know that when teachers assign a grade, or particularly choose a grade to put on a report card, they turn to a wide variety of elements of student performance, sources of evidence. We typically look at major assessments and compositions, but then, often, we consider homework, attendance, class participation, all kinds of things. What do you believe should be included when teachers determine students' grades?
So, I'd like you to take just a few moments to think about that for yourself, recognizing that there are no right or wrong answers here; they're really just aspects of belief. And what we're going to do is take these first questions and use them to guide us into those discussions. So, I'm going to give you just 30 or 40 seconds to think about this, and then we'll come back and use this as a framework to start our discussion. (pause) OK, did you have a chance to give some thought to these issues? I asked you to do this for a couple of reasons. First, we do want to use this to begin our discussion of grading. But a second reason is that these questions, as you see them here, have actually been posed in research studies. Researchers have asked educators specifically these questions. And when they asked the question about purpose, they found that the answers of educators can generally be classified into six broad categories. That's what you see here. One of the reasons that we use report cards and assign grades is that we're trying to communicate information about student achievement to parents and to others. The second reason is to provide information to the students themselves, especially for the purposes of self-evaluation. Third, we do use grades to select, identify, or group students for certain educational programs. You must have high grades to get into advanced classes, you need to have decent grades to be promoted from one grade to the next, and you certainly need high grades to get into college and university. Low grades are typically the first indication that perhaps some additional assessment may be necessary to determine if a student has special needs. Fourth, to provide incentives. Now, people argue about this all the time, but the evidence is pretty clear. What's the first thing that students ask when a teacher announces there's going to be a quiz or an assessment? 'Does it count?'
'Is it for a grade, or how many points is it worth?' And if the teacher says, 'Well, no, it doesn't count,' well, who studies for a test that doesn't count? So, there clearly is some incentive value in grading, for students at all levels. Fifth, we do use grades as one source of evidence when it comes to evaluating the effectiveness of various educational programs. And then finally, we use grades to provide evidence of students' lack of effort or inappropriate responsibility on school tasks. Now, we could certainly argue that any of these could be considered legitimate purposes, but what I'd like you to do is look at this list and rank order these from what you would consider the most to the least important. In other words, among these six, what do you think is the most important reason, and what is the least important reason? And we'd like you to indicate your response for the most important in this little poll. We're going to see how, as a group, we come out. In other words, among these six, what would you say is the most important purpose for grading and reporting? (pause) OK, well, I think you can see from the responses here, we had quite a bit of variation. Clearly, the communication purposes, communicating with parents or with students, do seem to dominate. But also, the idea of using this evidence to evaluate instructional programs tends to be very popular. OK, then, here's the second question: among these six, which would you say is the least important purpose? (pause) OK, well, here once again, you can see there's quite a bit of variation in response, but clearly, the one that seems to be the least important is the idea of providing evidence of lack of effort and inappropriate responsibility. I'd like you to keep that in mind. Because what this poll shows is what we find with any group of educators anywhere.
The variation that you saw in looking at these purposes would be comparable to the variation we would see if we were all on the faculty of a single school. And what that shows is that we don't agree. We don't agree on why we're doing it in the first place. And see, that's the first issue that, as leaders in this effort, you're going to have to face. Because when we don't agree on the purpose, what we often try to do is come up with some reporting device that serves all of these purposes, and what we come up with is one that doesn't serve anything very well at all. You cannot serve all of these purposes with a single reporting device; it's impossible! In fact, in some cases, these purposes are actually counter to each other. Here, for example, to the degree you focus on one and two, you must be willing to give up three and five. Because, if you focus on one and two, what you actually hope is that all of the students do well. But if all of the students do well, and they all get assigned high grades, then there's no variation in grades. Well, if there's no variation, you can't use it for selection and identification, and you can't use it for evaluation. Those two things depend on there being variation. And right here is where I would say that the majority of schools throughout the United States fail in their efforts to revise a report card. Because they charge (inaudible) into changing the method without dealing with the core issue of purpose first. That has to come first. As leaders, you must sit down as a group of professionals in the school and decide the purpose. I actually advocate that you should write the purpose on the report card itself, right there on the front, first page, even online, in a highlighted box: this is the purpose. Once you decide the purpose, then questions about method become a lot easier to address. Now, in addition, researchers have also asked educators what sources of evidence they use when they determine grades. And here's the list that they've generated.
Now, it would clearly be unusual to find a teacher who incorporates all of these, but some elements are typically included: major exams and compositions are almost always there. Classroom assessments in their various forms, including formatives. Reports and projects, and student portfolios, are increasing in popularity today, as are exhibits of their work. We have lab projects in some classes, notebooks and journals in others. Classroom observations tend to be a bit more prevalent at the elementary level than at the secondary level. Oral presentations, in their various formats. Homework is a big element, both completion and quality. Completion says that they do it, and quality says that they do it correctly, or accurately. Class participation is counted by many teachers, along with work habits, the neatness of their work, and the effort they put forth. Class attendance, punctuality. Many teachers tell me, 'Oh, no, I don't count punctuality.' But a typical policy is, a day late, a grade lower. That's counting punctuality. The same is true of their class behaviors and attitudes. Most teachers tell me, 'Oh, no, I don't count their attitudes and behaviors.' But every teacher knows, in every class they teach, there's always one student who is kind, and sweet, and smiles at the teacher every day, and volunteers to do things for them. And for unexplained reasons, her name is usually Jennifer. Then you have another student who is downright obnoxious, and seems to try his best to get to the teacher, and he may be Ralph. And if they have exactly the same numbers, there is something in us as teachers that doesn't want to give them the same grade. Then, of course, there's the progress they've made. Now, once again, we could argue that any of these could be considered legitimate sources of evidence. What I would like you to do is go through this list and just give me a count, a total number of those that you use, or would advocate using, in determining students' grades.
Among those that you see here, what would be the total count? How many of these would you say should be used, or that you do use, in determining students' grades? (pause) OK, well, that's great! Once again, you can see, we don't agree here. We're pretty much spread across all of those categories, from three or fewer to 12 or more. And you see, that is the second major problem we face. Because not only do we disagree on the purpose, we disagree on what counts. And think of the message that sends to kids. What it says to a kid is, when you walk out of one class and into another in the same school, all of the rules change. The purpose of grading could change, what counts as a part of the grade, all of that changes. Now, some students see this as a huge game, and they become strategists in the game; they want to understand it well and know how to play it. But for many students, this is a total mystery. And so, before the report card comes out, their parent turns to them at the dinner table and asks, 'What grade are you going to get in this class?' They respond, in all honesty, 'I don't know.' And so, you see, if it's to be a communication device, we're just not serving that purpose very well at all. So, what I'd like to do in the few moments that we have remaining is share with you a bit of what we do know about grading, and in particular, two major conclusions from that research on grading. Grading is an area where we have a really substantial research base, but very little is finding its way into practice. These are just two highlights of that research, but I think they are critical conclusions that hopefully will be able to guide you in your efforts as you move ahead in trying to develop grading and reporting policies that are better and more educationally sound. The first conclusion from the research (inaudible) that I'd like to share with you is this: grading is not essential to the instructional process.
I mean, we sort of need to get that out of the way up front. We have lots of evidence to show that teachers can and do teach many things very well without grades. And students can, and do, learn many things very well without grades. And so, if it's not essential for teaching, and it's not essential for learning, then we must use it for some other purpose. Now, there is an aspect of this, though, that you must keep in mind. Although grading is not essential, checking is. But checking is different from grading. They are not the same, and this is a very important distinction to keep in mind. We do know that in any successful teaching and learning exchange, a teacher must provide regular and specific checks on learning progress, and you must pair those checks with guidance and direction to students as to how they can improve. It's the second part that has been missing in so much of our work on formative assessments and assessment for learning. We've concentrated on the development of the assessments, and not very much on what teachers are expected to do with that evidence once they gain it. And that's really the critical element: it's how teachers use that evidence to alter their instruction, provide instructional alternatives, approach ideas and concepts in new and different ways, and engage students in learning in new and different ways that really makes this effective. But, you see, checking is different from grading. When you're checking, you're on the student's side. It's a diagnostic process; you're trying to find out what's learned well and what's not, and what, as a teacher, you can do about it. When you're checking, the teacher is an advocate. Grading changes the rules. Grading is evaluative. Grading requires the teacher to put students into various categories. And hence, the teacher is in the role of a judge.
And this is one of the strangest anomalies we have in all of education, because we've always recognized how difficult it can be for a principal to be both an advocate for teachers and also their evaluator, and yet we put teachers in that role every day with regard to students, and nobody has recognized the challenge. This is a very important distinction to keep in mind. Your students know, and must remember, that there are consequences to what they do, or don't do, in school. But the consequences do not always have to be reflected in a grade. And one of the ways in which we are getting into really serious trouble today as educators is that we often punish kids academically, through grades, for what are often behavioral infractions. And every time we've done that and it's been challenged legally, we have lost. So, keep in mind, checking is essential. It is essential to the instructional process. It's also essential that students know there are consequences to what they do and do not do in school, but those consequences do not always have to be reflected in a grade, and checking is different from grading. That's number one. Number two is this; well, allow me first to state what the implications of this would be. This means you always must begin with a clear statement of purpose, and then recognize that if it's multiple purposes you want to serve, that means you're going to develop a reporting system, not just a single element. For this reason, I never recommend having a report card committee; I believe it narrows your focus too much. Instead, you should have a reporting committee that looks at all of the different elements that could be incorporated, where you consider things like email messages to parents, and telephone calls, and parent-teacher conferences, and student-led conferences. All of these different devices can be used to communicate. But always keep in mind that method follows purpose. Start with your purpose, then choose the method after that.
Next, we know that grading and reporting always should be done in reference to learning criteria, and never on the curve. There is probably not another area in all of education where the research is more confirmatory than this. We know that if you grade on the curve, first of all, it tells you nothing about what students have learned and are able to do. When students are graded on the curve, they're graded according to their relative standing among classmates. And it could be that in the class, everybody learned miserably, and some have learned less miserably than others, right? In addition, if you grade on the curve, if you grade students relative to their classmates, it makes learning a very competitive situation for students. Students must compete against each other for the few scarce rewards, those high grades, that the teacher will administer. We find that that is detrimental to the relationships between students, and detrimental to the relationship of teachers to students. Everybody can recall a situation where they were in a class that was graded on the curve. Do you recall that for yourself? I do. And if you do, you can probably remember how you hated those students who got the highest scores, because they blew off the curve. Or, perhaps, you were the one in the class that the rest of us hated. We always have to grade according to learning criteria, simply because it's more meaningful. Now, that doesn't end the challenge, because when we look at the criteria that teachers use, we find that they fall into three broad categories, and here they are. The first are what we call product criteria. Product criteria are those culminating demonstrations of learning. When you focus on product criteria, you do not worry about how they got there; you worry about what they're able to do at the end.
Those demonstrations of what they've learned, those performances, those assessments that capture, in a summative way, what was accomplished as a result of their experiences in school. Product criteria have a lot of advantages, and they are the typical achievement criteria we use when we assign grades. But they have a disadvantage. Suppose you teach a class that's really performance-oriented. The example I would use is a class on physical education. In my physical education class, I have one student who is a well-coordinated athlete, who, no matter what I ask my students to do in terms of performance criteria, can do it better than anybody else in the class without even having to try, and he doesn't. He's also disruptive and unsportsmanlike. In the same class, I may have another student who, at this time in his life, is struggling with a weight problem, and no matter what performance criteria I set, he struggles. But he tries his hardest. He puts forth all kinds of effort; he's also very sportsmanlike in his conduct. If I grade only according to those culminating demonstrations, I have a problem. So, most teachers also consider what we call process criteria. Process criteria consider how they got there. So, if you count homework, you're grading in terms of process. If you count formative assessments, you're grading in terms of process. If you count class participation, punctuality with assignments, or effort, all of those are process criteria. And finally, we have what we call progress criteria. With progress criteria, you worry not about where they are, but about how far they've come. Sometimes it's referred to as improvement grading, or value-added grading, or gain grading. And most of the research evidence we have on progress criteria comes from individualized education programs. Now, here's what we know. We know that most teachers employ a combination of these three when they determine students' grades.
We don't really have any evidence to show that one is clearly superior to another, or that any combination is best. The problem is this: if all three are combined into a single grade, interpreting the grade is impossible. If you got an 'A,' what does that mean? Does it mean you learned everything well? Perhaps, but maybe it means you just tried really hard, or, if you knew where that student started, perhaps she has come very far during that time. Who knows? Now, here's the really odd part. The odd part is that if you go to many schools in Europe or Asia, or if you go to many schools in Canada, and you look at their report card, they separate these on the report card. We are one of the few developed nations in the world that persists in this idea of combining everything into a single grade. Why do we do that? If I were to suggest to you that I was going to take a measure of your height, and a measure of your weight, and a measure of the calories you take in per day, and a measure of the number of minutes you spend exercising per day, and I was going to combine those things to get an overall grade or number that represents your physical condition, you would say to me, that's a really dumb idea. How could you combine those very, very different kinds of measures into a single number and have it be meaningful? And you'd be absolutely right. But every day, teachers take information on achievement, behavior, punctuality, responsibility, and progress, combine them into a single grade, and we think it makes sense. The bottom line is, it really doesn't. In these schools, they give multiple grades. They have an achievement grade, and they use that for class rank, and GPA, and all of the kinds of stuff we do. But, beside that, there's a separate grade for homework, a separate grade for class participation, a separate grade for punctuality, and a separate grade for effort. Now, I must tell you, when I first saw this kind of system, my first response was, 'Well, it looks great!'
'But it looks like so much extra work.' They turned back to me and said, 'It's easier than what you silly people do in the States. We collect the same information you do, nothing more; we just don't worry about combining it at the end. And so, all of those fights and arguments about how you weight stuff and how you combine it, we don't deal with them. We keep it separate.' The teachers in these schools love that idea, because they find that students take homework more seriously when they get a separate mark for homework. It's not combined with these other divergent elements. The teachers there also like it because if a parent ever questions them on an achievement grade, a product grade, they can turn and say, 'Look, here, maybe if your child started doing some homework, maybe if your child started participating in class, the achievement grade would go up.' The parents like it because it gives them a profile of the performance of their students in school. The colleges and universities love it because, you see, all grades are also included on the transcript. And so, if you're an admissions officer, would you rather admit a student with straight A's who got there through diligence and hard work, or a student with straight A's who got there without even trying? Now, I'm not saying one is better than the other; I am saying that from a lot of transcripts from our schools, we can't tell the difference, but in these schools, they can. Now, I'd like to show you a real quick example of a report card that does exactly this, that offers these multiple grades. This is a report card that I adapted from a school district that I found, and I've made some changes in it to suit my purposes; I've also included my dream team of teachers. But here's the format. It's online, and there's also a paper version, a single sheet of paper, that they use at their secondary level. And it's folded in half.
On the front page, there's some information about the school, information about the student, and a statement of purpose. You open it up, and on the inside there are eight fields, four on one side and four on the other. Each field corresponds to a class, so the students in the school can take up to eight classes. The field for each class begins with a photograph of the teacher. Now, I asked them why they did this, because I hadn't seen teachers' photographs on report cards before. And their response was, 'Well, secondary report cards, in particular, are so impersonal; we just want to personalize it a bit.' And so, you'll see a photograph of the teacher. Beside that, you're going to see the teacher's name, and you're going to see the name of the class. Below that, there's a band of grades. In this band of grades, you'll see an achievement grade; that's the product grade. And that's printed boldly, it's emphasized, and like I said, they use that for class rank, and GPA, and all of the kinds of stuff we might think is necessary. But beside that, you're going to see a separate grade for homework, a separate grade for class participation, a separate grade for punctuality, and a separate grade for effort. These are numbers, numbers associated with a rubric. And so, the homework rubric is one, two, three, four. A four: all homework assignments turned in on time. A three: one or two assignments missing. A two: three to five assignments missing. A one: multiple assignments missing. Now, below that, you're going to see a narrative. The way the narrative works is this: there are two sections. In the first section of the narrative, the teacher generally types in three or four sentences describing what the class worked on during that marking period. And that gives parents very detailed information about what was emphasized in the class. But then, having done that, the teacher has that printed on every student's report card.
So, the teacher has to enter it only once, and every student in that class gets those sentences. What the teacher can then do is pull up individual records, and add a sentence or two about the student that the teacher chooses. So, when you look at the report card, you're going to see four classes, and again, I've adapted it to suit my purposes, and included my dream team of teachers. But here's what half of the report card looks like. So, you can see here, we have the first period language arts class, taught by Ms. Angelou; you can see there's an achievement grade, and there's a mark for participation, homework, punctuality, and effort. Ms. Angelou's gone in and she states, 'This quarter, we focused on poetry in (inaudible) forms. Students read both well-known and lesser-known poets, and constructed their own poems.' Now remember, she entered that only once, and every student in the class got those sentences. Then she pulled up the individual record and she added, 'Chris actively participated in class discussions and wrote several excellent poems. But, needs to be more conscientious about completing homework assignments on time.' Looking over this report card, it's clear that this student has a problem with homework; consistently, teachers are noting that. As a parent, that would be really good information for me to have. And they tell me, it's easier than what we do. Now, we've taken models like this, and we've been able to adapt them to suit purposes in other school districts here in the States. We have a project going on here in the state of Kentucky, where we have incorporated a report card very similar to this in our secondary schools. In doing this, our secondary school teachers still give an overall achievement grade; all we asked was that they make sure they pull out the non-achievement factors.
We've not asked, or told, any teacher how to determine the achievement grade; all we did is say, pull the non-achievement factors out. And what that did, then, is prompt these wonderful conversations among teachers, who start saying to each other, 'We both teach the same class, how do you determine the achievement grade? What do you think is the best evidence? Maybe we should have common assessments, maybe we should move in those directions.' All of those things we've wanted teachers to talk about become a natural part of the discussion when something like this is used. Now, the same kind of report card can be used at the elementary level. We've adapted it because once the teachers do this, and they pull the non-achievement factors out, what becomes easy is to take that achievement grade and break it down into standards. So, this is the intermediate step. The next step is to take that achievement grade and actually list individual standards. Now, there are some guidelines you need to keep in mind with regard to that. We did some work where we took the common core here in Kentucky as our basis for developing a statewide standards-based report card. And, again, in listing the elements regarding math, and language arts, and all of this other stuff, (inaudible) we went to the national organizations, and incorporated those. But then, one of our school districts had already done some work in developing a standards-based report card, and we faced the issue of how many standards per subject area you can have. One school district had already developed a report that included 38 language arts standards, and 28 math standards, at a single grade level. So, we did a little study, and we just asked the parents, how many are (inaudible) meaningful to parents? What we discovered is, you've got to get it down to about four to six. You can't list 38 language arts standards on a report card.
What happens is, first, it becomes a bookkeeping nightmare for teachers to keep track of all of those things, and second, it overwhelms parents with information they don't understand, and often, don't know how to use. Four to six things. So, as we've broken this down, we've broken it down not to the individual standards, but rather, to strands of standards. That means that if you report according to strands, you don't have to have a different report card for each grade level. Because even though the individual standards will be different, the strands remain the same. This also means, though, that accompanying this report card you have to have another, sort of, curriculum document. And Bob will talk about this in some greater detail, but in this curriculum document, you need to be able to say, if my kid gets a mark in mathematics, in the area of measurement, in the second marking period of third grade, what did the teacher work on? Does measurement mean telling time, does it mean measuring distance, does it mean looking at money? All of those different things. The point is, it need not be done on the report card, but parents have to have access to that information, or have access to another resource. So, to conclude, I have basically three guidelines that I hope will offer you some recommendations for better practice. The first, of course, always begin with a clear statement of purpose. This is where any reform in grading has to begin: why are you doing it in the first place, for whom is the information intended, and what do you want the results to be? Do not begin with method. Don't say we're going to develop a standards-based report card, and start on that without clarifying your purpose first, because that purpose will guide you in all of these efforts. Second, you want to make sure that you provide accurate and understandable descriptions of student learning.
Effective communication is much more challenging than just quantifying or documenting achievement. You see, machines can do that latter part, and technology, with computerized grading programs, can also give parents a portal to access daily information about student learning. But that doesn't necessarily communicate- those numbers don't necessarily communicate what students have learned and what they're able to do. And so, to the degree that you can do that, that's the purpose that grading is going to serve. And finally, you want to make sure that you use grading and reporting to enhance student learning, not to get in the way. Some of the things we do in grading are actually counterproductive. And we can talk about some of those issues, perhaps, during our discussion time. But you do want to make sure you facilitate communication between the school and home, and you use those efforts to really help students, not to hinder their progress in any way. So, those ideas, hopefully, will guide you in these efforts. I'm going to turn things over, or back over, to Leslie or to Bob now, to carry on the discussion, and look at some other issues that are related to these ideas. LESLIE HERGERT: Thanks, Tom, you've given us a lot to think about, and some ideas from research and your own work with schools, especially high schools. And thanks to the participants for the questions that you generated, which I think we will come back to when we get to the discussion period at the end. Especially the issue of secondary schools, grade points, things like that. There may be other things that come up, but I'm going to turn now to Bob, and give him time to introduce the idea of learning progressions and proficiency scales, which is a whole other aspect of this issue. So, Bob, over to you. BOB MARZANO: Thank you very much, I appreciate that. So, first of all, thanks to Tom, it's always a great pleasure and honor to present with Tom.
For me, Tom has been the leading voice in the necessary reform in grading practices. When I was introduced, I was introduced, kind of, in two ways. One as the director of REL Central, and the other as the president and CEO of the Marzano Research Lab, my organization. I have to clarify, in this presentation, I am wearing the Marzano Research Laboratory hat. Here's why: the RELs are charged with doing research, fine research, objectively; disseminating that research; and working with state departments of education and school districts to help them use research, and conduct research, and all of that has to be done in a very objective manner. My presentation is necessarily going to come from me wearing the hat of CEO of my company, because it's not objective in any way, shape, or form. I'm going to present an approach, an implementation of many of the recommendations Tom has made over the years, that we've been using for about 15 years right now. So, it's relatively myopic, it's going to be very, very specific. Hopefully, you know, it's representing Tom's recommendations, but at a very, very detailed level. So, given that, let me start. The basis here for our recommendations really comes from the work that's being done on learning progressions. And proficiency scales are really kind of a subset of learning progressions. If you know what's been done with the common core, one of the founding principles of the common core has been to organize the content into progressions, progressions of knowledge. As defined in the common core, those progressions really go across grade levels. So, one of the shifts, for example, in the mathematical common core state standards, from previous efforts to develop standards, has been to make sure that every grade level is set up for what's learned at the next grade level, so students don't have to keep re-learning content over time. You can think of it this way: learning progressions go across grade levels.
Proficiency scales stay within grade levels, and break the content within grade levels into more manageable progressions. Now, they don't have to be applied just to the common core, although I'll give you a resource for that. They can be applied to any subject area at any grade level. I was going to say, over the last 15, actually, probably more than 15 years, we've worked with schools and districts across the United States in 12 different subject areas. Operationally, a proficiency scale represents knowledge or skill as a continuum of simpler, target, and complex goals that students work towards sequentially. Now, here's the generic form for the proficiency scale that we use. And let me start with the 3.0 score; we call that the target learning goal. So, in designing a proficiency scale, if you're an individual teacher, you start with the three: what do I want students to know, or to be able to do? And, of course, they commonly go into the standards documents that are available to them, common core as the standards, obviously, and you might- a teacher might look into a particular standard, and parse that out into one or two proficiency scales. Let's go to the 2.0, that's the simpler learning goal. Now, this might not necessarily be found in your standards documents. A simpler learning goal would be content that the teacher is willing to teach explicitly that is necessary to accomplish the target learning goal. Let's go up to the four, that's the complex learning goal. That's content or activity that will demonstrate the student has gone above and beyond the target. Usually, the 4.0, I'll give you an example in a bit here, the 4.0 content is an application of the knowledge. Now, I noticed there are other data points there. There's the one and the zero. Let's take a look at the one. That's not new content; it says, with help, the student achieves partial success at that 2.0 or 3.0 content. And the zero is, even with help, there is no success.
Now, by the way, I've never really seen a student get a zero. Usually, with some help, they can do something. But anyway, that's kind of an absolute bottom. Then there are half-point scores, and I'll explain those a little bit later. So, fundamentally, a proficiency scale has only three levels of content: 3.0, 2.0, and 4.0. Now, you might say, why just three? You know, of course, with any subject, or any topic, there are multiple levels of knowledge or skill. Well, we settled on three because it's very understandable. You know, at the classroom level, it's very understandable, at the parent level too, you know? Target learning goal, simpler content, and then more complex content. Let me show you what it looks like in the context of the common core. Now, by the way, we actually have a database of proficiency scales for the common core, and for other subject areas, at (break in audio) (inaudible) research lab; if you're interested, it's a free database. That's the only reason I mention it. So, you'll see scales like this; I believe there are over 3,000 scales across these different subject areas. Score of 3.0, this happens to be based on the common core, and as you can see if you read to the right, we actually list the common core standard that it came from. At 2.0, students will recognize or recall specific vocabulary. People commonly put vocabulary at the 2.0 level, along with basic processes, or, you know, detailed information. And then at the 4.0, it could be a very general statement, or it could be specific content. Let me show you another example in mathematics. Again, these are from that database that I mentioned. Now, again, this is a very specific application of principles Tom's been talking about for quite a few years here. One way of looking at it is from a very myopic perspective.
We recommend organizing these proficiency scales into what we call measurement topics, categories of related proficiency scales, that extend across several grade levels. So, it's an attempt to shrink the standards, you know, across grade levels, into these definable areas we call measurement topics. And within those measurement topics, a lot can be done in terms of measuring students' knowledge and skill, giving them feedback, and translating all of that information into grades. Once you have scales and measurement topics, then you can design assessments. So, we always say, start with your topics, develop the proficiency scales, and then from that, you can design a variety of types of assessments. Let's start with the kind of traditional type of assessment, and that would be paper and pencil. So, step one, identify the measurement topic to be assessed. Step two, determine how many items there will be for each level of the scale. Remember, we just had three levels of content, 2.0, 3.0, and 4.0. And then write your assessment items. Now, once you've got that in place, by the way, what typically happens is there are a lot of items at the 3.0 level, maybe not as many, but still a good sampling, from the 2.0 level, and not quite as many at the 4.0 level. That's not a hard and fast (inaudible), but that's the typical experience that we have. Once you have an assessment that is written around these three levels of item difficulty, if you will, although it's technically not item difficulty, then you can score tests in a rather efficient way. So, your tests will have level two items that usually deal with simpler details and processes that have been explicitly taught, that's an important piece. Level three, complex ideas and processes that have been explicitly taught. And those usually come directly from the standards, or, you know, parts of standards. And level four, inferences and applications that go beyond what was directly taught.
Now, let me talk about a big change here that you might try in a classroom, particularly at the secondary level. Instead of writing a numeric score for each student's answer for each item, here's a recommended scoring protocol. It's not numeric. If an item is completely correct, it gets a code of a C, or whatever you want, for completely correct. If it's completely incorrect, you know, an I, or an (inaudible) sign, whatever you want. Partially correct is a P. Now, some secondary teachers like to do low partial and high partial. Now, your reaction to that, at first, might be, well, wait a minute, that's relatively imprecise. Well, let me talk about precision for a second, in terms of scoring students' papers. Now, I was a secondary teacher, taught high school for a number of years. And when I had a test that I would score at night, it was very easy to score the items that were worth one or two points; they either got it or they didn't. It wasn't so easy to score the five-point items, and it was harder to score the 10-point items, and infinitely harder to score the 20-point items. And if you still do that, let me give you a scenario here, and see how you react to it. Do you ever notice that when you're scoring a paper- a test that has 10 or 20-point items, at the end of the night, you have to go back and recalibrate, particularly the items that are five, 10, or 20 points? Here's why. Let's take a 10-point item. The first student you run into who doesn't have the item completely correct or incorrect, you assign a score to that student. You say, that's worth a six. And that's fine; next paper, you know, a student doesn't get that item completely correct. You now compare that student's score with the student to whom you gave the six. You know, but over time, that six migrates up or it migrates down. In effect, what teachers are doing is actually creating a little rubric in their head for every multi-point item.
And that includes, or inserts, quite a bit of error in the process. So, what we've actually found is this process is actually more reliable, more precise, if you will, because you're decreasing the number of judgments that have to be made, and it's much quicker. You know, we've found this cuts scoring time down by about one-third. It's either correct, incorrect, or partially correct, or for a little more precision, low partial or high partial. Now, how do you score a student's paper? You don't just add up scores. Let me give you a little artificial activity here. Here are some response patterns. Let's say a student answers level two items correctly, but not level three or four items. A second student answers level two and three items correctly, but not level four items. And my question is, what score would you give these students? The next student misses all items, and with help, can answer some correctly. And the last student misses all items, even with help. Well, these are the scores you would assign. So, a student answers all level two items correctly, but not level three: a score of two. A student answers level two and three correctly, but not four: a three. And obviously, you need to follow the logic on that. Now, you might say this is a setup, and it is. What about the student who answers all level two correctly, and some of the level three? Well, that's where we have the half-point scores. Now, again, a common reaction would be, well, this is pretty subjective. Actually, in terms of precision, I would assert it's no more imprecise or subjective than what we do right now. As a matter of fact, you know, our studies indicate that there is a little more precision doing it this way. Now, this is a topic in and of itself we could spend an hour on. We've been doing this for a number of years. Once teachers get this in their head, it's pretty straightforward and relatively efficient, in terms of how they score assessments.
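As a concrete illustration, the pattern-to-score logic just described could be sketched in a few lines of Python. This is a minimal sketch, not the actual algorithm used in any grading program: the function name, the dict-of-levels input format, and the 1.0/1.5 fallback cases are my own assumptions, and, as noted above, assigning half-point and with-help scores ultimately remains a teacher's judgment.

```python
def scale_score(responses):
    """Assign a proficiency-scale score from per-item codes.

    responses: dict mapping scale level (2, 3, 4) to a list of
    item codes: 'C' correct, 'P' partially correct, 'I' incorrect.
    Hypothetical sketch of the response-pattern logic described
    in the talk; real scoring also involves teacher judgment.
    """
    def status(level):
        codes = responses.get(level, [])
        if codes and all(c == 'C' for c in codes):
            return 'full'
        if any(c in ('C', 'P') for c in codes):
            return 'partial'
        return 'none'

    s2, s3, s4 = status(2), status(3), status(4)
    if s2 == 'full' and s3 == 'full':
        if s4 == 'full':
            return 4.0
        return 3.5 if s4 == 'partial' else 3.0
    if s2 == 'full':
        return 2.5 if s3 == 'partial' else 2.0
    if s2 == 'partial':
        return 1.5   # partial success on the simpler content
    return 1.0       # success only with help (teacher judgment)
```

For instance, a student who gets all the level-two items right, misses the level-four item, and is partially correct on some level-three items lands on the half-point score of 2.5, matching the example in the talk.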
Also, it's very powerful in terms of how they communicate back to students how they're doing. Because you can say to a student, 'Bob, look, you've got a 2.5, because you did a nice job on these level two items here, but there are some problems you're having with some of the level three items; you got some correct, but you show some confusion here.' So, it's very concrete, relative to criteria, what you're presenting to students. And another way of looking at this is that every teacher, when he or she puts a score of 2.0 for a student- excuse me, 2.5 for a student, let's say, they're saying the same thing. Relative to the scale that's being used to score this assessment, this student has exhibited knowledge of the simpler detail, the level two content, and partial knowledge of the level three content. So, a score means the same from teacher to teacher, relative to a specific scale. Now, we played this out a little bit further; here's how it might look, in terms of a test. Assessment blueprint: items one, two, three, four, and five are all level 2.0 items; six, seven, eight, and nine are level 3.0 items. Item 10 is a level 4.0 item. And those student responses, those would be the scores which, of course, were written on the paper for the student, relative to each one of those items. Now, with a system like that, over time, for each measurement topic, you can actually track student progress, and that's one thing that we were always trying to accomplish, the ability to show student growth over time, which for us, requires a proficiency scale and designing and scoring items in a certain way. This would be a little form that each student keeps, relative to a specific measurement topic. This is Ellie, with the measurement topic of expressions and equations. Notice: my score at the beginning, 2.0; my goal is to be at a 3.5 by, etc., etc. So, there are nice student activities that go along with this type of a tracking system.
You don't have to do this; we've had great success doing it. Notice at the end there, if you look way to the right, you'll see columns that are labeled A, B, C, D, all the way to S. S stands for summative in this case. So, that last score that is given to the student is the summative score. And it's not an arithmetic average of all of the scores. You're looking at the progression of knowledge over time, and the teacher's assigning a score at the end. There are mathematical algorithms that help to do this; there are computer programs that do this. You know, but it's still a teacher's judgment based on the progression of knowledge. Of course, the student can be invited into that conversation too. Now, other than the traditional paper-and-pencil test, there are other types of tests that can be used too, or I should say, assessments that can be used. See, with proficiency scales, it opens the door to assessments like observation. Let me play that out a little bit. Imagine you're a phys ed teacher, and you've created a scale for the overhand throw in second grade; you've taught the overhand throw. And the two, the three, and the four content are actually articulated in that scale. You walk out into the playground, and you see a particular student executing an overhand throw exactly the way it was taught. You know, that's an indication of 3.0 status at that particular time. That can be entered into the grade book for that particular student. So, a strict observation can be used. There's a strategy we call probing discussion, where you simply sit down with the student and say, 'OK, start telling me about linear equations here, and how we solve a linear equation.' There would be a scale for that, and based on a two-to-three-minute conversation with the student, a teacher can place the student, you know, on the scale. The one I like the best, we call student-generated.
And that is where the student actually says, 'I am now ready to show you that I'm not a 2.0 anymore, that I'm actually a 3.0, and here's how I'm going to do it.' We call that student-generated assessments. So, the proficiency scales open the door to a lot of other types of data that can be used as assessment data. Now, let's go to the report card because, again, Tom's been a leading voice, I think the leading voice in necessary changes to report cards. Remember, this is a very specific application, hopefully, without violating too many of Tom's principles, as to changes in grading, it's a very specific way of doing things. Notice, that you can still have your overall grade, although Tom is absolutely right, you know, that we really don't need the overall grade. But, in this day and age, if you say you don't want to fight the battle of getting rid of the overall grade, you can still have it. So, language arts, a 2.5, which translates into a letter grade of a 'B.' I'll show you a conversion in another slide. Mathematics, 3.25, that translates to an 'A-minus,' etc., etc. Now, notice underneath, though, you have more specific topics, and I might be violating Tom's rule here for too many elements. But here's what we found. You know, you can't have too many, but you can actually have more than you might think, I would assert. So, this is English, language arts. Under questioning- I've got to put my glasses on here. Questioning inference and interpretation, the student is at a 3.0, themes and central idea, 2.0. And this little bar graph. Now, notice the bar graph has a dark part and a light part. The dark part represents where the student started, the light part represents where the student is now. So, you can actually get that growth component that Tom was talking about, and it's right there, it's very visual. Teachers have reported this as very useful to them in a formative sense, parents have reported this is very useful to them. 
They see their students' growth over time; they like that, you know, where did the students start from? Let me give you some conversions here. Now, these are somewhat arbitrary, but they have a certain logic to them. Let's go back here. Let's say, over a given grading period, and that's what this particular report card represents, seven scales have been addressed, seven topics have been addressed. Well, you can get a weighted or unweighted arithmetic average of those final summative scores, and assign a grade to that. Here's what schools usually do. If that average is anywhere between a 3.0 and a 4.0, they're in the 'A' category. You might say that's a big range; it's really not. Remember, the 3.0 represents the target learning goal. So, if a student has a central tendency, that average, weighted or unweighted, that's anywhere between 3.0 and 4.0, that means in general, the student has demonstrated competence on all of the target learning goals. Two-point-five to 2.99 is a 'B.' Remember, 2.5 says they've got the simpler stuff, partial credit on the more complex stuff, and there's a logic that runs through it. Another conversion: in some states, you have to put a percentage score, and that's easily done. 4.0 represents 100, 3.4 a 95- excuse me, 3.5 a 95, 3.0 a 90, etc., etc. Now, it's a very detailed approach, we realize that. Let me go back to the report card again. Just some anecdotes from the field. It is not a simple change, there's no two ways about it. The major work is developing the proficiency scales. Now, 15 years ago, or 10 years ago, that was a lot of work for a school or district. There are schools and districts that have already done this, and as I say, we have a free database on this; you can take a look at it. The scales are not perfect, but they're a good place to start. I believe there are 11 different subject areas represented in that database. So, that's the hard work up front.
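The conversions just walked through can be written down directly. In this sketch, the 'A' and 'B' cutoffs (3.0-4.0 and 2.5-2.99) and the percentage anchors (4.0 = 100, 3.5 = 95, 3.0 = 90) come from the examples in the talk; the cutoffs below 2.5 and the extension of the percentage line below 3.0 are hypothetical fillers of the pattern, not something stated here.

```python
def letter_grade(avg):
    """Convert an average of final summative scale scores to a
    letter grade. 'A' and 'B' cutoffs are from the talk; the
    cutoffs below 2.5 are assumed extensions of the pattern."""
    if avg >= 3.0:
        return 'A'
    if avg >= 2.5:
        return 'B'
    if avg >= 2.0:
        return 'C'   # assumed: simpler content mastered
    if avg >= 1.5:
        return 'D'   # assumed
    return 'F'       # assumed

def percentage(avg):
    """Linear conversion matching the stated anchors:
    4.0 -> 100, 3.5 -> 95, 3.0 -> 90 (10 points per scale point).
    Behavior below 3.0 is an assumed extension of that line."""
    return 90 + (avg - 3.0) * 10
```

So the report-card example above works out as described: a mathematics average of 3.25 falls in the 3.0-4.0 band and converts to an 'A' (the 'A-minus' refinement within that band is not specified in the talk).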
You know, another part that's hard is that it's a very different way of thinking, it really is, keeping track of student progress levels. And I don't recommend doing it by hand; there are grading- online grade books that will do this for you, all you have to do is enter the score. But teachers say they have to think differently. But once they do this, it's very hard to go back to the old way, because they have parsed out, you know, what students are doing, their knowledge, their skill, and their progress, into very specific silos, and they say over time, they like that part quite a bit. Now, I didn't address Tom's grade issue, though, about what do we do with the homework, what do we do about working with others. There are scales for that, too. It's not reflected in this particular report card, but you can have scales, and there are schools who do, for work completion, working with others, etc., etc. And those have their own bar graphs. And as Tom mentioned, we, too, recommend separating those out into their own grades. In other words, don't put the student's grade, if you will, or score, on work completion in with the overall grade for language arts. You parse that out. So, again, this is a very specific way of looking at things. I am completely unobjective about this, because this is the way I see the world. And again, you know, much of this came from an attempt to translate the fine guidance Tom Guskey, and other people, Rick Stiggins, have given us, in terms of the necessary changes in grading. So, I think with that, my time is up. (pause) Leslie? LESLIE HERGERT: Hello, I'll try again. Thanks, Bob. I'm going to turn to the participants now, and ask some of the questions that they've typed in, because I think both Bob and Tom have given us a lot of very specific ideas about addressing issues of grading and reporting, especially as we think about proficiency, and proficiency-based learning.
So, I'm going to start with a broad question that was asked in the chat room, with a response from another participant, about the purpose of these kinds of grading systems. And that is, that it seems like we're trying to get rid of a zero-sum game setup, so everyone can achieve and achieve mastery, but many people don't like that. They don't like not being able to sort. And the question was, do you have recommendations for that? One of our other participants responded by saying it was important to set expectations high, because when resistance comes, it often comes because people think proficiency is too easily granted. So, would either of you like to respond to those comments? BOB MARZANO: Tom, why don't you take that? TOM GUSKEY: (laughter) Thanks, Bob. Well, the reason Bob's letting me take this is he knows that I have a particular passion about this area. My mentor, and he chaired my doctoral dissertation, was Benjamin Bloom. And Bloom is very clear in his perspective on this issue in particular, the idea of what our purpose is in education, and what we're trying to accomplish. Very early in his career, Bloom wrote an essay in which he suggested that when you enter education, you have one basic decision to make. And how you make this decision, or what you decide, will determine your entire career. He said, the decision you have to make is this, is your purpose to select talent, or is it to develop talent? He said, it needs to be one or the other, and there's no room in between. He said, if your purpose is to select talent, which is what some people, undoubtedly, believe we need to do in education, then what you must do is accentuate the differences between students, and make them as large as possible. You see, they're all clustered close together on any measurement scale; it's very hard to distinguish between them. And so, we need to spread them out and accentuate those differences. 
The problem is this: the very best device we know to accentuate the differences between students is poor teaching; nothing does it better. If you want to spread students out and accentuate differences in their learning, then teach as poorly as possible. Because there are some students who will learn regardless of what we do, and the vast majority that need our help won't get it, and we will spread them out. However, if your purpose is to develop talent, then you operate under a very different set of rules. Under that, the first thing you have to do is be very clear about what it is you want students to learn and be able to do. And having established that, you do everything within your power to guarantee that all students learn those things really excellently. Now, you can certainly debate about how rigorous those learning expectations are, or you want them to be, but that's quite aside. And I know that as we go about this idea of changing grading, there is that tradition that we need to confront and we need to take on. I've argued in other cases that, you know, people get caught up in this idea that we want grades to resemble a normal curve distribution pattern. I've never seen the reasons for it, but to me, if you really understand that normal bell-shaped curve, you know that's a distribution of randomly-occurring events when nothing intervenes. If we didn't intervene in some natural phenomenon, like agriculture and crop yield, you'd expect a normal curve distribution. I mean, some fields are very fertile, and give a high yield, and others, less fertile, a low yield, and that's where you get a cluster around the center. But if you intervene in that process, say, add a fertilizer, what you would expect at the end is something very different. What you want is for all of those fields to give you a very high yield; you want to push them all to the top.
And in fact, to the extent that the distribution of the crop yield after intervention still looks like the normal curve, that is the degree to which your intervention has failed. It's made no difference. I would argue that teaching is an intervention; it is a purposeful and intentional act. And from that perspective, if the grade distribution after you teach looks like the normal curve, that, too, is the degree to which you have failed. You've made no difference. So, that's a long-held tradition. I'm not sure how we came to it, or why, but it's something we need to take on and be able to challenge. I know people understand that that's really not what we're about in education today. BOB MARZANO: I can't add to that; ditto to Tom. Except the whole fertilizer analogy, I think, could spin off into a long discussion. We don't want to do that, though. TOM GUSKEY: (laughter) (inaudible) LESLEY HERGERT: (laughter) No, we don't want to get into fertilizer. Thanks. There were several questions that were very specific about the report cards. First, the report cards that Tom showed looked to one participant like elementary report cards rather than secondary report cards. And the issue that comes up for both examples is that they seem more complex; for secondary teachers, who may have 100 or more students, is that going to be difficult to deal with? Another specific question: often, schools and teachers are using a 100-point scale; how does that compare to the four-point scales that you were both talking about? BOB MARZANO: You mind if I take this first? TOM GUSKEY: Sure, go ahead, Bob. BOB MARZANO: OK, yeah. Well, I'm going to skip the part about it looking more elementary, because I actually think the positive changes in grading have come from the bottom up here. I actually think that people at the elementary level have made changes more quickly and quite effectively. 
So, I don't consider something looking elementary necessarily bad. As for the extra time, the approach I presented would certainly involve the most work. If the question had come up 10 years ago, does this take more record-keeping time, I would have said yes, it does, because people were mostly doing grade books by hand. With online computer technology, it really doesn't. With the systems that are out there, teachers just enter a score and indicate that this score was for this student, for this particular measurement topic and proficiency scale, and the system does the rest. So, there really is no more time taken up. The 100-point scale, you know, I think it's had its day. If I were king of the world here in education, I would say we have to really look at how useful the 100-point scale is anymore. So, for me, that's not the criterion (laughter), how well things match up on a 100-point scale. You can use the 100-point scale along with the system I presented in certain circumstances. But I would hope that, decades from now, the 100-point scale is something we don't use much at all anymore. Tom, go ahead. TOM GUSKEY: Yeah, I would just add to what Bob said there, the report card that I showed was actually a secondary report card. It was developed and used in middle schools and high schools. And I agree with Bob that as you look at reform efforts, it's typically an effort that is moving from the elementary grades up. Several people have tried to explain why that might be. I think it has a lot to do with the whole idea of curriculum differentiation. I mean, if you're talking about standards, basically all second graders are working on the same stuff. That's not true of all of the sophomores in high school. 
So, once you hit those grades, there's such curriculum differentiation that it requires a really different format when it comes to reporting, and even report cards. With the example that I showed at the secondary level, what those teachers found, using a similar model here in our state, is that it's not asking teachers to collect any more information than they had before. It does require a little bit of extra effort on their part to prepare those narratives. But again, most of the information is done per class, not necessarily per student. And so, the description of what the teacher covered during that is (break in audio) effort for each class, and it's something the teacher can put together in relatively short order. The information that's reported there, they're already gathering. The one thing that we did insist upon, however, was that teachers developed what Bob called a scale, or a rubric, for the process elements. If you're going to give a grade or a mark for class participation, you need to be able to differentiate what those levels mean, and that means taking on some pretty challenging issues. For example, even with something as seemingly simple as class participation, how do you mark the difference between a student who speaks every day in class but says nothing of consequence, versus the student who only speaks up once a week but shows they've been really thinking about it? Distinguishing those quality and quantity issues still remains a challenge. But the idea is you pull that out of the achievement grade; it's something different from achievement. So, when teachers say to me, 'Well, you know, we need to be teaching these kids responsibility,' I say, 'I'm really with you on that. But that's different from achievement.' And so, if it is valuable, let's report it, but let's keep it separate; let's pull it out and report it there. 
Now, interestingly, on those process elements, we worked with a group of teachers here in Kentucky that were doing this (inaudible), and they, for about a decade, had been giving an effort grade. We pressed them and said, well, you need to develop a rubric or a scale to indicate effort, so that if a kid gets a three and the parent wants them to get a four, you need to be able to tell them how to do that. These elementary teachers worked for two days and could not reach agreement on a rubric for giving an effort grade. And so, they abandoned it. They found it was very difficult to distinguish between the student who was sincerely trying but may not have appeared to be, versus the one who wasn't but was very good at faking it. At the elementary level, they also found it's sometimes difficult to distinguish between what was a child's effort and what was the parent's effort. And so, that confusion led to them just dropping it from their report card and going with different process elements. There's nothing sacred about the ones that we've shown on that report card; we have a list of about 24 or 25 different process elements that could be considered. All we've said is that once you choose one, make sure it's something that you value and honor, that you give kids feedback on their performance in those areas, and that you develop a clear scale or rubric that can communicate to parents and students what is meant in each of those areas. LESLEY HERGERT: Thanks to both of you on that. A related question that someone asked is about user-friendly grading software. Do either of you have suggestions in that area? And if it's a long list, you could send us things, and we would post them on our website. BOB MARZANO: I can jump in there. Global Scholar has developed a system using my particular approach. 
We have our own little teacher grade book that we use when we're working with a school district; teachers use it as part of that, and it's free for those folks. Can I actually- I just noticed one question. One person wrote, relative to the proficiency scale, three looks like three out of four, that's 75%, so why does it receive a 90%? I probably did a poor job of explaining that, then. The one, two, three, four are not equal intervals. The three, remember, means the student demonstrated proficiency on all of the target content that was explicitly taught. So, that doesn't translate into 75% of the content; it actually means they know everything that was explicitly taught. So, that mathematical reasoning doesn't fit on the scale. Anyway, sorry for that digression. Global Scholar, and we have our own computerized systems there. TOM GUSKEY: And I would add, I would actually be very cautious in looking at these online grading programs. We've not found one that gives teachers the latitude to do the things that we want. And most of the districts we found that are doing the job well have actually developed their own. Most of these programs are developed by software engineers who do not understand education and don't really understand grading. They incorporate aspects of grading that are based on very traditional models that have nothing to do with best practice. They develop their programs based on what will sell, not necessarily what's best. And so, there are the smaller programs, like the one that Bob mentioned for his approach; we have one that we developed here in the state, too, that we just give away. The idea was, the state of Kentucky requires that all schools in the state use the same grading system. And they did that simply because they wanted a uniform student information system. 
And the program that they bought to do that incorporated grading programs, so they just decided to incorporate that as well. The program is called (inaudible). Unfortunately, grading was an add-on to it, and it wasn't done so well. There are a lot of educators in our state who refer to it as 'Infinite Chaos.' But we wanted some changes made in this; we wanted to extend the teachers' comments, include photographs, and break it down by standards. And they said, 'Well, we can do it, but it's going to cost you a lot of money, and it's going to take us about six months to a year.' My friend here at the university sat down in an afternoon and wrote a program to do this for us. It basically imports the class lists from any computerized grading program, you know, PowerSchool, Infinite Campus, Skyward, any of those. And it sits on top of that and generates the report card that we want. We're still working on ours, we're still refining it, but there are things we found parents really favor. In implementing the report cards, we had a summer (inaudible) here where the teachers worked to develop these models, and then when they went back to their districts, the way we implemented it was this. Here in Kentucky, all of our schools are on a nine-week reporting cycle, so they're basically reporting quarters. For the first two marking periods of the school year, our pilot schools sent home two report cards. They sent home the traditional one the parents were accustomed to getting from Infinite Campus, which listed each course and subject area and a single grade, and also included percentages. And we sent them the new one. So, for those two marking periods, the parents got two report cards. 
After the second marking period, we surveyed all of the parents and basically said, we don't have the resources to keep sending you two; we're going to send you only one, and you get to pick: whichever one you want is the one we'll continue sending. One hundred percent of our parents chose the new one. Not a single parent said they wanted the old one. Now, when we asked parents about what they liked and what they didn't like, the only critical comment we had was that in our model, we went to an overall letter grade, which was a sort of five-point scale; before, they had been accustomed to a percentage scale. And there were a few parents who said, 'Well, that (inaudible) I'd really like to know, is that 'A' a 95 or a 97?' It was interesting: not a single parent asked us, is his 'C' a 73 or a 75? So, we did see a little bit of that. A few parents at that very high level still wanted that kind of discrimination. But to add to what Bob said, I would hope that within the shortest time possible, we can get rid of percentage grade scales. Of all of the grading scales we have, it is the one, from an educational perspective, that is the least useful and the hardest to defend. I mean, consider: you have a grading scale in which two-thirds of the scale denotes levels of failure, and only one-third denotes levels of passing. Is it really necessary for us to have a scale that has so many levels of failure? Do we need to discriminate among those so finely? And people say, well, we really don't use 65 levels of failure. But then why have them? The easiest transition is just to go to integers: zero, one, two, three, four. You don't have to have battles over the zero anymore. Zero is only a problem in the percentage scale. You can keep the zero in an integer scale. The zero in a percentage scale is such an extreme score that a kid has to get nine perfect papers to recover from a single one. 
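Tom's arithmetic here can be checked directly. A minimal illustrative sketch, assuming grades are formed by simple averaging (the specific score lists are hypothetical, not from the webinar):

```python
# On a 100-point (percentage) scale, one zero is an extreme score:
# it takes nine perfect papers just to average back up to 90.
percent_scores = [0] + [100] * 9
percent_avg = sum(percent_scores) / len(percent_scores)
print(percent_avg)  # 90.0 -- nine perfect scores barely offset a single zero

# On a 0-4 integer scale, the same single zero is far less extreme:
# one top score already pulls the average back to the midpoint.
integer_scores = [0, 4]
integer_avg = sum(integer_scores) / len(integer_scores)
print(integer_avg)  # 2.0
```

The asymmetry comes from the distance between the zero and the passing range: on the percentage scale a zero sits roughly 60 points below the lowest passing mark, while on the integer scale it sits only one step below a 1.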
In an integer scale, you don't have to worry about the minimum-grade stuff. And there's nothing sacred about percentages. We use them today because the technology programs that the schools buy, which were developed by software engineers, are fond of percentages. That's the only reason we do them. And so, get rid of percentages, go to integers. We use it at the university; it works pretty well. LESLEY HERGERT: Thanks, Tom. There are a couple of other questions here, but I think we are coming to a close. And we want to make sure that you have time to respond to our evaluation survey. While you're doing that, I want to thank Bob and Tom for a really exciting, interesting, and detailed discussion of some of the issues that come up as we grade students and think about how to grade proficiency and ensure mastery. And this is a link for the survey. But I want to remind you, if you're interested in these topics and our research alliance, feel free to contact us, and this is our contact information. You can also go to our website and look at the information about the research alliance, both our research agenda for the next few years and the members of our core planning group who are helping us plan that research agenda. They represent all seven states that we serve in the Northeast. We will be archiving this webinar, so you can go to the website for that if you'd like to listen to certain parts of it. And we'll add some resources as well that were referred to here, and also that people seemed to be interested in. So, we thank you. We also encourage you, if you're interested in these topics broadly, to attend the webinar on June 6th that the Northwest Regional Lab is hosting on related topics. And we look forward to having you again. Thank you very much for your participation. Great. 
MODERATOR: Thanks, this ends today's webinar, and yes, again, an archive for this session will be posted on the REL-NEI website, relnei.org, early next week. Thanks and have a great afternoon.

Ethnic and cultural organizations

  • Arab Students Association (ArabSA)
  • Armenian Students Association
  • Asian Student Union
  • Barkada
  • Cape Verdean Student Association (CVSA)
  • Caribbean Students' Organization (CSO)
  • Chinese Culture and Conversation Connection (NU4C)
  • Chinese Student Association (CSA)
  • Northeastern University Culture and Language Learning Society (NUCALLS)
  • Deaf Club
  • Eurasia
  • Haitian Student Unity (HSU)
  • Hawaii Ohana at Northeastern University (HONU)
  • Hellenic Association
  • Hip Hop Culture Club
  • Indian Graduate Student Association - NEU Sanskriti
  • Indonesian Student Association
  • Italian Culture Society
  • Japanese Culture Club
  • Korean American Student Association
  • Latin American Student Organization (LASO)
  • Northeastern African Student Organization (NASO)
  • Northeastern Black Student Association (NBSA)
  • Project NUR
  • Russian Speaking Students United
  • Saudi Arabian Student Organization (SASO)
  • South Asian Student Org. (UTSAV)
  • Students for Israel at Northeastern
  • Taiwanese Student Association (TSA)
  • Venezuelan Student Unity (VSU)
  • Vietnamese Student Association (VSA)

Political organizations

Special interest groups

Honor societies

Fraternities

Sororities

Professional fraternities and sororities

Student publications

Publications include the Huntington News (formerly the Northeastern News), the student magazine Woof Magazine, the political review NU Political Review, the arts magazine Artistry, the architecture journal Common Ground, the music magazine Tastemakers, the university literary magazine Spectrum, the science magazine NUScience, the African-American cultural magazine Onyx, the faculty newspaper Northeastern Voice, the conservative newspaper the Northeastern Patriot, the comedy magazine Times New Roman, and ECONPress, one of the few undergraduate economic research journals in the nation. The university also publishes the yearbook, the Cauldron.

Student media

The university runs FM radio station 104.9FM WRBB as well as online-television station NUTV. There is also a student-run record label, Green Line Records.[2]

Performing arts

References

  1. ^ "Husky Ambassadors". Orgsync. Retrieved 6 November 2014.
  2. ^ "Green Line Records". Northeastern University College of Arts, Media and Design. http://www.northeastern.edu/camd/music/about/activities/greenline-records/
This page was last edited on 17 December 2019, at 12:06
Basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses.