I have just read a most interesting article about 'Fixing Classroom Observations' (http://tntp.org/assets/documents/TNTP_FixingClassroomObservations_2013.pdf). It proposes 'bringing the same focus and coherence to classroom observations that Common Core brings to academic standards', with the solution being to develop shorter observation rubrics that focus on the content and delivery of a lesson so as to provide more useful feedback to the teacher.
Citing the Bill & Melinda Gates Foundation's Gathering Feedback for Teaching (2012), the article claims that the evaluation systems used today serve as neither a 'comprehensive teacher development tool' nor a 'comprehensive rating tool'. The reasons for this become rather obvious when one realizes that it is unrealistic to rate teachers on every aspect of a district's framework and also provide detailed feedback, all in a single class period. There is simply too much to look for and think about, leaving little time to provide useful feedback, and long, complicated rubrics further complicate things, interfering with both accurate ratings and feedback. The Bill & Melinda Gates Foundation's Measures of Effective Teaching (MET) project's researchers found 'that beyond a handful of indicators, observers have too much trouble keeping the competencies and indicators distinct and become overloaded, generally rating most components similarly'. Because the MET project used the Danielson Framework to evaluate 23,000 lessons, I am led to believe that this is direct criticism of it.
Florida has adopted the Marzano model, and in its first year of use 97% of teachers were rated 'effective' or better; the New York Times reported that in Michigan (where the Danielson, Marzano, Thoughtful Classroom and 5D+ frameworks are used) and Tennessee (which uses a large state-created framework, TEAM) the same ratings were even higher, at 98%. The overall rating inflation was blamed partly on the rating instruments themselves, and interestingly enough, the MET researchers found that even under the most favorable conditions (multiple observations by multiple observers throughout the year), the ratings were no more accurate than student surveys!
Backing up the statement that 'classroom observations aren't delivering on their promises of fair ratings and good feedback', the authors point out that school leaders are struggling to conduct ever more observations using frameworks with increasingly complex rubrics. As an example, the popular Danielson Framework has grown with each edition: the 2007 edition had 22 components, while the current edition has 76 components to rate.
Using the Common Core philosophy of 'focus and coherence', the next generation of observation instruments should have rubrics that help observers 'focus on the most essential aspects of a lesson', giving feedback about lesson content, rather than the current focus of virtually all frameworks, which is on how instruction is delivered and how students respond. It is far more constructive to concentrate on picking the right content to teach than to evaluate how well poor or inappropriate content was delivered. Delivering an excellent lesson focused on a fifth-grade standard to a class performing above the sixth-grade level should not be rated highly, but with current models it would be, which is a shame because the students would not be moving any closer to mastering their appropriate grade-level material.
Fixing the problems:
Classroom observations need to focus on what students are being asked to accomplish, not what the teacher is 'covering' in a lesson, and when observing a lesson, the observer needs to be able to judge correctly whether the lesson is helping students master the standards appropriate for their grade level. Doing this requires easy-to-access Common Core or local subject standards built into the observation tool, and access to a reference library for any template component makes this 'Fix' possible with eWalk.
Observations will be far more fruitful if the time is spent focusing on a small number of the components essential to a good lesson, i.e. 'score what counts rather than everything we can count'. The rubrics used should help observers find the evidence they need to rate each of the essentials, so as to provide meaningful feedback. By linking an observation form to a video of the teaching being observed with eWalk and the iCoach option, the observer can embed feedback into the video to make clear the strengths or weaknesses being observed. This 'Fix' also enables the observer to pause and/or rewind the video, giving ample time to rate and give feedback. Another 'Fix' is to use the 'time-line' scripting option in eWalk, whereby the observer simply records anecdotally what is observed and later assigns the anecdotal comments to the appropriate rubrics/components, using them to make the ratings.
Rubrics should differentiate between the outcomes teachers should produce in a successful lesson and the strategies that can help them achieve those outcomes. It is most practical to rate a modest number of indicators based on observable evidence of student outcomes, provide a list of the various strategies teachers could use, and associate those strategies with the outcomes they are meant to produce. The ability of eWalk to attach a different look-for checklist to each item in a dropdown is the ideal 'Fix': whenever a particular strategy is selected, the appropriate list of outcomes appears.
Observation rubrics can take time to design, negotiate and deploy, and high-quality tools are required to build them. The 'Fix' for this problem is the ability of eWalk to let a prime author share the authoring of a template with someone else, so that they can help craft the items in the template. Furthermore, the ability to share templates with individuals, groups, specific schools, or an entire district or state makes it easy to distribute template frameworks to the appropriate users.
Rubrics are only as effective as the observers who use them and the systems that support them, and the MET research showed that most observers don't know what good teaching looks like. To 'Fix' this problem, iCoach makes it possible to video what good teaching should look like for a rubric and embed comments into the video to point out what is important in judging that rubric.
"Leaders are generally held accountable for entering ratings and for submitting forms after each observation, but not for generating helpful feedback for teachers—and as a result, helpful feedback becomes a lesser priority." The 'Fix' for this is the ability of eWalk to send a classroom observation directly to a teacher via email immediately upon completion. In addition, when teachers have their own eWalk accounts, they can view observations that have been made available to them, and they can comment directly into the observation if the form implements dialogue boxes. Maintaining a dialogue between observer and observee is a powerful feedback mechanism.