Marshall Memo 589

A Weekly Round-up of Important Ideas and Research in K-12 Education

June 1, 2015

 

 


In This Issue:

  1. Improving teacher evaluation

  2. Are teachers with the highest Danielson scores really the best teachers?

  3. Using student surveys to evaluate teaching: cautionary notes

  4. Keys to effective instructional coaching

  5. Measuring students’ noncognitive skills

  6. Separating boys and girls for middle-school anti-bullying lessons

  7. The Matthew effect with educational technology

  8. Which causes more academic loss, snow days or individual absences?

  9. Wordless picture books as a key literacy element in kindergarten

10. Career advice from Robert Sternberg

11. George Mitchell reflects on conflict resolution

12. How people handle tensions at work

 

Quotes of the Week

“If coaches are asked to write reports, develop school-improvement plans, oversee assessments, deal with student behavior, do bus and cafeteria duty, and substitute teach, they’ll have little time left to partner with teachers.”

            Jim Knight (see item #4)

 

“Lately, we seem to have shifted from improving teaching to alternately blaming or idolizing teachers. We are no longer evaluating with the goal of ongoing changes in practice; we’re blinded by science and ‘metrics.’”

            Nancy Flanagan (see item #1)

 

“We talk about how we get into that 4.7-and-above range. We talk about that more than about how to teach.”

            A college professor on student evaluations of her teaching (see item #3)

 

“The receptors in our brain for information contrary to our prior beliefs are very narrow. It requires effort and discipline to get people to consider what the other side has to say.”

            George Mitchell (see item #11)

 

“While technology helps education where it’s already doing well, technology does little for mediocre educational systems – and in dysfunctional schools, it can cause outright harm.”

            Kentaro Toyama (see item #7)

 

1. Improving Teacher Evaluation

            In this article in Education Week Teacher, veteran teacher Nancy Flanagan casts a skeptical eye on new, supposedly more-rigorous ways of evaluating teachers, including the use of student test scores. “I’m not saying that we can’t do a better job of providing teachers with feedback to continuously fine-tune their practice,” she says. “Nor am I denying that some teachers need to improve or be counseled, swiftly, out of a job. Only this: we might be granting shiny new teacher-evaluation protocols a lot more power and veracity than they deserve.” Here are her questions:

            • Is it teachers or their teaching we’re assessing? If it’s the latter, there’s always room for discussion about classroom strategies, skills, and dynamics. But if it’s the former, with the goal of ranking and firing the least-effective teachers, “any process we use will fail, whether it includes standardized test scores or not,” says Flanagan. “Lately, we seem to have shifted from improving teaching to alternately blaming or idolizing teachers. We are no longer evaluating with the goal of ongoing changes in practice; we’re blinded by science and ‘metrics.’ We have even injected ratings competition into teacher evaluation.”

            • When should we evaluate teaching? “When we assume that only one (or two, or six) formal evaluations ‘count,’ we’ve lost sight of the purpose of evaluation,” says Flanagan. She believes all teachers need to be assessed – and assess themselves – all the time, with administrators, colleagues, and students all providing feedback and suggestions. “Beginning teachers need continuous feedback and structured conversations with more-experienced teachers,” she says. “But so do long-termers, who get stale, must teach something new – or simply desire a continuous stream of new ideas and strategies.”

            • Who should evaluate teaching? Flanagan has serious doubts about supposedly neutral outsiders evaluating teachers or scoring classroom video-recordings without a face-to-face follow-up conversation. Teachers need to be able to explain the rationale for their classroom decisions and engage in a dialogue with observers if they are to reflect, self-assess, and grow. Flanagan is highly skeptical about commercial evaluation tools or state-mandated protocols, which can produce impressive amounts of data but are rarely helpful for changing what’s going on in classrooms. She believes building administrators and peers are best positioned to visit classrooms, always with a productive dialogue afterward.

            • Why haven’t teachers and their unions pushed for better evaluation models? Flanagan is mystified by this, and saddened that teachers have been backed into resisting evaluation procedures approved by lawmakers and blue-ribbon commissions. She says teachers should take the lead in defining the critical competencies and accomplishments of highly-effective teaching and insist on a place at the policy-making table.

            • Why is it impossible to believe that all teachers in a particular school are effective? “If a district is hiring carefully, mentoring new teachers, and providing ongoing professional learning, why wouldn’t upwards of 95% of their teachers be performing at a level somewhere between competent and amazing?” asks Flanagan. “That’s what other organizations, from law firms to landscaping businesses, do – hire the best available, train them, then monitor.” It makes no sense to believe that a certain percent of teachers must be ineffective.

            • Who decides what “best practice” is? Flanagan is worried about non-educators setting policy on teacher evaluation, and questions whether some advocates of this approach really want public schools to succeed. She strongly believes that teachers need to take the initiative and work to create a process that actually improves teaching and learning.

 

“Six Questions About Teacher Evaluation” by Nancy Flanagan in Education Week Teacher, May 28, 2015, http://bit.ly/1eLjZye

Back to page one

 

2. Are Teachers with the Highest Danielson Scores Really the Best Teachers?

            In this article in Educational Evaluation and Policy Analysis, Rachel Garrett (American Institutes for Research) and Matthew Steinberg (University of Pennsylvania) report on their analysis of teacher effectiveness data from the Measures of Effective Teaching (MET) study. They reached four conclusions:

First, there is a strong correlation between teacher ratings on Charlotte Danielson’s Framework for Teaching (FFT) and student test scores. “On average,” say Garrett and Steinberg, “student achievement is higher among teachers who receive higher FFT ratings.”

            Second, there are problems with using this correlation for high-stakes personnel decisions on individual teachers (e.g., tenure, performance pay, or dismissal). That’s because “relying heavily on FFT measures ignores one of the key drivers of this relationship,” say Garrett and Steinberg, “– the systematic sorting of students to teachers. We find consistent patterns of noncompliance with randomization that moves students to teachers with higher FFT scores. Such nonrandom sorting limits the ability of teacher performance measures to provide a valid estimate of a teacher’s contribution to student learning, thereby constraining policymakers’ and school leaders’ ability to identify truly effective teachers.”

            Third, Garrett and Steinberg conclude that Danielson rubric data, while an interesting marker of teacher effectiveness, is less useful as an intervention to improve teaching. “Implicit in this distinction,” they say, “is the impossibility of either fully capturing or randomly assigning instructional quality. While better teachers, on average, may receive higher FFT ratings, there are likely other aspects of teacher quality that are salient to student learning but not measured by the FFT… Disentangling the effect of quality instruction on student achievement is further complicated by the fact that instruction undoubtedly interacts with numerous factors, including the composition of students in a given class. The mix of students on observed and unobserved dimensions will vary both across teachers within a school, as well as across classes taught by the same teacher.” That’s why individual teachers’ Danielson ratings fluctuate from year to year.

            Finally, Garrett and Steinberg note that the Danielson framework was originally created to coach teachers, encourage self-reflection, and inform professional development, but the MET study and their own analysis of MET data used it only to measure correlations between teacher effectiveness and student achievement. In other words, teachers in these studies were analyzed in a way that didn’t capture the feedback conversations with principals and the subsequent professional development that Danielson envisioned – which any good school implements on a routine basis. Therefore, conclude Garrett and Steinberg, “this study does not speak to either the potential impacts on student and teacher performance when the FFT protocol is fully implemented, or how performance can be shaped over time. Indeed, this potential for professional development embedded within the complete FFT protocol is one of the compelling reasons for its use, as compared with value-added scores, which provide little guidance for teachers on how to improve their practice.”

 

“Examining Teacher Effectiveness Using Classroom Observation Scores: Evidence from the Randomization of Teachers to Students” by Rachel Garrett and Matthew Steinberg in Educational Evaluation and Policy Analysis, June 2015 (Vol. 37, #2, p. 224-242), available for purchase at http://bit.ly/1cuzOaQ

Back to page one

 

3. Using Student Surveys to Evaluate Teachers: Cautionary Notes

            In this Chronicle of Higher Education article, Stacey Patton reports on the debate around high-stakes use of student evaluations of college instructors. One full-time humanities professor at a West Coast research university said the number 4.7 was burned into her mind: that’s the student-evaluation score (out of 5) she needed to receive in order to feel safe in her non-tenure-track position. “Everybody in my department is obsessed,” said this veteran professor. “We talk about how we get into that 4.7-and-above range. We talk about that more than about how to teach.” She lists some of the ways she and her colleagues have tried to game the system:

-   Baking cookies or brownies for students (but skip the nuts, which could cause allergy problems and very negative evaluations).

-   Handing out evaluation forms when the most irascible student is absent.

-   Giving low-stakes assignments just before students fill out evaluations.

-   Never giving back a graded assignment on evaluation day.

-   Not leaving the classroom unattended while evaluations are being filled out, lest one or two unhappy students poison the minds of their classmates.

-   Not giving students a lot of time to fill out their evaluations. “If they’re in a hurry, they’ll give you all fives unless they’re mad at you,” says the West Coast professor.

-   Throughout the course, letting students hand in papers late, retake exams, and get extra credit. “We all know we can’t afford to uphold grading standards, because of the pressure put on us,” she says.

“Don’t get me wrong,” says this professor. “I think student evals are useful. I used to push my students, even the ones I didn’t like, to fill them out. I don’t want to do away with them. But it’s just frightening that administrators turn the entire discussion of what your teaching is like over to a bunch of 19-year-old kids.”

Instructors have reason to be concerned about how students evaluate them, since in a number of universities survey scores drive decisions on hiring, promotion, and tenure. Adjunct instructors are particularly nervous, since a low score can mean a pink slip. Christine Thorpe, a department chair at the New York City College of Technology, says that when an instructor’s scores from students dip below four, that’s a red flag: a veteran professor may be getting stale, or a novice may be using ineffective methods. “We tell that instructor, ‘We know you’ve been committed to the department, but your scores are low. What’s happening? Can you improve this?’ My intention is not to fire them but to help them improve their teaching experience.” If things don’t improve over two or three semesters, she says, “I try to phase them out by reducing the number of classes they are given to teach, and then I bring them in and counsel them out of teaching.” Thorpe says that peer evaluations also play a part in the process.

            Mark Chelgren, a state senator in Iowa, took things to a higher level: he proposed a bill that would require that professors, even those with tenure, be fired if they scored low on student evaluations. “Professors need to understand that their customers are those students,” he said in a recent interview. (His bill didn’t pass.)

            The big question is how much weight student evaluations should have in personnel decisions. “There seems to be a disconnect between how faculty view their usefulness and how the university’s promotion-and-tenure committees view them,” says Michael Chaney, a professor at Oakland University in Michigan. The higher the stakes, the more any quirks in the process cause concerns. For example, response rates tend to be low, giving disproportionate weight to the opinions of a small minority of students. Some students don’t take the evaluations seriously, bullet-voting all fives or all ones and scribbling unhelpful written comments – “He’s an awesome dude,” “Loved your mustache” – and occasional racist or sexist invective. But there is helpful feedback. Adam McKible, an English professor at John Jay College of Criminal Justice in New York City, says, “I’m more conscious of my behavior in the classroom if students say I’m being too tart or assigning too much work.”

            How valid are student surveys as measures of teaching quality? Philip Stark, chair of the statistics department at the University of California/Berkeley, has studied this question at the college level and concludes that surveys are little more than popularity contests, that it’s easy to game the system, and that good professors often get bad ratings while bad professors often get good ones. “Fear of bad ratings stifles pedagogical innovation and encourages faculty to water down course content,” says Stark. “Relying on averages of student-evaluation scores as the primary measure of teaching effectiveness for promotion and tenure decisions should be abandoned.”

            John Holland, director of the writing program at the University of Southern California, has a different view. “It’s very important that we look at the data carefully to understand a professor’s interactions with students over time,” he says. “I have a whole history of evaluations to look at how they perform in a cumulative fashion. We have a mentoring relationship with new faculty. Mentors can talk through how to improve… We want students to know that their opinions do matter. Evaluations are not just a blow-off at the end of the semester.”

            [One thing that many university administrators seem not to be doing is actually visiting classrooms. When I asked a business-school dean about this blind spot in the supervision and evaluation of professors, she responded, “Why would I do that?” This article raises major red flags about the high-stakes use of student surveys in K-12 schools, clearly pointing to the wisdom of medium-stakes use of student responses in face-to-face supervisor-teacher conversations about effective classroom instruction – and always considering the data in light of frequent classroom visits.   K.M.]

 

“Student Evaluations: Feared, Loathed, and Not About to Go Away” by Stacey Patton in The Chronicle of Higher Education, May 29, 2015 (Vol. LXI, #37, p. A8), http://bit.ly/1LWXgcL

Back to page one

 

4. Keys to Successful Instructional Coaching

            In this Scholastic EduPulse article, Jim Knight (University of Kansas) says instructional coaching has the potential to move schools “from cultures of talking to cultures of doing.” He suggests seven ways that principals can support and enhance this work:

            • Protect instructional time. “If coaches are asked to write reports, develop school-improvement plans, oversee assessments, deal with student behavior, do bus and cafeteria duty, and substitute teach,” says Knight, “they’ll have little time left to partner with teachers.”

            • Use an instructional playbook. Coaches need to “deeply understand a set of high-impact teaching strategies that will help teachers achieve their goals,” he says, beginning with the “big four”: content planning, formative assessment, instruction, and community-building. Coaches need to know the playbook backward and forward and filter district directives and initiatives to maintain focus on a small number of key teaching strategies.

            • Listen to the troops. Teachers’ opinions matter, and they should be making most of the decisions about what occurs in their classrooms, working shoulder-to-shoulder with the coach to reach worthy goals. Knight believes coaching should be voluntary, since requiring it is often seen by teachers as punitive.

            • Clarify roles. Coaches shouldn’t be involved in supervisory visits to classrooms or formal evaluation, says Knight: “If coaches are given administrative roles, they need to have the same qualifications and training as any other administrator, and everyone in the school (most especially the coach) needs to know they are in that role.”

            • Maintain confidentiality. Trust and transparency are essential if teachers are to be forthcoming about their thoughts and concerns. “What is most important with regard to confidentiality,” says Knight, “is that principal and coach clarify what they will and will not talk about, and that the principal clearly communicates that agreement to everyone involved.”

            • Meet regularly. Principal-coach meetings don’t have to be longer than 20 minutes in most cases, but frequency is essential if the principal and coach are to be on the same page.

            • Walk the talk. “Principals who proclaim that professional learning is important should attend and even lead professional learning sessions,” says Knight. They might also video-record their own meetings and presentations and model the process of examining what’s working and what isn’t.

 

“Seven Ways Principals Can Support Instructional Coaches” by Jim Knight in Scholastic EduPulse, May 26, 2015, http://bit.ly/1PZvm6o

Back to page one

 

5. Measuring Students’ Noncognitive Skills

            In this article in Educational Researcher, Angela Duckworth (University of Pennsylvania) and David Yeager (University of Texas/Austin) affirm the importance of noncognitive attributes, including:

-   Goal-oriented effort through grit, self-control, and growth mindset;

-   Healthy social relationships via gratitude, emotional intelligence, and social belonging;

-   Sound judgment and decision-making marked by curiosity and open-mindedness.

“Longitudinal research has confirmed such qualities powerfully predict academic, economic, social, psychological, and physical well-being,” say Duckworth and Yeager.

            However, they believe the ways of measuring these important attributes are not ready for prime time, and should not be used for consequential decisions about students or schools.

What’s the problem? Duckworth and Yeager have found that each of the three methods currently used to measure noncognitive qualities – student self-reports, teacher questionnaires, and performance tasks – has strengths but also significant disadvantages. For student self-reports and teacher questionnaires:

-   Teachers and students may read or interpret an item in a way that differs from researcher intent.

-   Students or teachers may not be astute or accurate reporters of behaviors, emotions, or motivation.

-   Questionnaire scores may not reflect subtle changes over short periods of time.

-   The frame of reference (i.e., implicit standards) used when making judgments may differ across students or teachers.

-   Faking – students or teachers may provide answers that are desirable but not accurate.

With performance tasks, there’s a different set of problems:

-   Researchers may make inaccurate assumptions about underlying reasons for student behavior.

-   Tasks that optimize motivation to perform well may not reflect behavior in everyday situations.

-   Task performance may be influenced by unrelated competencies (e.g., hand-eye coordination).

-   Performance tasks may put students into situations (e.g., doing academic work with distracting video games in view) that they might avoid in real life.

-   Scores on sequential administrations may be less accurate (e.g., because of increased familiarity with the task or boredom).

-   Task performance may be influenced by aspects of the environment in which it is performed or by physiological state (e.g., time of day, noise in classrooms, hunger, fatigue).

-   Scores may be influenced by purely random errors (e.g., a respondent marking the wrong answer).

Duckworth and Yeager give a vivid example of how a teacher’s and a student’s answer to a question might differ. The question: In the last month, how often does this student come to class prepared? The teacher’s thought process:

-   Let’s see… I think he didn’t have his homework most days last week. He keeps making excuses. And he almost never brings a pencil.

-   Okay, so overall, I would say that he comes to class prepared much less than most fifth graders I’ve taught.

-   “Rarely” or “Sometimes” makes sense.

-   Hmm… Nobody will see this but researchers. I’ll put down “Rarely.”

The student’s thought process:

-   Let’s see… I think I didn’t have my homework a few times last week. But there were reasons why I couldn’t get it done.

-   Okay, so overall, I guess I’m pretty good at coming to class prepared. Compared to my friends, I’m pretty good.

-   “Sometimes” or “Often” makes sense.

-   Hmmm… I guess it’s not too embarrassing to say “Sometimes.”

So the same student’s level of classroom preparation is rated “Rarely” on this question by the teacher and “Sometimes” by the student. Imprecise!

Duckworth and Yeager conclude that current tools for measuring students’ noncognitive attributes are useful for in-school reflection and improving educator practices, but are not precise and reliable enough to be used for individual student diagnosis, program evaluation, school accountability, or between-school or within-school over-time comparisons. They conclude with a call for further research and refinement of measurement tools.

 

“Measurement Matters: Assessing Personal Qualities Other Than Cognitive Ability for Educational Purposes” by Angela Duckworth and David Scott Yeager in Educational Researcher, May 2015 (Vol. 44, #4, p. 237-251), http://bit.ly/1AGi9sj; the authors can be reached at [email protected] and [email protected].

Back to page one

 

6. Separating Boys and Girls for Middle-School Anti-Bullying Lessons

            In this ASCA School Counselor article, Pennsylvania counselor Lisa Fulton says she used to present lessons about bullying to girls and boys together. Over time, she noticed that many students were tuning her out. “I blamed the students for their lack of attentiveness,” says Fulton, “but I should have been blaming my approach.” Finally she realized that there are big differences in the type of bullying experienced by boys and girls:

-   Boys – Shoving, punching, elbowing, wrestling, and other physical bullying, and also name-calling and teasing (“Just joking!”);

-   Girls – Gossiping, talking behind someone’s back, spreading rumors, excluding, and other social bullying.

Fulton realized that one-size-fits-all lessons weren’t the best approach, and convinced her principal to let her try single-gender groups with a new four-lesson curriculum:

            • Lesson #1: The teacher draws attention to the fact that the group is all-girl or all-boy, and students quickly identify the differences in bullying between the genders. The teacher then shows clips from several movies…

-   For girls, Mean Girls, The Clique, and Odd Girl Out;

-   For boys, Back to the Future, The Ant Bully, and Cheaper by the Dozen,

and they discuss who was the bully, who was the target, and the type of bullying involved.

• Lesson #2: The teacher introduces some key terms: sidekick, supporter, disengaged onlooker, possible defender, champion, and target. Students think about situations when they’ve been in one of these roles and realize that all of them are part of the bullying dynamic.

• Lesson #3: The goal is helping bullies see the impact of their actions from the target’s point of view. In the girls’ group, students play the telephone game and see how a rumor, as it is passed from one girl to another, can cause hurt feelings. The girls also learn assertiveness and are encouraged to use these new skills to deal with problems rather than spreading rumors to others. In the other group, boys focus on the difference between bullying and teasing, starting with the classic “young lady or old woman” drawing, as well as The True Story of the Three Little Pigs. The goal is to teach the importance of perspective and how looking at something from a different point of view can produce a completely different understanding of the same event.

• Lesson #4: Students consider the actions of supporters, disengaged onlookers, possible defenders, and champions. Each group is presented with a gender-specific scenario and asked to decide on possible actions to help the target and combat bullying behavior. The girls’ group has a special segment on exclusion, where all girls get different cards, form groups by affinity, and the girl with a particular card is left out. The big take-away: exclusion hurts.

“Once the lessons are over, the hard work begins,” says Fulton, “both for us and for the students.” Kids try to figure out how to implement what they’ve learned, and adults monitor to see if they’re succeeding. Teachers also give pre- and post-tests (with gender-neutral and gender-specific questions) to measure learning and behaviors. Fulton reports significant gains in student awareness and a marked decline in bullying behavior as a result of the single-sex curriculum.

 

“Mean Girls and Rough Boys” by Lisa Fulton in ASCA School Counselor, May/June 2015 (Vol. 52, #5, p. 18-21), www.schoolcounselor.org; Fulton is at [email protected].

Back to page one

 

7. The Matthew Effect in Educational Technology

            In this Chronicle of Higher Education article, Kentaro Toyama (University of Michigan) expresses profound skepticism about whether technology can level the playing field in colleges, even when disadvantaged students are given free computers, tablets, and smartphones. Here is his Law of Amplification: While technology helps education where it’s already doing well, technology does little for mediocre educational systems – and in dysfunctional schools, it can cause outright harm.

What does this mean? “[T]echnology is a tool,” says Toyama, “which means that any positive effects depend on well-intentioned, capable people. But good outcomes are never guaranteed. What amplification predicts is that… technology on its own amplifies underlying socioeconomic inequalities… Any idea that more technology in and of itself cures social ills is obviously flawed. How is it, for example, that during the past four decades we have seen an explosion of incredible technologies, but America’s poverty rate hasn’t decreased, and inequality has skyrocketed?... Students with poor high-school preparation will always find it hard to learn things that their prep-school peers can ace. Low-income families will struggle to pay registration fees that wealthy households barely notice. Blue-collar workers doing hard manual labor may not have the energy to take evening courses that white-collar professionals think of as a hobby. And those disparities are even more true online than offline. Sure, educational technologies can lower costs for everyone, but it’s the people with existing advantages who are best positioned to capitalize on them.”

Case in point: MOOCs. Toyama says that those who actually complete these online courses are a select minority of highly motivated, well-educated, largely male students who are able to persist without the kind of personal contact and peer pressure that exist in bricks-and-mortar colleges. Very few lower-income young adults enroll in MOOCs. “The primary effect of free online courses,” says Toyama, “is to further educate an already well-educated group, which will pull further away from less-educated others. The educationally rich just get richer.”

“The real obstacle in education remains student motivation,” he concludes. “Especially in an age of informational abundance, the bottleneck is not getting access to knowledge but mustering the will to master it. And there, for good or ill, the main carrot of a college education is the certified degree and transcript, and the main stick is social pressure.” Neither of these is cheaply available online.

 

“Why Technology Will Never Fix Education” by Kentaro Toyama in The Chronicle of Higher Education, May 29, 2015 (Vol. LXI, #37, p. A26-27), http://bit.ly/1AFXd4U

Back to page one

 

8. Which Causes More Academic Loss, Snow Days or Individual Absences?

            In this article in Education Next, Joshua Goodman (Harvard Kennedy School) reports on his study of how school closings and individual student absences affect student achievement in Massachusetts. Using data on school closings and standardized test scores, Goodman concludes that individual student absences “sharply reduce student achievement, particularly in math, but school closings appear to have little impact.” He continues: “These findings should not be taken to mean that instructional time does not matter for student learning; the bulk of the evidence suggests it does. A more likely explanation is that schools and teachers are well prepared to deal with the coordinated disruptions caused by snow days – much more so than they are to handle the less-dramatic but more frequent disruptions caused by poor student attendance.”

When just a few students in a class have been absent, teachers have to choose between spending time helping returning absentees catch up, which takes time away from the rest of the class, and letting returning students fend for themselves, which negatively affects their progress. Either way, the class’s achievement takes a hit. A snow day, on the other hand, can be handled by postponing, compressing, or eliminating non-tested material, which is why these lost school days have so little impact on test scores.

“The negative achievement impacts associated with student absences imply that schools and teachers are not well prepared to deal with the more-frequent disruptions caused by poor student attendance,” concludes Goodman. “Schools and teachers may benefit from investing in strategies to compensate for these disruptions, including the use of self-paced learning technologies that shift the classroom model to one in which all students need not learn the same lesson at the same time.”

 

“In Defense of Snow Days” by Joshua Goodman in Education Next, Summer 2015 (Vol. 15, #3, p. 64-69), http://educationnext.org/defense-snow-days/

Back to page one

 

9. Wordless Picture Books As a Key Literacy Element in Kindergarten

            In this article in The Reading Teacher, Judith Lysaker and Elizabeth Hopper (Purdue University) say they have no problem with the “pushdown” of literacy expectations to kindergarten, noting that “in many classrooms around the world, children read at the age of 5 and 6.” But they disagree with pushing down parts of the primary-grade literacy curriculum that are developmentally inappropriate. “An early emphasis on specific aspects of print processing and reading subskills may crowd out opportunities for children to develop more broadly as meaning makers,” they say. “The intensity of a code emphasis reflects the assumption that print reading is a completely new experience, demanding a distinctively different set of strategies, separate from the meaning making children have been engaged in since birth.”

            The bridge, say Lysaker and Hopper, is wordless picture books. When kindergarten teachers use these books well, students get practice at reading images and developing a number of early print-related strategies – searching, cross-checking, self-correction, and rereading. Lysaker and Hopper share this selection of wordless books for young readers:

-   The Snowman by R. Briggs (Random House, 1978)

-   Pancakes for Breakfast by T. DePaola (HMH Books for Young Readers, 1978)

-   The Zoo by S. Lee (Kane/Miller Publishers, 2007)

-   Yellow Umbrella by J. Liu (Kane/Miller Publishers, 2002)

-   Frog, Where Are You? by M. Mayer (Dial, 1969)

-   A Boy, A Dog, A Frog, and a Friend by M. Mayer (Dial, 1971)

-   Frog on His Own by M. Mayer (Dial, 1973)

-   Frog Goes to Dinner by M. Mayer (Dial, 1974)

-   One Frog Too Many by M. Mayer (Dial, 1975)

-   The Lion and the Mouse by J. Pinkney (Little, Brown, 2009)

-   Good Night, Gorilla by P. Rathmann (Putnam Juvenile, 1996)

-   Jack and the Missing Piece by P. Schories (Boyds Mills Press, 2004)

-   Breakfast for Jack by P. Schories (Boyds Mills Press, 2004)

-   Jack and the Night Visitors by P. Schories (Boyds Mills Press, 2006)

-   Jack Wants a Snack by P. Schories (Boyds Mills Press, 2008)

-   Deep in the Forest by B. Turkle (Puffin, 1992)

-   Free Fall by D. Wiesner (Morrow, 1988)

 

“A Kindergartener’s Emergent Strategy Use During Wordless Picture Book Reading” by Judith Lysaker and Elizabeth Hopper in The Reading Teacher, May 2015 (Vol. 68, #8, p. 649-657), available for purchase at http://bit.ly/1SRAmZF; the authors can be reached at [email protected] and [email protected].

Back to page one

 

10. Career Advice from Robert Sternberg

            In this Chronicle of Higher Education article, Robert Sternberg (Cornell University) looks back on his career to date (he’s 65) and offers advice that might well apply to K-12 educators:

-   Put family first.

-   Make your health a close second.

-   Save as much money as you can.

-   If you’re in the wrong place, get out.

-   Stay away from jerks.

-   If you’re not having fun, something’s wrong.

-   Be true to yourself.

-   Don’t tie up too much of your self-esteem in someone else’s evaluation of your work.

-   Take stock periodically.

-   Have a hobby. See the world. Or both.

-   Help others.

-   Take some risks.

“That’s it,” Sternberg concludes. “I hope that by the time you reach my age, you’ll feel that your life and career have made the kind of difference you had hoped to make. Me? I’m not there yet, which is why I’m still trying – for example, by writing this article.”

 

“Career Advice from an Oldish Not-Quite Geezer” by Robert Sternberg in The Chronicle of Higher Education, May 29, 2015 (Vol. LXI, #37, p. A27-28), http://bit.ly/1RG4khT

Back to page one

 

11. George Mitchell Reflects on Conflict Resolution

            In this Harvard Business Review interview with Alison Beard, former U.S. Senator George Mitchell, 81, reflects on what it took to successfully mediate the crisis in Northern Ireland. “The challenge is not to get them to talk, but to get them to listen,” says Mitchell. “The receptors in our brain for information contrary to our prior beliefs are very narrow. It requires effort and discipline to get people to consider what the other side has to say.” Here are his pointers for working with people who are at loggerheads with each other:

-   Detailed knowledge of the history and nature of the conflict;

-   A recognition that the people involved must own the resolution;

-   Deep reservoirs of patience and perseverance: “The setbacks are many,” says Mitchell, “and you can’t take the first or the second or the 10th no for a final answer. You have to keep at it.”

-   An understanding of the bottom line, or basic objectives, for each party;

-   A willingness to take a risk when it’s warranted. In the negotiations in Northern Ireland, this meant setting a firm deadline for the process, based on his sense of timing, circumstance, and attitudes.

 

“Life’s Work: An Interview with George Mitchell” by Alison Beard in Harvard Business Review, June 2015 (Vol. 93, #6, p. 124), https://hbr.org/2015/06/george-mitchell

Back to page one

 

12. How People Handle Tensions at Work

            In this Harvard Business Review sidebar, Mark Goulston shares data from a recent survey on how the magazine’s subscribers say they communicate during conflict:

10% said they never ask colleagues to either stop or change behavior that bothers them.

24% said when they’re upset with someone at work, they rarely let the person know.

26% said when they disagree with someone, they often hint at it, rather than objecting outright.

28% said they always speak up when they feel that they’ve been misunderstood.

 

“HBR Survey: How People Communicate During Conflict” by Mark Goulston in Harvard Business Review, June 2015 (Vol. 93, #6, p. 22)

Back to page one

 

 

 

 

© Copyright 2015 Marshall Memo LLC

If you have feedback or suggestions,

please e-mail [email protected]

 


 


About the Marshall Memo

 


Mission and focus:

This weekly memo is designed to keep principals, teachers, superintendents, and others very well-informed on current research and effective practices in K-12 education. Kim Marshall, drawing on 44 years’ experience as a teacher, principal, central office administrator, and writer, lightens the load of busy educators by serving as their “designated reader.”

 

To produce the Marshall Memo, Kim subscribes to 64 carefully-chosen publications (see list to the right), sifts through more than a hundred articles each week, and selects 5-10 that have the greatest potential to improve teaching, leadership, and learning. He then writes a brief summary of each article, pulls out several striking quotes, provides e-links to full articles when available, and e-mails the Memo to subscribers every Monday evening (with occasional breaks; there are 50 issues a year).

 

Subscriptions:

Individual subscriptions are $50 for a year. Rates decline steeply for multiple readers within the same organization. See the website for these rates and how to pay by check, credit card, or purchase order.

 

Website:

If you go to http://www.marshallmemo.com you will find detailed information on:

• How to subscribe or renew

• A detailed rationale for the Marshall Memo

• Publications (with a count of articles from each)

• Article selection criteria

• Topics (with a count of articles from each)

• Headlines for all issues

• Reader opinions (with results of an annual survey)

• About Kim Marshall (including links to articles)

• A free sample issue

 

Subscribers have access to the Members’ Area of the website, which has:

• The current issue (in Word or PDF)

• All back issues (also in Word and PDF)

• A database of all articles to date, searchable by topic, title, author, source, level, etc.

• A collection of “classic” articles from all 11 years

Core list of publications covered

Those read this week are underlined.

American Educational Research Journal

American Educator

American Journal of Education

American School Board Journal

AMLE Magazine

ASCA School Counselor

ASCD SmartBrief/Public Education NewsBlast

Better: Evidence-Based Education

Center for Performance Assessment Newsletter

District Administration

Ed. Magazine

Education Digest

Education Gadfly

Education Next

Education Week

Educational Evaluation and Policy Analysis

Educational Horizons

Educational Leadership

Educational Researcher

Edutopia

Elementary School Journal

Essential Teacher

Go Teach

Harvard Business Review

Harvard Educational Review

Independent School

Journal of Education for Students Placed At Risk (JESPAR)

Journal of Staff Development

Kappa Delta Pi Record

Knowledge Quest

Middle School Journal

Peabody Journal of Education

Perspectives

Phi Delta Kappan

Principal

Principal Leadership

Principal’s Research Review

Reading Research Quarterly

Reading Today

Responsive Classroom Newsletter

Rethinking Schools

Review of Educational Research

School Administrator

School Library Journal

Teacher

Teachers College Record

Teaching Children Mathematics

Teaching Exceptional Children/Exceptional Children

The Atlantic

The Chronicle of Higher Education

The District Management Journal

The Journal of the Learning Sciences

The Language Educator

The Learning Principal/Learning System/Tools for Schools

The New York Times

The New Yorker

The Reading Teacher

Theory Into Practice

Time

Wharton Leadership Digest