Question I: I would like to use DIBELS to assess reading skills but have many questions about it.

1. If you do the DIBELS assessment, does that mean that you would not test for some of the benchmarks in The Roadmap to Literacy? For instance, do you assess the Roadmap reading list as well as, or instead of, Word Reading Fluency in DIBELS? Doing both might mean a lot of formal assessment for students, depending on how it was delivered. What is most important, DIBELS, The Roadmap to Literacy’s benchmarks, or both? I guess teacher-designed assessment is the bulk of the job anyway, with DIBELS or similar just helping teachers to standardize their observations and make sure they are not missing anything.

You are correct–if you use DIBELS, you will not do all the benchmark assessments in Roadmap. When there is overlap, you would choose which assessment to use. It does not matter which you pick. Use the one you are most comfortable with (or the one mandated by the school). You are also correct that informal assessment is the bulk of assessment, with DIBELS or similar helping teachers to standardize their observations, make sure they didn't miss anything, and communicate easily with parents/colleagues/other schools/regulatory boards.

2. Under what circumstances would you NOT do one of the DIBELS standardized tests? For instance, there is a lot of controversy (or so I hear) about the use of nonsense words in standardized tests. In the assessment schedule I drafted, I used only some of the DIBELS range. When would this be appropriate, if at all?

You would not do one of the DIBELS standardized tests if the student misses too many prompts on a related DIBELS subtest–the DIBELS subtests themselves will tell you to stop and skip related subtests when students miss a certain number of prompts. The reason is that the students are unlikely to be able to do the related subtests, and there is no reason to use more time (and/or stress the student) by testing further.

There is a controversy about nonsense words in the Waldorf movement. The argument goes like this: students have a fixed visual memory for words, so we should not waste that capacity on nonsense words. This argument is made by people who are NOT experts in literacy and is based on a serious misunderstanding of how literacy (and the brain) work. Written English is phonetic, which means it is a code–a letter represents a sound and vice versa. Learning the code changes the students' brains and builds new capacities–the capacity to sound out words both real and nonsensical. The purest test of students' knowledge of the code/new capacities (i.e., decoding and encoding skills) is nonsense words because students HAVE no visual memory for these words. The test is thus pure code knowledge/pure decoding capacity.

People who make this mistake think English is like Chinese, where each word is an arbitrary pictograph. If written English words were pictographs, nonsense words WOULD tax students' visual memories for words and drain their capacities. Fortunately, written English is not pictographic. Students learn the code and can read–decode–any word. Teachers need to know how this capacity is developing if they are to help students achieve the skills they need to become literate in English.

As for your assessment schedule, some DIBELS is better than no DIBELS, but all DIBELS is better than some DIBELS. That said, you have to get your teachers to buy in. You know your colleagues. Decide how far you wish to push and how fast. Perhaps it would be better to start small and build over the years? Perhaps it would be better to push for the full DIBELS from the get-go? Perhaps it would be best to make the nonsense words optional but provide recommendations for those willing to do them? Perhaps it would be best to educate people about the importance of nonsense words? There are lots of options.

In the meantime, if people have questions about nonsense words, I am happy to answer them.

3. In Australia, we are heading toward the end of the school year, so we may only be able to manage the tests for the end of the year, which would of course mean we do not have any data to compare them with. We MIGHT be able to get a start on assessment this term (‘middle' assessments) and then again next term (‘end' assessments), but the time between the tests would only be something like 8 weeks. Is that worth it or fair?

It would be fine to do the end of the school year only OR both the middle and the end. However, you would use the results in different ways. Let me explain. Assessments can be used for lots of different reasons, including progress monitoring, benchmark assessment, outcomes assessment, etc. Because there would only be two months between the “middle” and “end” of the school year, you would use the middle assessment for progress monitoring (i.e., how much did the students gain in the last two months of school?) as opposed to using it for benchmark assessment (are the students on track to be caught up with their public school peers by the end of class 3?). Under either scenario, you would use only the end assessment for benchmark. It does not matter which you choose because you would be doing more assessments next year. As long as the end assessment is done, you and the teachers would have data to work with. The teachers would know which students need a little more help and which need extensive instruction and practice to be at benchmark.

4. It would certainly be worthwhile to see what running the tests looks like in the classroom and to start some data gathering for next year. Would you do a whole heap of different tests with one student, then move on to the next student? Or would you break it up a bit? I am thinking about staffing the testing with someone who is not the class teacher. Or perhaps sharing the role so teachers can build confidence in administering the tests? Deciding where to go next with each child/class will be the crucial next step.

You would do all the one-on-one assessments with one student in one sitting. If you use DIBELS, it will not take very long. The subtests are short.

It would be good to include the class teacher to build confidence in all aspects of the test. Many teachers believe testing harms students. This belief is based on a misreading of Steiner's indications. Steiner took umbrage at final exams because they are stressful for some students–and Steiner School teachers sometimes overgeneralize that indication to all assessments.

The most important thing is deciding how to apply the testing data to help each student learn. That argues for including the class teachers as much as possible because if they cannot interpret the testing data, they will not use it to inform their teaching.

5. It is quite hard to work out which assessment tool is best. Are different literacy assessment tools pretty much comparable? What does one look for in a standardized assessment tool? I have a teacher at my school who keeps name-dropping various programs, perhaps implying that DIBELS is inferior. It is difficult to compare different programs when they are not accessible!

Do not sweat it too much–many literacy assessment tools overlap to a greater or lesser extent. Choose the assessment that will meet your needs. For example, why are you doing the assessment? I believe you once said that you wanted it to inform instruction and lead to better outcomes for students. DIBELS is a wonderful choice to meet these objectives because it is accessible, quick, and easy to administer, and there is support material to help teachers use the data to inform classroom instruction. (Check out the book I've DIBEL'd, Now What?)

Good luck! See my article “DIBELS Recommendations for Steiner Schools” for amended recommendations for classes 1–3.

Jennifer

About the Author: Jennifer Militzer-Kopperl