The prevalence of one-to-one learning environments has grown in recent years, both nationally
and internationally (Richardson et al., 2013). Contemporary one-to-one learning environments are
classrooms in which every student has access to a computing device (either a tablet or laptop) and
continuous access to the Internet (Spires, Wiebe, Young, Hollebrands, & Lee, 2009). Research on the
pedagogy in one-to-one learning environments often yields reports of a transformation from traditional,
teacher-directed methods to student-centered and constructivist pedagogies (Cavanagh, Dawson, &
Ritzhaupt, 2011; Gherardi, 2017). The integration of one-to-one computing is said to provide instructional
affordances, including increased collaboration between learners (Broussard, Hebert, Welch, & vanMetre,
2014; Inserra & Short, 2012); higher levels of student motivation and engagement (Harper & Milman,
2015; Holen, Boury, & Semich, 2017); and more differentiation on the part of teachers (Gherardi, 2017;
Milman, Carlson-Bancroft, & Boogart, 2014). In these settings, ongoing access to digital resources
provides students with opportunities for greater relevancy and personalization in the learning process
(Spires et al., 2009). However, this can also introduce complexities that require teachers to be highly
skilled at improvising, coaching, and consulting (Spires et al., 2009).
Through this roundtable presentation, the researchers will share the validation strategies being
used and discuss the journey leading to the design of an instrument to measure inservice and preservice
teacher preparedness for teaching in a student-centered, one-to-one learning environment. Sharing this
next step as a transparent process helps involve other interested researchers in the field and invites
conversation about the final product, its use, its implementation in data collection, and its potential.
Through this discussion, we hope to engage colleagues on validation itself, the challenges faced when
undertaking related methodologies, and the ways validation procedures advance our efforts to influence
effective technology integration practices.
Validation Procedures
This set of competencies has been thoroughly assessed for content validity as part of the
research conducted by Author (2017, 2018). Several decisions must be made to continue the process of
developing a tool that can measure those competencies. With regard to reformatting the inventory for
use as an observation tool, one must determine who the observer is, the setting, the scale, and the
frequency and duration of observation. Of particular interest to us is the granularity of the behavior being
observed. Because the original source of competencies (Author, 2017) constitutes a set of somewhat
abstract qualities, it is important that we ascertain how specific the exemplars of these behaviors should
be. On the one hand, the more general the competency, the more extensible and flexible the observational
measurement. For example, the item “Utilize technology to provide opportunities for individualized
student learning experiences” is global enough to apply to different time resolutions (classroom
observations vs. principal yearly evaluations); devices (laptops vs. tablets); users (teacher’s whiteboard
vs. student’s device); and activities (whole class lesson vs. individualized work product).
Therefore, selecting a useful level of generality is an important first step. Developing an
inventory to be used as a self-assessment instrument to measure inservice or preservice teachers’
preparedness for teaching in this dynamic environment requires additional and equally important
procedures. This requires translating general standards into items appropriate for self-diagnosis by
educators and creating the appropriate scale and instructions. This will be followed by expert review,
piloting the instrument with an appropriate sample, and evaluation of internal consistency. Once
reliability is established, we will examine validity: does the measure predict what it should and not
what it shouldn't?
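To make the internal-consistency step concrete, the sketch below computes Cronbach's alpha, a common index of internal consistency, for a small set of hypothetical pilot responses. The respondents, items, and ratings are illustrative assumptions, not data from the instrument described here.

```python
# Illustrative sketch (not the authors' procedure): estimating internal
# consistency of pilot self-assessment responses with Cronbach's alpha.
# All data below are hypothetical.

def cronbach_alpha(scores):
    """scores: list of respondent rows, each a list of item ratings."""
    n_items = len(scores[0])

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    # Variance of each item across respondents
    item_vars = [variance([row[i] for row in scores]) for i in range(n_items)]
    # Variance of each respondent's total score
    total_var = variance([sum(row) for row in scores])

    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical pilot data: 5 teachers rating 4 items on a 1-5 scale
pilot = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(pilot), 2))  # prints 0.94 for this sample
```

Values near .70 or above are often treated as acceptable, though the appropriate threshold depends on the instrument's purpose.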
DeVellis (2003) explains that the process of scale validation is iterative; the researchers specify
potential items and response scales, obtain feedback on the items and format, include validation items as
applicable, and then collect data. Those data then inform subsequent rounds of item generation/revision,
review, and data collection. Although not easy, the tasks themselves are straightforward.
References
Author (2017)
Author (2018)
Broussard, J., Hebert, D., Welch, B., & VanMetre, S. (2014). Teaching today for
tomorrow: A case study of one high school’s 1:1 computer adoption. The Delta Kappa Gamma
Bulletin, 80(4), 37-45. Retrieved from
https://www.questia.com/library/p438442/delta-kappa-gamma-bulletin
Cavanagh, C., Dawson, K., & Ritzhaupt, A. (2011). An evaluation of the conditions,
processes, and consequences of laptop computing in K-12 schools. Journal of Educational
Computing Research, 45(3), 359-378. doi: 10.2190/EC.45.3.f
DeVellis, R. F. (2003). Scale development: Theory and applications (2nd ed.). Thousand Oaks, CA: Sage.
Gherardi, S. (2017). Digitized and decoupled? Teacher sensemaking around educational
technology in a model 1:1 program. Mid-Western Educational Researcher, 29(2), 166-194.
Retrieved from https://www.mwera.org
Harper, B., & Milman, N. B. (2015). One-to-one technology in K-12 classrooms: A
review of the literature from 2004 through 2014. Journal of Research on Technology in
Education, 48(2), 129-142. doi: 10.1080/15391523.2016.1146564
Holen, J. B., Hung, W., & Gourneau, B. (2017). Does one-to-one technology really
work: An evaluation through the lens of activity theory. Computers in the Schools:
Interdisciplinary Journal of Practice, Theory, and Applied Research, 34(1-2), 24-44. doi:
10.1080/07380569.2017.1281698
Inserra, A., & Short, T. (2012). An analysis of high school math, science, social studies,
English, and foreign language teachers’ implementation of one-to-one computing and their
pedagogical practices. Journal of Educational Technology Systems, 41(2), 145-169. doi:
10.2190/ET.41.2.d
Islam, M. S., & Grönlund, Å. (2016). An international literature review of 1:1 computing in schools.
Journal of Educational Change, 17(2), 191-222. doi: 10.1007/s10833-016-9271-y
Maninger, R. M., & Holden, M. E. (2009). Put the textbooks away: Preparation and support for a middle
school one-to-one laptop initiative. American Secondary Education, 5-33. Retrieved from
http://www.ashland.edu/academics/education/ase/
Milman, N. B., Carlson-Bancroft, A., & Vanden Boogart, A. (2014). Examining
differentiation and utilization of iPads across content areas in an independent, preK-4th grade
elementary school. Computers in the Schools, 31, 119-133. doi: 10.1080/07380569.2014.931776
Richardson, J. W., McLeod, S., Flora, K., Sauers, N. J., Kannan, S., & Sincar, M. (2013). Large-scale 1:1
computing initiatives: An open access database. International Journal of Education and
Development using Information and Communication Technology, 9(1), 4-18. Retrieved from
http://ijedict.dec.uwi.edu
Shapley, K., Sheehan, D., Maloney, C., & Caranikas-Walker, F. (2011). Effects of technology immersion
on middle school students’ learning opportunities and achievement. The Journal of Educational
Research, 104(5), 299-315. doi: 10.1080/00220671003767615
Spires, H., Wiebe, E., Young, C. A., Hollebrands, K., & Lee, J. K. (2009). Toward a new
learning ecology: Teaching and learning in one-to-one environments. Friday Institute White
Paper Series. Raleigh, NC: NC State University.
Strong, M., Gargani, J., & Hacifazlioğlu, Ö. (2011). Do we know a successful teacher when we see one?
Experiments in the identification of effective teachers. Journal of Teacher Education, 62(4), 367-
382. doi: 10.1177/0022487110390221