
INTERVIEW

There are 10 questions concerning the development of a data collection, design, and sampling strategy for the program evaluation.
QUESTIONS:
1. Who do you consider to be the primary stakeholders in the IDT Program?

2. What are the interests and preferences of stakeholders?

3. What steps should be taken before evaluating the IDT program?

4. What do you think is the key purpose for evaluating the IDT program?

5. Based on the purpose of the evaluation, what evaluation method would you use?

6. Are data collection and subsequent data analysis significant in the program evaluation? If so, how important are they? Could you explain your ideas?

7. What do you consider the most effective and appropriate way to collect the data?

8. What sources would you use for the evaluation in the IDT program?

9. Could you provide at least 2-3 questions that you would use in the evaluation?

10. How would formative and summative evaluations be defined for the IDT program?

SUMMARY
Q. 1: One of my interviewees considers the primary stakeholders in the IDT program to be the university leadership. Another interviewee believes that the instructors and students are not so much stakeholders in an evaluation as data sources. A third view, quite close to my own, is that the primary stakeholders in the IDT program are the learners, the educators, and the funders who are paying for the program.
Q. 2: The primary interest and preference of the university leadership is assurance that the university is perceived as a quality institution producing skilled, knowledgeable graduates who are ready to participate and compete in the job market; the key objective is a reputation for applicability. For instance, the Office of the Chief Academic Officer is concerned with school accreditation and academic integrity, and therefore wants the university's programs to be known as ones that successfully teach what they claim to teach. The primary interest of the university as an educational organization is to earn accreditation so that it can provide a high level of teaching. Learners, educators, and funders simply want the program to work to their expectations.
Q. 3: The purpose and scope of the evaluation need to be determined with the stakeholders so that the evaluators know what is supposed to be measured and why. Another interviewee adds that the evaluator needs to become familiar with the program and its expectations, since it is impossible to evaluate a program effectively without knowing it well.
Q. 4: The key purpose of the IDT program evaluation comes down to two questions:
A. Did the students learn what the program claimed to teach them? (In other words, is the accreditation unassailable?)
B. Do the students leave with marketable skills? (In other words, is the program applicable to real life?)
Q. 5: The part of the evaluation that deals with accreditation should probably use an Expertise-oriented method, in which experts in the field judge the content of the curriculum as well as the responses of the graduates to determine whether what is being taught is what is being learned.
The part of the evaluation that deals with the applicability of the program would probably use a Decision-oriented method, since the administration's goal is to make the program as applicable as possible. The content of the courses or the teaching methods may have to change to meet this goal, so data to support decisions about the type and extent of those changes must be gathered.
Q. 6: Interviewees believe that data is of primary significance for the IDT program evaluation, whereas the perceptions and empowerment of participants and stakeholders are of very limited use. The data the evaluator collects supports the evaluation process. Data analysis is important because it is the analysis of the collected data that tells the evaluator what the issues are with the program and what is working well.
Q. 7: Interviews might be a good method for collecting information on long-term retention of what is learned in the program; electronic surveys would not be good for that, since answers could be found with Google fairly easily. Gauging the applicability of the program, however, could be done by survey, whether electronic or by other means.
Q. 8: Primarily, the graduates would serve as a source, as they can provide information about long-term retention of what is taught as well as perceptions of the program's applicability. The graduates' employers could also be a source for measuring applicability. To evaluate for accreditation, however, evaluators would also need the course design documents and course content.
Q. 9: Several questions could be used in the evaluation. For evaluating applicability: "When asked to produce multimedia content for your first job, how prepared did you feel by what you learned in the IDT program? 1 (not at all) to 5 (very)"; "How has this project changed the outlook of the program?"; "What were some strategies that worked for this program, and which did not?"
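Purely as an illustration, and not something the interviewees proposed, the short Python sketch below shows one way answers to the 1-to-5 preparedness question might be tallied once collected; the response values are invented for the example.

from collections import Counter

# Hypothetical Likert-scale answers to the preparedness question
# (1 = not at all prepared, 5 = very prepared); values are invented.
responses = [4, 5, 3, 4, 2, 5, 4, 3]

counts = Counter(responses)  # how many graduates chose each rating
mean_score = sum(responses) / len(responses)

for rating in range(1, 6):
    print(f"Rating {rating}: {counts.get(rating, 0)} response(s)")
print(f"Mean preparedness score: {mean_score:.2f}")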
Q. 10: Interviewees see the formative evaluation as what would be used to make decisions about the direction and content of the program. By contrast, there would be no summative evaluation of the entire program, since the IDT program is ongoing and has no "end".
RATIONALE OF THE INTERVIEW:
I mostly agree with my interviewees concerning the program evaluation, data collection and analysis, and the appropriate methods and approaches for evaluating the IDT program.
I personally agree that the primary stakeholders of the IDT program are the learners, educators, and funders, as each has personal interests and preferences regarding the program: the learner, whose interest is success in the program; the educator, who teaches the content to the students and checks for understanding; and the funder, who pays for the program and expects it to work to expectations.
Moreover, I believe the best way to collect data is by interviewing stakeholders: they have the most knowledge of what the evaluator is looking into and can explain in depth why the things that need to change need to change. Interviewing in person is the best way to do it, because the interviewer can follow up on what others are saying right then and there, making it more of a conversation than a strict interview.
As for the evaluation method, which should be based on the key purpose of the program, I would use the decision-oriented method. I chose it because it centers on what the stakeholders want to learn about the program, so decisions can be made throughout the evaluation process to ensure the approach to the evaluation is successful.
I would use my stakeholders as the main source in the evaluation; they would be good sources because they have walked through the program, can break it down fully, and can help bring out the things that need to be improved.
Finally, some examples of formative evaluation would be interviews, conversations, and asking people to demonstrate what is being discussed so we can physically see what the participants are talking about. At the end of the program, a summative evaluation could be an online survey, a test of participants' understanding of the program, or an assessment to see where all of the stakeholders stand.
