Developing a monitoring and evaluation framework helps clarify which pieces of information to collect to evidence your story of change.
It is good practice to include people who will be collecting the data when you develop your framework. You could also involve beneficiaries, volunteers, trustees, partner organisations or funders.
Ideally, write your framework before your project starts so you can make sure you are collecting appropriate data from the beginning.
AALHE's Mission
To develop and support a community of educators, and to inform assessment practices in higher education in order to foster and improve student learning and institutional quality.
This is an interactive guide for people who are managing an evaluation.
The guide can be used for managing an evaluation that is conducted by an external evaluator or evaluation team, an internal team, or by a combination of these. It can be used for different types of evaluations and for evaluations of different types of interventions, including projects, programs, policies and clusters of projects. It can also be used for evaluation of research.
The guide aims to support decision making throughout the process of an evaluation, from planning its purpose and scope, through designing and conducting it, to reporting findings and supporting their use. In many organizations, this process will draw on the expertise of several individuals, and additional help may need to be obtained for one or more steps in the process.
Teachers as well as building and district leaders who are implementing the Ohio Teacher Evaluation System will benefit from training and support related to selecting, developing and using assessments. The Ohio Department of Education, in conjunction with Battelle for Kids, is offering this support through activities aimed at helping participants enhance their “assessment literacy” skills.
The Enhancement Themes are selected by the Scottish higher education sector and they provide a means for institutions, academic staff, support staff and students to work together in enhancing the learning experience.
In CARS, we think of the assessment process as an ongoing cycle, in which assessment of student learning outcomes is used to improve programming. Student learning outcomes are things we want students (or program participants) to know, think, or do upon completing the offered program. For more information on the assessment cycle, see the resources below. The following sections include resources for each step of the cycle.
We deliver actionable, meaningful research and development that advances the field of educational assessment and evaluation, promotes effective and equitable education policy, and improves evidence-based inferences.
This list is a companion to our curated list on technical topics. It puts together our posts on issues of measurement, survey design, sampling, survey checks, managing survey teams, reducing attrition, and all the behind-the-scenes work needed to get the data needed for impact evaluations.
NILOA assists institutions and others in discovering and adopting promising practices in the assessment of college student learning outcomes. Documenting what students learn, know and can do is of growing interest to colleges and universities, accrediting groups, higher education associations, foundations and others beyond campus, including students, their families, employers, and policy makers.
This portal presents the final products of the research project INevalCO (EA2010-0052), INnovación en la EVALuación de COmpetencias: the design and development of procedures and instruments for assessing competencies in blended/virtual learning environments, with student participation, in undergraduate degree programmes. Funded by the Secretaría General de Universidades (Ministerio de Educación).
The study focused on an issue central to university teaching today: the design and development of procedures and instruments for assessing competencies in undergraduate degree programmes, encouraging student participation and adapting them for blended and virtual learning environments.
This Shiny App provides a user-friendly interface for conducting item analysis based on Classical Test Theory (CTT). Item analysis examines responses to individual test items in order to assess the quality of the items and of the test as a whole, and is valuable for improving items or eliminating poorly written or ambiguous ones. The app offers several item analysis statistics, including reliability estimates, item difficulty, and item discrimination (i.e., item-total correlation).
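To illustrate what these CTT statistics involve, here is a minimal sketch in Python (not taken from the app, which is written in R): item difficulty as the proportion answering correctly, item discrimination as the corrected item-total correlation, and reliability as Cronbach's alpha. The function name and input format are illustrative assumptions.

```python
import numpy as np

def item_analysis(responses):
    """Basic CTT item statistics for a 0/1-scored response matrix
    (rows = examinees, columns = items). Illustrative sketch only."""
    X = np.asarray(responses, dtype=float)
    n_people, n_items = X.shape
    total = X.sum(axis=1)

    # Item difficulty: proportion of examinees answering each item correctly.
    difficulty = X.mean(axis=0)

    # Item discrimination: correlation of each item with the total score
    # of the remaining items (corrected item-total correlation).
    discrimination = np.array([
        np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(n_items)
    ])

    # Cronbach's alpha: internal-consistency reliability estimate.
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = total.var(ddof=1)
    alpha = n_items / (n_items - 1) * (1 - item_var / total_var)

    return difficulty, discrimination, alpha
```

A low (or negative) discrimination flags an item that high scorers miss as often as low scorers, which is exactly the kind of item such an analysis would suggest revising or dropping.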
Seven New England colleges and universities formed the Learning Assessment Research Consortium (LARC) and developed online modules on assessment to be utilized for professional development within colleges and universities nationally. NILOA is pleased to house the great work of the Consortium on our website. All LARC developed materials and module content are under a creative commons license.
Helping you deliver on your online promise
With online learning, everyone has a goal. Learners need to improve and grow. You work to nurture them with well-conceived, well-designed, well-presented courses and programs. Our goal — as a non-profit, quality assurance organization — is to provide a system to help you deliver on that promise: with review, improvement and certification of quality.
The project aimed to confront a fundamental issue for every Higher Education (HE) course/programme leader: how to design an effective, efficient, inclusive and sustainable assessment strategy which delivers the key course/programme outcomes.
The e-Assessment Association has three major goals: to provide professional support and facilitate debate and discussion for people involved in this field of expertise; to communicate the positive contributions that technology makes to all forms of assessment; and to develop statements of good practice for suppliers and consumers of e-Assessment technologies.
These guidelines aim to help you implement ePortfolios in your organisation by serving as a clearing house of existing implementation strategies and relevant know-how for schools, universities, companies and other public or non-profit institutions.
Initially, we provide you with general ideas about ePortfolios by showing you some characteristics of educational settings and contexts of ePortfolio use. ePortfolios can support competence- or skills-oriented teaching, serve as an instrument for assessment, or act as a knowledge management tool. Although the individual learner's ePortfolio is better known and more thoroughly researched, group and company ePortfolios exist as well.
The next section, "Recommended ePortfolio models, concepts and tools", gives you an insight into the requirements to consider before implementing an ePortfolio (EUROPORTFOLIO Matrix) and into the learning processes that ePortfolios can facilitate. The embedding of the ePortfolio within (higher) education and organizational settings, and its use as a lifelong learning (LLL) tool, is demonstrated via the ePortfolio meta model, while a Taxonomy of ePortfolios shows the manifold purposes of ePortfolios. If you have decided to implement ePortfolios in your institution, you can consult the evaluation of ePortfolio software/platforms in order to choose the tool appropriate for your context.
The following section, "Approved implementation processes and requirements", can serve as a model for your own implementation. The instructions and checklists in this section guide you through the implementation process, and you can also review the challenges you are likely to face when implementing ePortfolios. The brief list of keywords is intended to help you brainstorm and consider what you need to take care of. Glossaries are part of this Wiki as well; they were developed by different ePortfolio experts and will hopefully help you get acquainted with the terminology.
Welcome to the Assessment Resources at HKU. This is the home for conceptual and practical information relating to the development, innovation and research of assessment in higher education. Based on the results of institutional surveys conducted in universities around the world, assessment and feedback continue to be identified as major challenges in teaching and learning. Despite various ongoing initiatives to improve assessment and feedback in higher education, it inevitably takes time to ignite changes in pedagogical practices. It is hoped that AR@HKU can act as a resource platform to facilitate, accelerate and promote a global move towards good practices in assessment and feedback. On this website you will be able to exchange ideas and find strategies and descriptive details for assessing your students, whether in groups, as individuals, in large class settings or through online assessment. You will also find ways to evaluate your teaching, and tips for students on how to succeed in particular assessments.
The Laboratory for Innovation Science at Harvard (LISH) is spurring the development of a science of innovation through a systematic program of solving real-world innovation challenges while simultaneously conducting rigorous scientific research and analysis.
In this space we publish measurement instruments (questionnaire templates, guides for administering questionnaires, etc.) and methodologies, in order to guide the design and development of evaluation activities that ensure the effectiveness, efficiency and quality of training plans.
Through this website, we aim to provide the university community with assessment procedures and instruments across all branches of knowledge, covering the eight core competencies of undergraduate degrees specified in RD 1393/2007, so that a bank of examples is available for assessing students' level of competency development.
By way of a repository, it offers the assessment procedures, assessment units and assessment instruments developed in the study: a public bank of procedures, assessment units and instruments for competency assessment that can serve as a reference for, and be used by, university teaching staff.
To help campuses ensure that their online courses are learner centered and well designed, a team of Open SUNY staff and campus stakeholders has designed the OSCQR rubric, a customizable and flexible tool for online course quality review.
The OSCQR rubric specifically targets online course design and differs from other online course quality rubrics in several ways. It is not restricted to mature online courses: the rubric can be used formatively with new online faculty to help guide, inform, and influence the design of their new online courses, and it is non-evaluative.
Conceptually, the rubric and the online course review and refresh process are implemented as a professional development exercise designed to guide online faculty to use research-based effective practices and standards to improve the quality, effectiveness, and efficiency of their online course design, rather than as an online course evaluation, or quality assurance procedure.
The Transparency in Learning and Teaching in Higher Education project (TILT Higher Ed) is an award-winning national educational development and research project that helps faculty to implement a transparent teaching framework that promotes college students' success. The Project's activities include:
workshops for both faculty and students that promote students' conscious understanding of how they learn,
online surveys that help faculty to gather, share and promptly benefit from current data about students' learning by coordinating their efforts across disciplines, institutions and countries,
confidential reporting of survey results to faculty, and
collaborative research on students' learning experiences.
Since its inception at the University of Illinois, Urbana-Champaign in 2009-2010, the project has involved over twenty-five thousand students in hundreds of courses at more than forty institutions in seven countries. Now housed at UNLV, the project invites participants from all institutions of higher education in the US and abroad. In 2014-2015, the Transparency Project began partnering with the Association of American Colleges and Universities to focus on advancing underserved students' success in higher education.
This page is devoted to teaching others about psychometric theory as well as R. It consists of chapters of an in-progress text as well as various short courses on R.