Collaborative Evaluation Program
Corporate Training Program
The Appleton Greene Corporate Training Program (CTP) for Collaborative Evaluation is provided by Ms. Gordon, MPH MS, Certified Learning Provider (CLP). Program Specifications: Monthly cost USD$2,500.00; Monthly Workshops 6 hours; Monthly Support 4 hours; Program Duration 12 months; Program orders subject to ongoing availability.
Personal Profile 
Ms. Gordon is a Certified Learning Provider (CLP) at Appleton Greene and she has experience in management, human resources and marketing. She has achieved a Master’s in Public Health (MPH) and a Master’s in Anthropology (MS). She has industry experience within the following sectors: Education; Healthcare; Non-Profit & Charities; Technology and Consultancy. She has had commercial experience within the following countries: United States of America, or more specifically within the following cities: Washington DC; New York NY; Philadelphia PA; Boston MA and Chicago IL. Her personal achievements include: facilitating twenty-two on-campus programs through accreditation processes for a US university; implementing an evaluation and accreditation process for an Accreditation Council; and co-developing the first web-based, online, integrated accreditation system in the United States and worldwide. Her service skills incorporate: learning and development; management development; business and marketing strategy; marketing analytics and collaborative evaluation.
To request further information about Ms. Gordon through Appleton Greene, please Click Here.
(CLP) Programs
Appleton Greene corporate training programs are all process-driven. They are used as vehicles to implement tangible business processes within clients’ organizations, together with training, support and facilitation during the use of these processes. Corporate training programs are therefore implemented over a sustainable period of time, that is to say, between 1 year (incorporating 12 monthly workshops) and 4 years (incorporating 48 monthly workshops). Your program information guide will specify how long each program takes to complete. Each monthly workshop takes 6 hours to implement and can be undertaken on the client’s premises, at an Appleton Greene serviced office, or online via the internet. This enables clients to implement each part of their business process before moving onto the next stage of the program, and enables employees to plan their study time around their current work commitments. The result is far greater program benefit, over a more sustainable period of time, and a significantly improved return on investment.
Appleton Greene uses standard and bespoke corporate training programs as vessels to transfer business process improvement knowledge into the heart of our clients’ organizations. Each individual program focuses upon the implementation of a specific business process, which enables clients to easily quantify their return on investment. There are hundreds of established Appleton Greene corporate training products now available to clients within customer services, e-business, finance, globalization, human resources, information technology, legal, management, marketing and production. It does not matter whether a client’s employees are located within one office or an unlimited number of international offices; we can still bring them together to learn and implement specific business processes collectively. Our approach to global localization enables us to provide clients with a truly international service with that all-important personal touch. Appleton Greene corporate training programs can be provided virtually or locally and they are all unique in that they individually focus upon a specific business function. All (CLP) programs are implemented over a sustainable period of time, usually between 1-4 years, incorporating 12-48 monthly workshops, and professional support is consistently provided during this time by qualified learning providers and, where appropriate, by Accredited Consultants.
Executive summary
Collaborative Evaluation
Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike “distanced” evaluation approaches, which exclude stakeholders from serving as evaluation team members, Collaborative Evaluation assumes that active, ongoing engagement between evaluators and program staff results in stronger evaluation designs, enhanced data collection and analysis, and results that stakeholders understand and use. Collaborative Evaluation distinguishes itself in that it uses a sliding scale for levels of collaboration, meaning that different program evaluations will experience different levels of collaborative activity. The sliding scale is applied as the evaluator considers each program’s evaluation needs, readiness, and resources. While Collaborative Evaluation is a term widely used in evaluation, its meaning varies considerably. Often used interchangeably with participatory and/or empowerment evaluation, the terms can mean different things, which can be confusing. The processes use a comparative Collaborative Evaluation Framework to highlight how, from a theoretical perspective, Collaborative Evaluation distinguishes itself from other participatory evaluation approaches.
Collaborative processes are being promoted as an alternative decision-making approach to management. This is a relatively recent phenomenon, and, given its growing popularity, it is important to develop and apply methods and criteria for evaluation, to determine strengths and weaknesses, and to identify best practices for effective use of the collaborative model. Evaluation based on multiple criteria and at several points in time can assist those involved in designing and organizing collaborative processes to ensure the process is responsive to stakeholders’ needs and achieves its objectives. The success of both the process and the outcome of collaborative processes can be effectively appraised using participant surveys, as sketched below.
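As an illustration of this kind of appraisal, the sketch below averages participant survey ratings for several criteria across two survey rounds, so that trends become visible over time. The criterion names, round labels, and the 1-5 rating scale are hypothetical assumptions for illustration, not part of any specific survey instrument.

```python
# A minimal sketch of multi-criteria, multi-point-in-time survey appraisal.
# Criterion names, round labels, and the 1-5 scale are illustrative assumptions.
from statistics import mean

# Each participant rates each criterion at each survey round (1 = poor, 5 = excellent).
responses = {
    "baseline": {
        "process_fairness":   [3, 4, 2, 3],
        "stakeholder_voice":  [2, 3, 3, 2],
        "outcome_durability": [3, 3, 2, 4],
    },
    "midpoint": {
        "process_fairness":   [4, 4, 3, 4],
        "stakeholder_voice":  [3, 4, 4, 3],
        "outcome_durability": [4, 3, 3, 4],
    },
}

def appraise(rounds):
    """Average each criterion per survey round so trends over time are visible."""
    return {
        round_name: {criterion: round(mean(scores), 2)
                     for criterion, scores in criteria.items()}
        for round_name, criteria in rounds.items()
    }

for round_name, averages in appraise(responses).items():
    print(round_name, averages)
```

Comparing the baseline and midpoint averages for each criterion gives a simple, shareable indicator of whether the collaborative process is improving on the dimensions stakeholders care about.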
Evidence from case studies of collaborative approaches shows that these processes can generate higher quality, more creative, and more durable agreements that are more successfully implemented due to increased public buy-in and reduced conflict. Collaboration can generate social capital by facilitating improved relationships between stakeholders, generating new stakeholder networks, enhancing communication skills, and co-producing new knowledge with stakeholders. However, collaborative processes are a relatively recent phenomenon, particularly when compared with historical planning and decision-making processes.
“Is our program working?” This is a key question in education today, particularly in this era of heightened accountability. A collaborative program evaluation model is an extremely useful way to answer this question when education organizations want to find out if their initiatives are achieving the intended outcomes, as well as why this is the case. In the collaborative program evaluation model, the client (e.g., districts, states, public and independent schools, non-profits, and foundations) works with the external evaluator to determine the questions that will be explored through the evaluation. They continue to work collaboratively to ensure that the context is understood, that multiple stakeholder perspectives are taken into account, and that data collection instruments are appropriate in content and tone. The model produces data that can proactively inform program implementation, provide formative information that supports program improvement, and offer summative information on the effectiveness of the program.
Collaborative evaluation is a proactive evaluation model that enables program staff to engage in continuous program improvement. Specific benefits of the model include:
A customized evaluation design that reflects the nuances of the program being evaluated.
An evaluation design that is flexible and adaptable to the purposes of the evaluation and to changes in program implementation over time.
Increased reliability of results.
Greater buy-in among stakeholders with both the data collection process and the evaluation findings.
Development of program staff’s capacity to continue to monitor their progress toward program goals beyond the duration of the evaluation.
Development of a culture of inquiry among program staff.
Potential cost efficiencies.
Each of these benefits is described in detail below:
Address program nuances
All evaluators should tailor evaluation services to the needs of each client (Patton, 2002). In the collaborative evaluation model, this is accomplished by evaluators working closely with program staff to identify evaluation questions and engage in an evaluation process that is attuned to the needs of program staff and stakeholders. As a result of the close knowledge built through collaborative program evaluations, such studies also guide program staff to identify and capitalize on external and internal program networks that they can tap to help them to achieve program goals (Fitzpatrick, 2012).
Flexible design
In a collaborative evaluation, continuous communication at the outset between program staff and the evaluation team is essential for laying the groundwork for mutual understanding. Ongoing communication is also a key ingredient for ensuring that the evaluation plan continues to be relevant to the program. By communicating regularly about program developments and context, evaluators can make adjustments in the evaluation plan to accommodate changes in the program.
Increased reliability of results
Another benefit of working collaboratively with program staff in developing the evaluation is increased reliability of the study. Because the evaluation team develops a high level of understanding of the program, data collection can be designed to accurately capture aspects of interest, and appropriate inferences and conclusions can be drawn from the data that are collected.
Greater buy-in for results
Engaging an experienced outside evaluator by itself increases the reliability of the study and the credibility of the findings. The use of a collaborative program evaluation also improves buy-in for the study’s results from a variety of stakeholders. Staff members who actively participate in the evaluation better understand how the results can be used to facilitate program improvement, while administrators and other decision makers are more likely to have confidence in the results if they are aware that program staff helped inform elements of the evaluation study (Brandon, 1998).
Increased ability to monitor progress
The evaluation team works with program staff to develop tools to measure desired outcomes of the program. Because tools are designed in collaboration with program staff, staff are better able to understand the purpose of the tools and what information can be gleaned from each. This makes it more likely that staff will feel comfortable with and use the instruments to collect data in the future to monitor ongoing progress, an added benefit to the client.
Development of a culture of inquiry
Because use of evaluation results is a primary goal of collaborative evaluation, the evaluation team may also facilitate a process in which practitioners examine data on program implementation and effectiveness throughout early stages of the evaluation. This process of reviewing evaluation results can foster the development of a culture of inquiry among program staff and support the goal of continuous improvement.
Potential cost efficiencies
There are several ways that a collaborative program evaluation can reduce costs in the short term and over time. There can be immediate cost savings because evaluation resources are tightly coupled with the program’s stage of development. The model can help avoid costly data collection strategies and analytic approaches when there is little to measure because the project is in a nascent stage of implementation. Cost savings may also emerge over time because of program improvements based on formative feedback. Additional savings may be found as the evaluation team develops the internal capacity of program staff through their active participation in the design and execution of the evaluation. With increased capacity, the program staff can then continue the progress monitoring process by themselves.
The collaborative evaluation process incorporates four phases: planning; implementation; completion; and dissemination and reporting. These complement the phases of program development and implementation. Each phase has unique issues, methods, and procedures.
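A minimal sketch of these four phases as an ordered workflow appears below. The phase names come from the text; the linear progression and the printing loop are illustrative assumptions rather than a prescribed implementation.

```python
# A minimal sketch of the four collaborative evaluation phases as an
# ordered workflow. Phase names come from the text; the linear
# progression is an illustrative assumption.
from enum import Enum
from typing import Optional

class Phase(Enum):
    PLANNING = 1
    IMPLEMENTATION = 2
    COMPLETION = 3
    DISSEMINATION_AND_REPORTING = 4

def next_phase(current: Phase) -> Optional[Phase]:
    """Advance to the following phase; return None after the final phase."""
    members = list(Phase)
    index = members.index(current)
    return members[index + 1] if index + 1 < len(members) else None

phase: Optional[Phase] = Phase.PLANNING
while phase is not None:
    print(f"Phase {phase.value}: {phase.name.replace('_', ' ').title()}")
    phase = next_phase(phase)
```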
Collaborative Evaluation – History
Most companies or organizations want to attain sustainable, successful goals for themselves, their employees and communities of interest. But more often than not, they have difficulties achieving them. It’s the “getting there” that’s hard. This corporate training program features a process-driven system that can help. For those committed to and serious about reaching their goals, success becomes almost inevitable. The system is based on efficiency, collaboration, quality, rigor and contemporary technology. What is more, it emphasizes excellence.
In describing the process, I will also note aspects of my own background that will, hopefully, provide some context for the explanation of this system. My professional career over the past three decades has featured teaching, training, mentoring and consulting, with an emphasis on evaluation and assessment in the area of accreditation in higher education. The evaluations revolved around a set of standards that were developed and approved over a period of time by members of a board of directors of an evaluation commission and members-at-large of an associated profession. Policies and procedures relating to the process were also written, but the standards were of top priority and ‘written in stone’. These standards were mandated requirements for programs to have in place before receiving any approval or accreditation from the board. Much of my experience centered on developing, writing and directing all of the processes involved in the evaluation. As noted, the area of focus was higher education, specifically within the healthcare professions. The evaluations took place primarily within the United States, but several were conducted abroad, and all were in higher education or learning institutions. For those outside the United States, some cultural and language modifications may have been made to the processes, but the predominant qualitative standards were similar throughout the world.
To achieve the long-term goal of what was known as accreditation in universities, there was a detailed peer-reviewed process that needed to be closely followed and monitored over the course of several years. The actual process was completed by the program using a web-based system, the first of its kind in accreditation within the United States and abroad. More will be said about this integrated, online system under “Future Outlook”.
The process was rigorous, with articulated goals, i.e., principles of quality, self-reflection and collaboration. A primary objective was to require institutions to demonstrate both the strengths and weaknesses of their programs. In this way, there was an honest approach on the part of all professionals involved, i.e., the staff, faculty, senior administrators and evaluators. In a sense, everyone started on an equal footing and received equal treatment. This was considered an extremely important component of the process by the Board of Directors. A second critical objective was the approach the evaluators took. It had to be clear, comprehensive and unbiased, in other words, not punitive. These objectives were stated at the outset of the process to members of departments and/or programs. The board also wanted all faculty and staff to know that they, the board, wanted them to succeed. Therefore, if institutions were serious and committed to the process and eager to improve (even if they already considered themselves stellar), a win-win situation could occur. This was gratifying to everyone concerned. The agency strongly believed in this management style and found the benefits to be plentiful. Most significantly, program participants began to enjoy the process, felt less constrained when responding to questions and believed they were partners in the evaluation process. This was extraordinarily valuable to them, and they also expressed an eagerness to hear how they could improve. The result of this peer-reviewed process, with ongoing progress reports over a period of time, was that, under most circumstances, there was stability of quality and continual improvement in the accredited educational programs. This was an organized effort to reach purposeful goals and will be further explained in the Future Outlook section.
Collaborative Evaluation – Current Position
My current work continues to be in the area of evaluation, but as a consultant, a role I began in 2018. On December 31, 2017, I retired from my former full-time position as Executive Director of the accreditation agency I founded. My present consulting remains in the area of education/healthcare and accreditation. Throughout 2018, I consulted with programs undergoing Developing Status. These programs (as opposed to ones undertaking accreditation evaluations) are starting from a ‘blank page’ and have never educated students in a specialized content area or at a terminal or doctoral level. This type of application is also referred to as a candidacy application.
The program submitting this type of application describes its intentions as fully as possible and explains how it will go about achieving the necessary components of its curriculum. Since the program is in its infancy or preliminary stages, it is not necessary for each area of the curriculum to be completely in place at this time. Instead, the administration (within the relevant university) will discuss the plan it has for its related course of study.
During the Developing Status stage, program administrators submit an application, in the form of a questionnaire, to the accrediting agency. The purpose of the application is to inquire how the program plans to achieve the goals it has for its curriculum and operations. Each question pertains to a specific area, such as mission, goals and objectives, governance, policies, recruitment and admissions, faculty development, curriculum, methods of instruction, finances, facilities, student achievement, resources and advisement, and other categories pertaining to this initial level of development. A simple sketch of such an application as structured data follows.
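The sketch below represents the application’s sections as a simple checklist and reports which ones a draft has not yet answered. The section keys mirror the areas listed above; the data structure and the completeness check are hypothetical illustrations, not the accrediting agency’s actual system.

```python
# A minimal sketch of a Developing Status application as structured data.
# Section keys mirror the areas listed in the text; the completeness check
# is an illustrative assumption, not the agency's actual system.
APPLICATION_SECTIONS = [
    "mission", "goals_and_objectives", "governance", "policies",
    "recruitment_and_admissions", "faculty_development", "curriculum",
    "methods_of_instruction", "finances", "facilities",
    "student_achievement", "resources_and_advisement",
]

def missing_sections(application: dict) -> list:
    """Return the sections the program has not yet answered."""
    return [s for s in APPLICATION_SECTIONS if not application.get(s)]

# A hypothetical draft with only two sections answered so far.
draft = {"mission": "Educate clinicians...", "governance": "Board of trustees..."}
print(missing_sections(draft))
```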
A discussion takes place about when the program wishes to admit students, and the program acknowledges that it will not do so until it receives approval from the accrediting body. Thus, if a program wishes to admit students in August/September of a given year and wants to recruit students at the beginning of that year, it needs to submit its application sufficiently in advance of the recruitment stage. Once an application is received, the Chair of the board designates two evaluators to review the Developing Status materials. These evaluators are usually members of the board.
The next step is for the evaluators to plan a fact-finding visit to the institution within the next two to three months. The visit takes place at a time mutually convenient to the program and the accrediting body. The purpose of the visit is to determine whether the program is ‘on track’ with its development and headed in the right direction. This means evaluators on site can verify the materials they received and meet with the institution’s senior administrators, faculty, practitioners and others who have been influential in the design and planning of the program. It also is an opportunity for the program’s administrators to ask evaluators questions about the program’s progress and to receive responses in return. This usually is a collaborative, supportive meeting, spurring the program to move quickly, provided it is headed down the right path.
Once the Developing Status evaluation is complete, the two evaluators write a report of their findings and submit it to the board. The board works quickly to discuss the pros and cons of the evaluation and, within two weeks to one month, provides a decision to the university or college about whether to approve or deny the report. It could also ask for additional information, if needed. A formal letter explaining the decision is written to the administrator in charge of the program. Once the letter is received and approval is given, the program can proceed with its timeline for development. If a program is deferred and a request for additional information is made, the program’s administrator can speak with the director of the accrediting body about next steps. The timeline for submitting new or additional information is explained in the letter sent. Once new materials are submitted and approved, the deferral is removed and Developing Status is awarded. If the board’s decision is to deny Developing Status, the program could re-apply, if it wishes, at a time mutually decided on by the accrediting body and the program. In most cases, the program will have an idea of whether it is on the correct path after its fact-finding meeting with the evaluators. The three possible outcomes are sketched below.
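Read as a workflow, the board’s decision has three outcomes: approve, defer with a request for more information, or deny. The sketch below models those states; the state names and follow-up messages are illustrative assumptions drawn from the narrative above, not a specification of the agency’s procedure.

```python
# A minimal sketch of the board's Developing Status decision flow:
# approve, defer (more information requested), or deny. State names and
# follow-up messages are illustrative assumptions, not the agency's
# actual procedure.
from enum import Enum, auto

class Decision(Enum):
    APPROVED = auto()
    DEFERRED = auto()  # additional information requested
    DENIED = auto()    # program may re-apply at an agreed later date

def next_step(decision: Decision) -> str:
    """Describe what the program does after receiving the board's letter."""
    if decision is Decision.APPROVED:
        return "Proceed with the development timeline."
    if decision is Decision.DEFERRED:
        return ("Submit the requested materials by the deadline in the "
                "letter; once approved, the deferral is removed.")
    return "Re-apply, if desired, at a time agreed with the accrediting body."

print(next_step(Decision.DEFERRED))
```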
Collaborative Evaluation – Future Outlook
I have been involved in each aspect of the evaluation system described for almost three decades and have believed in its virtues for the entire time. I value the fact that one engages in a process that is collaborative, self-reflective and stresses improvement on a regular basis. Although aspects of the process are time-consuming, I see enormous merit in the benefits it provides to individuals, teams and programs. These benefits transcend any factors that may seem tedious. I also believe this type of process-driven assessment can be adapted to numerous settings in both for-profit and non-profit companies. The important fact is that there is a consistent emphasis on excellence, efficiency, quality and rigor, as well as consistency in the questions asked within the formal structure. These characteristics are embedded into the processes. What is more, they can be replicated in institutions and businesses throughout the world.
How can this be achieved?