Collaborative Evaluation Program
Corporate Training Program
The Appleton Greene Corporate Training Program (CTP) for Collaborative Evaluation is provided by Ms. Gordon, MPH, MS, Certified Learning Provider (CLP). Program Specifications: Monthly cost USD$2,500.00; Monthly Workshops 6 hours; Monthly Support 4 hours; Program Duration 12 months; Program orders subject to ongoing availability.
Personal Profile 
Ms. Gordon is a Certified Learning Provider (CLP) at Appleton Greene and she has experience in management, human resources and marketing. She has achieved a Master’s in Public Health (MPH) and a Master’s in Anthropology (MS). She has industry experience within the following sectors: Education; Healthcare; Non-Profit & Charities; Technology and Consultancy. She has had commercial experience within the following countries: United States of America, or more specifically within the following cities: Washington DC; New York NY; Philadelphia PA; Boston MA and Chicago IL. Her personal achievements include: facilitating twenty-two on-campus programs that underwent accreditation processes for a USA university; implementing an evaluation and accreditation process for an Accreditation Council; and co-developing the first web-based, online, integrated accreditation system in the United States and worldwide. Her service skills incorporate: learning and development; management development; business and marketing strategy; marketing analytics and collaborative evaluation.
To request further information about Ms. Gordon through Appleton Greene, please Click Here.
(CLP) Programs
Appleton Greene corporate training programs are all process-driven. They are used as vehicles to implement tangible business processes within clients’ organizations, together with training, support and facilitation during the use of these processes. Corporate training programs are therefore implemented over a sustainable period of time, that is to say, between 1 year (incorporating 12 monthly workshops), and 4 years (incorporating 48 monthly workshops). Your program information guide will specify how long each program takes to complete. Each monthly workshop takes 6 hours to implement and can be undertaken either on the client’s premises, an Appleton Greene serviced office, or online via the internet. This enables clients to implement each part of their business process, before moving onto the next stage of the program and enables employees to plan their study time around their current work commitments. The result is far greater program benefit, over a more sustainable period of time and a significantly improved return on investment.
Appleton Greene uses standard and bespoke corporate training programs as vessels to transfer business process improvement knowledge into the heart of our clients’ organizations. Each individual program focuses upon the implementation of a specific business process, which enables clients to easily quantify their return on investment. There are hundreds of established Appleton Greene corporate training products now available to clients within customer services, e-business, finance, globalization, human resources, information technology, legal, management, marketing and production. It does not matter whether a client’s employees are located within one office, or an unlimited number of international offices, we can still bring them together to learn and implement specific business processes collectively. Our approach to global localization enables us to provide clients with a truly international service with that all important personal touch. Appleton Greene corporate training programs can be provided virtually or locally and they are all unique in that they individually focus upon a specific business function. All (CLP) programs are implemented over a sustainable period of time, usually between 1-4 years, incorporating 12-48 monthly workshops and professional support is consistently provided during this time by qualified learning providers and where appropriate, by Accredited Consultants.
Executive summary
Collaborative Evaluation
Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike “distanced” evaluation approaches, which reject stakeholder participation as evaluation team members, Collaborative Evaluation assumes that active, ongoing engagement between evaluators and program staff results in stronger evaluation designs, enhanced data collection and analysis, and results that stakeholders understand and use. Collaborative Evaluation distinguishes itself in that it uses a sliding scale for levels of collaboration. This means that different program evaluations will experience different levels of collaborative activity. The sliding scale is applied as the evaluator considers each program’s evaluation needs, readiness, and resources. While Collaborative Evaluation is a term widely used in evaluation, its meaning varies considerably. Often used interchangeably with participatory and/or empowerment evaluation, the terms can be used to mean different things, which can be confusing. The processes use a comparative Collaborative Evaluation Framework to highlight how, from a theoretical perspective, Collaborative Evaluation distinguishes itself from the other participatory evaluation approaches.
Collaborative processes are being promoted as an alternative approach to management decision-making. This is a relatively recent phenomenon, and, given its growing popularity, it is important to develop and apply methods and criteria for evaluation, to determine strengths and weaknesses, and to identify best practices for effective use of the collaborative model. Evaluation based on multiple criteria and at several points in time can assist those involved in designing and organizing collaborative processes to ensure the process is responsive to stakeholders’ needs and achieves its objectives. The success of both the process and the outcome of collaborative processes can be effectively appraised using participant surveys.
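The participant-survey appraisal described above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical 1-5 Likert scale and illustrative criterion names; it is not a prescribed instrument from the program.

```python
# Minimal sketch: averaging participant survey ratings per criterion.
# The 1-5 scale and criterion names below are illustrative assumptions.

from statistics import mean

def appraise(responses: list[dict]) -> dict:
    """Average each criterion's ratings across all participant responses."""
    criteria = responses[0].keys()
    return {c: round(mean(r[c] for r in responses), 2) for c in criteria}

# Two hypothetical participant responses, rating both process and outcome.
surveys = [
    {"process_fairness": 4, "communication": 5, "outcome_durability": 3},
    {"process_fairness": 5, "communication": 4, "outcome_durability": 4},
]
scores = appraise(surveys)
```

Collecting such ratings at several points in time, as the paragraph suggests, would simply mean running this appraisal per survey wave and comparing the averages.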
Evidence from case studies of collaborative approaches shows these processes can generate higher quality, more creative and more durable agreements that are more successfully implemented due to increased public buy-in and reduced conflict. Collaboration can generate social capital by facilitating improved relationships between stakeholders, generating new stakeholder networks, enhancing communication skills, and co-producing new knowledge with stakeholders. However, collaborative processes are a relatively recent phenomenon, particularly when compared with historical planning and decision-making processes.
“Is our program working?” This is a key question in education today, particularly in this era of heightened accountability. A collaborative program evaluation model is an extremely useful way to answer this question when education organizations want to find out if their initiatives are achieving the intended outcomes, as well as why this is the case. In the collaborative program evaluation model, the client (e.g., districts, states, public and independent schools, non-profits, and foundations) works with the external evaluator to determine the questions that will be explored through the evaluation. They continue to work collaboratively to ensure that the context is understood, that multiple stakeholder perspectives are taken into account, and that data collection instruments are appropriate in content and tone. The model produces data that can proactively inform program implementation, provide formative information that supports program improvement, and offer summative information on the effectiveness of the program.
Collaborative evaluation is a proactive evaluation model that enables program staff to engage in continuous program improvement. Specific benefits of the model include:
A customized evaluation design that reflects the nuances of the program being evaluated.
An evaluation design that is flexible and adaptable to the purposes of the evaluation and to changes in program implementation over time.
Increased reliability of results.
Greater buy-in among stakeholders with both the data collection process and the evaluation findings.
Development of program staff’s capacity to continue to monitor their progress toward program goals beyond the duration of the evaluation.
Development of a culture of inquiry among program staff.
Potential cost efficiencies.
Each of these benefits is described in detail below:
Address program nuances
All evaluators should tailor evaluation services to the needs of each client (Patton, 2002). In the collaborative evaluation model, this is accomplished by evaluators working closely with program staff to identify evaluation questions and engage in an evaluation process that is attuned to the needs of program staff and stakeholders. As a result of the close knowledge built through collaborative program evaluations, such studies also guide program staff to identify and capitalize on external and internal program networks that they can tap to help them to achieve program goals (Fitzpatrick, 2012).
Flexible design
In a collaborative evaluation, continuous communication at the outset between program staff and the evaluation team is essential for laying the groundwork for mutual understanding. Ongoing communication is also a key ingredient for ensuring that the evaluation plan continues to be relevant to the program. By communicating regularly about program developments and context, evaluators can make adjustments in the evaluation plan to accommodate changes in the program.
Increased reliability of results
Another benefit of working collaboratively with program staff in developing the evaluation is increased reliability of the study. Because the evaluation team develops a high level of understanding of the program, data collection can be designed to accurately capture aspects of interest, and appropriate inferences and conclusions can be drawn from the data that are collected.
Greater buy-in for results
Simply engaging an experienced outside evaluator increases the reliability of the study and the credibility of the findings. The use of a collaborative program evaluation also improves buy-in for the study’s results from a variety of stakeholders. Staff members who actively participate in the evaluation better understand how the results can be used to facilitate program improvement, while administrators and other decision makers are more likely to have confidence in the results if they are aware that program staff helped inform elements of the evaluation study (Brandon, 1998).
Increased ability to monitor progress
The evaluation team works with program staff to develop tools to measure desired outcomes of the program. Because tools are designed in collaboration with program staff, staff are better able to understand the purpose of the tools and what information can be gleaned from each. This makes it more likely that staff will feel comfortable with and use the instruments to collect data in the future to monitor ongoing progress, an added benefit to the client.
Development of a culture of inquiry
Because use of evaluation results is a primary goal of collaborative evaluation, the evaluation team may also facilitate a process in which practitioners examine data on program implementation and effectiveness throughout early stages of the evaluation. This process of reviewing evaluation results can foster the development of a culture of inquiry among program staff and support the goal of continuous improvement.
Potential cost efficiencies
There are several ways that a collaborative program evaluation can reduce costs in the short term and over time. There can be immediate cost savings because evaluation resources are tightly coupled with the program’s stage of development. The model can help avoid costly data collection strategies and analytic approaches when there is little to measure because the project is in a nascent stage of implementation. Cost savings may also emerge over time because of program improvements based on formative feedback. Additional savings may be found as the evaluation team develops the internal capacity of program staff through their active participation in the design and execution of the evaluation. With increased capacity, the program staff can then continue the progress monitoring process by themselves.
The collaborative evaluation process incorporates four phases: planning; implementation; completion; and dissemination and reporting. These complement the phases of program development and implementation. Each phase has unique issues, methods, and procedures.
Collaborative Evaluation – History
Most companies or organizations want to attain sustainable, successful goals for themselves, their employees and communities of interest. But more often than not, they have difficulties achieving them. It’s the “getting there” that’s hard. This corporate training program features a process-driven system that can help. If an organization is committed to and serious about reaching its goals, success becomes almost inevitable. The system is based on efficiency, collaboration, quality, rigor and contemporary technology. What is more, it emphasizes excellence.
In describing the process, I will also note aspects of my own background that will, hopefully, provide some context to the explanation of this system. My professional career over the past three decades featured teaching, training, mentoring and consulting, with an emphasis on evaluation and assessment in the area of accreditation in higher education. The evaluations revolved around a set of standards that were developed and approved over a period of time by members of a board of directors of an evaluation commission and members-at-large of an associated profession. As well, there were policies and procedures written relating to the process, but the standards were of top priority and ‘written in stone’. These standards were mandated requirements for programs to have in place before receiving any approval or accreditation from the board. Much of my experience centered on developing, writing and directing all of the processes involved in the evaluation. As noted, the area of focus was in higher education, specifically within healthcare professions. The evaluations took place primarily within the United States but there were several conducted abroad and all were in higher education or learning institutions. In those outside of the United States, there may have been some cultural and language modifications made to the processes, but the predominant qualitative standards were similar throughout the world.
To achieve the long-term goal of what was known as accreditation in universities, there was a detailed peer-reviewed process that needed to be closely followed and monitored over the course of several years. The actual process was completed by the program using a web-based system, the first of its kind in accreditation within the United States and abroad. More will be discussed about this integrated (and online) system under “Future Outlook”.
The process was rigorous, with articulated goals, i.e., principles of quality, self-reflection and collaboration. A primary objective was to require institutions to demonstrate both the strengths and weaknesses of their programs. In this way, there was an honest approach on the part of all professionals involved, i.e., the staff, faculty, senior administrators and evaluators. In a sense, everyone started on an equal footing and received equal treatment. This was considered an extremely important component of the process by the Board of Directors. A second critical objective was the approach the evaluators took. It had to be clear, comprehensive and unbiased, in other words, not punitive. These objectives were stated at the outset of the process to members of departments and/or programs. The board also wanted all faculty and staff to know that they, the board, wanted them to succeed. Therefore, if institutions were serious and committed to the process and eager to improve (even if they already considered themselves stellar), a win-win situation could occur. This was gratifying to everyone concerned. The agency strongly believed in this management style and found the benefits to be plentiful. Most significant, program participants began to enjoy the process, felt less constrained when responding to questions and believed they were partners in the evaluation process. This was extraordinarily valuable to them and they also expressed an eagerness to hear how they could improve. The result of this peer-reviewed process, with ongoing progress reports over a period of time, meant there was, under most circumstances, a stability of quality and continual improvement in the accredited educational programs. This was an organized effort to reach purposeful goals and will be further explained in the Future Outlook section.
Collaborative Evaluation – Current Position
My current position continues to be in the area of evaluation, but as a consultant, beginning in 2018. On December 31, 2017, I retired from my former full-time position of Executive Director in the accreditation agency I founded. My present consulting remains in the area of education/healthcare and accreditation. Throughout 2018, I consulted with programs undergoing Developing Status. These programs (as opposed to ones undertaking accreditation evaluations) are starting from a ‘blank page’ and have never educated students in a specialized content area or at a terminal or doctoral level. This type of program also is referred to as a candidacy application.
The program submitting this type of application describes its intentions as fully as possible and explains how it will go about achieving the necessary components of its curriculum. Since the program is in its infancy or preliminary stages, it is not necessary for each area of the curriculum to be completely in place at this time. Instead, the administration (within the relevant university) will discuss the plan it has for its related course of study.
During the Developing Status stage, program administrators submit an application in the form of a questionnaire to the accrediting agency. The purpose of the application is to inquire how the program plans to achieve the goals it has for its curriculum and operating business. Each question pertains to a specific area, such as mission, goals and objectives, governance, policies, recruitment and admissions, faculty development, curriculum, methods of instruction, finances, facilities, student achievement, resources and advisement, and other categories pertaining to this initial level of development.
A discussion takes place about when the program wishes to admit students and acknowledges that it will not do so until it receives approval from the accrediting body. Thus, if a program wishes to admit students in August/September of a given year and wants to recruit students in the beginning of that year, it needs to submit its application in sufficient time prior to the recruitment stage. Once an application is received, the Chair of the board designates two evaluators to review the Developing Status materials. These evaluators are usually members of the board.
The next step is for the evaluators to plan a fact-finding visit to the institution within the next two to three months. At a mutually convenient time between the program and accrediting body, a visit takes place. The purpose of the visit is to determine whether the program is ‘on track’ with the development of its program and headed in the right direction. This means evaluators on-site can verify the materials they received and meet with the institution’s senior administrators, faculty, practitioners and others who have been influential in the designing and planning of the program. It also is an opportunity for the program’s administrators to ask evaluators questions about the program’s progress and, in return, to receive responses. This usually is a collaborative, supportive meeting, spurring the program to move quickly, provided it is headed down the right path.
Once the Developing Status evaluation is complete, the two evaluators write a report of their findings and submit it to the board. The board works quickly to discuss the pros and cons of the evaluation and within two weeks to one month, provides a decision to the university or college about whether to approve or deny the report. It could also ask for additional information, if needed. A formal letter explaining the decision is written to the administrator in charge of the program. Once the letter is received and approval is given, the program can proceed with its timeline for development. If a program is deferred and a request for additional information is made, the program’s administrator can speak with the director of the accrediting body about its next steps. The timeline for submitting new or additional information would be explained in the letter sent. Once new materials are submitted and approved, the deferral is removed and Developing Status is awarded. If the board decision is to deny Developing Status, the program could re-apply, if it wishes, at a time mutually decided on by the accrediting body and the program. In most cases, the program will have an idea of whether it is on the correct path after its fact-finding meeting with the evaluators.
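The board's decision flow described above (approve, defer with a request for more information, or deny with the option to re-apply) can be sketched as a simple state transition. The status names and `ProgramApplication` structure below are hypothetical illustrations, not the agency's actual system.

```python
# Hypothetical sketch of the Developing Status decision flow.
# Status names and fields are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ProgramApplication:
    institution: str
    status: str = "SUBMITTED"   # SUBMITTED -> APPROVED | DEFERRED | DENIED
    notes: list = field(default_factory=list)

def apply_board_decision(app: ProgramApplication, decision: str, detail: str = "") -> ProgramApplication:
    """Record a board decision and the next step it implies."""
    if decision == "approve":
        app.status = "APPROVED"
        app.notes.append("Program may proceed with its development timeline.")
    elif decision == "defer":
        app.status = "DEFERRED"
        app.notes.append(f"Additional information requested: {detail}")
    elif decision == "deny":
        app.status = "DENIED"
        app.notes.append("Program may re-apply at a mutually agreed time.")
    else:
        raise ValueError(f"Unknown decision: {decision}")
    return app

# A deferred program that later submits approved materials moves to APPROVED,
# mirroring the removal of a deferral described in the text.
app = apply_board_decision(ProgramApplication("Example University"), "defer", "updated curriculum plan")
```

The point of the sketch is simply that each board outcome maps to a defined next step for the program, which is what makes the timeline in the letter actionable.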
Collaborative Evaluation – Future Outlook
I have been involved in each aspect of the evaluation system noted for almost three decades and have believed in its virtues for the entire time. I value the fact that one engages in a process that is collaborative, self-reflective and stresses improvement on a regular basis. Although aspects of the process are time-consuming, I see enormous merit in the benefits it provides to individuals, teams and programs. These benefits transcend any factors that may seem tedious. I also believe this type of process-driven assessment can be adapted to numerous settings in both for-profit and non-profit companies. The important fact is that there is a consistent emphasis on excellence, efficiency, quality and rigor, as well as consistency in the questions asked within the formal structure. These characteristics are embedded in the processes. What is more, they can be replicated in institutions and businesses throughout the world.
How can this be done? As an example, in 2004, the accreditation body, in conjunction with an outside technology company, developed an integrated web-based, online system that incorporated all aspects of the accreditation process. It eliminated a former paper trail that was moribund, convoluted and had little long-term effect. The system our group wanted to devise was similar to “TurboTax”, an online data system used by many Americans to prepare and file their federal, state and city taxes. The idea was to enter data, update it as necessary and have permanent and easy access to it. We developed a similar electronic process and found that it strengthened the bond between the agency and its programs.
There were many components built into the system. For example, a constructive online interaction between an academic program and its faculty was developed as they pursued questions about their program. The Program Director or overall administrator could assign standards to faculty members and the entire group could meet electronically to privately discuss their assignments. In addition, programs had the ability to dialogue electronically with site-team evaluators after they submitted their application to the agency and their program’s materials were closed to further changes. The evaluators received access to these documents through a special portal to study them and ask for clarifications, as necessary. The program also had the ability to respond. This type of efficient conversation, known as the Interactive Evaluation, was helpful to all parties and allowed evaluators to have a better understanding of the program prior to their physical visit on campus. This meant they could spend more time verifying particular areas on-site that still needed resolution. It also allowed them to see relevant facilities at the institution and meet with faculty, groups of students, senior administrators and clinicians. The period of time for a site visit is 2 to 3 days, and the time goes quickly. At the conclusion of this visit, the evaluators submitted a Preliminary Report online, which was sent to the senior administration of the institution prior to the exit meeting of the visit. This value-added benefit was facilitated by the web-based system. Comments received by the agency about these online procedures, specifically the Interactive Evaluation and Preliminary Report, were very positive. Over and above the benefits provided to the agency and programs, there were benefits the institutions found valuable during their own internal reviews.
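The core mechanics described above, assigning standards to faculty, locking materials once submitted, and then opening evaluator clarification threads, can be illustrated with a small data model. All class and field names here are hypothetical; the actual system's schema is not public.

```python
# Illustrative data model for the Interactive Evaluation workflow.
# Names are hypothetical sketches, not the agency's real schema.

from dataclasses import dataclass, field

@dataclass
class StandardAssignment:
    standard_id: str
    assigned_to: str                 # faculty member responsible for the response
    response: str = ""

@dataclass
class ClarificationThread:
    standard_id: str
    evaluator_question: str
    program_reply: str = ""          # the program may respond before the site visit

@dataclass
class ProgramPortal:
    program_name: str
    assignments: list = field(default_factory=list)
    threads: list = field(default_factory=list)
    locked: bool = False             # materials close once submitted to the agency

    def assign(self, standard_id: str, faculty: str) -> None:
        # Program staff can only edit while materials remain open.
        if self.locked:
            raise RuntimeError("Materials are closed; no further changes allowed.")
        self.assignments.append(StandardAssignment(standard_id, faculty))

    def open_clarification(self, standard_id: str, question: str) -> None:
        # Evaluators may still ask questions after the materials are locked.
        self.threads.append(ClarificationThread(standard_id, question))
```

The design choice worth noting is the `locked` flag: program edits stop at submission, while evaluator dialogue continues, which is exactly the separation the Interactive Evaluation relied on.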
In addition, the aggregate data garnered from programs could be used to assess the health of the profession at any given time and be of assistance to other stakeholders in the field, such as professional organizations or government agencies. There is also great value in using components of the process for ‘training the trainer or evaluator’. The same commitment and dedication to educating a ‘trainer’ is as significant as evaluating a program. For example, the ‘trainer’ has to learn how to be a role model and exemplify the attributes needed in encouraging the development of a quality program.
In conclusion, this process-driven program is a concrete example of how the many components of a curriculum could be integrated and easily utilized online. The value of being able to organize and analyze hundreds of pieces of data, what I have called synchronized conceptual thinking, has proven to be of benefit to academic institutions. Although the system was established for a particular healthcare profession, it was also designed so it could be adapted for other groups within the United States and abroad. The processes developed were meant to be productive for a variety of institutions and sustainable over a long period of time.
Curriculum
Collaborative Evaluation – Part 1- Year 1
- Part 1 Month 1 Internal Analysis
- Part 1 Month 2 Consumer Demand
- Part 1 Month 3 Business Analysis
- Part 1 Month 4 Business Partnering
- Part 1 Month 5 Technology
- Part 1 Month 6 Human Resources
- Part 1 Month 7 Internal Structures
- Part 1 Month 8 Fund Raising
- Part 1 Month 9 Product Advertising
- Part 1 Month 10 Project Evaluation
- Part 1 Month 11 Product Launch
- Part 1 Month 12 Time Management
Program Objectives
The following list represents the Key Program Objectives (KPO) for the Appleton Greene Collaborative Evaluation corporate training program.
Collaborative Evaluation – Part 1- Year 1
- Part 1 Month 1 Internal Analysis – The first stage of the program is to understand the history, current position and future outlook relating to collaborative evaluation, not just for the organization as a whole, but for each individual department, including: customer service; e-business; finance; globalization; human resources; information technology; legal; management; marketing and production. This will be achieved by implementing a process within each department, enabling the head of that department to conduct a detailed and thorough internal analysis to establish the internal strengths and weaknesses and the external opportunities and threats in relation to collaborative evaluation and to establish a MOST analysis: Mission; Objectives; Strategies; Tasks, enabling them to be more proactive about the way in which they plan, develop, implement, manage and review collaborative evaluation, within their department.
- Part 1 Month 2 Consumer Demand – The feasibility study will provide information about consumer demand. If there is sufficient demand, the organization knows that it can begin developing a program to advertise its future project to prospective clients. More details about this effort will be discussed under Product Advertising. It is also interesting to note that consumer demand can be determined in different ways. For example, there will be clients who are interested in the project’s particular qualities and processes involving evaluation or accreditation and will not need to be convinced. These are clients who have been searching for this type of program. On the other hand, there are potential clients who are required to undergo an evaluative process, but are not aware of the benefits evident in this project. These are clients who have followed a more traditional accreditation or evaluation process and need to be educated about the contemporary avenues available to them. This means there would be merit in educating these clients about the benefits of technology, efficiency, collaboration and success. One means to pursue this is to advance a marketing campaign to raise awareness about these new avenues, as well as to present papers at conferences or less formal gatherings. A third group of clients is the one that has never been evaluated or accredited but would benefit greatly from the process. This type of project would encourage improvement in their programs and, if successful, would be a prestigious accomplishment. Most educational, non-profit or for-profit facilities are eager to have stamps of approval demonstrating they have met national and/or international standards.
- Part 1 Month 3 Business Analysis – A business analysis is as fundamental to develop as the feasibility study. In the business analysis, you acknowledge whether you are able to fulfill all of the categories necessary for the project. This involves a careful analysis of the income and expenses you determine will be needed for the first year. During the beginning phases of the project, it can usually serve as a provisional budget. Once it is finalized, it will be projected out for a period of at least five to ten years. In the case of our project, the first business plan would cover two years and include the planning, development, implementation and review. As the project continues, the business plan will continue to expand its budget on an annual basis and at one point may build one spanning the next decade. The business analysis for any company or organization is a calculated fiscal blueprint for the future, organizing the financial thinking of the company and determining whether the project as a whole is affordable over the course of time. The analysis provides the accountability that is mandated for any type of business operation. Accounting systems for the business processes should be developed by the financial department of the company or organization to satisfy boards of directors of non-profit organizations or corporate structures. These procedures would also be necessary for governmental or external agencies, such as auditors or potential foundations that provide grant awards. In addition to the project’s Chief Executive, there should be designated staff available to review and monitor the financial health of the project’s operation on a regular, if not daily, basis.
- Part 1 Month 4 Business Partnering – As part of the development of this project, it will be necessary to determine business partnerships. Throughout the writing of the Client Information Hub (CIH), there has been discussion of client partnerships with the accreditation agency and how many positive relationships resulted from these professional bonds. This was where the ‘collaborative effect’ was important and clients involved with the accreditation body were treated with respect and with admiration for what they knew and were trying to accomplish. The same effort must be applied to other business relationships. These new relationships will stem from what is important to the growth and progress of the project, and they will need cultivation. As an example, a business partnership that is paramount to the technology of this project is the department or company that will assist in the development of the online web-based system. From experience, it will be necessary to spend extensive time with developers and programmers during the building stages of the platform. More about this subject will be discussed under Technology. Other critical partners will be the staff members. Their commitment to the success of the project is critical, and to secure a promise from them, they must first feel that they are important colleagues in a project striving to succeed. In addition, there will be other stakeholders, such as external consultants, practitioners, and members of other departments who will be part of the team affiliated with the project. All should be enthusiastic about their work and feel that what they bring to the table is meaningful.
- Part 1 Month 5 Technology – By necessity, technology will play a critical role in the efficient and effective implementation of the project. The technological interface will embed crucial features into the system being built and will bring innovative characteristics to the whole operation. In this day and age, the advantages of technology are not only welcomed, they are expected. The technology expertise will be provided by an IT company specializing in computer and software services or by an IT department within the company. This will be a cooperative effort between a group of programmers and developers and member(s) of the project’s team, including the Appleton Greene consultant. All will work together to plan and create software suitable for the project. Integral to the system being developed will be the features discussed in the Executive Summary. Some of these include the ability to enter data once, update it, and retain permanent access to it; constructive interaction among program staff as well as with site visitors (if the program is undergoing a formal evaluation); a user-friendly system to navigate; inclusion of standards or guidelines; and the ability to provide trends and analysis. What is important in this development is establishing a timeline for the project to be completed. Given that it will be part of the planning year, i.e., the first year, there should be sufficient time to have it finished before the beginning of the second year. This is realistic. It is also critical for staff to learn how to navigate the system, so training will be essential. Fortunately, most users will have a familiarity with computer systems and data entry and retrieval.
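The "enter data once, update it, and retain permanent access to it" feature could be modeled as a record store that keeps every revision rather than overwriting. The sketch below is a hypothetical illustration of that idea, not the actual platform; all class and field names are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Revision:
    """One saved version of a program record (hypothetical structure)."""
    data: dict
    saved_at: str

class ProgramRecord:
    """Enter data once, update it later, and keep permanent access to the history."""

    def __init__(self, program_id: str):
        self.program_id = program_id
        self.revisions: list[Revision] = []

    def save(self, data: dict) -> None:
        # Each save is appended, never overwritten, so prior entries remain accessible.
        stamp = datetime.now(timezone.utc).isoformat()
        self.revisions.append(Revision(dict(data), stamp))

    def current(self) -> dict:
        # The latest revision is the record's current state.
        return self.revisions[-1].data if self.revisions else {}

    def history(self) -> list[dict]:
        # Permanent access: every earlier version can still be reviewed,
        # which also supports the trends-and-analysis feature.
        return [r.data for r in self.revisions]

record = ProgramRecord("accreditation-self-study-01")
record.save({"standard_1": "draft narrative"})
record.save({"standard_1": "final narrative", "standard_2": "draft"})
print(record.current()["standard_1"])  # final narrative
print(len(record.history()))           # 2
```

Keeping a full revision history is one plausible design choice behind the trends-and-analysis requirement: year-over-year comparisons fall out of the stored versions at no extra cost.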
- Part 1 Month 6 Human Resources – The key to success in any significant program is the ability to attract and retain capable, motivated personnel. No organization can achieve success and sustain a high level of productivity without the energetic buy-in of its employees. And just as important as ‘making the trains run on time’, these partners are the voices of the institution – they tell the story the company wishes to convey, and leadership must be creative in instilling motivation and a sense of mission from the top down. With that in mind, the job of Human Resources is to recruit, train and supervise the people who will define the organization. Employees must be recruited on the basis of their current skills and their potential for growth. From the initial interview they should be made aware of the basic requirements, the demand for excellence, and the possibilities for advancement commensurate with the contribution they are able to make to the organization. Specific requirements will be educational attainment in the fields of education and psychology as well as training in IT commensurate with the job. Because interaction with people is important when dealing with various programs, it is vital that key personnel have the social skills to successfully represent the organization. In addition to hiring, training and motivating employees, HR has the responsibility of continuously monitoring the productivity and satisfaction levels of employees. It is important that employees are recognized for their contributions and rewarded for outstanding work. Opportunities for self-improvement and advancement will result in employees who are loyal and motivated and who will return the investment in them through enhanced productivity.
- Part 1 Month 7 Internal Structures – It may be necessary to create several internal structures within the project to keep it running smoothly on a daily basis, to monitor its growth and to make decisions about its future. A few internal structures that may be appropriate are a Board of Directors, a structure for the senior staff and one for the department. In developing a board of directors, one must look for a group of independent individuals who bring expertise to the project as well as commitment to seeing it succeed. The board can vary in size, but one of approximately seven to nine people works well in that each person is heard and, more often than not, feels comfortable contributing to board meeting discussions. A relatively small board consists of two to four officers, with the remaining members serving as members-at-large. The board usually reviews major projects, approves an annual operating budget, follows the general rules of board governance and determines the overall vision and direction of the project. It does not intervene in the daily operations of the project; those responsibilities are handled by the staff. Bylaws frequently are written by or for the board, documenting the rules and procedures to follow. In essence, the board serves as the group accountable for the project’s program. The senior staff is considered the leadership group and is responsible for the day-to-day activities of the project. They take the lead in the project’s long-term program planning and report on these plans to the board. Without question, the relationship between the senior staff and the board of directors should be a collaborative one, recognizing that each group is dependent on the other. The department structure is created by the senior staff, who ensure that each aspect of the program is handled well and in a timely fashion.
The responsibilities of each area are outlined and regular reports about how each functions are sent to specific members of the senior staff. If departmental staff are confident that they have the trust of their leadership, they will be more motivated and committed to the overall project and to their individual responsibilities. Internal structures add another layer of organization to the project that helps promote better performance and success.
- Part 1 Month 8 Fund Raising – The successful implementation of a program such as this depends on securing adequate funding at the front end. It is unreasonable to expect revenue returns in the first year, and up-front money will be required until the program is sufficiently in place to generate income. However, because of the vital nature of the program, it lends itself quite naturally to fund raising as a mechanism to supply an initial operating budget. Fund raising for the program will go hand in hand with marketing, targeting those organizations that are potential customers and making them aware of the ultimate benefits of the program. Fund raising can be tailored to specific organizations and can be one-on-one or large scale. Examples of one-on-one efforts would be direct connections to individual presidents and CEOs of target organizations; making them aware of the advantages of investing in a program that would benefit their organization is a sure way of raising funds. On a larger scale, presentations to boards and councils of organizations can achieve similar results. Conferences and relevant meetings also provide a venue for advertising the program and generating interest. It is important in fund raising to reach a critical mass of funding rather quickly, and diligent efforts in this regard are rewarded. Once you can demonstrate that you have the financial backing of important players, it is easier to appeal to those sitting on the fence. For that reason, strong efforts at fund raising will be vital in the first year of the program and should be a priority.
- Part 1 Month 9 Product Advertising – Concurrent with fund-raising efforts, it would be wise to begin advertising the project to all potential clients. Even those who do not contribute advance funding can still be clients or consumers, and they need to be continually apprised of the benefits the program supplies. There are many ways this can be done, both informally and with deliberation. Personal contacts are always important, as is networking to share what you have to offer in an informal way. More directly, brochures and mailings should be developed and distributed to the community of interest. These brochures should be carefully devised to fully display the total range of opportunities and benefits the program will offer and should provide ways of getting additional information and easy follow-up. Webinars are another way of providing an in-depth view of program benefits. More formal presentations can be given at relevant conferences and meetings. It will be important to maintain a presence at the venues that offer opportunities for advertising the product, especially if it can be done in a natural, non-adversarial way. It is always important to tailor any advertising to the target audience and to make presentations that appeal to the intellectual and educational level of the potential client. In our case, the unique characteristics of the program can be leveraged to advantage in advertising. It is actually an opportunity to promote the various features and talk about the ease of use, collaborative structure and interactive nature of the program. In many respects it is a program that ‘sells itself’.
- Part 1 Month 10 Project Evaluation – Evaluation is key to this project in that every aspect of it revolves around a form of assessment. Most of the discussion throughout the CIH has related to the project’s evaluation of a client. This brief section, however, addresses the necessity of evaluating the project’s own operations. A key point is that the evaluation should begin in the first year and continue on a regular basis. A strong organization does not wait until considerable time has elapsed to begin an evaluation process. Some time must pass before an evaluation is meaningful, but it should be measured in months rather than years. The sooner you begin to self-reflect, the sooner you begin to improve, and no individual or organization (no matter how successful) is exempt from improvement. More concretely, the evaluations for the project should be outlined for each of the four years. The first year’s evaluations should begin after approximately six months. This will provide a good assessment of the first half of the year and, at this point, the questions most likely to be asked are: How am I doing? What should be strengthened? What needs to be corrected? These are healthy queries about the organization and, at this juncture, problems can be remedied or easily ‘nipped in the bud’. Initial evaluations can take place informally within the operational structures over one or more meetings. In years two, three and four, online surveys, questionnaires and interviews will be part of the evaluative system. All of the major players who have been active in the project will receive at least one evaluation form to complete for each particular year. Regarding the fourth year, the Review Year, evaluation questions will pertain to the overall impression of the project, with emphasis on the Implementation Year, the year the project actually began.
It will be key to hear from everyone who participated in the project over the four-year period, such as clients, staff, board members, consultants and IT personnel.
- Part 1 Month 11 Product Launch – It is important to be strategic in the product launch. The reception of the program in the community of interest depends upon a careful roll-out that proceeds at an optimum speed. Potential clients should be introduced to the product in a way that makes it easy and natural for them to adopt the program, not only initially but in a sustaining way. A careful product launch can be divided into three phases: first year – initial discussions, focus groups, and formal presentations; second year – midyear launch of a pilot program; third year – full implementation. The first year is concerned with generating interest, the second with demonstrating the program and responding to any problems, and the third with putting it into full operation. The product launch will move in conjunction with Marketing. Clients who express interest will be involved in the introduction of the program to the extent they want and can express their preferences and requirements. Some clients may choose to be involved in the pilot program and can even be part of beta-testing. In any case, it is important that clients be fully informed and trained before the process is released to them. Premature adoption of the process could actually be counter-productive. As part of the second or third year, events could be arranged, in conjunction with Marketing, to showcase the product at relevant conferences and meetings. This would also be an opportunity to get feedback from clients that would be useful for quality control and future developments. The timing of the product launch should be flexible and will depend on client reception and interest.
- Part 1 Month 12 Time Management – The first year of planning gathers information, organizes structures, hires personnel (as necessary), begins to prepare documents for standards/guidelines, policies and procedures – and forms an overall idea of the time frame for the first four years. In fact, a skeletal outline of the major areas to be accomplished is also prepared for this span of time. This, then, is the first time management blueprint for the project. It is also one of the twelve planning processes identified in this Program Outline section that are critical for the smooth operation of the organization. It is designated the twelfth process because, at the conclusion of the first planning year, the twelfth workshop of the year will be held. At this time, the senior staff and board of directors (if in place) will have more knowledge and facts about the project to discuss among themselves. The staff and the Appleton Greene consultant will be better positioned to develop and flesh out a time frame for the entire project over the first four years. A word about the importance of time management: time, as everyone knows, slips by quickly, and what may seem to be in the distant future is suddenly upon one – an all too familiar concern. The simple but extremely central theme of managing time in organizations is to ensure that each important function is adequately addressed. This cannot be over-emphasized. In the initial years, it is often tempting to dwell on what seems more interesting and enjoyable while neglecting essential roles, such as fund raising, upon which the success of the entire organization could hinge. This is but one example; there are many more. The actual time frame for an organization is essential to develop, document, distribute, discuss and follow as closely as possible. It is always feasible to modify the completion of a project, i.e., abbreviate or lengthen it, or even add others.
It is essential, however, to establish beginning and concluding dates for each major task and to adhere to them. Time management has the good fortune of making one’s life easier because of the boundaries it establishes and, eventually, the milestones it achieves.
Methodology
The Model for Collaborative Evaluations (MCE)
The MCE is a framework for guiding collaborative evaluations in a precise, realistic, and useful manner (Rodríguez-Campos & Rincones-Gómez, 2013). A collaborative evaluation is an evaluation in which there is a substantial degree of collaboration between the evaluator, collaboration members (CMs) and stakeholders in the evaluation process, to the extent that they are willing and capable of being involved (Rodríguez-Campos, 2012a; Rodríguez-Campos, 2012b). An evaluator who wishes to use a collaborative evaluation approach should be flexible and tolerant of contextual difficulties and variations in stakeholders’ willingness to participate (Garaway, 2005). To use this type of approach optimally, there must be clear expectations of its advantages and disadvantages based on the specific situation. In any case, the benefits gained by adopting a collaborative evaluation approach should outweigh the potential difficulties that may ensue. The MCE constitutes a theoretical foundation for accreditation self-studies because its systematic structure provides a basis for decision-making through the development of formative and summative evaluations. The model has been successfully used in the business, non-profit, and educational sectors (Rodríguez-Campos, 2015), and it belongs to the use branch of the evaluation theory tree (Alkin, 2012). The MCE is flexible, supporting both summative and formative evaluation processes within a framework that is efficient and systematic, and it is sturdy enough to allow for a variety of elements to be incorporated (e.g., Danielson, 2015; Marzano, Frontier, & Livingston, 2011; Popham, 2013).
The MCE revolves around six interactive and interdependent components. This cyclic, iterative, and systematic model and its step-by-step process are robust enough to allow for handling unforeseen issues that may occur along the way. This is important when evaluating the wide range of situations that take place across departments, curricula, policies, and procedures. A sound self-study design provides a mechanism that allows development and maintenance of effective planning and continuous improvement processes. The MCE literature includes examples of how its components can help frame aspects of accreditation self-studies. These examples represent just the tip of the iceberg for how collaborative evaluations can work in this highly complex venue. The MCE helps ensure that the end-products of the evaluation are satisfactory to stakeholders, including accrediting bodies.
The MCE framework provides alternative guidance for undertaking the complex accreditation self-study process. We have outlined here how the approach can help organize this institution-wide endeavor. By conducting accreditation self-studies using the MCE, an institution can tap into its greatest resource – its students, faculty, and staff. The MCE gives an entire campus community or a specific department the opportunity to engage in the process and to see how they can positively contribute. Each component of the MCE builds commitment to the process, with the added benefit of creating a sound system to follow throughout the self-study. The greatest strengths of this model are that it gives focus to collaborative processes and provides a strong basis for establishing long-term relationships. The MCE assumes that important decisions can be made collaboratively in the early evaluation stages and that ongoing alternatives can be easily incorporated as necessary. Therefore, it is a tool that helps us better understand how to develop priorities and achieve a high level of support within a collaborative evaluation. The MCE provides an important learning opportunity. This model can help you understand and account for the nature of the work and the full range of stakeholders in a self-study effort. Results from a collaborative approach provide a useful basis for guiding the decision-making process because people work collaboratively while understanding the added value of their interactions. These examples illustrate promising practices that could be widely integrated into different self-studies. James Sanders (2005), former president of the American Evaluation Association (AEA), summed up his impressions of the MCE: “The model…serves as a guide for evaluators who believe that making evaluation an integral part of everyday work in programs and organizations is important… It is a significant next step in the evolution of the practice of evaluation. It could not have come at a better time. In my judgment, this contribution to the evaluation literature is excellent.”
Collaborative Evaluation – Program Planning (Months 1-6)
Program Planning is fundamental to the success of this project, and there are at least four components of development that need to begin as soon as possible. The first is the preparation of a Feasibility Study (or a detailed outline of one) that defines the many aspects of the project, such as the purpose, goals and objectives, development of program and resources, budgeting process, business plan, potential fund raising, marketing and time frame over a four-year period. The study would require two to three staff members influential in the creation of the project to meet for approximately two months to discuss and write the document. Assignments for completing aspects of the study would be made to different staff members. The final document should be completed by the end of the first three months. The feasibility study should be dynamic in nature, not static, meaning that as the program evolves, the study could be modified or changed as necessary. Simultaneously, and if approved by senior officials, it may be necessary to begin the design of the web-based program that would incorporate customized questions for the organization into a web-based template with assistance from the Appleton Greene consultant. This would involve working with the IT department of the company or possibly outsourcing the work to a technology company. The platform development would be sustainable for the company or organization for decades. Again, those who have been influential in the creation, implementation and/or maintenance of the program could be involved. This process could take anywhere from six to eighteen months. The third component is a potential fund-raising program that may be needed to subsidize the platform for the project, i.e., similar to the one described in the Executive Summary. The methodology used could be the development of a campaign over the course of the first year to raise the necessary funds.
It would include presentations, meetings, events, marketing to interested communities and the development of donors. The fourth is raising awareness about the project and seeking buy-in from various communities of interest internal and external to the organization. This is the marketing aspect of the project, which should be vigorous in the first year and continued, as needed, in subsequent years. In addition, six monthly workshops will be arranged, with staff required to attend. At these workshops, presentations and updates on major project areas will be discussed, along with potential agenda items and tasks.
Collaborative Evaluation – Program Development (Months 7-12)
During the second half of Year 1, the Program Development phase, the staff should have a clear vision of the project and be able to begin writing standards or guidelines plus policies and procedures. If possible, it will be important to have these documents in place by the conclusion of the year because they will streamline the project and quickly answer questions that inevitably arise. These documents can be modified, if needed, but having a finished copy in hand will demonstrate a strong commitment to the project. The standards and guidelines could take approximately six months or more to write, and guidance on this work can be provided by the Appleton Greene consultant. The policies and procedures would require at least three months. Staff members with expertise in all of these areas should participate in completing this work. Inevitably, there will be additional design and planning on the web-based platform. Although IT programmers will be involved with the process, it will be necessary for the project’s staff members to review what has been developed on a regular basis. The timeline for finishing the platform should be the end of the first year. If a fund-raising campaign is ongoing, it will be necessary to review the goals for the campaign and any events that may take place. Twelve workshops, one per month, will be held during this year: six related to program planning and six related to development. As staff become more familiar with the overall project, it will be easier to discuss details that may not have been apparent during the initial months. Once again, the workshops will be required for those involved in the project, and each one will last six hours. This will be an opportune time for updates, presentations and for new, relevant tasks to be discussed and developed. All staff members also should be trained on the platform so that everyone knows exactly how to navigate it and can answer questions about it when they arise.
The more in-depth the staff’s study of the platform, the more quickly they will see the nuanced benefits and efficiency of the process.
Collaborative Evaluation – Program Implementation (Months 13-18)
In the first six months of Year 2, the Implementation Phase, there will be concrete evidence about the project, when it should begin and how many clients should participate. There should be designated staff to work with each client during the year on discussions pertaining to the project. This is usually an enthusiastic period for staff, when the hard work of the previous year becomes a reality. Within the first three months of this year, a webinar or face-to-face meeting would take place with clients to review the project, its navigation on the platform and the immediate, short-term and long-term expectations. The meeting could be held at the agency for all clients or individually at a client’s facility. Staff would reassure clients about the goals that can be reached, note the reasons why it is important for them to participate and, most important, highlight successful outcomes that can take place. Whether this is a