Collaborative Evaluation – Workshop 1
Executive Summary Video
The Appleton Greene Corporate Training Program (CTP) for Collaborative Evaluation is provided by Ms. Gordon, MPH, MS, Certified Learning Provider (CLP). Program Specifications: Monthly cost USD$2,500.00; Monthly Workshops 6 hours; Monthly Support 4 hours; Program Duration 12 months; Program orders subject to ongoing availability.
If you would like to view the Client Information Hub (CIH) for this program, please Click Here
Learning Provider Profile
Ms. Gordon is a Certified Learning Provider (CLP) at Appleton Greene and she has experience in management, human resources and marketing. She has achieved a Master’s in Public Health (MPH) and a Master’s in Anthropology (MS). She has industry experience within the following sectors: Education; Healthcare; Non-Profit & Charities; Technology and Consultancy. She has had commercial experience within the following countries: United States of America, or more specifically within the following cities: Washington DC; New York NY; Philadelphia PA; Boston MA and Chicago IL. Her personal achievements include: facilitated twenty-two programs on campus that underwent accreditation processes for a USA University; implemented an evaluation and accreditation process for an Accreditation Council; and co-developed the first web-based, online, integrated accreditation system in the United States and worldwide. Her service skills incorporate: learning and development; management development; business and marketing strategy; marketing analytics and collaborative evaluation.
Mission Statement
The first stage of the program is to understand the history, current position and future outlook relating to collaborative evaluation, not just for the organization as a whole, but for each individual department, including: customer service; e-business; finance; globalization; human resources; information technology; legal; management; marketing; production; education and logistics. This will be achieved by implementing a process within each department, enabling the head of that department to conduct a detailed and thorough internal analysis to establish the internal strengths and weaknesses and the external opportunities and threats in relation to collaborative evaluation and to establish a MOST analysis: Mission; Objectives; Strategies; Tasks, enabling them to be more proactive about the way in which they plan, develop, implement, manage and review collaborative evaluation, within their department.
Objectives
01. Obtain a clear understanding of the core objective of Workshop 1. Time Allocated: 1 Month
02. Analyse the history of Collaborative Evaluation within your department. Time Allocated: 1 Month
03. Analyse the current position of Collaborative Evaluation within your department. Time Allocated: 1 Month
04. Analyse the future outlook of Collaborative Evaluation within your department. Time Allocated: 1 Month
05. Analyse internal strengths and weaknesses, relating to Collaborative Evaluation, within your department. Time Allocated: 1 Month
06. Analyse external opportunities and threats, relating to Collaborative Evaluation, within your department. Time Allocated: 1 Month
07. Identify and engage up to 10 Key Stakeholders within your department. Time Allocated: 1 Month
08. Identify a process that would enable your stakeholders to decentralize Collaborative Evaluation. Time Allocated: 1 Month
09. Estimate the likely costs and the ongoing financial budget required for this process. Time Allocated: 1 Month
10. Estimate the likely hours and the ongoing time budget required for this process. Time Allocated: 1 Month
Strategies
01. Each department head is to personally set aside time to study Workshop 1 content thoroughly.
02. List the key projects that have been undertaken historically within your department and analyse how and if Collaborative Evaluation was used and where it was successful.
03. List the key projects that are currently being undertaken within your department and analyse how and if Collaborative Evaluation is being used and where it is succeeding.
04. List the key projects that are scheduled to be undertaken in the future within your department and analyse how Collaborative Evaluation can be used in order to ensure success.
05. Research internal strengths and weaknesses, relating to Collaborative Evaluation, within your department.
06. Research external opportunities and threats, relating to Collaborative Evaluation, within your department.
07. Review the files and resumes of employees within your department in order to identify those with a collaborative nature.
08. Research and identify a process that would enable your stakeholders to decentralize Collaborative Evaluation.
09. Liaise with the Finance department to evaluate the likely costs and the ongoing financial budget required for this process.
10. Liaise with the Human Resources department to evaluate the likely hours and the ongoing time budget required for this process.
Tasks
01. Read through the entire workshop content while making notes including: Profile; MOST; Introduction; Executive Summary; Curriculum; Distance Learning; Tutorial Support; How To Study; Preliminary Analysis; Course Manuals; Project Studies; Benefits.
02. Create a task on your calendar, to be completed within the next month, in order to list and analyse historical projects.
03. Create a task on your calendar, to be completed within the next month, in order to list and analyse current projects.
04. Create a task on your calendar, to be completed within the next month, in order to list and analyse future projects.
05. Create a task on your calendar, to be completed within the next month, in order to research and analyse internal strengths and weaknesses, relating to Collaborative Evaluation, within your department.
06. Create a task on your calendar, to be completed within the next month, in order to research and analyse external opportunities and threats, relating to Collaborative Evaluation, within your department.
07. Set up interviews with employees within your department in order to identify those with a collaborative nature.
08. Implement a process that will enable your stakeholders to decentralize Collaborative Evaluation.
09. Set up an appointment with the Finance department to evaluate the likely costs and the ongoing financial budget required for this process.
10. Set up an appointment with the Human Resources department to evaluate the likely hours and the ongoing time budget required for this process.
Introduction
Workshop Objective
The first stage of the program is to understand the history, current position and future outlook relating to collaborative evaluation, not just for the organization as a whole, but for each individual department, including: customer service; e-business; finance; globalization; human resources; information technology; legal; management; marketing; production; education and logistics. This will be achieved by implementing a process within each department, enabling the head of that department to conduct a detailed and thorough internal analysis to establish the internal strengths and weaknesses and the external opportunities and threats in relation to collaborative evaluation and to establish a MOST analysis: Mission; Objectives; Strategies; Tasks, enabling them to be more proactive about the way in which they plan, develop, implement, manage and review collaborative evaluation, within their department.
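For department heads who prefer to capture this analysis in a structured, comparable form, the short sketch below (in Python) illustrates one possible way to record the SWOT and MOST elements for a department. It is a minimal illustration only; the class and field names are assumptions for demonstration and are not prescribed by the program.

    # Minimal sketch (Python): one illustrative way to record a department's
    # internal analysis. All names here are assumptions, not program materials.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SWOTAnalysis:
        strengths: List[str] = field(default_factory=list)      # internal
        weaknesses: List[str] = field(default_factory=list)     # internal
        opportunities: List[str] = field(default_factory=list)  # external
        threats: List[str] = field(default_factory=list)        # external

    @dataclass
    class MOSTAnalysis:
        mission: str = ""
        objectives: List[str] = field(default_factory=list)
        strategies: List[str] = field(default_factory=list)
        tasks: List[str] = field(default_factory=list)

    @dataclass
    class DepartmentAnalysis:
        department: str
        swot: SWOTAnalysis = field(default_factory=SWOTAnalysis)
        most: MOSTAnalysis = field(default_factory=MOSTAnalysis)

    # Example usage with placeholder content
    hr = DepartmentAnalysis(department="Human Resources")
    hr.swot.strengths.append("Staff with prior evaluation experience")
    hr.most.mission = "Embed Collaborative Evaluation in all HR projects"

A record like this makes it straightforward to compare departments side by side once each department head has completed the exercise.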
Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike “distanced” evaluation approaches, which exclude stakeholders from participating as evaluation team members, Collaborative Evaluation assumes that active, ongoing engagement between evaluators and program staff results in stronger evaluation designs, enhanced data collection and analysis, and results that stakeholders understand and use. Collaborative Evaluation distinguishes itself in that it uses a sliding scale for levels of collaboration. This means that different program evaluations will experience different levels of collaborative activity. The sliding scale is applied as the evaluator considers each program’s evaluation needs, readiness, and resources. While Collaborative Evaluation is a term widely used in evaluation, its meaning varies considerably. Often used interchangeably with participatory and/or empowerment evaluation, these terms can mean different things, which can be confusing. The processes use a comparative Collaborative Evaluation Framework to highlight how, from a theoretical perspective, Collaborative Evaluation distinguishes itself from the other participatory evaluation approaches.
Collaborative processes are being promoted as an alternative approach to decision-making and management. This is a relatively recent phenomenon and, given its growing popularity, it is important to develop and apply methods and criteria for evaluation, to determine strengths and weaknesses, and to identify best practices for effective use of the collaborative model. Evaluation based on multiple criteria, and at several points in time, can assist those involved in designing and organizing collaborative processes to ensure the process is responsive to stakeholders’ needs and achieves its objectives. The success of both the process and the outcome of collaborative processes can be effectively appraised using participant surveys.
Evidence from case studies of collaborative approaches shows that these processes can generate higher quality, more creative and more durable agreements that are more successfully implemented due to increased public buy-in and reduced conflict. Collaboration can generate social capital by facilitating improved relationships between stakeholders, generating new stakeholder networks, enhancing communication skills, and co-producing new knowledge with stakeholders. However, collaborative processes are a relatively recent phenomenon, particularly when compared with historical planning and decision-making processes.
“Is our program working?” This is a key question in education today, particularly in this era of heightened accountability. A collaborative program evaluation model is an extremely useful way to answer this question when education organizations want to find out if their initiatives are achieving the intended outcomes, as well as why this is the case. In the collaborative program evaluation model, the client (e.g., districts, states, public and independent schools, non-profits, and foundations) works with the external evaluator to determine the questions that will be explored through the evaluation. They continue to work collaboratively to ensure that the context is understood, that multiple stakeholder perspectives are taken into account, and that data collection instruments are appropriate in content and tone. The model produces data that can proactively inform program implementation, provide formative information that supports program improvement, and offer summative information on the effectiveness of the program.
Collaborative evaluation is a proactive evaluation model that enables program staff to engage in continuous program improvement. Specific benefits of the model include:
A customized evaluation design that reflects the nuances of the program being evaluated.
An evaluation design that is flexible and adaptable to the purposes of the evaluation and to changes in program implementation over time.
Increased reliability of results.
Greater buy-in among stakeholders with both the data collection process and the evaluation findings.
Development of program staff’s capacity to continue to monitor their progress toward program goals beyond the duration of the evaluation.
Development of a culture of inquiry among program staff.
Potential cost efficiencies.
Each of these benefits is described in detail below:
Address program nuances
All evaluators should tailor evaluation services to the needs of each client (Patton, 2002). In the collaborative evaluation model, this is accomplished by evaluators working closely with program staff to identify evaluation questions and engage in an evaluation process that is attuned to the needs of program staff and stakeholders. As a result of the close knowledge built through collaborative program evaluations, such studies also guide program staff to identify and capitalize on external and internal program networks that they can tap to help them to achieve program goals (Fitzpatrick, 2012).
Flexible design
In a collaborative evaluation, continuous communication at the outset between program staff and the evaluation team is essential for laying the groundwork for mutual understanding. Ongoing communication is also a key ingredient for ensuring that the evaluation plan continues to be relevant to the program. By communicating regularly about program developments and context, evaluators can make adjustments in the evaluation plan to accommodate changes in the program.
Increased reliability of results
Another benefit of working collaboratively with program staff in developing the evaluation is increased reliability of the study. Because the evaluation team develops a high level of understanding of the program, data collection can be designed to accurately capture aspects of interest, and appropriate inferences and conclusions can be drawn from the data that are collected.
Greater buy-in for results
By itself, engaging an experienced outside evaluator increases the reliability of the study and the credibility of the findings. The use of a collaborative program evaluation also improves buy-in for the study’s results from a variety of stakeholders. Staff members who actively participate in the evaluation better understand how the results can be used to facilitate program improvement, while administrators and other decision makers are more likely to have confidence in the results if they are aware that program staff helped inform elements of the evaluation study (Brandon, 1998).
Increased ability to monitor progress
The evaluation team works with program staff to develop tools to measure desired outcomes of the program. Because tools are designed in collaboration with program staff, staff are better able to understand the purpose of the tools and what information can be gleaned from each. This makes it more likely that staff will feel comfortable with and use the instruments to collect data in the future to monitor ongoing progress, an added benefit to the client.
Development of a culture of inquiry
Because use of evaluation results is a primary goal of collaborative evaluation, the evaluation team may also facilitate a process in which practitioners examine data on program implementation and effectiveness throughout early stages of the evaluation. This process of reviewing evaluation results can foster the development of a culture of inquiry among program staff and support the goal of continuous improvement.
Potential cost efficiencies
There are several ways that a collaborative program evaluation can reduce costs in the short term and over time. There can be immediate cost savings because evaluation resources are tightly coupled with the program’s stage of development. The model can help avoid costly data collection strategies and analytic approaches when there is little to measure because the project is in a nascent stage of implementation. Cost savings may also emerge over time because of program improvements based on formative feedback. Additional savings may be found as the evaluation team develops the internal capacity of program staff through their active participation in the design and execution of the evaluation. With increased capacity, the program staff can then continue the progress monitoring process by themselves.
The collaborative evaluation process incorporates four phases: planning; implementation; completion; and dissemination and reporting. These complement the phases of program development and implementation. Each phase has unique issues, methods, and procedures.
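As a simple illustration of how a team might keep these four phases visible during a project, the hedged sketch below (Python) pairs each phase with a checklist of activities. The phase names come from the paragraph above; the activities listed are assumptions offered only as examples.

    # Minimal sketch (Python): tracking the four phases of a collaborative
    # evaluation. Phase names follow the text; checklist items are assumptions.
    from enum import Enum

    class Phase(Enum):
        PLANNING = "planning"
        IMPLEMENTATION = "implementation"
        COMPLETION = "completion"
        DISSEMINATION_AND_REPORTING = "dissemination and reporting"

    # Hypothetical activities; replace with those agreed with your stakeholders.
    checklist = {
        Phase.PLANNING: ["Agree evaluation questions with stakeholders",
                         "Draft the data collection plan"],
        Phase.IMPLEMENTATION: ["Collect data",
                               "Share interim findings with program staff"],
        Phase.COMPLETION: ["Analyse results",
                           "Review conclusions with program staff"],
        Phase.DISSEMINATION_AND_REPORTING: ["Produce the final report",
                                            "Present findings to decision makers"],
    }

    for phase in Phase:
        print(phase.value.title() + ":")
        for item in checklist[phase]:
            print("  - " + item)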
Collaborative Evaluation – History
Most companies or organizations want to attain sustainable, successful goals for themselves, their employees and communities of interest. But more often than not, they have difficulties achieving them. It’s the “getting there” that’s hard. This corporate training program features a process-driven system that can help. If an organization is committed to and serious about reaching its goals, success becomes almost inevitable. The system is based on efficiency, collaboration, quality, rigor and contemporary technology. What is more, it emphasizes excellence.
In describing the process, I will also note aspects of my own background that will, hopefully, provide some context to the explanation of this system. My professional career over the past three decades featured teaching, training, mentoring and consulting, with an emphasis on evaluation and assessment in the area of accreditation in higher education. The evaluations revolved around a set of standards that were developed and approved over a period of time by members of a board of directors of an evaluation commission and members-at-large of an associated profession. As well, there were policies and procedures written relating to the process, but the standards were of top priority and ‘written in stone’. These standards were mandated requirements for programs to have in place before receiving any approval or accreditation from the board. Much of my experience centered on developing, writing and directing all of the processes involved in the evaluation. As noted, the area of focus was in higher education, specifically within healthcare professions. The evaluations took place primarily within the United States but there were several conducted abroad and all were in higher education or learning institutions. In those outside of the United States, there may have been some cultural and language modifications made to the processes, but the predominant qualitative standards were similar throughout the world.
To achieve the long-term goal of what was known as accreditation in universities, there was a detailed peer-reviewed process that needed to be closely followed and monitored over the course of several years. The actual process was completed by the program using a web-based system, the first of its kind in accreditation within the United States and abroad. More will be discussed about this integrated (and online) system under “Future Outlook”.
The process was rigorous, with articulated goals, i.e., principles of quality, self-reflection and collaboration. A primary objective was to require institutions to demonstrate both the strengths and weaknesses of their programs. In this way, there was an honest approach on the part of all professionals involved, i.e., the staff, faculty, senior administrators and evaluators. In a sense, everyone started on an equal footing and received equal treatment. This was considered an extremely important component of the process by the Board of Directors. A second critical objective was the approach the evaluators took. It had to be clear, comprehensive and unbiased; in other words, not punitive. These objectives were stated at the outset of the process to members of departments and/or programs. The board also wanted all faculty and staff to know that they, the board, wanted them to succeed. Therefore, if institutions were serious and committed to the process and eager to improve (even if they already considered themselves stellar), a win-win situation could occur. This was gratifying to everyone concerned. The agency strongly believed in this management style and found the benefits to be plentiful. Most significant, program participants began to enjoy the process, felt less constrained when responding to questions and believed they were partners in the evaluation process. This was extraordinarily valuable to them and they also expressed an eagerness to hear how they could improve. The result of this peer-reviewed process, with ongoing progress reports over a period of time, was that there was, under most circumstances, a stability of quality and continual improvement in the educational programs accredited. This was an organized effort to reach purposeful goals and will be further explained in the Future Outlook section.
Collaborative Evaluation – Current Position
My current work continues to be in the area of evaluation, but as a consultant, beginning in 2018. On December 31, 2017, I retired from my former full-time position as Executive Director of the accreditation agency I founded. My present consulting remains in the area of education/healthcare and accreditation. Throughout 2018, I consulted with programs undergoing Developing Status. These programs (as opposed to ones undertaking accreditation evaluations) are starting from a ‘blank page’ and have never educated students in a specialized content area or at a terminal or doctoral level. This type of program also is referred to as a candidacy application.
The program submitting this type of application describes its intentions as fully as possible and explains how it will go about achieving the necessary components of its curriculum. Since the program is in its infancy or preliminary stages, it is not necessary for each area of the curriculum to be completely in place at this time. Instead, the administration (within the relevant university) will discuss the plan it has for its related course of study.
During the Developing Status stage, program administrators submit an application in the form of a questionnaire to the accrediting agency. The purpose of the application is to inquire how the program plans to achieve the goals it has for its curriculum and operating business. Each question pertains to a specific area, such as mission, goals and objectives, governance, policies, recruitment and admissions, faculty development, curriculum, methods of instruction, finances, facilities, student achievement, resources and advisement, and other categories pertaining to this initial level of development.
A discussion takes place about when the program wishes to admit students and acknowledges that it will not do so until it receives approval from the accrediting body. Thus, if a program wishes to admit students in August/September of a given year and wants to recruit students in the beginning of that year, it needs to submit its application in sufficient time prior to the recruitment stage. Once an application is received, the Chair of the board designates two evaluators to review the Developing Status materials. These evaluators are usually members of the board.
The next step is for the evaluators to plan a fact-finding visit to the institution within the next two to three months. At a mutually convenient time between the program and accrediting body, a visit takes place. The purpose of the visit is to determine whether the program is ‘on track’ with the development of its program and headed in the right direction. This means evaluators on-site can verify the materials they received and meet with the institution’s senior administrators, faculty, practitioners and others who have been influential in the designing and planning of the program. It also is an opportunity for the program’s administrators to ask evaluators questions about the program’s progress and, in return, to receive responses. This usually is a collaborative, supportive meeting, spurring the program to move quickly, provided it is headed down the right path.
Once the Developing Status evaluation is complete, the two evaluators write a report of their findings and submit it to the board. The board works quickly to discuss the pros and cons of the evaluation and, within two weeks to one month, provides a decision to the university or college about whether to approve or deny the application. It could also ask for additional information, if needed. A formal letter explaining the decision is written to the administrator in charge of the program. Once the letter is received and approval is given, the program can proceed with its timeline for development. If a program is deferred and a request for additional information is made, the program’s administrator can speak with the director of the accrediting body about its next steps. The timeline for submitting new or additional information would be explained in the letter sent. Once new materials are submitted and approved, the deferral is removed and Developing Status is awarded. If the board decision is to deny Developing Status, the program could re-apply, if it wishes, at a time mutually decided on by the accrediting body and the program. In most cases, the program will have an idea of whether it is on the correct path after its fact-finding meeting with the evaluators.
Collaborative Evaluation – Future Outlook
I have been involved in each aspect of the evaluation system noted for almost three decades and have believed in its virtues for the entire time. I value the fact that one engages in a process that is collaborative, self-reflective and stresses improvement on a regular basis. Although aspects of the process are time-consuming, I see enormous merit in the benefits it provides to individuals, team groups and programs. These benefits transcend any factors that may seem tedious. I also believe this type of process-driven assessment can be adapted to numerous settings in both for-profit and non-profit companies. The important fact is that there is a consistent emphasis on excellence, efficiency, quality and rigor, as well as consistency in the questions asked within the formal structure. These characteristics are embedded in the processes. What is more, they can be replicated in institutions and businesses throughout the world.
How can this be done? As an example, in 2004, the accreditation body, in conjunction with an outside technology company, developed an integrated web-based, online system that incorporated all aspects of the accreditation process. It eliminated a former paper trail that was moribund, convoluted and had little long-term effect. The system our group wanted to devise was similar to “TurboTax”, an online system used by many Americans to prepare their federal, state and city taxes. The idea was to enter data, update it as necessary and have permanent and easy access to it. We developed a similar electronic process and found it strengthened the bond between the agency and its programs.
There were many components built into the system. For example, a constructive online interaction between an academic program and its faculty was developed as they pursued questions about their program. The Program Director or overall administrator could assign standards to faculty members, and the entire group could meet electronically to privately discuss their assignments. In addition, programs had the ability to dialogue electronically with site-team evaluators after they submitted their application to the agency and their program’s materials were closed to further changes. The evaluators received access to these documents through a special portal to study them and ask for clarifications, as necessary. The program also had the ability to respond. This type of efficient conversation, known as the Interactive Evaluation, was helpful to all parties and allowed evaluators to have a better understanding of the program prior to their physical visit on campus. This meant they could spend more time on-site verifying particular areas that still needed resolution. It also allowed them to see relevant facilities at the institution and meet with faculty, groups of students, senior administrators and clinicians. A site visit lasts two to three days, and the time goes quickly. At the conclusion of this visit, the evaluators submitted a Preliminary Report online, which was sent to the senior administration of the institution prior to the exit meeting of the visit. This value-added benefit was facilitated by the web-based system. Comments received by the agency about these online procedures, specifically the Interactive Evaluation and Preliminary Report, were very positive.
Over and above the benefits provided to the agency and programs, there were benefits the institutions found valuable during their own internal reviews. In addition, the aggregate data garnered from programs could be used to assess the health of the profession at any given time and be of assistance to other stakeholders in the field, such as professional organizations or government agencies. There is also great value in using components of the process for ‘training the trainer or evaluator’. The same commitment and dedication to educating a ‘trainer’ is as significant as evaluating a program. For example, the ‘trainer’ has to learn how to be a role model and exemplify the attributes needed to encourage the development of a qualitative program.
In conclusion, this process-driven program is a concrete example of how the many components of a curriculum could be integrated and easily utilized online. The value of being able to organize and analyze hundreds of pieces of data, what I have called synchronized conceptual thinking, has proven to be of benefit to academic institutions. Although the system was established for a particular healthcare profession, it was also designed so it could be adapted for other groups within the United States and abroad. The processes developed were meant to be productive for a variety of institutions and sustainable over a long period of time.
Executive Summary
Collaborative Evaluation – History
The Model for Collaborative Evaluation (MCE) project should begin with a discussion of the five fundamental components outlined in the Client Information Hub (CIH) under the Executive Summary, Methodology and Program Objectives sections. These areas would comprise a recap of the importance of Collaborative Evaluation, the development or revision of a Feasibility Study, the marketing research needed for the entire project, a potential software product for the online efficiency of the project (an option) and the related fund-raising activities that may be necessary if development of the platform were undertaken. These conversations would be held at the first workshop in Year 1, Month 1, and each of these topics would require substantial time, such as 1-2 hours over the course of the six-hour day. Below are brief historical descriptions for each fundamental area.
Collaboration
Collaborative Learning can be traced back over 40,000 years to ancient times. The basic human drive has always been to communicate and to pass on and share information for posterity. We have seen this in cave paintings and drawings in pre-historic Australia and in Egyptian hieroglyphics, where the Dynasty II civilization took pictures to another level, i.e., combining them in sequence to form simple sentences. This type of communication turned single moments into paragraphs or prose.
Through the ages, the means of communication developed rapidly. In the latter half of the 20th Century, collaborative learning progressed and research demonstrated this fact. Students learned more and recalled more when they became partners in the learning process instead of passive recipients of knowledge. Before that, in ancient times, small close-knit communities made it possible for leaders like Confucius, Buddha, Jesus and Muhammad to emphasize learning through personal interaction as opposed to texts and scriptures. This was evident, too, in other cultures, such as the Malaysian/Indonesian idea of Gotong royong, the spirit of mutual help in a society or community.
But these tiny communities disappeared throughout the world as urbanization began to develop. With the ability to travel and migrate, the ancient ways of learning made way for a new paradigm of learning through formalized curricula using a system of lectures, texts and examinations. Dewey, Alpert, Piaget and Vygotsky, well-known theorists, all supported different theories of social and collaborative learning. These thought processes brought about the emergence of Constructivism, the foundation of Collaborative Learning or active learning. The constructivist is not passive, but rather an active contributor to the whole learning process (Ryan O’Reilly 2016).
Collaboration grew rapidly in the scientific communities as well. Across 110 US universities, the size of research teams increased 50% from the 1980s to 1999. Collaboration between US universities and foreign universities increased fivefold during the same period (Niki Vermeulen, John N. Parker, and Bart Penders). In all branches of life, the second half of the 20th Century and the first two decades of the 21st Century saw an increase in collaborative learning. Yet there still continues to be a need for strengthening human connectivity.
Feasibility Study
Feasibility Studies have been in formal existence for at least fifty years. They provide evidence demonstrating that a project is worthwhile and has been as thoroughly studied and investigated as possible. Whether members of an individual department, a division or the entire enterprise undertake it, the critical goal is that the study be completed and that it serve as the foundation for the project. Some of the areas the study would investigate are the goals and mission of the project, the budgetary costs to the institution and, most important, the human resources needed. As well, the plan for the project, i.e., the steps involved over the course of two years, will be outlined and will function as its initial blueprint.
Marketing Research
Marketing research should be incorporated into a feasibility study and/or business plan; it is equally important. According to Kuba Kierianczyk, Associate Director, Insights and Strategy (2016), market research began in the 1920s. Daniel Starch, one of the first advertisers, thought advertising should be read, believed and acted upon. During that same period, the giant and American pioneer, George Gallup, developed his statistical method to measure public opinion. This eventually became the Gallup Poll as we know it today. From the 1940s to the 1960s, in-depth interviewing techniques plus quantitative surveys were popular, and from the 1960s through the 1980s, new technologies such as computers, phone systems and the Internet were used for market research. In 2019, advertising and market research include not only the consumer but also cultural insights and the surrounding context.
Online Platform
Suggested in the CIH section Executive Summary – Future Outlook is a description of a collaborative, efficient platform (portal) with a protected login for clients to use. This integrated system was developed in 2004 and serves as an online evaluation tool where standards for accreditation, assessment or validation could reside. The standards would include questions about the curriculum, such as admissions, recruitment, finances, resources, institutional policies, students, faculty, advising, quality of program and clinical experiences (if applicable). In general, this integrated system would house all the information needed by the client and the accrediting body, while employing a format that is consistent throughout. For example, for questions pertaining to standards where there are narrative responses, there would be a common set of questions asked about each standard. A text box also would follow each question. This type of consistency within the platform’s format provides a mechanism to more easily benchmark, trend and/or analyze the information collected. This consistency can also provide a ‘reliability factor’ for the data collected.
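To make the idea of a consistent question format concrete, the sketch below (Python) shows one possible data model in which every standard carries the same set of questions, each followed by a free-text response. It is a hedged illustration only; the structure, prompts and field names are assumptions and do not describe the 2004 system itself.

    # Minimal sketch (Python): an illustrative data model for standards that
    # share a common question set, which is what makes benchmarking and trend
    # analysis across programs straightforward. All names are assumptions.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Question:
        prompt: str
        response: str = ""  # free-text box completed by the program

    @dataclass
    class Standard:
        name: str  # e.g. "Admissions", "Faculty", "Finances"
        questions: List[Question] = field(default_factory=list)

    COMMON_PROMPTS = [
        "How does the program currently meet this standard?",
        "What evidence supports this?",
        "What improvements are planned?",
    ]

    def new_standard(name: str) -> Standard:
        # Every standard receives the same question set for consistency.
        return Standard(name, [Question(p) for p in COMMON_PROMPTS])

    submission: Dict[str, Standard] = {
        name: new_standard(name)
        for name in ["Admissions", "Curriculum", "Finances"]
    }
    submission["Admissions"].questions[0].response = "Describe current practice here."

Because each standard shares the same prompts, responses can be compared across standards and across programs, which is the ‘reliability factor’ referred to above.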
Suggested Fund-Raising Program
If the online platform were undertaken, it may be necessary to raise funds for the expenses it incurs. Beyond supporting the effort itself, fund-raising would also be a means of marketing the program. The cost would be finite, a one-time matter that would include capital expenses for the platform, program development and potential maintenance costs for the first few years. The means of fund-raising will be discussed, and these conversations could involve the staff responsible for development activities within the client’s institution.
Collaborative Evaluation – Current Position
Collaboration
Collaborative learning that takes place within educational institutions, businesses and organizations today is an ongoing cooperation between leaders and members of a staff who are closely affiliated with a particular project. When cooperative behavior occurs, it can lead to enthusiastic results and prolonged sustainability. These outcomes are what the enterprise desires, and when using the suggested MCE model, it will be headed in the right direction. But in situations where collaborative learning techniques are not purposely used, a more negative result could occur. For example, how often does a program or policy fail because a full explanation about it has not been provided to the participants closely involved in the project? Would it not have been more successful if everyone had been kept abreast through ongoing communication? How frequently do we see individuals working in isolation without feedback from others? Would it not be more useful to eliminate the ‘silos’ of learning and collaborate with fellow workers? And why is ‘collaborative training’ not included in curricula for students before they graduate from their professional program? Would it not be more instructive and helpful to teach bright young minds how to collaborate and work closely with other healthcare colleagues in the controlled setting of the university – before they assume positions in the workplace? Or how disheartening is it to find that workers are demoralized by the jobs they hold? Wouldn’t it be a different situation if employees were rewarded for work well done or given constructive feedback if changes needed to happen?
Most of us have witnessed some of these negative scenarios and, I daresay, felt helpless on those occasions. Conversely, when collaboration occurs, there are increases in productivity, cost reductions in services and a greater enthusiasm among employees for their projects. We will discuss the effects of positive collaboration, view videos that are instructive about its importance and learn how to begin focusing on collaborative learning in this workshop. Collaborative Learning will serve as the umbrella for all the steps in the project over the two-year period. We will discuss the lessons that need to be learned to be effective and we will be apprised of the components involved in the process. We also will come to appreciate the time it takes to become experienced and knowledgeable in this area. It does not happen overnight; it takes patience over a period of time. Frequently, changes need to be made within oneself in order to be more nimble, flexible and adaptable. The rewards are substantial and the process turns an employee into an ambassador!
Feasibility Study
In the Feasibility Study discussed in the first workshop of Year 1, there will be questions about the contents of the study and what they should include. These will range from conducting a preliminary analysis to seeing if you need a full investigation, to assessing the need for your idea and the competition for your services, to determining obstacles that may present challenges, to exploring the demand for your idea by studying relevant data and surveying people interested in the subject. In addition, there will be a financial analysis to see if you can not only cover the costs, but predict them over two years and longer. The sustainability of the program is always an important goal to bear in mind and the revenue plus costs for the project will be critical factors to analyze. During this process, a timeline needs to be set, establishing when the study should be completed and documented. A report, when complete, will be given to the people influential in the project because they will be making the important decisions about how to proceed and providing the subsidy. Of course, the report will be given to all those involved in the project as well. The discussion about the study will be interactive and suggestions from the group will be welcome.
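As a rough illustration of the financial analysis mentioned above, the sketch below (Python) projects placeholder costs and benefits over two years to test whether a project covers its costs. Every figure and category name is an assumption for demonstration only, not an estimate for any real project.

    # Minimal sketch (Python): a placeholder two-year cost/benefit projection of
    # the kind a feasibility study might include. All figures are assumptions.
    annual_costs = {
        "staff_time": 40_000,
        "platform_development": 25_000,
        "maintenance_and_training": 10_000,
    }
    annual_benefit = 60_000  # hypothetical revenue or savings attributed to the project

    years = 2
    total_cost = sum(annual_costs.values()) * years
    total_benefit = annual_benefit * years

    print("Projected " + str(years) + "-year cost:    " + format(total_cost, ","))
    print("Projected " + str(years) + "-year benefit: " + format(total_benefit, ","))
    if total_benefit >= total_cost:
        print("The project covers its costs over the period.")
    else:
        print("There is a shortfall to be covered by fund-raising or subsidy.")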
Marketing Research
Kuba Kierianczyk (2016) states, “Perhaps the most important evolution in present day thought is the recognition that consumers don’t exist in a vacuum.” Cultural context, meaning the context surrounding consumers, is important to understand, not just the study of the consumer’s choices and behaviors. A vast array of tools and methodologies are available to market researchers today to provide a more comprehensive view of the consumer’s wishes. This broadened view of market research can be insightful when using market research within this project.
Online Platform
There is no question that the convenience of having an online platform would be an excellent project addition as well as an efficient tool to use. The ease with which one could either upload or download data and information would be user-friendly and alleviate potential stress. Having a well-designed system would also ensure that the information placed in the system would be secure and permanent. This is not to say that changes could not be made – they could and would take place when requisite modifications and/or updates were necessary. Having an overall online system at one’s fingertips could be a positive gain for the longevity of the project. The downside is the time it could take in development and design. But precautions could be put in place at the beginning of the project and the platform could be closely monitored each step of the way.
Fund-Raising
Some of the possibilities for fund-raising include events (e.g., luncheons, dinners, marathons, sports activities, silent auctions), letters about the project sent to communities of interest, or a fund-raiser suggested by the members of the project. These options will be discussed during the workshop, if applicable.
In conclusion, the first workshop will include an overall introduction of the project and a discussion of the steps outlined, thus far, for the next two years. It will be an opportunity for me, as the consultant, to review the CIH and workshop materials. It also will be a key opportunity to hear from every person affiliated with this effort and what he/she will bring to the table. The goal will be to provide a jump-start to the MCE project and ensure that the collaboration model is clearly understood by each member.
Collaborative Evaluation – Future Outlook
The future outlook for the Model for Collaborative Evaluation (MCE) is an important theme in today’s world. Whether it is in the area of business, politics, law, education or healthcare, the need to emphasize this model over the next decades will be critical. To explain what I mean, I will cite two examples that have been focal points in the news recently. These illustrations focus on whether collaboration was achieved or not and the outcomes that followed. These global examples include one of the largest companies today and one of the most prominent governments.
Amazon
Amazon is one of the largest companies in the world and, financially, one of the most successful. In the US specifically, a very large part of the population orders whatever it needs via Amazon online, and delivery usually takes 1-2 days. Satisfaction with its overall business is high. During the past two and a half years, Amazon sponsored a competition throughout North America, asking hundreds of cities to participate. Amazon wanted to build two new headquarters, one in each winning city, and hire 25,000 employees for each site over the course of at least five years. These employees would be highly skilled, and Amazon promised to energize the winning cities even further with new economic gains.
Participating cities needed to demonstrate why they would be the perfect choice for Amazon and to offer a host of possible benefits, such as transportation convenience, accessible housing, nearby airport facilities and funding. In addition, each city was required to complete a voluminous application. Once applications were submitted, Amazon would review them and announce two winners. It seemed that local city populations were like students waiting to hear if they got into the university of their choice – this was the excitement generated throughout the entire country. After a prolonged period, Amazon announced that Washington D.C. (i.e., Crystal City/Arlington) and New York City (i.e., Long Island City) were the two winners. Although there were many disappointed people in the remaining cities, the citizens of Washington and New York were ecstatic on hearing the news.
Washington continues to be excited and is pleased with the way Amazon collaborated with it. There have been public meetings in the specific area where Amazon will have its headquarters, and discussions about potential problems, such as infrastructure strain, an influx of people, a lack of facilities and extra expense. All of these have been highlighted among those who live and/or work in Crystal City. There are neighborhoods afraid of being dismantled, but the overall feeling among Washingtonians is that this new phenomenon in their midst is a good thing – an example of successful collaboration between interested parties.
The dialogue is different in New York City. From the beginning, there were critics stating that New Yorkers might not benefit from this new neighbor, that $3 billion in tax breaks and other incentives would be given to the company, and that it was “an extravagant giveaway to one of the world’s largest companies”. Since many New Yorkers were feeling the sting of inequality and of overburdened infrastructure, the thought of additional congestion and more problems than benefits was not appealing. Even the editorial boards of the New York Times and the Wall Street Journal, which usually do not make their viewpoints on these issues public, came out against Amazon being in the New York environs. The Mayor and Governor, though, were exceedingly excited about Amazon’s presence.
In the midst of these critiques, Amazon recently stated that it was abandoning the NY plan and would not be moving to the city. This also meant that the additional 25,000 jobs it had promised would not materialize. The decision shocked the city, especially the Mayor and Governor.
Why did this occur? It’s hard to know exactly, but it seems as if Amazon had not done its collaborative homework as well as it should have. What would have been discovered if it had completed a thorough, investigative feasibility study? The findings would have revealed miscalculations about New Yorkers and how they were coping, specifically with the inequality evident in the city and with feeling overburdened. If financial analyses, predictions, interviews and public meetings had been held in Long Island City, there may have been some revelations about this. Also, the lack of subway infrastructure was a severe problem, as was the shortage of accessible housing for 25,000 new arrivals. These problems were similar to those seen in Crystal City, but in New York they were magnified by the sheer volume of people. In addition, Amazon had relatively little insight into whether it would be welcomed “with open arms” to this city. Large, successful technology companies, in general, are feeling the pain of not being readily accepted by the public (Bloomberg, Ovide 2019).
Breaking News: There is now talk from the Governor’s Office in Albany, NY about trying to reverse the situation with Amazon. Stay tuned.
Brexit
The second example is about Brexit in the United Kingdom. I am by no means an expert on this subject, but I have read extensively on the topic and have a deep interest in and concern for Britain, having lived there for several years. I certainly will not capture all the complexities of the Brexit situation, but in a simplified fashion I will make some comments about what I have learned and provide some thoughts.
In June 2016, when the former Prime Minister, David Cameron, called a referendum (i.e., a vote open to UK citizens of voting age) on whether Britain should remain in the European Union (EU), of which it had been a member since 1973, or leave the EU, 71.8% of the electorate came out to vote, more than 30 million people. The result was 51.9% Leave to 48.1% Remain. As a result of the vote, which Mr. Cameron did not anticipate, he resigned and Theresa May became Prime Minister.
After the vote, Mrs. May began visiting Brussels, home of the EU, to discuss how the exit would take place. On March 29, 2017, on behalf of Britain, she invoked Article 50 of the Lisbon Treaty that states the two sides have two years from that date to come to an agreement. On March 29, 2019, a decision must be made. Up until that time, the UK has the choice to stay in the EU or ask for an extension. Note, the extension must be agreed upon by the other 27 member countries.
Since that time, Mrs. May has worked tirelessly with the EU to design a ‘deal’ that could be taken back to the UK for approval. The first deal, presented to Parliament in January 2019, was not approved. A revised second deal was presented in March 2019 and this, too, was unsuccessful. Therein lies the conundrum. Time is running out and no decision has been made. There still exists the possibility of leaving with a Brexit deal, leaving with no deal or, perhaps, calling a second election or referendum. The gridlock apparent in Parliament (not unlike some US situations) prevents rash decisions from occurring but simultaneously shows confusion within the government. These are the skeletal details (Hunt and Wheeler, BBC 2019).
By all accounts, the British people are calm, sane and pragmatic in almost all of the decisions they make (going back centuries), and because of this, the European countries within the Union found them to be excellent, helpful partners. But over the past three years, since the 2016 vote, a dramatic and highly unusual change in behavior has occurred. The British are behaving in a divisive fashion and, as a result, have split into two camps, the Brexiters and the Remainers. This division among the people has caused protests in cities across the country and in Parliament.
The present state of chaos and confusion seems directly attributable to a lack of collaboration among all parties. It needs to be said that many of the people who voted to Leave did so because they imagined how wonderful it would be to recapture their freedom to be independent again and possibly return Britain to the Empire it once was. Before the vote, though, there was little or no discussion about what the reality was or the consequences that could surface as a result of leaving. These include the myriad trade agreements between the UK and the EU, and the economic, financial, human and potential Irish border problems. The British people were not adequately informed about the substantial annual payments made to the country by the European Union. They knew the British paid enormous sums to the E.U., but not what they received in return. The potential consequences of London no longer being the financial center of Europe, and of businesses needing to leave Britain if the UK left the EU, were not apparent to many who voted to Leave. But even today, if a referendum vote took place again, it would be difficult to predict the outcome.
As an outsider looking in and wondering why this has happened, several reasons come to mind. The first is the lack of clear, collaborative information provided to the general public about what the reality would be like if the UK left the EU. The second seems to be a disregard for the disruptive forces noted above that could affect the UK if it were to leave the EU. A third is the lack of collaborative dialogue between the Prime Minister and Parliament. As previously mentioned, Mrs. May worked tirelessly with the EU, but there is little evidence that she collaborated with Parliament as a whole, e.g., with the factions in the Conservative Party or the rival Labour Party. There is no question she was in an impossible situation, but because of the magnitude of this decision, affecting every member of the country, inclusion could have made a difference in the outcomes of the two deals presented. The top-down approach was counterproductive. The fourth reason is the most important: the misguided belief that, after 46 years of living together, one would not want to continue solid relationships with the EU or help improve the known problems that exist. Collaboration and dialogue may have led both sides to appreciate their partnership and shared concerns. And fifth, the belief that the UK could be independent and return to where it was 50 years ago does not face the global reality of today.
In sum, this is a difficult and complex problem that can only be solved by engaging all parties in a collaborative effort.
Curriculum
Workshop 1 – Internal Analysis
- Part 1 Customer Services
- Part 2 E-Business
- Part 3 Finance
- Part 4 Globalization
- Part 5 Human Resources
- Part 6 Information Technology
- Part 7 Legal
- Part 8 Management
- Part 9 Marketing
- Part 10 Production
- Part 11 Education
- Part 12 Logistics
Distance Learning
Introduction
Welcome to Appleton Greene and thank you for enrolling on the Collaborative Evaluation corporate training program. You will be learning through our unique facilitation via distance-learning method, which will enable you to practically implement everything that you learn academically. The methods and materials used in your program have been designed and developed to ensure that you derive the maximum benefits and enjoyment possible. We hope that you find the program challenging and fun to do. However, if you have never been a distance-learner before, you may be experiencing some trepidation at the task before you. So we will get you started by giving you some basic information and guidance on how you can make the best use of the modules, how you should manage the materials and what you should be doing as you work through them. This guide is designed to point you in the right direction and help you to become an effective distance-learner. Take a few hours or so to study this guide and your guide to tutorial support for students, while making notes, before you start to study in earnest.
Study environment
You will need to locate a quiet and private place to study, preferably a room where you can easily be isolated from external disturbances or distractions. Make sure the room is well-lit and incorporates a relaxed, pleasant feel. If you can spoil yourself within your study environment, you will have much more of a chance to ensure that you are always in the right frame of mind when you do devote time to study. For example, a nice fire, the ability to play soft soothing background music, soft but effective lighting, perhaps a nice view if possible and a good size desk with a comfortable chair. Make sure that your family know when you are studying and understand your study rules. Your study environment is very important. The ideal situation, if at all possible, is to have a separate study, which can be devoted to you. If this is not possible then you will need to pay a lot more attention to developing and managing your study schedule, because it will affect other people as well as yourself. The better your study environment, the more productive you will be.
Study tools & rules
Try and make sure that your study tools are sufficient and in good working order. You will need to have access to a computer, scanner and printer, with access to the internet. You will need a very comfortable chair, which supports your lower back, and you will need a good filing system. It can be very frustrating if you are spending valuable study time trying to fix study tools that are unreliable, or unsuitable for the task. Make sure that your study tools are up to date. You will also need to consider some study rules. Some of these rules will apply to you and will be intended to help you to be more disciplined about when and how you study. This distance-learning guide will help you and after you have read it you can put some thought into what your study rules should be. You will also need to negotiate some study rules for your family, friends or anyone who lives with you. They too will need to be disciplined in order to ensure that they can support you while you study. It is important to ensure that your family and friends are an integral part of your study team. Having their support and encouragement can prove to be a crucial contribution to your successful completion of the program. Involve them in as much as you can.
Successful distance-learning
Distance-learners are freed from the necessity of attending regular classes or workshops, since they can study in their own way, at their own pace and for their own purposes. But unlike traditional internal training courses, it is the student’s responsibility, with a distance-learning program, to ensure that they manage their own study contribution. This requires strong self-discipline and self-motivation skills and there must be a clear will to succeed. Those students who are used to managing themselves, who are good at managing others and who enjoy working in isolation are more likely to be good distance-learners. It is also important to be aware of the main reasons why you are studying and of the main objectives that you are hoping to achieve as a result. You will need to remind yourself of these objectives at times when you need to motivate yourself. Never lose sight of your long-term goals and your short-term objectives. There is nobody available here to pamper you, or to look after you, or to spoon-feed you with information, so you will need to find ways to encourage and appreciate yourself while you are studying. Make sure that you chart your study progress, so that you can be sure of your achievements and re-evaluate your goals and objectives regularly.
Self-assessment
Appleton Greene training programs are in all cases post-graduate programs. Consequently, you should already have obtained a business-related degree and be an experienced learner. You should therefore already be aware of your study strengths and weaknesses. For example, which time of the day are you at your most productive? Are you a lark or an owl? What study methods do you respond to the most? Are you a consistent learner? How do you discipline yourself? How do you ensure that you enjoy yourself while studying? It is important to understand yourself as a learner and so some self-assessment early on will be necessary if you are to apply yourself correctly. Perform a SWOT analysis on yourself as a student. List your internal strengths and weaknesses as a student and your external opportunities and threats. This will help you later on when you are creating a study plan. You can then incorporate features within your study plan that can ensure that you are playing to your strengths, while compensating for your weaknesses. You can also ensure that you make the most of your opportunities, while avoiding the potential threats to your success.
Accepting responsibility as a student
Training programs invariably require a significant investment, both in terms of what they cost and the time that you need to contribute to study, and the responsibility for successful completion rests entirely with the student. This is never more apparent than when a student is learning via distance-learning. Accepting responsibility as a student is an important step towards ensuring that you can successfully complete your training program. It is easy to instantly blame other people or factors when things go wrong. But the fact of the matter is that if a failure is your failure, then you have the power to do something about it; it is entirely in your own hands. If it is always someone else’s failure, then you are powerless to do anything about it. All students study in entirely different ways, because we are all individuals, and what is right for one student is not necessarily right for another. In order to succeed, you will have to accept personal responsibility for finding a way to plan, implement and manage a personal study plan that works for you. If you do not succeed, you only have yourself to blame.
Planning
By far the most critical contribution to stress is the feeling of not being in control. In the absence of planning we tend to be reactive and can stumble from pillar to post in the hope that things will turn out fine in the end. Invariably they don’t! In order to be in control, we need to have firm ideas about how and when we want to do things. We also need to consider as many possible eventualities as we can, so that we are prepared for them when they happen. Prescriptive Change is far easier to manage and control than Emergent Change. The same is true with distance-learning. It is much easier and much more enjoyable if you feel that you are in control and that things are going to plan. Even when things do go wrong, you are prepared for them and can act accordingly without any unnecessary stress. It is important therefore that you do take time to plan your studies properly.
Management
Once you have developed a clear study plan, it is of equal importance to ensure that you manage the implementation of it. Most of us usually enjoy planning, but it is usually during implementation when things go wrong. Targets are not met and we do not understand why. Sometimes we do not even know if targets are being met. It is not enough for us to conclude that the study plan just failed. If it is failing, you will need to understand what you can do about it. Similarly if your study plan is succeeding, it is still important to understand why, so that you can improve upon your success. You therefore need to have guidelines for self-assessment so that you can be consistent with performance improvement throughout the program. If you manage things correctly, then your performance should constantly improve throughout the program.
Study objectives & tasks
The first place to start is developing your program objectives. These should feature your reasons for undertaking the training program in order of priority. Keep them succinct and to the point in order to avoid confusion. Do not just write the first things that come into your head because they are likely to be too similar to each other. Make a list of possible departmental headings, such as: Customer Service; E-business; Finance; Globalization; Human Resources; Information Technology; Legal; Management; Marketing and Production. Then brainstorm for ideas by listing as many things that you want to achieve under each heading and later re-arrange these things in order of priority. Finally, select the top item from each department heading and choose these as your program objectives. Try to restrict yourself to five, because it will enable you to focus clearly. It is likely that the other things that you listed will be achieved if each of the top objectives is achieved. If this does not prove to be the case, then simply work through the process again.
Study forecast
As a guide, the Appleton Greene Collaborative Evaluation corporate training program should take 12-18 months to complete, depending upon your availability and current commitments. The reason why there is such a variance in time estimates is because every student is an individual, with differing productivity levels and different commitments. These differences are then exaggerated by the fact that this is a distance-learning program, which incorporates the practical integration of academic theory as a part of the training program. Consequently all of the project studies are real, which means that important decisions and compromises need to be made. You will want to get things right and will need to be patient with your expectations in order to ensure that they are. We would always recommend that you are prudent with your own task and time forecasts, but you still need to develop them and have a clear indication of what realistic expectations are in your case.
With reference to your time planning:
* Consider the time that you can realistically dedicate towards study with the program every week.
* Calculate how long it should take you to complete the program, using the guidelines featured here.
* Break the program down into logical modules and allocate a suitable proportion of time to each of them; these will be your milestones.
* Create a time plan by using a spreadsheet on your computer, or a personal organizer such as MS Outlook; you could also use financial forecasting software.
* Break your time forecasts down into manageable chunks of time; the more specific you can be, the more productive and accurate your time management will be.
* Finally, use formulas where possible to do your time calculations for you, because this will help later on when your forecasts need to change in line with actual performance.
With reference to your task planning:
* Refer to your list of tasks that need to be undertaken in order to achieve your program objectives.
* With reference to your time plan, calculate when each task should be implemented. Remember that you are not estimating when your objectives will be achieved, but when you will need to focus upon implementing the corresponding tasks.
* Ensure that each task is implemented in conjunction with the associated training modules which are relevant.
* Break each single task down into a list of specific to-dos, say approximately ten to-dos for each task, and enter these into your study plan.
* Once again you could use MS Outlook to incorporate both your time and task planning and this could constitute your study plan; you could also use project management software such as MS Project.
You should now have a clear and realistic forecast detailing when you can expect to be able to do something about undertaking the tasks to achieve your program objectives.
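As an illustration only, and not part of the official program materials, the following short Python sketch shows how weekly study hours and assumed per-module hour allocations could be turned into approximate milestone dates; the module names, hour figures and start date are hypothetical placeholders that you would replace with your own estimates.

from datetime import date, timedelta

weekly_hours = 4  # hours you can realistically dedicate to study each week (assumption)
modules = {"Module 1": 20, "Module 2": 24, "Module 3": 20}  # estimated hours per module (assumptions)
milestone = date(2025, 1, 6)  # your own chosen start date (assumption)

for name, hours in modules.items():
    weeks_needed = hours / weekly_hours          # simple formula: hours divided by weekly capacity
    milestone += timedelta(weeks=weeks_needed)   # push the milestone forward by that many weeks
    print(f"{name}: approximately {hours} hours, milestone around {milestone}")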
Performance management
It is one thing to develop your study forecast; it is quite another to monitor your progress. Ultimately it is less important whether you achieve your original study forecast and more important that you update it so that it constantly remains realistic in line with your performance. As you begin to work through the program, you will begin to have more of an idea about your own personal performance and productivity levels as a distance-learner. Once you have completed your first study module, you should re-evaluate your study forecast for both time and tasks, so that they reflect your actual performance level achieved. In order to achieve this you must first time yourself while training by using an alarm clock. Set the alarm for hourly intervals and make a note of how far you have come within that time. You can then make a note of your actual performance on your study plan and then compare your performance against your forecast. Then consider the reasons that have contributed towards your performance level, whether they are positive or negative, and make a considered adjustment to your future forecasts as a result. Given time, you should start achieving your forecasts regularly.
With reference to time management:
* Time yourself while you are studying and make a note of the actual time taken in your study plan.
* Consider your successes with time-efficiency and the reasons for the success in each case, and take this into consideration when reviewing future time planning.
* Consider your failures with time-efficiency and the reasons for the failures in each case, and take this into consideration when reviewing future time planning.
* Re-evaluate your study forecast in relation to time planning for the remainder of your training program, to ensure that you continue to be realistic about your time expectations.
You need to be consistent with your time management, otherwise you will never complete your studies. This will either be because you are not contributing enough time to your studies, or because you will become less efficient with the time that you do allocate to your studies. Remember, if you are not in control of your studies, they can just become yet another cause of stress for you.
With reference to your task management:
* Time yourself while you are studying and make a note of the actual tasks that you have undertaken in your study plan.
* Consider your successes with task-efficiency and the reasons for the success in each case, and take this into consideration when reviewing future task planning.
* Consider your failures with task-efficiency and the reasons for the failures in each case, and take this into consideration when reviewing future task planning.
* Re-evaluate your study forecast in relation to task planning for the remainder of your training program, to ensure that you continue to be realistic about your task expectations.
You need to be consistent with your task management, otherwise you will never know whether you are achieving your program objectives or not.
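By way of a hypothetical illustration only, the comparison between forecast and actual study hours described above could be worked out along the following lines; the module names and figures shown are invented for the example.

forecast = {"Module 1": 20, "Module 2": 24, "Module 3": 20}  # planned hours per module (assumptions)
actual = {"Module 1": 26}                                    # hours actually taken so far (assumption)

completed = list(actual)
ratio = sum(actual[m] for m in completed) / sum(forecast[m] for m in completed)

print(f"Performance ratio so far: {ratio:.2f} (above 1.0 means slower than planned)")
for module, hours in forecast.items():
    if module not in actual:
        revised = hours * ratio  # scale the remaining estimates in line with actual performance
        print(f"{module}: forecast {hours}h, revised to {revised:.1f}h")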
Keeping in touch
You will have access to qualified and experienced professors and tutors who are responsible for providing tutorial support for your particular training program. So don’t be shy about letting them know how you are getting on. We keep electronic records of all tutorial support emails so that professors and tutors can review previous correspondence before considering an individual response. It also means that there is a record of all communications between you and your professors and tutors and this helps to avoid any unnecessary duplication, misunderstanding, or misinterpretation. If you have a problem relating to the program, share it with them via email. It is likely that they have come across the same problem before and are usually able to make helpful suggestions and steer you in the right direction. To learn more about when and how to use tutorial support, please refer to the Tutorial Support section of this student information guide. This will help you to ensure that you are making the most of tutorial support that is available to you and will ultimately contribute towards your success and enjoyment with your training program.
Work colleagues and family
You should certainly discuss your program study progress with your colleagues, friends and your family. Appleton Greene training programs are very practical. They require you to seek information from other people, to plan, develop and implement processes with other people and to achieve feedback from other people in relation to viability and productivity. You will therefore have plenty of opportunities to test your ideas and enlist the views of others. People tend to be sympathetic towards distance-learners, so don’t bottle it all up in yourself. Get out there and share it! It is also likely that your family and colleagues are going to benefit from your labors with the program, so they are likely to be much more interested in being involved than you might think. Be bold about delegating work to those who might benefit themselves. This is a great way to achieve understanding and commitment from people who you may later rely upon for process implementation. Share your experiences with your friends and family.
Making it relevant
The key to successful learning is to make it relevant to your own individual circumstances. At all times you should be trying to make bridges between the content of the program and your own situation. Whether you achieve this through quiet reflection or through interactive discussion with your colleagues, client partners or your family, remember that it is the most important and rewarding aspect of translating your studies into real self-improvement. You should be clear about how you want the program to benefit you. This involves setting clear study objectives in relation to the content of the course in terms of understanding, concepts, completing research or reviewing activities and relating the content of the modules to your own situation. Your objectives may understandably change as you work through the program, in which case you should enter the revised objectives on your study plan so that you have a permanent reminder of what you are trying to achieve, when and why.
Distance-learning check-list
Prepare your study environment, your study tools and rules.
Undertake detailed self-assessment in terms of your ability as a learner.
Create a format for your study plan.
Consider your study objectives and tasks.
Create a study forecast.
Assess your study performance.
Re-evaluate your study forecast.
Be consistent when managing your study plan.
Use your Appleton Greene Certified Learning Provider (CLP) for tutorial support.
Make sure you keep in touch with those around you.
Tutorial Support
Programs
Appleton Greene uses standard and bespoke corporate training programs as vessels to transfer business process improvement knowledge into the heart of our clients’ organizations. Each individual program focuses upon the implementation of a specific business process, which enables clients to easily quantify their return on investment. There are hundreds of established Appleton Greene corporate training products now available to clients within customer services, e-business, finance, globalization, human resources, information technology, legal, management, marketing and production. It does not matter whether a client’s employees are located within one office, or an unlimited number of international offices, we can still bring them together to learn and implement specific business processes collectively. Our approach to global localization enables us to provide clients with a truly international service with that all important personal touch. Appleton Greene corporate training programs can be provided virtually or locally and they are all unique in that they individually focus upon a specific business function. They are implemented over a sustainable period of time and professional support is consistently provided by qualified learning providers and specialist consultants.
Support available
You will have a designated Certified Learning Provider (CLP) and an Accredited Consultant and we encourage you to communicate with them as much as possible. In all cases tutorial support is provided online because we can then keep a record of all communications to ensure that tutorial support remains consistent. You would also be forwarding your work to the tutorial support unit for evaluation and assessment. You will receive individual feedback on all of the work that you undertake on a one-to-one basis, together with specific recommendations for anything that may need to be changed in order to achieve a pass with merit or a pass with distinction, and you then have as many opportunities as you may need to re-submit project studies until they meet with the required standard. Consequently the only reason that you should really fail the (CLP) program is if you do not do the work. It makes no difference to us whether a student takes 12 months or 18 months to complete the program; what matters is that in all cases the same quality standard will have been achieved.
Support Process
Please forward all of your future emails to the designated (CLP) Tutorial Support Unit email address that has been provided and please do not duplicate or copy your emails to other AGC email accounts as this will just cause unnecessary administration. Please note that emails are always answered as quickly as possible but you will need to allow a period of up to 20 business days for responses to general tutorial support emails during busy periods, because emails are answered strictly within the order in which they are received. You will also need to allow a period of up to 30 business days for the evaluation and assessment of project studies. This does not include weekends or public holidays. Please therefore kindly allow for this within your time planning. All communications are managed online via email because it enables tutorial service support managers to review other communications which have been received before responding and it ensures that there is a copy of all communications retained on file for future reference. All communications will be stored within your personal (CLP) study file here at Appleton Greene throughout your designated study period. If you need any assistance or clarification at any time, please do not hesitate to contact us by forwarding an email and remember that we are here to help. If you have any questions, please list and number your questions succinctly and you can then be sure of receiving specific answers to each and every query.
Time Management
It takes approximately 1 Year to complete the Collaborative Evaluation corporate training program, incorporating 12 x 6-hour monthly workshops. Each student will also need to contribute approximately 4 hours per week over 1 Year of their personal time. Students can study from home or work at their own pace and are responsible for managing their own study plan. There are no formal examinations and students are evaluated and assessed based upon their project study submissions, together with the quality of their internal analysis and supporting documents. They can contribute more time towards study, when they have the time to do so and can contribute less time when they are busy. All students tend to be in full time employment while studying and the Collaborative Evaluation program is purposely designed to accommodate this, so there is plenty of flexibility in terms of time management. It makes no difference to us at Appleton Greene, whether individuals take 12-18 months to complete this program. What matters is that in all cases the same standard of quality will have been achieved with the standard and bespoke programs that have been developed.
Distance Learning Guide
The distance learning guide should be your first port of call when starting your training program. It will help you when you are planning how and when to study, how to create the right environment and how to establish the right frame of mind. If you can lay the foundations properly during the planning stage, then it will contribute to your enjoyment and productivity while training later. The guide helps to change your lifestyle in order to accommodate time for study and to cultivate good study habits. It helps you to chart your progress so that you can measure your performance and achieve your goals. It explains the tools that you will need for study and how to make them work. It also explains how to translate academic theory into practical reality. Spend some time now working through your distance learning guide and make sure that you have firm foundations in place so that you can make the most of your distance learning program. There is no requirement for you to attend training workshops or classes at Appleton Greene offices. The entire program is undertaken online, program course manuals and project studies are administered via the Appleton Greene web site and via email, so you are able to study at your own pace and in the comfort of your own home or office as long as you have a computer and access to the internet.
How To Study
The how to study guide provides students with a clear understanding of the Appleton Greene facilitation via distance learning training methods and enables students to obtain a clear overview of the training program content. It enables students to understand the step-by-step training methods used by Appleton Greene and how course manuals are integrated with project studies. It explains the research and development that is required and the need to provide evidence and references to support your statements. It also enables students to understand precisely what will be required of them in order to achieve a pass with merit and a pass with distinction for individual project studies and provides useful guidance on how to be innovative and creative when developing your Unique Program Proposition (UPP).
Tutorial Support
Tutorial support for the Appleton Greene Collaborative Evaluation corporate training program is provided online either through the Appleton Greene Client Support Portal (CSP), or via email. All tutorial support requests are facilitated by a designated Program Administration Manager (PAM). They are responsible for deciding which professor or tutor is the most appropriate option relating to the support required and then the tutorial support request is forwarded onto them. Once the professor or tutor has completed the tutorial support request and answered any questions that have been asked, this communication is then returned to the student via email by the designated Program Administration Manager (PAM). This enables all tutorial support, between students, professors and tutors, to be facilitated by the designated Program Administration Manager (PAM) efficiently and securely through the email account.
You will therefore need to allow a period of up to 20 business days for responses to general support queries and up to 30 business days for the evaluation and assessment of project studies, because all tutorial support requests are answered strictly within the order in which they are received. This does not include weekends or public holidays. Consequently you need to put some thought into the management of your tutorial support procedure in order to ensure that your study plan is feasible and to obtain the maximum possible benefit from tutorial support during your period of study. Please retain copies of your tutorial support emails for future reference. Please ensure that ALL of your tutorial support emails are set out using the format as suggested within your guide to tutorial support. Your tutorial support emails need to be referenced clearly to the specific part of the course manual or project study which you are working on at any given time. You also need to list and number any questions that you would like to ask, up to a maximum of five questions within each tutorial support email. Remember the more specific you can be with your questions the more specific your answers will be too and this will help you to avoid any unnecessary misunderstanding, misinterpretation, or duplication.
The guide to tutorial support is intended to help you to understand how and when to use support in order to ensure that you get the most out of your training program. Appleton Greene training programs are designed to enable you to do things for yourself. They provide you with a structure or a framework and we use tutorial support to facilitate students while they practically implement what they learn. In other words, we are enabling students to do things for themselves. The benefits of distance learning via facilitation are considerable and are much more sustainable in the long-term than traditional short-term knowledge sharing programs. Consequently you should learn how and when to use tutorial support so that you can maximize the benefits from your learning experience with Appleton Greene. This guide describes the purpose of each training function, how to use each of them and how to use tutorial support in relation to each aspect of the training program. It also provides useful tips and guidance with regard to best practice.
Tutorial Support Tips
Students are often unsure about how and when to use tutorial support with Appleton Greene. This Tip List will help you to understand more about how to achieve the most from using tutorial support. Refer to it regularly to ensure that you are continuing to use the service properly. Tutorial support is critical to the success of your training experience, but it is important to understand when and how to use it in order to maximize the benefit that you receive. It is no coincidence that those students who succeed are those that learn how to be positive, proactive and productive when using tutorial support.
Be positive and friendly with your tutorial support emails
Remember that if you forward an email to the tutorial support unit, you are dealing with real people. “Do unto others as you would expect others to do unto you”. If you are positive, complimentary and generally friendly in your emails, you will generate a similar response in return. This will be more enjoyable, productive and rewarding for you in the long-term.
Think about the impression that you want to create
Every time that you communicate, you create an impression, which can be either positive or negative, so put some thought into the impression that you want to create. Remember that copies of all tutorial support emails are stored electronically and tutors will always refer to prior correspondence before responding to any current emails. Over a period of time, a general opinion will be arrived at in relation to your character, attitude and ability. Try to manage your own frustrations, mood swings and temperament professionally, without involving the tutorial support team. Demonstrating frustration or a lack of patience is a weakness and will be interpreted as such. The good thing about communicating in writing is that you have the time to consider your content carefully; you can review it and proof-read it before sending your email to Appleton Greene, and this should help you to communicate more professionally and consistently, and to avoid any unnecessary knee-jerk reactions to individual situations as and when they may arise. Please also remember that the CLP Tutorial Support Unit will not just be responsible for evaluating and assessing the quality of your work, they will also be responsible for providing recommendations to other learning providers and to client contacts within the Appleton Greene global client network, so do be in control of your own emotions and try to create a good impression.
Remember that quality is preferred to quantity
Please remember that when you send an email to the tutorial support team, you are not using Twitter or Text Messaging. Try not to forward an email every time that you have a thought. This will not prove to be productive either for you or for the tutorial support team. Take time to prepare your communications properly, as if you were writing a professional letter to a business colleague and make a list of queries that you are likely to have and then incorporate them within one email, say once every month, so that the tutorial support team can understand more about context, application and your methodology for study. Get yourself into a consistent routine with your tutorial support requests and use the tutorial support template provided with ALL of your emails. The (CLP) Tutorial Support Unit will not spoon-feed you with information. They need to be able to evaluate and assess your tutorial support requests carefully and professionally.
Be specific about your questions in order to receive specific answers
Try not to write essays or think aloud as you write your tutorial support emails, because the tutorial support unit may then be unclear about what you are actually asking, or what you are looking to achieve. Be specific about asking questions that you want answers to. Number your questions. You will then receive specific answers to each and every question. This is the main purpose of tutorial support via email.
Keep a record of your tutorial support emails
It is important that you keep a record of all tutorial support emails that are forwarded to you. You can then refer to them when necessary and it avoids any unnecessary duplication, misunderstanding, or misinterpretation.
Individual training workshops or telephone support
Tutorial Support Email Format
You should use this tutorial support format if you need to request clarification or assistance while studying with your training program. Please note that ALL of your tutorial support request emails should use the same format. You should therefore set up a standard email template, which you can then use as and when you need to. Emails that are forwarded to Appleton Greene, which do not use the following format, may be rejected and returned to you by the (CLP) Program Administration Manager. A detailed response will then be forwarded to you via email usually within 20 business days of receipt for general support queries and 30 business days for the evaluation and assessment of project studies. This does not include weekends or public holidays. Your tutorial support request, together with the corresponding TSU reply, will then be saved and stored within your electronic TSU file at Appleton Greene for future reference.
Subject line of your email
Please insert: Appleton Greene (CLP) Tutorial Support Request: (Your Full Name) (Date), within the subject line of your email.
Main body of your email
Please insert:
1. Appleton Greene Certified Learning Provider (CLP) Tutorial Support Request
2. Your Full Name
3. Date of TS request
4. Preferred email address
5. Backup email address
6. Course manual page name or number (reference)
7. Project study page name or number (reference)
Subject of enquiry
Please insert a maximum of 50 words (please be succinct)
Briefly outline the subject matter of your inquiry, or what your questions relate to.
Question 1
Maximum of 50 words (please be succinct)
Question 2
Maximum of 50 words (please be succinct)
Question 3
Maximum of 50 words (please be succinct)
Question 4
Maximum of 50 words (please be succinct)
Question 5
Maximum of 50 words (please be succinct)
Please note that a maximum of 5 questions is permitted with each individual tutorial support request email.
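Purely as an informal illustration, the email format described above could be assembled as follows; the name, email addresses and questions shown are hypothetical placeholders, not real program data, and the sketch is not part of the official program materials.

from datetime import date

name = "Jane Example"                 # hypothetical student name
preferred = "jane@example.com"        # hypothetical preferred email address
backup = "jane.backup@example.com"    # hypothetical backup email address
questions = [
    "How should the MOST analysis be documented?",    # hypothetical question
    "Which stakeholders should be engaged first?",    # hypothetical question
]

subject = f"Appleton Greene (CLP) Tutorial Support Request: {name} ({date.today()})"
body = [
    "1. Appleton Greene Certified Learning Provider (CLP) Tutorial Support Request",
    f"2. {name}",
    f"3. {date.today()}",
    f"4. {preferred}",
    f"5. {backup}",
    "6. Course manual page name or number (reference)",
    "7. Project study page name or number (reference)",
]
body += [f"Question {i}: {q}" for i, q in enumerate(questions[:5], start=1)]  # maximum of 5 questions

print(subject)
print("\n".join(body))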
Procedure
* List the questions that you want to ask first, then re-arrange them in order of priority. Make sure that you reference them, where necessary, to the course manuals or project studies.
* Make sure that you are specific about your questions and number them. Try to plan the content within your emails to make sure that it is relevant.
* Make sure that your tutorial support emails are set out correctly, using the Tutorial Support Email Format provided here.
* Save a copy of your email and incorporate the date sent after the subject title. Keep your tutorial support emails within the same file and in date order for easy reference.
* Allow up to 20 business days for a response to general tutorial support emails and up to 30 business days for the evaluation and assessment of project studies, because detailed individual responses will be made in all cases and tutorial support emails are answered strictly within the order in which they are received.
* Emails can and do get lost. So if you have not received a reply within the appropriate time, forward another copy or a reminder to the tutorial support unit to be sure that it has been received but do not forward reminders unless the appropriate time has elapsed.
* When you receive a reply, save it immediately featuring the date of receipt after the subject heading for easy reference. In most cases the tutorial support unit replies to your questions individually, so you will have a record of the questions that you asked as well as the answers offered. With project studies however, separate emails are usually forwarded by the tutorial support unit, so do keep a record of your own original emails as well.
* Remember to be positive and friendly in your emails. You are dealing with real people who will respond to the same things that you respond to.
* Try not to repeat questions that have already been asked in previous emails. If this happens the tutorial support unit will probably just refer you to the appropriate answers that have already been provided within previous emails.
* If you lose your tutorial support email records you can write to Appleton Greene to receive a copy of your tutorial support file, but a separate administration charge may be levied for this service.
How To Study
Planning your study environment
Your study conditions are of great importance and will have a direct effect on how much you enjoy your training program. Consider how much space you will have, whether it is comfortable and private and whether you are likely to be disturbed. The study tools and facilities at your disposal are also important to the success of your distance-learning experience. Your tutorial support unit can help with useful tips and guidance, regardless of your starting position. It is important to get this right before you start working on your training program.
Planning your program objectives
It is important that you have a clear list of study objectives, in order of priority, before you start working on your training program. Your tutorial support unit can offer assistance here to ensure that your study objectives have been afforded due consideration and priority.
Planning how and when to study
Distance-learners are freed from the necessity of attending regular classes, since they can study in their own way, at their own pace and for their own purposes. This approach is designed to let you study efficiently away from the traditional classroom environment. It is important however, that you plan how and when to study, so that you are making the most of your natural attributes, strengths and opportunities. Your tutorial support unit can offer assistance and useful tips to ensure that you are playing to your strengths.
Planning your study tasks
You should have a clear understanding of the study tasks that you should be undertaking and the priority associated with each task. These tasks should also be integrated with your program objectives. The distance learning guide and the guide to tutorial support for students should help you here, but if you need any clarification or assistance, please contact your tutorial support unit.
Planning your time
You will need to allocate specific times during your calendar when you intend to study if you are to have a realistic chance of completing your program on time. You are responsible for planning and managing your own study time, so it is important that you are successful with this. Your tutorial support unit can help you with this if your time plan is not working.
Keeping in touch
Consistency is the key here. If you communicate too frequently in short bursts, or too infrequently with no pattern, then your management ability with your studies will be questioned, both by you and by your tutorial support unit. It is obvious when a student is in control and when one is not, and this will depend on how able you are to stick to your study plan. Inconsistency invariably leads to non-completion.
Charting your progress
Your tutorial support team can help you to chart your own study progress. Refer to your distance learning guide for further details.
Making it work
To succeed, all that you will need to do is apply yourself to undertaking your training program and interpreting it correctly. Success or failure lies in your hands and your hands alone, so be sure that you have a strategy for making it work. Your Certified Learning Provider (CLP) and Accredited Consultant can guide you through the process of program planning, development and implementation.
Reading methods
Interpretation is often unique to the individual but it can be improved and even quantified by implementing consistent interpretation methods. Interpretation can be affected by outside interference such as family members, TV, or the Internet, or simply by other thoughts which are demanding priority in our minds. One thing that can improve our productivity is using recognized reading methods. This helps us to focus and to be more structured when reading information for reasons of importance, rather than relaxation.
Speed reading
When reading through course manuals for the first time, deliberately set your reading speed so that it is just fast enough that you cannot dwell on individual words or tables. With practice, you should be able to read an A4 sheet of paper in one minute. You will not achieve much in the way of a detailed understanding, but your brain will retain a useful overview. This overview will be important later on and will enable you to keep individual issues in perspective with a more generic picture because speed reading appeals to the memory part of the brain. Do not worry about what you do or do not remember at this stage.
Content reading
Once you have speed read everything, you can then start work in earnest. You now need to read a particular section of your course manual thoroughly, by making detailed notes while you read. This process is called Content Reading and it will help to consolidate your understanding and interpretation of the information that has been provided.
Making structured notes on the course manuals
When you are content reading, you should be making detailed notes, which are both structured and informative. Make these notes in a MS Word document on your computer, because you can then amend and update these as and when you deem it to be necessary. List your notes under three headings: 1. Interpretation; 2. Questions; 3. Tasks. The purpose of the 1st section is to clarify your interpretation by writing it down. The purpose of the 2nd section is to list any questions that the issue raises for you. The purpose of the 3rd section is to list any tasks that you should undertake as a result. Anyone who has graduated with a business-related degree should already be familiar with this process.
Organizing structured notes separately
You should then transfer your notes to a separate study notebook, preferably one that enables easy referencing, such as a MS Word Document, a MS Excel Spreadsheet, a MS Access Database, or a personal organizer on your cell phone. Transferring your notes allows you to have the opportunity of cross-checking and verifying them, which assists considerably with understanding and interpretation. You will also find that the better you are at doing this, the more chance you will have of ensuring that you achieve your study objectives.
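If you choose a spreadsheet for your separate study notebook, a note record with the three headings described above might, for example, be written out as in the following sketch; the file name and note text are illustrative assumptions only and not part of the program materials.

import csv

# Each note is recorded under the three headings described above (example content only).
notes = [
    {"Interpretation": "Collaborative evaluation engages stakeholders throughout the evaluation.",
     "Questions": "Which departments should be involved first?",
     "Tasks": "List up to 10 key stakeholders in my department."},
]

with open("study_notes.csv", "w", newline="") as f:   # hypothetical notebook file
    writer = csv.DictWriter(f, fieldnames=["Interpretation", "Questions", "Tasks"])
    writer.writeheader()
    writer.writerows(notes)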
Question your understanding
Do challenge your understanding. Explain things to yourself in your own words by writing things down.
Clarifying your understanding
If you are at all unsure, forward an email to your tutorial support unit and they will help to clarify your understanding.
Question your interpretation
Do challenge your interpretation. Qualify your interpretation by writing it down.
Clarifying your interpretation
If you are at all unsure, forward an email to your tutorial support unit and they will help to clarify your interpretation.
Qualification Requirements
The student will need to successfully complete the project study and all of the exercises relating to the Collaborative Evaluation corporate training program, achieving a pass with merit or distinction in each case, in order to qualify as an Accredited Collaborative Evaluation Specialist (ACES). All monthly workshops need to be tried and tested within your company. These project studies can be completed in your own time and at your own pace and in the comfort of your own home or office. There are no formal examinations; assessment is based upon the successful completion of the project studies. They are called project studies because, unlike case studies, these projects are not theoretical; they incorporate real program processes that need to be properly researched and developed. The project studies assist us in measuring your understanding and interpretation of the training program and enable us to assess qualification merits. All of the project studies are based entirely upon the content within the training program and they enable you to integrate what you have learnt into your corporate training practice.
Collaborative Evaluation – Grading Contribution
Project Study – Grading Contribution
Customer Service – 10%
E-business – 05%
Finance – 10%
Globalization – 10%
Human Resources – 10%
Information Technology – 10%
Legal – 05%
Management – 10%
Marketing – 10%
Production – 10%
Education – 05%
Logistics – 05%
TOTAL GRADING – 100%
Qualification grades
A mark of 90% or above = Pass with Distinction.
A mark of 75% to 89% = Pass with Merit.
A mark of less than 75% = Fail.
If you fail to achieve a mark of 75% with a project study, you will receive detailed feedback from the Certified Learning Provider (CLP) and/or Accredited Consultant, together with a list of tasks which you will need to complete, in order to ensure that your project study meets with the minimum quality standard that is required by Appleton Greene. You can then re-submit your project study for further evaluation and assessment. Indeed you can re-submit as many drafts of your project studies as you need to, until such a time as they eventually meet with the required standard by Appleton Greene, so you need not worry about this, it is all part of the learning process.
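As a worked illustration of the grading contributions and grade thresholds listed above, the following sketch combines a set of project study marks into an overall mark and grade band; the individual marks shown are invented for the example and are not drawn from any real assessment.

weights = {"Customer Service": 0.10, "E-business": 0.05, "Finance": 0.10,
           "Globalization": 0.10, "Human Resources": 0.10, "Information Technology": 0.10,
           "Legal": 0.05, "Management": 0.10, "Marketing": 0.10, "Production": 0.10,
           "Education": 0.05, "Logistics": 0.05}      # grading contributions from the table above
marks = {study: 80 for study in weights}              # hypothetical marks out of 100 for each project study

overall = sum(marks[s] * w for s, w in weights.items())
if overall >= 90:
    grade = "Pass with Distinction"
elif overall >= 75:
    grade = "Pass with Merit"
else:
    grade = "Fail"
print(f"Overall mark: {overall:.0f}% ({grade})")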
When marking project studies, Appleton Greene is looking for sufficient evidence of the following:
Pass with merit
A satisfactory level of program understanding
A satisfactory level of program interpretation
A satisfactory level of project study content presentation
A satisfactory level of Unique Program Proposition (UPP) quality
A satisfactory level of the practical integration of academic theory
Pass with distinction
An exceptional level of program understanding
An exceptional level of program interpretation
An exceptional level of project study content presentation
An exceptional level of Unique Program Proposition (UPP) quality
An exceptional level of the practical integration of academic theory
Preliminary Analysis
It would help if program students spent some time researching Collaborative Evaluation, before their 1st Workshop Development Process (WDP1), in order to establish an understanding of the purpose and benefits of this process, providing an educational foundation to build upon. Collaborative evaluation is an approach that actively engages program stakeholders in the evaluation process. When stakeholders collaborate with evaluators, their understanding increases and the utility of the evaluation is often enhanced.
Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike “distanced” evaluation approaches, which reject stakeholder participation as evaluation team members, Collaborative Evaluation assumes that active, on-going engagement between evaluators and program staff results in stronger evaluation designs, enhanced data collection and analysis, and results that stakeholders understand and use. Among similar “participant-oriented” evaluation approaches (Fitzpatrick, Sanders, & Worthen, 2011), Collaborative Evaluation distinguishes itself in that it uses a sliding scale for levels of collaboration. This means that different program evaluations will experience different levels of collaborative activity. The sliding scale is applied as the evaluator considers each program’s evaluation needs, readiness, and resources. While Collaborative Evaluation is a term widely used in evaluation, its meaning varies considerably. Often used interchangeably with participatory and/or empowerment evaluation, the terms can be used to mean different things, which can be confusing. The articles use a comparative Collaborative Evaluation Framework to highlight how, from a theoretical perspective, Collaborative Evaluation distinguishes itself from the other participatory evaluation approaches.
A PCG Education White Paper by Christine Donis-Keller, Julie Meltzer, and Elizabeth Chmielewski provides quite a useful introduction to Collaborative Evaluation within the Education industry.
“Is our program working?” This is a key question in education today, particularly in this era of heightened accountability. A collaborative program evaluation model is an extremely useful way to answer this question when education organizations want to find out if their initiatives are achieving the intended outcomes, as well as why this is the case. In the collaborative program evaluation model, the client (e.g., districts, states, public and independent schools, nonprofits, and foundations) works with the external evaluator to determine the questions that will be explored through the evaluation. They continue to work collaboratively to ensure that the context is understood, that multiple stakeholder perspectives are taken into account, and that data collection instruments are appropriate in content and tone. The model produces data that can proactively inform program implementation, provide formative information that supports program improvement, and offer summative information on the effectiveness of the program. This PCG Education White Paper describes the benefits and essential elements of a collaborative program evaluation model. The paper is based on the experience of PCG’s research and evaluation team.
There are many reasons why an educational organization might choose to hire an evaluator to assess program effectiveness and determine if money is being well spent. An evaluation might be commissioned at one or more critical junctures of program implementation: when programs are first established, modified, or expanded; when stakeholders advocate for more information; when student outcomes do not meet expectations; or when a case for additional funding needs to be made. Skilled evaluators will apply a variety of data collection methods, approaches to analysis, and reporting techniques to respond to all of these situations. Regardless of the type of educational program being evaluated, program directors and staff dedicated to continuous improvement want to know three things.
Questions that collaborative program evaluation can answer:
1. Is the program being implemented according to plan? Why or why not?
2. Is the program having the desired effect? Why or why not?
3. Is the program having the intended outcome? Why or why not?
Evaluation Implementation – Is the program being implemented according to plan?
If the theory of action behind the project states that certain actions need to be taken and specific structures established or services delivered before results are achieved, it is important to make certain that these are in place and actually happening. Otherwise, there is little point in evaluating efficacy or impact. For example, if teacher professional development were needed to show how to conduct Socratic circles in the classroom and the professional development did not take place, it would be a waste of resources to seek evidence of changed practice through classroom observations.
Evaluation of Efficacy – Is the program having the desired effect?
An important step in an evaluation is to document evidence that the program is having the intended medium-term effects that will presumably lead to the desired long-term outcomes. If the program is being implemented as planned, it is important to examine if the program has the power to produce the desired medium-term effects. For example, if regular meetings of trained data teams are supposed to result in changes in teacher practice, with the ultimate long-term result of increases in student achievement, it is important to check if changes in classroom practice are actually occurring. If adjustments to instruction that are aligned with evidence-based best practices are occurring, we would eventually expect to see improved student outcomes, assuming the program’s design is effective.
Evaluation of Impact – Is the program having the intended outcome?
Outcomes define what the program is designed to achieve. It is the role of the evaluator to examine whether or not the program has produced a change in particular outcomes over time. This analysis should occur in light of the program goals and the evaluation questions. For example: Does the well-run mentoring program result in fewer discipline referrals? Do students aspire to go to college in greater numbers after their school fully implements an arts-integrated program? Do students’ test scores increase as a result of using a computer-based math program with fidelity? An external evaluator brings outside perspective and expertise to the task of assessing and reporting the degree to which educational programs are meeting the needs of students. In a collaborative evaluation, the client and the evaluator discuss the data and determine why this might be the case. Internal staff members may be too close to the work to be able to determine the impact of the program or they may not have time to step back and examine their work over time. Internal staff also may not have the expertise required to carry out a sound evaluation. The evaluator can, based on the program, clarify the questions the client wants to answer, create an evaluation design that mixes and matches the appropriate data collection and analysis methodologies, design custom data collection instruments and approaches, and draw upon content expertise to provide valuable feedback and insight to those responsible for the programs.
The benefits of Collaborative Evaluation
Collaborative evaluation is a proactive evaluation model that enables program staff to engage in continuous program improvement. Specific benefits of the model include:
A customized evaluation design that reflects the nuances of the program being evaluated.
An evaluation design that is flexible and adaptable to the purposes of the evaluation and to changes in program implementation over time.
Increased validity of results.
Greater buy-in among stakeholders with both the data collection process and the evaluation findings.
Development of program staff’s capacity to continue to monitor their progress toward program goals beyond the duration of the evaluation.
Development of a culture of inquiry among program staff.
Potential cost efficiencies.
Each of these benefits is described in detail below.
Address program nuances
All evaluators should tailor evaluation services to the needs of each client (Patton, 2002). In the collaborative evaluation model, this is accomplished by evaluators working closely with program staff to identify evaluation questions and engage in an evaluation process that is attuned to the needs of program staff and stakeholders. As a result of the close knowledge built through collaborative program evaluations, such studies also guide program staff to identify and capitalize on external and internal program networks that they can tap to help them to achieve program goals (Fitzpatrick, 2012).
Flexible design
In a collaborative evaluation, continuous communication at the outset between program staff and the evaluation team is essential for laying the groundwork for mutual understanding. Ongoing communication is also a key ingredient for ensuring that the evaluation plan continues to be relevant to the program. By communicating regularly about program developments and context, evaluators can make adjustments in the evaluation plan to accommodate changes in the program.
Increased validity of results
Another benefit of working collaboratively with program staff in developing the evaluation is increased validity of the study. Because the evaluation team develops a high level of understanding of the program, data collection can be designed to accurately capture aspects of interest, and appropriate inferences and conclusions can be drawn from the data that are collected.
Greater buy-in for results
Engaging an experienced outside evaluator alone increases the validity of the study and the credibility of the findings. The use of a collaborative program evaluation also improves buy-in for the study’s results from a variety of stakeholders. Staff members who actively participate in the evaluation better understand how the results can be used to facilitate program improvement, while administrators and other decision makers are more likely to have confidence in the results if they are aware that program staff helped inform elements of the evaluation study (Brandon, 1998).
Increased ability to monitor progress
The evaluation team works with program staff to develop tools to measure desired outcomes of the program. Because tools are designed in collaboration with program staff, staff are better able to understand the purpose of the tools and what information can be gleaned from each. This makes it more likely that staff will feel comfortable with and use the instruments to collect data in the future to monitor ongoing progress, an added benefit to the client.
Development of a culture of inquiry
Because use of evaluation results is a primary goal of collaborative evaluation, the evaluation team may also facilitate a process in which practitioners examine data on program implementation and effectiveness throughout early stages of the evaluation. This process of reviewing evaluation results can foster the development of a culture of inquiry among program staff and support the goal of continuous improvement.
Potential cost efficiencies
There are several ways that a collaborative program evaluation can reduce costs in the short term and over time. There can be immediate cost savings because evaluation resources are tightly coupled with the program’s stage of development. The model can help avoid costly data collection strategies and analytic approaches when there is little to measure because the project is in a nascent stage of implementation. Cost savings may also emerge over time because of program improvements based on formative feedback. Additional savings may be found as the evaluation team develops the internal capacity of program staff through their active participation in the design and execution of the evaluation. With increased capacity, the program staff can then continue the progress monitoring process by themselves.
The collaborative evaluation process occurs in three phases: 1. Getting Underway, the phase where the theory of action is developed; 2. Full Engagement, the phase where data collection tools are designed, data are collected, and findings are reported; and 3. Wrapping Up, the phase where an action plan is developed to use the evaluation results.
Collaborative Evaluation in practice
A collaborative program evaluation can employ a variety of approaches, but focuses on building a relationship between the evaluation team and program staff with the goal of building the capacity of program staff to use evaluation results and promote program improvement (O’Sullivan, 2012). The process of a collaborative evaluation occurs in three general phases: (1) getting underway, (2) full engagement, and (3) wrapping up. While the phases appear linear, they are, in fact, dynamic and iterative as implemented throughout the evaluation process. Program staff and the evaluation team are engaged in a continuous cycle of dialogue to:
• Build program and evaluation knowledge.
• Communicate about progress with the evaluation and developments in the program.
• Review evaluation findings and recommendations.
• Revisit, as necessary, evaluation questions and tools to ensure that they will generate key information needed for decision making and to make program improvements.
Phase 1 – Getting Underway
When a collaborative evaluation approach is taken to develop knowledge of the program and to design the evaluation, program development and implementation are also improved. During this initial phase of the evaluation, the evaluation team engages with program staff to answer a series of questions and clarify program details that will guide the evaluation design. In this phase, questions include:
• What are the official goals of the program?
• What steps are required to achieve these goals?
• What is the program’s current stage of implementation?
• What questions do we wish to answer about the program through the evaluation?
• What is the best way to measure the outcomes we’re interested in?
• What roles will the evaluation team and program staff play throughout the evaluation process?
The evaluation team seeks to understand program purposes, evolution, activities, functions, stakeholders, and the context in which the program operates. This is accomplished not only through a review of relevant program documents, but also through conversations with various stakeholders. A goal of this phase is to identify, with program staff, the theory of action that undergirds the program. That is, what do program staff believe needs to happen in order to get the results they seek?
A Case Study in brief – A theory of action assures the right questions are asked at the appropriate time. In an evaluation study of a reading intervention program that was being implemented in several classrooms, a school district originally sought an evaluator to focus on student outcomes. However, before an assessment of outcomes could occur, PCG staff worked with the district to develop an understanding of what full implementation of the intervention should look like by developing the theory of action. The evaluator helped the program staff to see that if the program was not being implemented with fidelity (a mid-term outcome) and students had not had equal or quality access to the program, then there would be little sense in looking to see if students’ reading scores had improved (the long-term outcome). Thus, the first evaluation questions focused on assessing fidelity of implementation: Are teachers using the program assessments? Are teachers working with the program for the recommended amount of time? Are teachers using the program technology and writing components of the program? Interim evaluation findings revealed that implementation was highly uneven across classrooms. The evaluation team provided that feedback to the program staff who in turn made changes to improve the chance that the program would meet its long-term goals.
The theory of action is translated into a graphical representation called a program logic model. A logic model displays program inputs, as well as targeted medium-term effects and long-term outcomes. Review of the logic model can help the evaluators and program staff understand what assumptions are in place in the program, what the purpose of the program is, and what steps are needed to obtain the desired result (Dyson & Todd, 2010; Helitzer et al., 2010; Hurworth, 2008). Stakeholders may not have formally articulated their theory of action and may be operating within several different theories depending on their vantage point (Weiss, 1998). The process of identifying an explicit model can focus the program staff as they develop and implement the program and allows the evaluation team to better match data collection and analyses to the project’s goals and objectives (Fear, 2007). The evaluation team and the stakeholders work together to establish the logic model to develop a common understanding of the program. This process clarifies how the program components relate to one another and informs the development of questions to be answered in the study. Questions depend upon the program’s stage of implementation and might include:
• Is the program being implemented the way it was intended?
• What is the level of satisfaction of various stakeholders with the program?
• What is the effect of the program on teaching practices?
• What is the effect of the program on student achievement?
Agreement on and explicit identification of evaluation questions helps frame the scope of the evaluation, data collection activities, and the level of involvement in evaluation activities by the evaluator and other stakeholders. In this phase, evaluators may also conduct a review of existing research on similar programs to help support best practices and change processes within the program and to ascertain whether similar research has been conducted that can inform the study design.
Once questions are agreed upon and desired program outcomes are clear, program staff and evaluators collectively design the evaluation. This includes deciding on measures and types of data collection tools, data collection processes, timelines, plans for how to analyze formative data to drive program improvement, summative data that will be collected to demonstrate program impact, and when to report on findings. The data collection processes (e.g., surveys, focus groups, observation checklists, interviews, student performance data) and analytical methods (e.g., qualitative, quantitative, mixed) proposed by the evaluation team will vary depending on the questions being asked; the scope of the evaluation; how much the program staff is able to assist with data collection given their time, skills, and interest; and the evaluation budget. A collaborative program evaluation does not mean that program staff must participate in every evaluation activity. However, developing a collaborative evaluation design does require making explicit decisions about roles based on the feasibility, time, skills, and interest to participate in each phase (Corn et al., 2012). Because the evaluation design is developed in consultation with program staff, it is more likely to reflect an understanding of the nuances of the program and the concerns of stakeholders. Also, given that stakeholders participate in framing the evaluation design and process from the beginning, they are more likely to understand and use evaluation findings (Rodriguez-Campos, 2012).
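To help make the relationship between a logic model and the evaluation questions concrete, the following is a minimal sketch, assuming a hypothetical reading-intervention program similar to the case study above; the inputs, activities, outcomes, and questions are illustrative only, and recording the model as structured data is just one of many ways it could be captured.

# Minimal, hypothetical sketch of a program logic model recorded as data,
# with evaluation questions mapped to the stage of the model they address.

logic_model = {
    "inputs": ["curriculum materials", "teacher training", "program funding"],
    "activities": ["daily reading sessions", "use of program assessments"],
    "medium_term_outcomes": ["implementation fidelity across classrooms"],
    "long_term_outcomes": ["improved student reading achievement"],
}

evaluation_questions = [
    {"question": "Are teachers using the program assessments?",
     "stage": "activities"},
    {"question": "Is the program implemented with fidelity in all classrooms?",
     "stage": "medium_term_outcomes"},
    {"question": "Have student reading scores improved?",
     "stage": "long_term_outcomes"},
]

# Group questions by the stage of the logic model they address, so the team
# can see which parts of the model each round of data collection will cover.
for stage in logic_model:
    questions = [q["question"] for q in evaluation_questions if q["stage"] == stage]
    if questions:
        print(f"{stage}:")
        for q in questions:
            print(f"  - {q}")

Laying the model out this way makes visible which stages have no questions attached yet, which can prompt the kind of discussion described in the reading-intervention case study above.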
Case Study Brief – Developing surveys that represent core program ideas. For an evaluation of an arts program in a public school system, PCG evaluators, in conjunction with program staff, developed survey instruments by first establishing a list of categories of inquiry. Questions were developed within each category using program materials, a review of the literature, and interviews with staff, teachers, and artists at school sites. Survey items were reviewed with the program staff to confirm that the survey reflected their theory of action, tapped core ideas about program implementation and impact, and used local context-based language understood by all stakeholders.
Case Study Brief – Optimizing data collection within the context of the program. In a PCG evaluation of a state-sponsored coaching program to support schools not meeting annual performance goals, the state was interested in whether coaches, placed in schools to provide content expertise, supported student growth. The evaluation team designed an evaluation plan to incorporate data collected from a number of sources including school visits, and worked closely with program staff to create the conditions for successful data collection. Based on experiences with a prior program, principals and teachers were anxious about the placement of coaches from “the state.” As a result, the coaches and state program staff had worked to build trust at their sites, which had to be maintained during the school visits conducted by the evaluation team. Therefore, it was collectively decided that a representative from the program would travel with the evaluation team and would participate in interviews with school staff as part of the evaluation. This clarified that the evaluation was supported by program staff, facilitated the collection of data, and put school staff at ease.
Phase 2 – Full Engagement
Collaborative evaluation enhances the quality of communication and level of trust with the client, which contributes significantly to the process of ongoing implementation, evaluation activities, and program improvement. After working in concert to articulate the theory of action and design the evaluation, the evaluation team and program staff are ready to fully engage with each other and the evaluation activities. Components of this phase are repeated in an ongoing cycle of data collection, analysis, reporting, and use of evaluation results. As the program and the evaluation evolve, this phase also includes periodically revisiting the evaluation plan to rethink evaluation questions in light of findings, any program developments that might influence the evaluation design or outcomes, and new questions that emerge.
In a collaborative evaluation, designing data collection tools is undertaken as a partnership between the evaluation team and program staff to ensure that the tools will appropriately measure the implementation and impact of a particular program (Lusky & Hayes, 2001). During this phase, evaluators and program staff come to consensus around the questions: What is the available evidence to answer the evaluation questions? How can we most effectively answer the evaluation questions? What is feasible to collect? Tools developed might include focus group or interview protocols, surveys, and observation checklists. In addition to deciding what evidence can and should be collected, the evaluation team works collaboratively with program staff to optimize data collection opportunities. Program staff have knowledge of the climate and opportunities for data collection that will least interrupt the flow of daily program activities and will allow the evaluators to experience the program in an authentic way. Program staff can support data collection efforts by communicating needs directly to sites or staff. Involving program staff in data collection builds their understanding of both the evaluation process and the findings. The evaluation team shares evaluation and content expertise with program staff, and program staff share deep knowledge of their work and the context in which it is done. Collaborating in the data collection process builds staff capacity to conduct ongoing progress monitoring using these instruments beyond the end of the formal evaluation study.
Depending on the nature of the data collected, data analysis may follow a similarly collaborative process. Evaluators bring technical and conceptual expertise to the analysis of quantitative and qualitative data gathered, but it is through the expertise shared by program staff and collaborative dialogue that it becomes clear which types of analyses will be most meaningful to program staff and other stakeholders. For example, individual schools and districts may wish to see their own survey or achievement data, whereas state administrators may be most interested in data aggregated by region or level of urbanicity. In addition, evaluators may bring findings to program staff as they emerge in order to collaboratively brainstorm possible explanations and additional analyses to pursue. For example, if a program designed to increase literacy achievement for all students seems to have a particularly large effect on students classified as English language learners, stakeholders may wish to delve more deeply into these data to more fully understand this finding.
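As a simple illustration of reporting the same data at different levels for different audiences, the sketch below uses the pandas library in Python with entirely hypothetical districts, schools, and survey scores; it computes school-level and district-level means from a single table.

# Illustrative sketch: aggregating the same (hypothetical) survey results
# at different levels for different audiences. Requires the pandas library.
import pandas as pd

# Hypothetical survey responses; in practice these would come from the
# instruments designed during the evaluation.
responses = pd.DataFrame({
    "district": ["North", "North", "North", "South", "South", "South"],
    "school":   ["Oak",   "Oak",   "Elm",   "Pine",  "Pine",  "Cedar"],
    "score":    [3.8,     4.2,     3.5,     4.6,     4.1,     3.9],
})

# School staff may want to see their own school's results...
by_school = responses.groupby(["district", "school"])["score"].mean()

# ...while state administrators may prefer results aggregated by district.
by_district = responses.groupby("district")["score"].mean()

print("Mean score by school:\n", by_school, "\n")
print("Mean score by district:\n", by_district)

The same pattern extends to aggregation by region or level of urbanicity when those fields are present in the data.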
Once the data have been analyzed and findings have been compiled, the evaluation team and the program staff must decide upon the most relevant ways (given multiple audiences) and intervals to report findings. Reports of findings should include an interpretation of those findings and recommendations appropriate to the specific context of the program (Poth & Shulha, 2008). Ideally, the reporting schedule should be arranged so that the findings can both inform the ongoing actions of program staff and enhance decision making by stakeholders. The evaluation team may agree to provide a set of interim reports or presentations that help program staff reflect on implementation and impact, including a formal written report with associated materials (e.g., an executive summary, a presentation, or a set of documents tailored for specific stakeholders). A collaborative program evaluation process ensures that evaluation reports and related materials are tools that program staff can use to share information about their program with internal and external stakeholders. These tools can be used by staff to build a wider understanding of the program’s implementation, efficacy, and impact, to share context-sensitive recommendations for greater stakeholder involvement and program improvement, and to provide key information about the program to potential funders. The evaluation team and the stakeholders collaboratively review the evaluation findings. This review reinforces understanding of the results and methods used to obtain them. Based on the review, the evaluation team and the program staff generate additional questions raised by the data and clarify next steps for the evaluation. For example, program staff and evaluators might consider: Given changes in program personnel and more rapid than planned expansion, what measures are most salient to assess progress toward goals? Is there a need for additional or alternate data collection tools?
Case Study Brief – Formative feedback supports attainment of project goals. In the evaluation of the same arts-integration program mentioned earlier, the annual evaluation report included information about program selections made at each school over time. In considering which aspects of the evaluation report would be most useful to share with school-based site coordinators, program staff decided to share school-by-school and district-level results. Sharing school and district data provided program staff with a platform from which to revisit a discussion of overarching program goals that call for distribution of program selections across multiple art forms and types of arts experiences. Consulting annual program selections and district-level results helped school-based staff ensure that future program selections represent these goals.
Phase 3 – Wrapping Up
In a collaborative evaluation model, the final phase lays the groundwork for program staff to build upon and continue to use evaluation results, even after the conclusion of the evaluation contract. Questions related to program implementation that may be answered during this phase include: How can we make best use of the evaluation findings? Based on current implementation, what steps need to be taken to increase fidelity? How do we create conditions to expand and/or continue our successes? Additional questions specific to the evaluation may also be considered near the conclusion of the evaluation period, such as: What are our evaluation needs going forward? What infrastructure and leadership will support ongoing data collection and use of evaluation results?
At the core of the collaborative program evaluation model is the use of evaluation results, not only to understand program impact and inform decision making, but also to improve program implementation and student outcomes (Cousins & Whitmore, 1998; O’Sullivan, 2012). Use of evaluation data traditionally has not been part of the evaluation cycle. In many types of evaluations, the assimilation of evaluation results and development of a plan based upon them has often been left to the evaluation client. In a collaborative program evaluation, the evaluators may facilitate data-informed action planning to help program staff develop a plan to implement recommendations from the evaluation. Often, this is explicitly built into the process from the beginning. By working together throughout the evaluation process, both partners develop a deeper understanding of how the program operates and what impact is anticipated. Consequently, the evaluation team is better equipped to provide actionable recommendations relative to areas that need improvement.
The evaluation team can also plan with the project staff how they might continue to use the data collection tools or evaluation processes developed for their program to continue to track and monitor the ongoing implementation and the effectiveness of program improvements. Supporting the program staff to develop this capacity facilitates implementation of recommendations and subsequent evaluation of how the recommendations are implemented. Involving program staff with the evaluation design, data collection, review of evaluation results, and discussion of recommendations can position staff to continue the cycle of inquiry and action initiated by the evaluation. In fact, several studies have demonstrated how a collaborative program evaluation process can help a school develop and sustain a learning culture (Hoole & Patterson, 2008; Suárez-Herrera, Springett, & Kagan, 2009). Iterative review of evaluation results and recommendations builds what Fitzpatrick (2012) calls “evaluative ways of thinking—questioning, considering evidence, deliberating” into the life of an educational organization.
Case Study Brief – Facilitated action planning based on evaluation results builds a learning culture. In the evaluations of several Smaller Learning Community (SLC) grant recipients, the evaluations were structured to provide program staff with frequent data to inform progress toward program goals. This included survey and interview data related to the extent to which the program created a learning environment conducive to student growth, increased the achievement of all students, and established a schoolwide culture that supported more personalized learning. Achievement data were also examined. Coaching from the evaluation team was built into the evaluation to develop staff capacity to make data-informed instructional decisions. Coaching supported the staff to write meaningful objectives, delineate action-oriented steps, and identify success indicators as part of their plan to advance the program.
Conclusions
The primary reason that program staff can trust the findings of a quality collaborative program evaluation is that they know the evaluator understands their context and their concerns and will work with them to achieve their goal of continuous program improvement. As described in this white paper, the benefits of collaborative program evaluation include:
• an evaluation design based on sound principles that reflects the nuances of the program being evaluated;
• an evaluation design that is flexible and adaptable to the purposes of the evaluation;
• increased validity of results;
• greater buy-in among stakeholders in both the process and results;
• development of a culture of inquiry among program staff;
• development of program staff’s capacity to continue to monitor progress toward program goals beyond the duration of the evaluation period; and
• potential cost efficiencies.
The collaborative program evaluation model allows the evaluation team and program staff to stand shoulder-to-shoulder in determining how to improve program implementation and effectiveness, thereby increasing the probability of improved student outcomes. In this type of evaluation, evaluators apply appropriate data collection and analysis methods to determine whether the program is having the desired impact and provide recommendations for program improvements. While a collaborative program evaluation requires an ongoing commitment by all parties, it also produces high value to stakeholders and greatly increases the likelihood that educational programs will meet their intended goals and objectives.
Course Manuals
Course Manual – Customer Service
Customer service is the provision of service to customers before, during and after a purchase. Customer service is a series of activities designed to enhance the level of customer satisfaction – that is, the feeling that a product or service has met the customer’s expectations. The importance of customer service may vary by product or service, industry and customer. The perception of success of such interactions will depend on employees “who can adjust themselves to the personality of the guest,” according to Micah Solomon. From the point of view of an overall sales process engineering effort, customer service plays an important role in an organization’s ability to generate income and revenue. From that perspective, customer service should be included as part of an overall approach to systematic improvement. A customer service experience can change the entire perception a customer has of the organization.
Some have argued that the quality and level of customer service has decreased in recent years, and that this can be attributed to a lack of support or understanding at the executive and middle management levels of a corporation and/or the absence of a clear customer service policy. To address this argument, many organizations have employed a variety of methods to improve their customer satisfaction levels and other key performance indicators (KPIs).
Good customer service involves developing bonds with customers, hopefully leading to long-term relationships. It creates advantages for both customers and the business alike. Customers benefit because the business is providing a service that meets their needs. The business benefits because satisfied customers are likely to be repeat customers; they will stay with the business. However, good customer service is not easily achieved. It takes time to establish and requires investment to deliver consistent standards.
A large part of customer service success is creating a seamless experience: customer needs are anticipated, systems are in place, and employees are trained. The company runs like a well-oiled machine. But what happens when the unexpected occurs, when customers have an unusual request or simply don’t know the rules of the system? The unexpected provides the opportunity to stretch the system, improve the system, or even forget the system and impress a customer.
Consistently deliver upon promises
Customers are more likely to feel loyalty towards a company if they can be sure they are going to get exactly what they need from the company without any delays or problems. The company should not make promises that it cannot keep, as this may damage its reputation.
Focus on detail
Very often it is the little things that make customers feel valued by a company and therefore they are likely to remain loyal. Small touches such as addressing customers by their name and showing a genuine interest in the customers can make all the difference.
Providing value
If customers know they can rely on the product or service and it is going to serve their needs then they will continue to use it. The company may need to adapt the product or service as required in order for it to continue to meet customer needs.
Customer loyalty
Reward loyal customers by letting them become the first to know about any upcoming offers or promotions that the company is offering. This can be extended by making special deals available to selected customers before they are rolled out to everyone.
Exceed expectations
Exceptional customer service is a great way for a company to earn loyalty from its customers. Customers will remember a brand when the company goes out of its way to help them. It is this kind of experience that customers will share with others.
Problem resolution
Experiencing problems with a product that customers have purchased from a company may not in itself be a disaster, especially if the customer is loyal and has had no previous problems with the company. What can cause a problem though is if the customer finds it difficult or time consuming to rectify the issue.
Incentive schemes
A loyalty program where the customer can accumulate points which can be redeemed for money off purchases can be the difference if the customer is choosing between two similar businesses. Customers like to feel rewarded for their loyalty. This kind of program can also be used as a marketing strategy.
Employee loyalty
Customers can tell if an employee is really dedicated to the company they work for or whether they are only there for the money. An employee’s enthusiasm for the company and for the products or services they offer can be infectious. This will create a more enjoyable experience for the customers and make them more likely to return.
Customer feedback
This will serve two purposes. Firstly, it will make customers feel engaged with the company, which will enhance their feelings of loyalty. Secondly, it can give the company the chance to see what it is doing right and whether there are any areas where it can improve.
Companies spend a lot of money on marketing in order to devise strategies that will attract new customers. While these strategies are important, remember that they will be most effective when they are implemented with the needs of the customers in mind. As much effort should be put into retaining existing, loyal customers as is put into attracting new ones.
Collaborative Evaluation – Internal Analysis
The core objective of the Customer Service section of the course manual is to enable the Head of the Customer Service Department to implement a process within their own department, with a view towards undertaking a thorough and detailed internal analysis into the history, current position and future outlook for Collaborative Evaluation within their department. In other words, we need to ascertain how much Collaborative Evaluation has been used within Customer Service projects, how successful it has been and how things could be improved. Is the department centralized or decentralized in its management structure? In other words, is corporate strategy developed from the top down, or from the bottom up? This will directly impact upon the way in which Collaborative Evaluation is used and interpreted. Remember: Collaborative Evaluation is an approach that actively engages program stakeholders in the evaluation process. When stakeholders collaborate with evaluators, their understanding increases and the utility of the evaluation is often enhanced. So, you will need to select either a large project that has recently been undertaken within your department, or a number of smaller projects that have recently been undertaken within your department, identify who the program stakeholders were in each case, and then consider how Collaborative Evaluation was used, whether it was successful and how it could be improved.
Collaborative Evaluation systematically invites and engages stakeholders in project evaluation planning and implementation. Unlike “distanced” evaluation approaches, which reject stakeholder participation as evaluation team members, Collaborative Evaluation assumes that active, ongoing engagement between evaluators and program staff results in stronger evaluation designs, enhanced data collection and analysis, and results that stakeholders understand and use. Collaborative Evaluation distinguishes itself in that it uses a sliding scale for levels of collaboration. This means that different program evaluations will experience different levels of collaborative activity. The sliding scale is applied as the evaluator considers each project’s evaluation needs, readiness, and resources. While Collaborative Evaluation is a term widely used in evaluation, its meaning varies considerably. Often used interchangeably with participatory and/or empowerment evaluation, the terms can be used to mean different things, which can be confusing. A comparative Collaborative Evaluation Framework can be used to highlight how, from a theoretical perspective, Collaborative Evaluation distinguishes itself from other participatory evaluation approaches. Collaborative processes are being promoted as an alternative decision-making approach for management. This is a relatively recent phenomenon, and, given its growing popularity, it is important to develop and apply methods and criteria for evaluation, to determine strengths and weaknesses, and to identify best practices for effective use of the collaborative model. Evaluation based on multiple criteria and at several points in time can assist those involved in designing and organizing collaborative processes to ensure the process is responsive to stakeholders’ needs and achieves its objectives. The success of both the process and the outcome of collaborative processes can be effectively appraised using participant surveys. Evidence from case studies of collaborative approaches shows that these processes can generate higher-quality, more creative and more durable agreements that are more successfully implemented due to increased public buy-in and reduced conflict. Collaboration can generate social capital by facilitating improved relationships between stakeholders, generating new stakeholder networks, enhancing communication skills, and co-producing new knowledge with stakeholders. However, collaborative processes are a relatively recent phenomenon, particularly when compared with historical planning and decision-making processes.
“Is our project working?” This is a key question, particularly in this era of heightened accountability. A collaborative project evaluation model is an extremely useful way to answer this question when departments want to find out if their initiatives are achieving the intended outcomes, as well as why this is the case. In the collaborative project evaluation model, the client works with the external evaluator to determine the questions that will be explored through the evaluation. They continue to work collaboratively to ensure that the context is understood, that multiple stakeholder perspectives are taken into account, and that data collection instruments are appropriate in content and tone. The model produces data that can proactively inform program implementation, provide formative information that supports project improvement, and offer summative information on the effectiveness of the project.
Collaborative evaluation is a proactive evaluation model that enables project staff to engage in continuous project improvement. Specific benefits of the model include:
A customized evaluation design that reflects the nuances of the project being evaluated.
An evaluation design that is flexible and adaptable to the purposes of the evaluation and to changes in project implementation over time.
Increased validity of results.
Greater buy-in among stakeholders with both the data collection process and the evaluation findings.
Development of project staff’s capacity to continue to monitor their progress toward program goals beyond the duration of the evaluation.
Development of a culture of inquiry among project staff.
Potential cost efficiencies.
Each of these benefits is described in detail below:
Address project nuances
All evaluators should tailor evaluation services to the needs of each client (Patton, 2002). In the collaborative evaluation model, this is accomplished by evaluators working closely with project staff to identify evaluation questions and engage in an evaluation process that is attuned to the needs of project staff and stakeholders. As a result of the close knowledge built through collaborative program evaluations, such studies also guide project staff to identify and capitalize on external and internal project networks that they can tap to help them to achieve project goals (Fitzpatrick, 2012).
Flexible design
In a collaborative evaluation, continuous communication at the outset between project staff and the evaluation team is essential for laying the groundwork for mutual understanding. Ongoing communication is also a key ingredient for ensuring that the evaluation plan continues to be relevant to the project. By communicating regularly about project developments and context, evaluators can make adjustments in the evaluation plan to accommodate changes in the project.
Increased validity of results
Another benefit of working collaboratively with project staff in developing the evaluation is increased validity of the study. Because the evaluation team develops a high level of understanding of the project, data collection can be designed to accurately capture aspects of interest, and appropriate inferences and conclusions can be drawn from the data that are collected.
Greater buy-in for results
Engaging an experienced outside evaluator alone increases the validity of the study and the credibility of the findings. The use of a collaborative project evaluation also improves buy-in for the study’s results from a variety of stakeholders. Staff members who actively participate in the evaluation better understand how the results can be used to facilitate project improvement, while administrators and other decision makers are more likely to have confidence in the results if they are aware that project staff helped inform elements of the evaluation study (Brandon, 1998).
Increased ability to monitor progress
The evaluation team works with project staff to develop tools to measure desired outcomes of the project. Because tools are designed in collaboration with project staff, staff are better able to understand the purpose of the tools and what information can be gleaned from each. This makes it more likely that staff will feel comfortable with and use the instruments to collect data in the future to monitor ongoing progress, an added benefit to the client.
Development of a culture of inquiry
Because use of evaluation results is a primary goal of collaborative evaluation, the evaluation team may also facilitate a process in which practitioners examine data on project implementation and effectiveness throughout early stages of the evaluation. This process of reviewing evaluation results can foster the development of a culture of inquiry among project staff and support the goal of continuous improvement.
Potential cost efficiencies
There are several ways that a collaborative project evaluation can reduce costs in the short term and over time. There can be immediate cost savings because evaluation resources are tightly coupled with the project’s stage of development. The model can help avoid costly data collection strategies and analytic approaches when there is little to measure because the project is in a nascent stage of implementation. Cost savings may also emerge over time because of project improvements based on formative feedback. Additional savings may be found as the evaluation team develops the internal capacity of project staff through their active participation in the design and execution of the evaluation. With increased capacity, the project staff can then continue the progress monitoring process by themselves.
The collaborative evaluation process incorporates four phases: planning; implementation; completion; and dissemination and reporting. These complement the phases of project development and implementation. Each phase has unique issues, methods, and procedures.
Course Manual – E-Business
E-business is the application of information and communication technologies (ICT) in support of all the activities of business. Commerce constitutes the exchange of products and services between businesses, groups and individuals and can be seen as one of the essential activities of any business. Electronic commerce focuses on the use of ICT to enable the external activities and relationships of the business with individuals, groups and other businesses. The term “e-business” was coined by IBM’s marketing and Internet teams in 1996. Electronic business methods enable companies to link their internal and external data processing systems more efficiently and flexibly, to work more closely with suppliers and partners, and to better satisfy the needs and expectations of their customers. The internet is a public thoroughfare. Firms use more private and hence more secure networks for more effective and efficient management of their internal functions.