Collaborative Evaluation – Workshop 1
The Appleton Greene Corporate Training Program (CTP) for Collaborative Evaluation is provided by Ms. Gordon MPH MS Certified Learning Provider (CLP). Program Specifications: Monthly cost USD$2,500.00; Monthly Workshops 6 hours; Monthly Support 4 hours; Program Duration 12 months; Program orders subject to ongoing availability.
If you would like to view the Client Information Hub (CIH) for this program, please Click Here
Learning Provider Profile
Ms. Gordon is a Certified Learning Provider (CLP) at Appleton Greene and she has experience in management, human resources and marketing. She has achieved a Master's in Public Health (MPH) and a Master's in Anthropology (MS). She has industry experience within the following sectors: Education; Healthcare; Non-Profit & Charities; Technology and Consultancy. She has had commercial experience within the following countries: United States of America, or more specifically within the following cities: Washington DC; New York NY; Philadelphia PA; Boston MA and Chicago IL. Her personal achievements include: facilitated twenty-two programs on campus that underwent accreditation processes for a USA university; implemented an evaluation and accreditation process for an Accreditation Council; and co-developed the first web-based, online, integrated accreditation system in the United States and the world. Her service skills incorporate: learning and development; management development; business and marketing strategy; marketing analytics and collaborative evaluation.
Mission Statement
The first stage of the program is to understand the history, current position and future outlook relating to collaborative evaluation, not just for the organization as a whole, but for each individual department, including: customer service; e-business; finance; globalization; human resources; information technology; legal; management; marketing; production; education and logistics. This will be achieved by implementing a process within each department, enabling the head of that department to conduct a detailed and thorough internal analysis to establish the internal strengths and weaknesses and the external opportunities and threats in relation to collaborative evaluation, and to produce a MOST analysis (Mission; Objectives; Strategies; Tasks), enabling them to be more proactive about the way in which they plan, develop, implement, manage and review collaborative evaluation within their department.
Objectives
01. Obtain a clear understanding of the core objective of Workshop 1. Time Allocated: 1 Month
02. Analyse the history of Collaborative Evaluation within your department. Time Allocated: 1 Month
03. Analyse the current position of Collaborative Evaluation within your department. Time Allocated: 1 Month
04. Analyse the future outlook of Collaborative Evaluation within your department. Time Allocated: 1 Month
05. Analyse internal strengths and weaknesses, relating to Collaborative Evaluation, within your department. Time Allocated: 1 Month
06. Analyse external opportunities and threats, relating to Collaborative Evaluation, within your department. Time Allocated: 1 Month
07. Identify and engage up to 10 Key Stakeholders within your department. Time Allocated: 1 Month
08. Identify a process that would enable your stakeholders to decentralize Collaborative Evaluation. Time Allocated: 1 Month
09. Estimate the likely costs and the ongoing financial budget required for this process. Time Allocated: 1 Month
10. Estimate the likely hours and the ongoing time budget required for this process. Time Allocated: 1 Month
Strategies
01. Each department head is to personally set aside time to study Workshop 1 content thoroughly.
02. List the key projects that have been undertaken historically within your department and analyse whether and how Collaborative Evaluation was used and where it was successful.
03. List the key projects that are currently being undertaken within your department and analyse whether and how Collaborative Evaluation is being used and where it is proving successful.
04. List the key projects that are scheduled to be undertaken in the future within your department and analyse how Collaborative Evaluation can be used in order to ensure success.
05. Research internal strengths and weaknesses, relating to Collaborative Evaluation, within your department.
06. Research external opportunities and threats, relating to Collaborative Evaluation, within your department.
07. Review the files and resumes of employees within your department in order to identify those with a collaborative nature.
08. Research and identify a process that would enable your stakeholders to decentralize Collaborative Evaluation.
09. Liaise with the Finance department to evaluate the likely costs and the ongoing financial budget required for this process.
10. Liaise with the Human Resources department to evaluate the likely hours and the ongoing time budget required for this process.
Tasks
01. Read through the entire workshop content while making notes including: Profile; MOST; Introduction; Executive Summary; Curriculum; Distance Learning; Tutorial Support; How To Study; Preliminary Analysis; Course Manuals; Project Studies; Benefits.
02. Create a task on your calendar, to be completed within the next month, in order to list and analyse historical projects.
03. Create a task on your calendar, to be completed within the next month, in order to list and analyse current projects.
04. Create a task on your calendar, to be completed within the next month, in order to list and analyse future projects.
05. Create a task on your calendar, to be completed within the next month, in order to research and analyse internal strengths and weaknesses, relating to Collaborative Evaluation, within your department.
06. Create a task on your calendar, to be completed within the next month, in order to research and analyse external opportunities and threats, relating to Collaborative Evaluation, within your department.
07. Set up interviews with employees within your department in order to identify those with a collaborative nature.
08. Implement a process that will enable your stakeholders to decentralize Collaborative Evaluation.
09. Set up an appointment with the Finance department to evaluate the likely costs and the ongoing financial budget required for this process.
10. Set up an appointment with the Human Resources department to evaluate the likely hours and the ongoing time budget required for this process.
Introduction
Workshop Objective
The first stage of the program is to understand the history, current position and future outlook relating to collaborative evaluation, not just for the organization as a whole, but for each individual department, including: customer service; e-business; finance; globalization; human resources; information technology; legal; management; marketing; production; education and logistics. This will be achieved by implementing a process within each department, enabling the head of that department to conduct a detailed and thorough internal analysis to establish the internal strengths and weaknesses and the external opportunities and threats in relation to collaborative evaluation, and to produce a MOST analysis (Mission; Objectives; Strategies; Tasks), enabling them to be more proactive about the way in which they plan, develop, implement, manage and review collaborative evaluation within their department.
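By way of illustration only, the sketch below (written in Python, with hypothetical names that are not part of the program materials) shows one way a department head might record the results of this internal analysis, pairing a SWOT record with a MOST record for the department.

```python
# A minimal illustrative sketch; all field and class names are hypothetical,
# chosen only to show how a SWOT and MOST analysis might be captured.
from dataclasses import dataclass, field

@dataclass
class SWOTAnalysis:
    strengths: list[str] = field(default_factory=list)      # internal
    weaknesses: list[str] = field(default_factory=list)     # internal
    opportunities: list[str] = field(default_factory=list)  # external
    threats: list[str] = field(default_factory=list)        # external

@dataclass
class MOSTAnalysis:
    mission: str            # why the department pursues collaborative evaluation
    objectives: list[str]   # measurable goals supporting the mission
    strategies: list[str]   # approaches for reaching the objectives
    tasks: list[str]        # concrete actions implementing each strategy

@dataclass
class DepartmentAssessment:
    department: str
    swot: SWOTAnalysis
    most: MOSTAnalysis

# Example usage for one department:
hr = DepartmentAssessment(
    department="Human Resources",
    swot=SWOTAnalysis(
        strengths=["Experienced staff open to collaboration"],
        weaknesses=["No formal evaluation process in place"],
        opportunities=["Upcoming projects create demand for evaluation"],
        threats=["Competing priorities reduce available hours"],
    ),
    most=MOSTAnalysis(
        mission="Embed collaborative evaluation in all HR projects",
        objectives=["Identify 10 key stakeholders within one month"],
        strategies=["Interview staff to find those with a collaborative nature"],
        tasks=["Schedule interviews", "Summarize findings for review"],
    ),
)
```

Capturing the analysis in a consistent structure like this makes it straightforward to compare departments and to revisit the record at each monthly review.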
Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike "distanced" evaluation approaches, which exclude stakeholders from participating as evaluation team members, Collaborative Evaluation assumes that active, ongoing engagement between evaluators and program staff results in stronger evaluation designs, enhanced data collection and analysis, and results that stakeholders understand and use. Collaborative Evaluation distinguishes itself in that it uses a sliding scale for levels of collaboration: different program evaluations will experience different levels of collaborative activity, and the scale is applied as the evaluator considers each program's evaluation needs, readiness and resources. While Collaborative Evaluation is a term widely used in evaluation, its meaning varies considerably. Often used interchangeably with participatory and/or empowerment evaluation, the terms can be used to mean different things, which can be confusing. The program therefore uses a comparative Collaborative Evaluation Framework to highlight how, from a theoretical perspective, Collaborative Evaluation distinguishes itself from other participatory evaluation approaches.
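The sliding scale is a qualitative judgment, but a rough sketch can make the idea concrete. The Python fragment below is a hypothetical illustration only: it assumes a made-up 1-5 scoring of needs, readiness and resources, whereas the actual framework weighs these factors qualitatively.

```python
# A hypothetical "sliding scale" illustration. The 1-5 scores, thresholds,
# and level descriptions are invented for this sketch only.
def collaboration_level(needs: int, readiness: int, resources: int) -> str:
    """Map three 1-5 factor scores to an indicative level of collaboration."""
    average = (needs + readiness + resources) / 3
    if average >= 4:
        return "high collaboration: stakeholders co-design and co-analyze"
    if average >= 2.5:
        return "moderate collaboration: stakeholders review instruments and findings"
    return "light collaboration: stakeholders are consulted at key milestones"

print(collaboration_level(needs=5, readiness=4, resources=3))
```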
Collaborative processes are being promoted as an alternative approach to management decision-making. This is a relatively recent phenomenon and, given its growing popularity, it is important to develop and apply methods and criteria for evaluation, to determine strengths and weaknesses, and to identify best practices for effective use of the collaborative model. Evaluation based on multiple criteria, conducted at several points in time, can assist those involved in designing and organizing collaborative processes to ensure the process is responsive to stakeholders' needs and achieves its objectives. The success of both the process and the outcome of collaborative processes can be effectively appraised using participant surveys.
Evidence from case studies of collaborative approaches shows that these processes can generate higher-quality, more creative and more durable agreements that are more successfully implemented due to increased public buy-in and reduced conflict. Collaboration can also generate social capital by facilitating improved relationships between stakeholders, generating new stakeholder networks, enhancing communication skills, and co-producing new knowledge with stakeholders. However, collaborative processes are a relatively recent phenomenon, particularly when compared with traditional planning and decision-making processes.
“Is our program working?” This is a key question in education today, particularly in this era of heightened accountability. A collaborative program evaluation model is an extremely useful way to answer this question when education organizations want to find out if their initiatives are achieving the intended outcomes, as well as why this is the case. In the collaborative program evaluation model, the client (e.g., districts, states, public and independent schools, non-profits, and foundations) works with the external evaluator to determine the questions that will be explored through the evaluation. They continue to work collaboratively to ensure that the context is understood, that multiple stakeholder perspectives are taken into account, and that data collection instruments are appropriate in content and tone. The model produces data that can proactively inform program implementation, provide formative information that supports program improvement, and offer summative information on the effectiveness of the program.
Collaborative evaluation is a proactive evaluation model that enables program staff to engage in continuous program improvement. Specific benefits of the model include:
A customized evaluation design that reflects the nuances of the program being evaluated.
An evaluation design that is flexible and adaptable to the purposes of the evaluation and to changes in program implementation over time.
Increased reliability of results.
Greater buy-in among stakeholders for both the data collection process and the evaluation findings.
Development of program staff’s capacity to continue to monitor their progress toward program goals beyond the duration of the evaluation.
Development of a culture of inquiry among program staff.
Potential cost efficiencies.
Each of these benefits is described in detail below:
Address program nuances
All evaluators should tailor evaluation services to the needs of each client (Patton, 2002). In the collaborative evaluation model, this is accomplished by evaluators working closely with program staff to identify evaluation questions and engage in an evaluation process that is attuned to the needs of program staff and stakeholders. As a result of the close knowledge built through collaborative program evaluations, such studies also guide program staff to identify and capitalize on external and internal program networks that they can tap to help them to achieve program goals (Fitzpatrick, 2012).
Flexible design
In a collaborative evaluation, continuous communication at the outset between program staff and the evaluation team is essential for laying the groundwork for mutual understanding. Ongoing communication is also a key ingredient for ensuring that the evaluation plan continues to be relevant to the program. By communicating regularly about program developments and context, evaluators can make adjustments in the evaluation plan to accommodate changes in the program.
Increased reliability of results
Another benefit of working collaboratively with program staff in developing the evaluation is increased reliability of the study. Because the evaluation team develops a high level of understanding of the program, data collection can be designed to accurately capture aspects of interest, and appropriate inferences and conclusions can be drawn from the data that are collected.
Greater buy-in for results
On its own, engaging an experienced outside evaluator increases the reliability of the study and the credibility of the findings. The use of a collaborative program evaluation further improves buy-in for the study's results from a variety of stakeholders. Staff members who actively participate in the evaluation better understand how the results can be used to facilitate program improvement, while administrators and other decision makers are more likely to have confidence in the results if they are aware that program staff helped inform elements of the evaluation study (Brandon, 1998).
Increased ability to monitor progress
The evaluation team works with program staff to develop tools to measure desired outcomes of the program. Because tools are designed in collaboration with program staff, staff are better able to understand the purpose of the tools and what information can be gleaned from each. This makes it more likely that staff will feel comfortable with and use the instruments to collect data in the future to monitor ongoing progress, an added benefit to the client.
Development of a culture of inquiry
Because use of evaluation results is a primary goal of collaborative evaluation, the evaluation team may also facilitate a process in which practitioners examine data on program implementation and effectiveness throughout early stages of the evaluation. This process of reviewing evaluation results can foster the development of a culture of inquiry among program staff and support the goal of continuous improvement.
Potential cost efficiencies
There are several ways that a collaborative program evaluation can reduce costs in the short term and over time. There can be immediate cost savings because evaluation resources are tightly coupled with the program’s stage of development. The model can help avoid costly data collection strategies and analytic approaches when there is little to measure because the project is in a nascent stage of implementation. Cost savings may also emerge over time because of program improvements based on formative feedback. Additional savings may be found as the evaluation team develops the internal capacity of program staff through their active participation in the design and execution of the evaluation. With increased capacity, the program staff can then continue the progress monitoring process by themselves.
The collaborative evaluation process incorporates four phases: planning; implementation; completion; and dissemination and reporting. These complement the phases of program development and implementation. Each phase has unique issues, methods, and procedures.
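As a simple illustration of this sequence, the sketch below (in Python, with hypothetical names; not part of the program materials) walks an evaluation through the four phases in order.

```python
# A brief sketch of the four evaluation phases named above and the order
# in which they occur. Names and structure are hypothetical.
from enum import Enum
from typing import Optional

class Phase(Enum):
    PLANNING = 1
    IMPLEMENTATION = 2
    COMPLETION = 3
    DISSEMINATION_AND_REPORTING = 4

def next_phase(current: Phase) -> Optional[Phase]:
    """Return the phase following `current`, or None after the final phase."""
    members = list(Phase)
    index = members.index(current)
    return members[index + 1] if index + 1 < len(members) else None

phase: Optional[Phase] = Phase.PLANNING
while phase is not None:
    print(f"Phase {phase.value}: {phase.name.replace('_', ' ').title()}")
    phase = next_phase(phase)
```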
Collaborative Evaluation – History
Most companies or organizations want to attain sustainable, successful goals for themselves, their employees and their communities of interest. But more often than not, they have difficulty achieving them. It's the "getting there" that's hard. This corporate training program features a process-driven system that can help. For those committed to and serious about reaching their goals, success becomes almost inevitable. The system is based on efficiency, collaboration, quality, rigor and contemporary technology. What is more, it emphasizes excellence.
In describing the process, I will also note aspects of my own background that will, hopefully, provide some context for the explanation of this system. My professional career over the past three decades featured teaching, training, mentoring and consulting, with an emphasis on evaluation and assessment in the area of accreditation in higher education. The evaluations revolved around a set of standards that were developed and approved over a period of time by members of a board of directors of an evaluation commission and members-at-large of an associated profession. As well, policies and procedures were written relating to the process, but the standards were the top priority and 'written in stone'. These standards were mandated requirements for programs to have in place before receiving any approval or accreditation from the board. Much of my experience centered on developing, writing and directing all of the processes involved in the evaluation. As noted, the area of focus was higher education, specifically within the healthcare professions. The evaluations took place primarily within the United States, but several were conducted abroad, and all were in higher education or learning institutions. In those outside of the United States, there may have been some cultural and language modifications made to the processes, but the predominant qualitative standards were similar throughout the world.
To achieve the long-term goal of what was known as accreditation in universities, there was a detailed peer-reviewed process that needed to be closely followed and monitored over the course of several years. The actual process was completed by the program using a web-based system, the first of its kind in accreditation within the United States and abroad. More will be discussed about this integrated (and online) system under "Future Outlook."
The process was rigorous, with articulated goals, i.e., principles of quality, self-reflection and collaboration. A primary objective was to require institutions to demonstrate both the strengths and weaknesses of their programs. In this way, there was an honest approach on the part of all professionals involved, i.e., the staff, faculty, senior administrators and evaluators. In a sense, everyone started on an equal footing and received equal treatment. This was considered an extremely important component of the process by the Board of Directors. A second critical objective was the approach the evaluators took. It had to be clear, comprehensive and unbiased; in other words, not punitive. These objectives were stated at the outset of the process to members of departments and/or programs. The board also wanted all faculty and staff to know that they, the board, wanted them to succeed. Therefore, if institutions were serious and committed to the process and eager to improve (even if they already considered themselves stellar), a win-win situation could occur. This was gratifying to everyone concerned. The agency strongly believed in this management style and found the benefits to be plentiful. Most significantly, program participants began to enjoy the process, felt less constrained when responding to questions and believed they were partners in the evaluation process. This was extraordinarily valuable to them and they also expressed an eagerness to hear how they could improve. The result of this peer-reviewed process, with ongoing progress reports over a period of time, was that there was, under most circumstances, a stability of quality and continual improvement in the educational programs accredited. This was an organized effort to reach purposeful goals and will be further explained in the Future Outlook section.
Collaborative Evaluation – Current Position
My current work continues to be in the area of evaluation, but as a consultant, beginning in 2018. On December 31, 2017, I retired from my former full-time position as Executive Director of the accreditation agency I founded. My present consulting remains in the area of education/healthcare and accreditation. Throughout 2018, I consulted with programs undergoing Developing Status. These programs (as opposed to ones undertaking accreditation evaluations) are starting from a 'blank page' and have never educated students in a specialized content area or at a terminal or doctoral level. This type of application is also referred to as a candidacy application.
The program submitting this type of application describes its intentions as fully as possible and explains how it will go about achieving the necessary components of its curriculum. Since the program is in its infancy or preliminary stages, it is not necessary for each area of the curriculum to be completely in place at this time. Instead, the administration (within the relevant university) will discuss the plan it has for its related course of study.
During the Developing Status stage, program administrators submit an application in the form of a questionnaire to the accrediting agency. The purpose of the application is to ask how the program plans to achieve the goals it has for its curriculum and operating business. Each question pertains to a specific area, such as mission, goals and objectives, governance, policies, recruitment and admissions, faculty development, curriculum, methods of instruction, finances, facilities, student achievement, resources and advisement, and other categories pertaining to this initial level of development.
A discussion takes place about when the program wishes to admit students, and the program acknowledges that it will not do so until it receives approval from the accrediting body. Thus, if a program wishes to admit students in August/September of a given year and wants to recruit students at the beginning of that year, it needs to submit its application in sufficient time prior to the recruitment stage. Once an application is received, the Chair of the board designates two evaluators to review the Developing Status materials. These evaluators are usually members of the board.
The next step is for the evaluators to plan a fact-finding visit to the institution within the next two to three months. At a mutually convenient time for the program and the accrediting body, a visit takes place. The purpose of the visit is to determine whether the program is 'on track' with its development and headed in the right direction. This means evaluators on-site can verify the materials they received and meet with the institution's senior administrators, faculty, practitioners and others who have been influential in the designing and planning of the program. It also is an opportunity for the program's administrators to ask evaluators questions about the program's progress and, in return, to receive responses. This usually is a collaborative, supportive meeting, spurring the program to move quickly, provided it is headed down the right path.
Once the Developing Status evaluation is complete, the two evaluators write a report of their findings and submit it to the board. The board works quickly to discuss the pros and cons of the evaluation and within two weeks to one month, provides a decision to the university or college about whether to approve or deny the report. It could also ask for additional information, if needed. A formal letter explaining the decision is written to the administrator in charge of the program. Once the letter is received and approval is given, the program can proceed with its timeline for development. If a program is deferred and a request for additional information is made, the program’s administrator can speak with the director of the accrediting body about its next steps. The timeline for submitting new or additional information would be explained in the letter sent. Once new materials are submitted and approved, the deferral is removed and Developing Status is awarded. If the board decision is to deny Developing Status, the program could re-apply, if it wishes, at a time mutually decided on by the accrediting body and the program. In most cases, the program will have an idea of whether it is on the correct path after its fact-finding meeting with the evaluators.
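The decision flow described above can be summarized as a small state machine. The sketch below is an informal illustration with hypothetical names; the actual board process is deliberative rather than mechanical.

```python
# A schematic sketch of the Developing Status decision flow: the board
# approves, defers pending additional information, or denies, and a
# deferral is lifted once new materials are approved. Names are hypothetical.
from enum import Enum, auto

class Status(Enum):
    SUBMITTED = auto()
    APPROVED = auto()
    DEFERRED = auto()
    DENIED = auto()

def board_decision(status: Status, decision: str) -> Status:
    """Apply a board decision ('approve', 'defer', 'deny') to an application."""
    transitions = {
        (Status.SUBMITTED, "approve"): Status.APPROVED,
        (Status.SUBMITTED, "defer"): Status.DEFERRED,
        (Status.SUBMITTED, "deny"): Status.DENIED,
        # New materials submitted under a deferral are reviewed again:
        (Status.DEFERRED, "approve"): Status.APPROVED,
        (Status.DEFERRED, "deny"): Status.DENIED,
    }
    try:
        return transitions[(status, decision)]
    except KeyError:
        raise ValueError(f"Invalid decision {decision!r} for status {status}")

application = board_decision(Status.SUBMITTED, "defer")  # more information requested
application = board_decision(application, "approve")     # deferral removed
print(application)  # Status.APPROVED
```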
Collaborative Evaluation – Future Outlook
I have been involved in each aspect of the evaluation system noted for almost three decades and have believed in its virtues for the entire time. I value the fact that one engages in a process that is collaborative, self-reflective and stresses improvement on a regular basis. Although aspects of the process are time-consuming, I see enormous merit in the benefits it provides to individuals, teams and programs. These benefits transcend any factors that may seem tedious. I also believe this type of process-driven assessment can be adapted to numerous settings in both for-profit and non-profit companies. The important fact is that there is a consistent emphasis on excellence, efficiency, quality and rigor, as well as a consistency to the questions asked within the formal structure. These characteristics are embedded into the processes. What is more, they can be replicated in institutions and businesses throughout the world.
How can this be done? As an example, in 2004, the accreditation body, in conjunction with an outside technology company, developed an integrated web-based, online system that incorporated all aspects of the accreditation process. It eliminated a former paper trail that was moribund, convoluted and had little long-term effect. The system our group wanted to devise was similar to "Turbo Tax", an online system used by many Americans to prepare their federal, state and city taxes. The idea was to enter data, update it as necessary and have permanent, easy access to it. We developed a similar electronic process and found that it strengthened the bond between the agency and its programs.
There were many components built into the system. For example, a constructive online interaction between an academic program and its faculty was developed as they pursued questions about their program. The Program Director or overall administrator could assign standards to faculty members, and the entire group could meet electronically to privately discuss their assignments. In addition, programs had the ability to dialogue electronically with site-team evaluators after they submitted their application to the agency and their program's materials were closed to further changes. The evaluators received access to these documents through a special portal to study them and ask for clarifications, as necessary. The program also had the ability to respond. This type of efficient conversation, known as the Interactive Evaluation, was helpful to all parties and allowed evaluators to develop a better understanding of the program prior to their physical visit on campus. This meant they could spend more time on-site verifying particular areas that still needed resolution. It also allowed them to see relevant facilities at the institution and meet with faculty, groups of students, senior administrators and clinicians. The period of time for a site-visit is two to three days, and the time goes quickly. At the conclusion of this visit, the evaluators submitted a Preliminary Report online, which was sent to the senior administration of the institution prior to the exit meeting of the visit. This value-added benefit was facilitated by the web-based system. Comments received by the agency about these online procedures, specifically the Interactive Evaluation and Preliminary Report, were very positive.
Over and above the benefits provided to the agency and programs, there were benefits the institutions found valuable during their own internal reviews. In addition, the aggregate data garnered from programs could be used to assess the health of the profession at any given time and be of assistance to other stakeholders in the field, such as professional organizations or government agencies. There is also great value in using components of the process for 'training the trainer or evaluator'. The same commitment and dedication to educating a 'trainer' is as significant as evaluating a program. For example, the 'trainer' has to learn how to be a role model and exemplify the attributes needed to encourage the development of a qualitative program.
In conclusion, this process-driven program is a concrete example of how the many components of a curriculum could be integrated and easily utilized online. The value of being able to organize and analyze hundreds of pieces of data, what I have called synchronized conceptual thinking, has proven to be of benefit to academic institutions. Although the system was established for a particular healthcare profession, it was also designed so it could be adapted for other groups within the United States and abroad. The processes developed were meant to be productive for a variety of institutions and sustainable over a long period of time.
Executive Summary
Collaborative Evaluation – History
The Model for Collaborative Evaluation (MCE) project should begin with a discussion of the five fundamental components outlined in the Client Information Hub (CIH) under the Executive Summary, Methodology and Program Objectives sections. These areas comprise a recap of the importance of Collaborative Evaluation, the development or revision of a Feasibility Study, the marketing research needed for the entire project, an optional software product to bring online efficiency to the project, and the related fund-raising activities that may be necessary if development of the platform were undertaken. These conversations would be held at the first workshop in Year 1, Month 1, and each of these topics would require substantial time, such as 1-2 hours over the course of the six-hour day. Below are brief historical descriptions for each fundamental area.
Collaboration
Collaborative Learning can be traced back over 40,000 years to ancient times. The basic human drive has always been to communicate and to pass on and share information for posterity. We have seen this in cave paintings and drawings in prehistoric Australia and in Egyptian hieroglyphics, where the Second Dynasty civilization took pictures to another level, i.e., combining them in sequence to form simple sentences. This type of communication turned single moments into paragraphs or prose.
Through the ages, the means of communication developed rapidly. In the latter half of the 20th Century, collaborative learning progressed and research demonstrated this fact. Students learned more and recalled more when they became partners in the learning process instead of passive recipients of knowledge. Before that, in ancient times, small close-knit communities made it possible for leaders like Confucius, Buddha, Jesus and Muhammad to emphasize learning through personal interaction as opposed to texts and scriptures. This was evident, too, in other cultures, such as the Malaysian/Indonesian idea of Gotong royong, the spirit of mutual help in a society or community.
But these tiny communities disappeared throughout the world as urbanization began to develop. With the ability to travel and migrate, the ancient ways of learning made way for a new paradigm of learning through formalized curricula using a system of lectures, texts and examinations. Dewey, Alpert, Piaget and Vygotsky, well-known theorists, all supported different theories of social and collaborative learning. These schools of thought brought about the emergence of Constructivism, the foundation of Collaborative Learning, or active learning. The constructivist learner is not passive, but rather an active contributor to the whole learning process (Ryan O'Reilly 2016).
Collaboration rapidly occurred in the scientific communities as well. Across 110 US universities, the size of research teams increased 50% from the 1980s to 1999, and collaboration between US universities and foreign universities increased fivefold during the same period (Niki Vermeulen, John N. Parker, and Bart Penders). In all branches of life, the second half of the 20th Century and the first two decades of the 21st Century were responsible for the increase in collaborative learning. Yet there still continues to be a need for strengthening human connectivity.
Feasibility Study
Feasibility Studies have been in formal existence for at least fifty years. They provide evidence demonstrating that a project is worthwhile and has been as thoroughly studied and investigated as possible. Whether it is undertaken by an individual department, a division or the entire enterprise, the critical goal is that the study be completed and that it serve as the foundation for the project. Some of the areas the study would investigate are the goals and mission of the project, the budgetary costs to the institution and, most important, the human resources needed. The plan for the project, i.e., the steps involved over the course of two years, will also be outlined and will function as its initial blueprint.
Marketing Research
Incorporated into a feasibility study and/or business plan should be marketing research; it is equally important. According to Kuba Kierianczyk, Associate Director, Insights and Strategy (2016), market research began in the 1920s. Daniel Starch, one of the first advertising researchers, believed advertising should be read, believed and acted upon. During that same period, the American pioneer George Gallup developed his statistical method to measure public opinion, which eventually became the Gallup Poll as we know it today. From the 1940s to the 1960s, in-depth interviewing techniques plus quantitative surveys were popular, and from the 1960s through the 1980s, new technologies such as computers, phone systems and the Internet were used for market research. In 2019, advertising and market research include not only the consumer but also cultural insights and their surrounding context.
Online Platform
Suggested in the CIH section Executive Summary – Future Outlook is a description of a collaborative, efficient platform (portal) with a protected login for clients to use. This integrated system was developed in 2004 and serves as an online evaluation tool where standards for accreditation, assessment or validation can reside. The standards would include questions about the curriculum, such as admissions, recruitment, finances, resources, institutional policies, students, faculty, advising, quality of program and clinical experiences (if applicable). In general, this integrated system would house all the information needed by the client and the accrediting body, while employing a format that is consistent throughout. For example, for standards with narrative responses, there would be a common set of questions asked about each standard, and a text box would follow each question. This type of consistency within the platform's format provides a mechanism to more easily benchmark, trend and/or analyze the information collected. This consistency can also provide a 'reliability factor' for the data collected.
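To illustrate how a common question set with a text box per question supports benchmarking, here is a minimal sketch in Python; all names are hypothetical and are not drawn from the actual 2004 system.

```python
# An illustrative sketch of the consistent standard-and-question format:
# every standard carries the same question set, each with a free-text
# response, so answers can be benchmarked and trended across programs.
from dataclasses import dataclass, field

COMMON_QUESTIONS = [
    "How does the program meet this standard?",
    "What evidence supports this?",
    "What improvements are planned?",
]

@dataclass
class Standard:
    code: str                                   # e.g. "ST-01 Admissions"
    responses: dict[str, str] = field(default_factory=dict)

    def answer(self, question: str, text: str) -> None:
        if question not in COMMON_QUESTIONS:
            raise ValueError("All standards share the same question set")
        self.responses[question] = text

# Because every program answers the same questions for each standard,
# aggregate answers can be compared across programs:
admissions = Standard(code="ST-01 Admissions")
admissions.answer(COMMON_QUESTIONS[0], "Rolling admissions with published criteria.")
print(admissions.responses)
```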
Suggested Fund-Raising Program
If the online platform were undertaken, it may be necessary to raise funds for the expenses it incurs. Beyond raising money, the fund-raising effort would also be a means of marketing the program. The cost would be finite, a one-time matter that would include capital expenses for the platform, program development and potential maintenance costs for the first few years. The means of fund-raising will be discussed, and these conversations could involve the staff engaged in development activities within the client's institution.
Collaborative Evaluation – Current Position
Collaboration
Collaborative learning that takes place within educational institutions, businesses and organizations today is an ongoing cooperation between leaders and members of a staff who are closely affiliated with a particular project. When cooperative behavior occurs, it can lead to enthusiastic results and prolonged sustainability. These outcomes are what the enterprise desires, and when it uses the suggested MCE model, it will be headed in the right direction. But in situations where collaborative learning techniques are not purposely used, a more negative result could occur. For example, how often does a program or policy fail because a full explanation about it has not been provided to the participants closely involved in the project? Would it not have been more successful if everyone had been kept abreast through ongoing communication? How frequently do we see individuals working in isolation without feedback from others? Would it not be more useful to eliminate the 'silos' of learning and collaborate with fellow workers? And why is 'collaborative training' not included in curricula for students before they graduate from their professional program? Would it not be more instructive and helpful to teach bright young minds how to collaborate and work closely with other healthcare colleagues in the controlled setting of the university – before they assume positions in the workplace? Or how disheartening is it to find that workers are depressed because of the jobs they hold? Wouldn't it be a different situation if employees were rewarded for work well done or given constructive feedback if changes needed to happen?
Most of us have witnessed some of these negative scenarios and, I daresay, felt helpless on those occasions. Conversely, when collaboration occurs, there are increases in productivity, cost reductions in services and a greater enthusiasm among employees for their projects. We will discuss the effects of positive collaboration.