AMSTAT provides reliable program evaluation services for numerous state and federal government entities, schools, hospitals, non-profit organizations, and business clients. Our clients cite these reasons for choosing to work with us.
All of our principals hold PhDs from leading universities, including Harvard, Stanford, and Columbia.
They include nationally renowned evaluation experts.
They have over 100 years of practical experience in evaluation.
They are experts in all major types of evaluation, including process, formative, impact, outcome, and summative evaluation.
Our fees usually pale in comparison to the savings and additional profits that our work produces for our clients.
Over 90% of our clients engage us more than once, and they are almost universally satisfied with our work.
We are more reasonably priced than most other companies offering evaluation consulting.
We offer personalized, comprehensive, and friendly support during and after your consultation with us.
We offer ultra-fast turnaround times.
We are happy to provide the help you need at any or all of the following steps in obtaining the answers you seek:
developing a detailed program evaluation management plan;
reviewing program documents and records;
conducting pre-program development (needs assessment) evaluation;
conducting process or formative evaluation;
conducting impact, outcome, or summative evaluation;
conducting site and classroom observations;
conducting focus group interviews;
leading site evaluation team meetings;
inputting, organizing, and cleaning the data;
implementing data analysis;
developing written evaluation reports;
presenting evaluation results to the governing board.
Evaluation encourages us to examine the operations of a program, including which activities take place, who conducts the activities, and who is reached as a result. In addition, evaluation will show how faithfully the program adheres to implementation protocols. AMSTAT can conduct several types of evaluation, including the following.
Process evaluation can:
Determine whether program activities have been implemented as intended.
Evaluate how well the program is working: the extent to which it is being implemented as designed, and whether it is accessible and acceptable to its target population.
Provide an early warning for any problems that may occur.
Allow programs to monitor how well their program plans and activities are working.
Measure the program’s activities, their quality, and whom the program is reaching.
Help answer questions about your program such as:
Has the project reached the target group?
Are all project activities reaching all parts of the target group?
Are participants and other key stakeholders satisfied with all aspects of the project?
Are all activities being implemented as intended? If not, why not?
What changes, if any, have been made to intended activities?
Are all materials, information, and presentations suitable for the target audience?
Is the program being delivered as intended to the targeted recipients?
Is the program implemented as intended?
Have any feasibility or management problems emerged?
What progress has been made in implementing changes or new provisions?
Are program resources being used efficiently?
Why is a program no longer obtaining the desired level of outcomes?
Track program information related to Who, What, When, and Where questions:
To whom did you direct program efforts?
What has your program done?
When did your program activities take place?
Where did your program activities take place?
What are the barriers/facilitators to implementation of program activities?
Formative evaluation can:
Ensure that a program or program activity is feasible, appropriate, and acceptable before it is fully implemented.
Be conducted when a new program or activity is being developed, or when an existing one is being adapted or modified.
Examine whether the proposed program elements are likely to be needed, understood, and accepted by the population you want to reach.
Examine the extent to which an evaluation is possible, based on the goals and objectives.
Allow for modifications to be made to the plan before full implementation begins.
Maximize the likelihood that the program will succeed.
Help answer questions such as:
Do you continue the program?
If so, do you continue it in its entirety?
Is it possible to implement the program in other settings?
How sustainable is the program?
What elements could have helped or hindered the program?
What recommendations have evolved out of the program?
Outcome evaluation can:
Measure how well the program goal has been achieved.
Measure the degree to which the program is having an effect on the target population’s behaviors.
Indicate whether the program is effective in meeting its objectives.
Help answer questions such as:
Has the overall program goal been achieved?
What factors outside the program, if any, have contributed to or hindered the desired change?
What unintended changes, if any, have occurred as a result of the program?
Are desired program outcomes obtained?
What, if any, unintended side effects did the program produce?
Do outcomes differ across program approaches, components, providers, or client subgroups?
Examine the degree to which the program meets its ultimate goal.
Provide evidence for use in policy and funding decisions.
Impact evaluation can:
Measure the immediate effects of the program.
Measure how well the program objectives (and sub-objectives) have been achieved.
Help answer questions such as:
How well has the project achieved its objectives (and sub-objectives)?
How well have the desired short-term changes been achieved?
Did the program cause the desired impact?
Is one approach more effective than another in obtaining the desired outcomes?
Cost-benefit evaluation can:
Compare program outputs and/or benefits to input costs to provide a benefit-cost ratio.
Examine what resources are being used in a program and their costs (direct and indirect) compared to outcomes.
Provide program managers and funders with a way to assess cost relative to effects (“how much bang for your buck”).
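As a minimal illustration of the ratio described above (all figures are hypothetical, not client data), the calculation reduces to total benefits divided by total direct plus indirect costs:

```python
# Benefit-cost ratio: total program benefits divided by total costs.
# All figures below are hypothetical, for illustration only.

def benefit_cost_ratio(benefits, direct_costs, indirect_costs):
    """Return the ratio of total benefits to total (direct + indirect) costs."""
    total_costs = direct_costs + indirect_costs
    if total_costs == 0:
        raise ValueError("total costs must be nonzero")
    return benefits / total_costs

# Example: $150,000 in measured benefits against $90,000 direct
# and $10,000 indirect costs yields a ratio of 1.5.
ratio = benefit_cost_ratio(150_000, 90_000, 10_000)
print(f"Benefit-cost ratio: {ratio:.2f}")  # prints: Benefit-cost ratio: 1.50
```

A ratio above 1.0 indicates that measured benefits exceed costs; in practice, deciding which indirect costs and which benefits to monetize is the harder part of the analysis.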
AMSTAT provides a variety of specialized evaluation services in health. They include:
Tobacco prevention, evidence-based medicine, clinical practice guidelines, children and family services, healthcare ethics, biostatistical studies, clinical trial design and methodology, medical education, biostatistical consulting, clinical program evaluation, performance measurement, and process improvement.
We measured the following: the difference in students’ attitudes and knowledge before, during, and after the SEEK Program and PCI programs, training, and mentor teaching; the impact of ’s SEEK Program and PCI programs on [...]
We evaluated the effectiveness of the program. The research question was as follows: RQ1: Is there a significant difference in staff proficiency before and after the program? A paired-samples t-test was [...]
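A paired-samples t-test of this kind can be sketched as follows. The scores below are invented placeholders, not study data; in practice a library routine such as scipy.stats.ttest_rel would be used to report an exact p-value, but the core statistic needs only the standard library:

```python
import math
from statistics import mean, stdev

# Hypothetical pre/post proficiency scores for the same ten staff members
# (invented placeholders, not study data).
pre  = [62, 70, 65, 58, 74, 66, 61, 69, 72, 64]
post = [71, 78, 70, 66, 80, 75, 68, 77, 79, 70]

# Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
# where d is the per-person post-minus-pre difference.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Two-tailed critical value for alpha = 0.05 with df = n - 1 = 9 is about 2.262.
significant = abs(t_stat) > 2.262
print(f"t({n - 1}) = {t_stat:.2f}, significant at alpha = .05: {significant}")
```

Because the test is paired, each person serves as their own control, which removes between-person variation from the comparison; the same design applies to the pre/post achievement and perception questions in the studies above.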
The purpose of the study was to measure the difference in students’ achievement before and after the change of the program. The research question was as follows: RQ1: Is there a significant difference in [...]
We measured the difference in students’ and mentors’ perceptions of program quality and integrity before and after the change of CPE’s program. The research question was as follows: RQ1: Is there a significant difference [...]