
Evaluation Principles for Veteran-Serving agencies and Veterans Treatment Courts

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Sidney Gardner and Larisa Owen
September 22, 2017

The good news is that programs for veterans are proliferating, as the VA and state and local agencies respond to the needs of veterans.

The other news is that many of these programs measure only enrollment numbers, rather than program outcomes. We would suggest a common-sense evaluation principle: if reports talk about what agencies are doing, rather than how well clients are doing, the evaluation may be missing the point. The point is that the program is supposed to improve the well-being of its customers. In veterans’ programs, those customers include not only the veterans, but also their kids.

Of the more than 1 million veterans who have served since 9/11 and then were discharged from active duty, more than 40 percent have children — an average of two each. These children are sometimes affected by their parents’ deployment, and making sure they are doing OK is, in part, an evaluation task. But too many programs aren’t yet meeting that standard.

One example: The VA and other federal agencies don’t use the same evaluation outcomes in measuring the effectiveness of programs that treat substance use disorders, which affect a sizable segment of veterans. We once asked a thoughtful VA official why that happens, and his answer was “because each agency has its own programs and its own standards for evaluating them.” Another part of the answer, we found, is that when many funders make grants, whether public agencies or private foundations, they include funding only for services, with little set aside to evaluate the impact of those services.

A rule of thumb is that, with hard work, 10 percent of a total grant budget can support an adequate evaluation, while it takes 15 to 20 percent of the budget to do a first-rate one. A lot of funders seem to operate on the principle that funding should go to services rather than evaluation, which strikes some of them as “overhead.”
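To make the rule of thumb above concrete, here is a minimal sketch of the implied set-asides. The percentages are the article’s; the $500,000 grant figure is hypothetical.

```python
# Illustrative arithmetic for the evaluation rule of thumb:
# ~10% of a total grant budget supports an adequate evaluation,
# while 15-20% supports a first-rate one.
# The $500,000 grant amount below is a hypothetical example.

def evaluation_set_aside(total_grant: float) -> dict:
    """Return the evaluation budget implied by each tier of the rule of thumb."""
    return {
        "adequate (10%)": total_grant * 0.10,
        "first-rate (15-20%)": (total_grant * 0.15, total_grant * 0.20),
    }

tiers = evaluation_set_aside(500_000)
print(tiers["adequate (10%)"])       # 50000.0
print(tiers["first-rate (15-20%)"])  # (75000.0, 100000.0)
```

On a hypothetical $500,000 grant, that is $50,000 for an adequate evaluation and $75,000 to $100,000 for a first-rate one.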

Three types of evaluation plans can be described: basic, intermediate and intensive. Depending on the level of evaluation, different data collection methods are used, such as surveys, focus groups, observations, case management notes and screening and assessment tools. A basic evaluation establishes accountability and focuses on evaluation readiness and program processes. This includes preparing for the process evaluation; identifying measurable objectives (such as numbers assessed, numbers treated, court attendance and peer mentor/participant interactions); examining program implementation; comparing proposed plans to the project in operation; assessing the integration of evidence-based standards; identifying challenges and successes; documenting outputs (units of service); and counting participants and their family members.

An intermediate evaluation includes all of the basic evaluation components but adds a more intensive process evaluation: a detailed review of program coverage; establishing client attributes at intake; conducting drop-off analysis to identify the points of client attrition; analyzing implementation fidelity; measuring performance; preparing the program for outcome evaluation; identifying the instruments, methods and data sources to be used; developing and implementing a project database; measuring and analyzing project performance; and identifying the project’s short-term and intermediate outcomes.
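The drop-off analysis mentioned above can be sketched simply: given counts of clients remaining at each program stage, compute where attrition is sharpest. The stage names and counts below are hypothetical; real figures would come from the project database.

```python
# Hypothetical drop-off analysis: find the stage transitions where
# client attrition is largest. Stage names and counts are invented
# for illustration only.

def drop_off(stages: list[tuple[str, int]]) -> list[tuple[str, int, float]]:
    """For each transition, return (transition label, clients lost, percent lost)."""
    results = []
    for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
        lost = n_a - n_b
        results.append((f"{name_a} -> {name_b}", lost, 100 * lost / n_a))
    return results

funnel = [("referred", 200), ("assessed", 150), ("enrolled", 120), ("completed", 90)]
for label, lost, pct in drop_off(funnel):
    print(f"{label}: lost {lost} ({pct:.0f}%)")
```

In this invented funnel, the sharpest drop-off is between referral and assessment, which is exactly the kind of finding an intermediate evaluation is designed to surface.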

An intensive evaluation includes all of the basic and intermediate components and adds the impact (long-term effects) of the program: measurement and analysis of the project’s long-term outcomes against outcome objectives (results you hope the project will achieve, such as sobriety, employment and reduced recidivism), experimental or quasi-experimental designs, and cost studies.

But arguing that funding for services shouldn’t be devoted to evaluating those services is like saying we don’t care whether the road takes us where we want to go — we just enjoy the trip. That’s a good principle for wandering the countryside, but not so good for determining whether a program really makes a difference in the lives of its clients.

Another part of the problem is that there are no baselines. Even if validated assessment tools are used, and even if good pre- and post-measures are in place, the big question in evaluation sooner or later comes down to the punch line of an old joke. One woman asks her friend, “How’s your husband?” The second woman answers, “Compared to what?” Without a baseline measure of how well an agency performs for veteran clients, a new program has no standard against which to compare how well it is doing.
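The “compared to what?” question can be made concrete: a program’s outcome rate only becomes meaningful next to a baseline rate for the same agency or population. All figures below are hypothetical.

```python
# Hypothetical baseline comparison: a new program's success rate is
# only interpretable against a baseline measure of how the agency
# already performed for veteran clients. All numbers are invented.

def outcome_vs_baseline(program_successes: int, program_n: int,
                        baseline_rate: float) -> dict:
    """Compare a program's success rate to a pre-existing baseline rate."""
    rate = program_successes / program_n
    return {
        "program_rate": rate,
        "baseline_rate": baseline_rate,
        "difference": rate - baseline_rate,  # positive = improvement over baseline
    }

result = outcome_vs_baseline(program_successes=66, program_n=120, baseline_rate=0.45)
print(f"program: {result['program_rate']:.0%}, "
      f"baseline: {result['baseline_rate']:.0%}, "
      f"change: {result['difference']:+.0%}")
```

Without the baseline figure, the program’s 55 percent success rate in this invented example would be uninterpretable — it could be an improvement, or worse than what the agency was already achieving.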

In an earlier article in this series, we discussed the invisibility of veterans and, even more often, the children of veterans in the intake data of most agencies. If there is no box on the intake form asking about the children of the veterans, you can’t evaluate the effectiveness of child welfare agencies or treatment agencies when they serve veterans and their family members — because they’re not identified in the caseload.

Ideally, an evaluation answers questions about which clients with what characteristics received which services and achieved which specific outcomes. It asks whether the program is reaching the veterans who most need the program’s services, and whether veterans’ family members are screened and served. These are the evaluation questions that veterans’ programs should be asking and answering as they ensure that the needs of those who served their country are met by the programs funded to help them.

Author: Mr. Gardner is President of Children and Family Futures. He has over 50 years of experience working in and with local, state and national government agencies, educational institutions and public policy organizations. He graduated from Occidental College and received a Master’s degree in Public Policy from Princeton University and a Master’s degree in Religious Studies from Hartford Seminary. Sidney Gardner, MPA [email protected]

Author: Dr. Owen has been a Program Director with the Center for Children and Family Futures (CCFF) since 2004. As Veterans and Special Projects Program Director, she leads the Veterans and Military Families (VMF) projects within the organization, in addition to conducting research and evaluation of VMF projects. Dr. Owen received her Bachelor of Science in Criminology and Legal Studies, holds a Master’s degree in Business Administration, and has a Ph.D. in Public Policy and Law. Larisa Owen, PhD, MBA [email protected]
