Margaret Patterson: Well, hi, everyone. I guess we just need to have the PowerPoint. Mandilee do you want me to open that up on my end, or do you have it?

Mandilee Gonzales: I can share if you would like.

Margaret Patterson: OK. Super. That would be great.

Mandilee Gonzales: All right, stand by.

Margaret Patterson: So welcome, everyone. I know it's lunch time, and I won't hear any munching because you're all on mute. So feel free to enjoy. Happy Friday. We're all here. All right, super.

All right. So, again, program evaluation is our topic for today. And, so, we wanted to start with a poll question. And in the poll, we're just going to be asking you this question-- How familiar are you with program evaluation? And your options are anywhere from not at all to I'm ready to start a program evaluation tomorrow. So go ahead and select what you think.

And the numbers are coming in. We'll give everybody a moment to respond. OK. It looks like most people are coming in with some familiarity with program evaluation, or at least a little bit. And others are-- you have a few people that are not familiar at all. So we'll be giving you lots of good information today.

And even those of you that do have some familiarity with program evaluation, my experience has been in webinars like this that, even if I just pick up a pointer or two as I go along, it's worth the time and a refresher is always helpful, even when you're pretty familiar with this.

All right. So it looks like most folks have had a chance to respond. Mandilee, would you mind showing the results so that folks can see that, please?

Mandilee Gonzales: Sharing now.

Margaret Patterson: OK, great. So we have about 40% that said they've been involved in program evaluation before. That's great. And another 30% that know a little and about 17% that are ready to go tomorrow. So that's super.

We've got a nice range here. And some folks that are new, and that's good too. So we'll make sure that we cover some of the basics to the extent that we can. All right.

So by way of overview, today, what we will cover this afternoon, we will start with defining what program evaluation is and who benefits. Then we will move on to talk about how evaluation can help you identify needs and how you can think about funding to meet the needs.

After that, we'll shift to considering evaluation questions and how to address them. We will think about processing data in an evaluation. And that involves collecting, analyzing, and interpreting the data and making recommendations. Then, I have some additional resources for you on program evaluation that may help answer some questions that you might have.

And we will end with taking a quick look at some immediate steps you can consider taking right after this webinar when you're back in your office on your own. So that's what we're going to be covering.

So let's start with what is program evaluation. And we have two complementary definitions here on this slide by Michael Scriven and Michael Quinn Patton, who are two gurus of evaluation, Michael Scriven from an earlier generation, of course.

The first definition emphasizes that we're trying to determine the worth or the value of something, in this case, the program. And both of these definitions highlight a systematic approach, that is we want to be systematic about how we collect and employ data. So the evaluation process needs to be clear, explicit, and transparent. So, in other words, we need a method, or methods, to make sure we can evaluate.

And how we collect and analyze and interpret the data needs to be logically consistent, whatever method it is that we use. Both of these definitions emphasize judgment, in the sense of finding worth or value, not negative judgment per se. Podems from 2019 reminds us that the word value is actually in the word evaluation, except for the E at the end, and that the judgment that occurs really depends on the context. So we're going to be talking about context a bit today too.

So we have a chat question for you at this point. And the chat question is, in what ways do you think a program evaluation needs to be systematic? So we're just asking you to put it in your own words, what I just talked about here on this slide a little bit.

OK. So Doreen is saying we should be evaluating 24/7. OK. But no challenge there, is there? OK. OK, so being consistent, so gathering the data, how we evaluate the data, absolutely. We need to have metrics.

Yes. So repeatable, so if somebody else does it later on, they'll get hopefully the same results. That would be ideal. Right, defining what exactly we're evaluating. It's very good. These are great ideas.

So building it into the foundation of each program and doing it periodically. Right. Yeah, being consistent. Super. Yeah, so looking at effectiveness. I think that's an important consideration as well.

And then somebody right before that said, look for what we do well and what works and doesn't and how to fix what's broken. These are all great ideas. So thank you for sharing those, certainly.

But when we talk about evaluation, we use several terms. And a lot of people can get these terms confused, so I thought it would be helpful to be clear and actually define some of these, or at least explain how they're used.

So first up is data. And data are raw facts, figures, or words that get analyzed in an evaluation. For example, how many teachers participate in professional learning in an agency. That's an example of data. It's just a figure, a number of people. OK.

It's a number, but it doesn't tell us much. Is that high? Is that low? Is it changing in some way? We don't know that just by looking at the number.

Once the data are analyzed though with the objective of creating meaning of some type, the pieces of data now provide information. For example, the average number of teachers in professional learning per year in the consortium is growing. Maybe that information is nuanced in some way.

Maybe the number is growing as a lot of new staff has just been hired. Or there's been a change in access to professional learning online, and so more people can get involved. Or maybe it's a response to the new needs that are coming up in the pandemic. So often we can explain the nuances as part of the interpretation of the data, and that helps us make it into information that's meaningful.

If we use the data or the information to support an argument, or maybe we have a hypothesis or belief, those data now become evidence. So, for example, based on the average growth, even after looking at the needs of new staff, we see a need for new resources for professional learning in our consortium.

Now, we're taking that information that we just gleaned from the data and saying, here's our evidence. This is why we need these additional funds.

OK. Another term coming up next is needs assessment. A needs assessment identifies what is needed by a specific group of people. For example, it could be gaps in skills and knowledge where professional learning could help. The process of going through the consortium program quality self-assessment, that we talked about for those of you that were on the last webinar that I presented, can help assess these needs.

And we also want to distinguish between research and evaluation. So research is the systematic process of processing data to generate knowledge. Research and evaluation differ a little bit in purpose, with research taking a more general approach to finding generalizable knowledge, kind of for the good of the cause so to speak, and evaluation focusing on value and worth, as we identified earlier.

Evaluation also generates specific recommendations rather than simply general discussion, as we do in research. But both of them, however, use systematic methods to accomplish the work.

So I have a chat question for you now. What is an example of data that you see regularly that you could analyze to get information for an evaluation? So an example of data that you have around you regularly.

Oh, DIRs. I'm not sure what that stands for. That might be a California acronym that I haven't seen. OK, number of enrollments. OK. Yes, data from TOPS, absolutely. Summary tables, oh, job placements. Yeah, definitely. OK, information from focus groups or success rates. Yes. Barriers. Oh, that's an interesting one, to talk about barriers that people have identified. OK. Yes.

Transitions, certainly, certainly. Data from surveys. So, if you're able to take surveys, that's great. We're going to talk about that in a little bit too. Oh, it stands for data-- Thanks, just didn't know what that was. All right.

Yes, definitely attendance, learning gains people are making, completion data, certainly. OK. So, yeah, I think you're all coming up with really good examples of data that are probably right there on your desk in a lot of cases. And definitely fiscal data, that's important as well, so how much money are you spending, certainly.

So another thought that Podems from 2019 has is that evaluation is also influential and even political. Evaluation can influence and be influenced by many stakeholders. So in our webinar last week, we talked about doing a self-assessment of consortium program quality. And that process might bring up lots of needs that programs in the consortium have along with opportunities to address those needs.

So evaluating the needs as well as what is occurring in programs has the potential to be influential. So we have another chat question. Keep all this great thinking going. If your consortium were to evaluate what's happening in the agency programs, who do you think might benefit? So who are some folks that you think might benefit from an evaluation?

OK, students, love that student-centered focus, thank you. OK. OK, the communities. OK. Faculty, yes, so the teachers, right. Staff and programs. OK. So we're thinking very, very kind of close in. We're thinking of the people directly involved, the students and the staff. But it may be the schools as Gail Wi is communicating here.

Some other folks-- yes, community partners, certainly. But thinking even more broadly, yes, so it could be administrators. It could be other consortium members. I don't know that that one's popped up yet. It could be state staff, perhaps, even strategic planners. There's a lot of folks that could potentially benefit from an evaluation.

So another part of the needs assessment process is considering who does not benefit from the evaluation and whether they should. So let's take equity as an example. If your evaluation yields results that show that recruitment or services or supports are not equitable for all students, then the consortium can consider what to add to the strategic plan to meet that need and who will benefit.

If the program benefits, but the students don't, there's still a disparity out there. So say, for example, a needs assessment indicates that not enough students are participating in a high quality orientation. So the consortium decides to add resources and write a grant to create virtual orientations for all of the agencies in the consortium. Well, that's a win for the programs and possibly even for the funder.

However, if prospective students can't access the virtual orientations for any reason, whether it's technology or just literally getting access to a computer, things like that, then the students may not benefit equitably. Some will, and some won't.

So we can also consider who will use the results, who will take the recommendations that we make from the evaluation. For a start, the consortium members and the program administrators can certainly use them. But others may use them as well, as we just pointed out, whether it's the students, or perhaps even other evaluators, because a lot of us that do evaluation like to read some of these reports. And possibly, when questions come up, we have examples that we can use, like we're doing today.

Other researchers, even politicians, I think that came up in the chat just a moment ago, and other stakeholders as well. So things to consider are-- who or what will benefit, how will they benefit, and when will they benefit. All right.

So we have some pictures here in this slide. So we said we can use evaluation processes to identify and fill needs. And it's tempting to see needs as something minor, like the photo on the left, where just one new little puzzle piece will cover the gap.

But the reality may be more like the photo on the right, where the gap is wide. And even though the categories may look pretty well identified-- the colors are all in nice patterns there-- getting them to connect is another matter entirely. And that's where definitions come in handy.

So earlier, we defined needs assessment, and we gave a few examples. So a needs assessment identifies what is needed by a specific group of people. Our example was identifying an adult learner need for orientation, which we just talked about a moment ago, through a new virtual orientation to bridge the gaps in program services across the consortium.

So since in our example, the way we landed was that not all students could access the new virtual orientation, we could turn to partners for help, such as a library system-- now this is, of course, assuming that the libraries are, in fact, open-- or one-stop employment-related centers to find places where students could get the orientation conveniently and without cost. And that's a win for the students, and still for the programs, on the basis of what we said earlier. But it's also a win for the library, because they're getting additional people coming through their doors, and perhaps for other stakeholders as well.

So we can start this needs assessment process by defining the extent of the problem and then determine any perceptions or experiences of key stakeholders that might contribute to the need. We can take a closer look at what has addressed the problem previously and any remaining gaps and figure out which partners, or resources, can help meet the remaining needs.

So I have a chat question for you again. Think of some partners that your consortium turns to in order to meet needs for services. Who are your partners in the consortia? Not necessarily just for this type of situation, but your partners overall that you can turn to.

OK, so the America's Job Center, library, schools. Yes, absolutely. OK. So what I'm referring to is the one-stop that actually is the America's Job Center. Oh, yes, Department of Rehab, certainly. OK. Lots of good ideas about partners here.

So, Eric, when you say other members, are you referring to other members of the consortium? OK, thanks. Health and Human Services, certainly. OK, great. Oh, yes, immigrant agencies, absolutely. Workforce Investment Board, certainly. I knew that acronym. Great. Great ideas. So we've got lots of examples of partners here. All right.

So we can start by defining the extent of the problem, which implies that we really need to know what the problem involves and figure out what's behind that gap. We really can't measure something and fully assess it unless we've defined it. So that's kind of an axiom of evaluation.

The value of a logic model, which we'll cover in the third webinar that I present on the 29th, is in clarifying the problem, what we can do about it, what will happen first when we do that, and then what the short- and long-term results or outcomes will be. You may already have a logic model. We're not going to spend a lot of time on that today, but it can definitely help provide structure and a frame for an evaluation.

But for today, to keep things simple, we can start by figuring out the problem. Let's look at another example. And this example is about gaps in support services at one agency that the consortium could help cover so that adult learners get equitable services at all agencies in the consortium.

We may not understand or know why that one agency might be struggling to provide these services. It might simply involve having a conversation with agency staff or a discussion with learners to find out what's going on. If the problem is deeper, analyzing root causes can help, and there are techniques to do that. Then we determine any perceptions or experiences of key stakeholders that might contribute to the need.

In our conversation with agency staff or a discussion with learners, we could also ask them what it was like when they couldn't offer, or receive in the case of the students, a support service. And based on their experiences, what do they think would work to meet the need?

It's amazing what I've found out in evaluations, what people will tell you. They'll not only tell you the problem, but they'll give you the solution as well. But, of course, you have to ask for it. And a lot of times, they know what's going on. They know what the needs are, and they can provide that information to you.

So we can take a closer look at what has addressed the problem previously and any remaining gaps and figure out which partners or resources can help meet the support need. So perhaps, the administrative data show that previous attempts to fill a gap had limited or no success. So what can we glean from that?

Maybe the funds that were originally set aside to address that need had to go towards pandemic solutions. Or maybe some support services were offered in a way that adult learners could not readily access them. By partnering or identifying additional resources in the consortium, the need could potentially be met.

So we have a chat question coming up here. Have you ever been in a situation where your agency or the whole consortium worked really hard on a gap but had limited success? And when you look back at what happened, what did you take away from that experience? I know that's a complex question, but here it is in the chat so you can look at it. So a situation with limited success, you tried your hardest but didn't get too far. What did you take away from that experience?

Frustration. Sorry, Shannon, I've been there, done that. We've all had experiences where things didn't quite go-- yes. Oh, learning to pivot, OK. That's the word of the year, isn't it? Pivoting. Yes. OK. Limited commitment or ownership, yes. So sometimes, if people don't have the commitment, it doesn't quite happen the way we wanted it to.

Right, sometimes you have to just try and fail. Not happy thoughts for a Friday afternoon, but it's true. So, yeah, needing to have a common vision and vocabulary, absolutely. Yeah, and I think that's where we're getting in terms of defining things carefully. So we're all speaking the same language in what we want to evaluate.

Yes. OK. So, if it's a bargaining unit situation, there might be limited things that you can do to work with that, depending on the circumstances. OK. Yeah, I'm hearing this lack of buy-in and lack of commitment, not going in whole hog, basically. OK. Fail fast, OK, haven't heard that one before. Pivot and try a different way, absolutely.

Well, I really appreciate everybody pitching in here. I know it's not easy to talk about things that didn't go well sometimes. But I think we can all learn from each other. So John Warner's sharing, clear goals and action steps are sometimes missing. So we want to clarify that from the get go. And, again, pointing back to the logic model that we'll talk about next time, that can really help with that to the extent that you can do it. Great.

All right. So by identifying additional resources in the consortium, filling up our piggy banks in the picture here, the need could potentially be met. So for example, the consortium might need to apply for grant opportunities for funds to meet a need that's been identified. If you go to the three year strategic planning guidance document, on pages 28 and 29, you can find a list of potential funding sources to consider. I was really excited to see that, and I hope that you'll take a look at that.

And the Adult Education Program Technical Assistance Newsletter that comes out regularly also lists funding opportunities that are currently available, such as for things like technology and other needs. So that's a good place to check into periodically. Another option is to try to leverage funding that you already have, so existing funding across agencies in the consortium.

So going back to our example about the new virtual orientation, perhaps there were a few agencies in the consortium, or some partner agencies, that were able to access CARES funding or some other type of funding for technology that came out with the pandemic. And they got some resources to expand technology access in the region. There may be ways to leverage that funding for this need as well.

So we have a poll for this one. So a poll question is going to come open. And the poll question is, how open is your consortium to pursuing additional funding to meet needs identified for the three year strategic plan? So everywhere from we're not sure about funding to we're really eager to pursue it.

OK. Again, we have a great range here.

The answers are still coming in. The first one and the third one are running neck and neck, both about the same percentage. So some folks are not sure at this point. And some folks are open to it but need some more information.

OK, good point. Wendy Miller is making a point in the chat about having funding but not being able to supplant, which is very, very important. OK. So you need-- Cherie Watkins is making the point that consortia are not legal entities. So there needs to be somebody who's able to apply for that funding. Yeah. Good points.

It looks like folks have had an opportunity to contribute to the poll if they want to. So why don't we stop the poll here. And if we could share the results, so that everybody can see. Thank you, Mandilee.

All right. So what we're seeing here is the largest group is actually saying that they're open to pursuing funding but need some more information. So that's good to know. But there are still quite a few folks that are kind of on the fence about that or are not quite so sure. So I'll go ahead and close this.

Let's go back to that example that I mentioned a little bit ago about the need for support services in an agency within the consortium. So if every agency in the consortium offers counseling to explain available supports separately, that means that counseling information and supports have been developed separately at all those different agencies and delivered repeatedly through lots of counselors.

If the consortium can find a way to perhaps braid funding, not supplanting, not getting into that area, but find a way to braid the funding to develop a single set of counseling information, and adult learners across the region can access it on demand without needing to repeatedly have it delivered by a person, that could potentially leverage funding and free up some resources for the consortium to develop other types of supports in the three year plan. So it's a thought. The concept of braiding funding is not new, but it might be able to help around some of these issues where you can't apply funding in certain ways legitimately because of agency requirements, or state requirements for that matter.

All right. So earlier, on slide 9, we referred to that axiom of evaluation, that we can only measure what we've defined. And we also referred to Scriven's definition of evaluation as a systematic determination. So our evaluation questions-- some people call these research questions, but it's essentially the same idea-- help us decide what to measure and how to measure it.

For example, say that the needs assessment process revealed a need for recruitment at at least some of the consortium's agencies. That's important to know, but what does that look like? So an evaluation question might be what we've got on the screen here. Who is not being served by adult education in some agencies, and who still needs to be served?

If we look at tools suggested in the guiding questions section of the three year planning guidance, such as census data or the PIAAC skills map, we can look at who is in the counties that the consortium serves and who might be missing. So we've defined the groups we're looking for. We've defined the unit, in this case, the county. And we have a credible data source to address our question. So that's going to be really helpful.

So then, to address this evaluation question, we match the method to the question. So first, we decide, do we need quantitative or qualitative data, or both, to get there? Second, if we want to simplify, and we do, we start by looking at the available data that we already have.

So in our example, we can find quite a bit of quantitative data in the PIAAC skills map or the census, income, and poverty tool in the three year planning guidance. And you can find that on page 5, by the way. I didn't give that to you earlier. There are some links to those.

Then we can compare who these tools identify as in the pool with the data on who is already being served in the agencies and look for differences. That might be enough information to answer our immediate needs. If not, we could consider collecting new data, perhaps by interviewing or surveying adults who are eligible for adult education but don't participate. Maybe through TANF or SNAP or the American Job Centers or community agencies, those are all possibilities.

So I have a chat question at this point. Oh, thank you, Mandilee, for putting the guidance document in there. If you've looked at the census income and poverty tool or you've looked at the PIAAC skills map previously, how useful was it to you? So that's our chat question. And I realize maybe some of you haven't had the opportunity to look at either of these tools. So--

OK. So in some regions, the information is too broad. OK. All right, John's found it very useful. Yes. Yeah, I imagine a lot more people have looked at census data than at the PIAAC information. OK. OK. Good thoughts. Thank you for sharing those.

All right. So here's an example of the kinds of data that we could get from the PIAAC skills map, which came out in April 2020 by the way. So it's fairly current, but it is pre-pandemic. I do have to tell you that. No surprise there.

So this example looks at California. And we start by looking at skill rates. So the ages are from 16 to 74. And it's indicating here that more than a fourth of California residents have literacy skills at or below level one, so that means they're really struggling with literacy skills. And you can see that in the pink bar on the left. So the bar indicates literacy on the left and numeracy on the right. And the pink section of that bar on the left gives you the percent of people in California that are in that situation.

And then about a third of them have very low skills in numeracy, so dealing with issues associated with mathematics. For reference, the average California skill level is level two, which is pretty close to the national average. We're looking at everybody here.

On top of that, 17.5% of California adults did not complete high school. So many of the adults that we're seeing in the pink bars there, or the pink sections of the bars I should say, did graduate from high school, but they still have basic literacy and numeracy skills needs as well.

Most California adults are employed. You can see the 66.5% figure here. But still, the needs for basic skills, level two or below, in more than half the population here make it tougher for them to get a job or keep a job as they attempt to do that.

And then, in addition, we can also glean from the PIAAC skills map that 15.1% of California adults are living in poverty, and about a fourth of them earn 150% or less of the poverty level. Those are the state figures. What you could potentially take a look at is, well, how does the county compare to the state on that type of information?

OK. So our evaluation questions, some people might call them research questions, point us to what to measure and how to measure it. Let's look at a more complex example with our scenario of some gaps in wraparound supports happening in some, but not all, of the consortium's agencies. That's important to know, again, but what does it look like?

An evaluation question might be, how do the characteristics of adult learners in agencies with expanded wraparound supports compare to those of adult learners in agencies with limited wraparound supports? We could ask a very similar question about outcomes, which is why I have that in parentheses there.

If our hunch is correct, we would expect to see differences in characteristics and outcomes between agencies. And these differences could inform us on bridging the gap. So where is that happening? What does that look like?

So to address these questions about characteristics or outcomes, we match the method to the question. First, do we need quantitative or qualitative data, or both, to get there? How can we systematically define the two types of supports? What do we mean by expanded? What do we mean by limited?

And so we might be able to say, well, there's a certain number, a certain range, of supports. We might be able to define that ahead of time without knowing how the chips are going to fall.

Second, to simplify things, we can start by looking at the available data. We can compare administrative data on who is already being served in the agencies and their outcomes-- we know both of those, we've listed the sources-- and look for differences. We may find that programs with limited wraparound support services are serving more English language learners, while programs with expanded supports tend to serve more, say, adult secondary learners.

Perhaps that's a sign of a language barrier, that more information in local languages is needed to help ELLs access existing supports. Or perhaps it's a population issue, in which the number of English language learners needing those instructional services is so large that almost all available resources are funneled into instruction.

So drilling down into the specifics with individual agencies might be useful. If that information doesn't answer our immediate needs, we could consider collecting new data, such as to find out in which languages the information needs to be communicated through a language survey.

As we think about which method to follow, we need to be realistic about how credible the available data are so that the choices about the method are easy to explain, and the methods we choose are logical. We want to keep in mind, if we go out and collect new data, it's important for us to protect the privacy rights of the individuals involved. In your consortium, you may have access to an institutional review board, perhaps through a school district, perhaps through a community college, that can assist you with that.

We also want to acknowledge that resources are limited. We all know that. So how can we share the work and spread it out? There may not be a lot of resources to collect new data. So we have to consider which new data are the most important and what's the simplest way to get them. Time is also a very finite resource. So the methods that are selected should take a reasonable amount of time.

So I have a chat question for you. True or false. I'll keep this one super simple. Approaches to addressing evaluation questions can be systematic yet simple and don't need to take a lot of time. True or false? What do you think?

OK. I love the way you're telling it.

Audience: True.

Margaret Patterson: OK, lots of truths. Sounds good. OK. It's definitely Friday. OK.

Well, let's move on to processing the data. I'm using the term processing here very broadly, from collecting the data all the way through to making recommendations. So here are some possibilities that we could consider quick wins. So every agency, and likely the consortium, has documents. Maybe it's a previous evaluation with some recommendations, or a year-end grant report, or an end-of-cycle strategic planning document, because I've been looking at some of the strategic planning documents from a previous cycle and realizing that people are writing reports from there.

So reviewing insights from those documents can help see what might be relevant and useful. You can look at your administrative data and performance data. What do we know about the program itself? And what do we know about students and their outcomes? These data are already collected. You've got them in tables. You've got them in your dashboards. If you do decide to collect new data, you want to keep your interviews and your survey simple and focused.

So surveys should be about 5 to 10 minutes at most, interviews about 15 to 30 minutes, because people have limited attention spans, and you want to maximize what you can get. Test them out in advance with a few teachers or students that won't be taking the survey or participating in the interview and just get their feedback if you can.

Or use the mommy test, which is something we refer to in research and evaluation. It means literally taking whatever you're thinking of using, a survey or whatever, and showing it to your mother or another relative or a neighbor, somebody who may not know a whole lot about adult education but has some general skills and can give you feedback on it.

If it's feasible and relevant, consider taking a field trip to a few locations to observe what's happening and decide ahead of time what you're looking for and what you want to know. And then you can decide who on the team wants to be involved and spread out that work so that nobody's overburdened. I know we all have a lot going on.

Again, we want to be sure that whatever surveys or interviews or observations we do, protect the privacy and rights of the people involved and be sure that they know ahead of time what you're asking of them and communicate that their privacy will be respected.

So let's get to processing the data, and then we'll move along towards our wrap up, because today's going very fast. So, generally, we want the analysis process to be as clear and transparent as possible. A good reason for simplifying data collection is it also simplifies the analysis. We need to be upfront about the process, decide ahead of time what we're going to do, and follow whatever we decide.

If we have numbers, we need to think of a meaningful way to present them. If the numbers are large, try to use percents or averages. But watch out for skew, meaning the data all tend to lean in a certain direction. For example, you may have a bunch of really young people in your agency for some reason and not as many older ones. That's an example of skew. If that's the case, then use a median rather than an average.
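To make the median-versus-average point concrete, here is a minimal sketch in Python. The ages are invented for illustration; the point is that a few older learners pull the average up, while the median stays close to the typical learner:

```python
from statistics import mean, median

# Hypothetical ages at an agency skewed toward younger learners
ages = [18, 19, 19, 20, 21, 22, 23, 25, 48, 55]

avg = mean(ages)    # pulled upward by the few older learners
mid = median(ages)  # the "typical" learner's age, resistant to skew

print(f"average: {avg:.1f}, median: {mid:.1f}")  # average: 27.0, median: 21.5
```

Here the two older learners raise the average by more than five years, so reporting the median of 21.5 better describes the typical learner.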

If the numbers are small, you can try to use graphics of some type instead of summary statistics. But check any nuances you need to figure out what might be behind a number. Stick to the major points. We don't want to go down a rabbit trail and get into all kinds of details that we get stuck in and can't get back out of. But there might be some things that explain what's going on with that number.

If we have words and sentences that we've collected, we need to decide how to code them. We'll let the themes come out of the codes that we develop rather than trying to impose themes ahead of time onto the words and sentences.
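As a rough illustration of letting themes emerge from the codes, here is a sketch in Python. The responses and code labels are entirely hypothetical; it simply tallies codes after a person has assigned them, so the most frequent codes suggest candidate themes:

```python
from collections import Counter

# Hypothetical: each open-ended response has been hand-coded with one or more codes
coded_responses = [
    {"text": "The evening schedule works for me", "codes": ["scheduling"]},
    {"text": "I need childcare to attend class", "codes": ["childcare", "barriers"]},
    {"text": "Bus routes don't reach the campus", "codes": ["transportation", "barriers"]},
    {"text": "More evening classes, please", "codes": ["scheduling"]},
]

# Tally the codes; frequent codes point to themes worth discussing as a team
code_counts = Counter(code for r in coded_responses for code in r["codes"])

for code, count in code_counts.most_common():
    print(code, count)
```

The counting itself is trivial; the systematic part is assigning the codes consistently first and only then looking for the themes the tallies reveal.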

If we observe, say through video for example, we need to decide ahead of time what we are watching for and stick to it. And then you want to interpret it based on what you said you were looking for. That's being systematic.

All right. Now, we come to the results and the recommendations. Whatever we as evaluators find in analyzing the data is a result. Going back to our earlier definitions, our data now become information that we can use for making decisions. We now know, at least for the circumstances we have and the data that we used, what it means.

If we make an argument for using the information, the information becomes evidence, as we said earlier. We interpret the results carefully and acknowledge that we do so through our lens as the people doing the evaluation. But the results are not our opinions. They are results from the data.

Finally, we get to recommendations. What did the evaluation process reveal that addresses our evaluation question? How do our insights plainly inform what we need to know about gaps or strategic planning? What's going to be the logical next step then from the results and the insights? And what does the consortium want to do in response to what those results tell us?

The recommendations that we make on the basis of the data analysis can then be discussed and considered to go into that three-year strategic plan. It's important to limit how many recommendations we make, though, because we can't do it all. And people can't really grasp hundreds of recommendations. It's just too much.

But a rule of thumb, sometimes people ask, well, how many recommendations should you have? I would say about 10 at most. So we can't do it all. So we have to prioritize. All right.

So to summarize the major points that we covered today, let's try a quick quiz. There's no grades here. I'm not going to give you a grade. It's Friday. We're just going to take this for a learning purpose. And there's no judgment if you don't get the right answer. It's OK. Let's just do our best.

So the first question is evaluation is, A, systematic, B, focused on finding worth or value, C, influential, or D, all of the above. What do you think? Go ahead and put it in the chat.

All right, I love how you're spelling it out. All right, D it is. Good, let's move along. Number 2, true or false? Data, information, and evidence are all the same. True or false? Right, you're all paying attention. Thank you.

Number 3, true or false? Evaluation and research have different purposes. I think I must have made these too easy or something. We'll keep going.

Number 4, true or false? An axiom of evaluation is that we can only measure what we have defined. [laughs] We're all good, students. You're all good sports, definitely. Thanks for hanging in there with me. And you're paying attention.

And I know some of you already know this like the back of your hand. So that's OK. All right, yes. That one is true.

Number 5, to identify needs from data, we can start with, A, existing data; B, new quantitative data; C, new qualitative data; or, D, none of the above. This is number 5. What can we start with? Very good, so we're starting with our existing data. And that's letter A.

Moving right along here, question 6. Places to identify potential funding are, A, the three-year strategic planning guidance; B, the newsletter from the California Adult Ed Technical Assistance Project; C, technology funding announcements; or, D, all of the above. I hadn't even finished reading it, and you already answered.

We're doing great here. All of the above and beyond. Yeah, there's lots of other places. I couldn't fit too much on one slide. So I had to limit it. Very good.

Number 7, another true or false. In developing a survey or interview, testing it on a few people first doesn't matter. Oh, excuse me, that's number 8. Sorry, I read the wrong one.

Number 7 is: the PIAAC Skills Map provides county data on literacy, numeracy, graduation, employment, and poverty. Got ahead of myself there. I'm seeing some different answers popping up here. So the answer to number 7, about what the PIAAC Skills Map actually provides, is true.

And number 8, which is what I read previously: in developing a survey or interview, testing it on a few people first doesn't matter. We've got some differences of opinion here. Some of you are saying false. And some of you are saying true.

This is kind of a double negative situation, and I shouldn't really have asked it that way. But the answer I would give here is false. It does matter if you test it on a few people, because you could find that your survey or interview isn't quite asking what you think it's asking from the perception of the people hearing it. So it is a good idea to test it out on a couple of people first if you can.

Number 9, I'm on the right one this time. True or false? When analyzing data, we should decide ahead of time how to analyze. What do you think? Yes, true it is. That one just takes some discernment.

Number 10, true or false again. When interpreting results, we can throw in our own opinions. You heard me on that one. That one's false. We don't want to be doing it. It's tempting. But we don't want to be doing it.

And last but not least, in an evaluation, the number of recommendations we make is unlimited. So we don't need to prioritize. True or false? Did anybody get all 11 correct just for fun?

I'm sure some of you did. And if you didn't, that's OK, because at least you've got the answers and can go back and look at them if you want to. So before we close and wrap up, we want to identify some resources for you. Let's start with some California-specific resources, because that's always helpful since you're right there. These might be useful in incorporating evaluation processes into strategic planning.

So these resources have already been developed. They're there for you. And they might save you some time as you think and plan together.

When you get the PowerPoint, if you don't already have it downloaded, you should be able to click right on these. But I think that Mandilee is going to be putting them in the chat for some of them.

So the first one is the California Adult Education Three-Year Plan Guidance that we've been talking about. And, particularly, you want to look at pages five through eight. There's an evaluation section in there about assessing needs and the appendices. So here's the link in the chat for that one.

There's also an infographic. I don't know if you've seen that yet. I actually have it right here. You probably can't see it, but it has some nice pictures of gears and the three main objectives. What I like about it is how it pulls together the components of the planning process. So that might be useful as well.

And then last time, we also mentioned the CAEP program evaluation resources. So there's a link to that. Again, there's some good stuff in there that might be helpful. So those are the California resources.

In addition to that, I've found some other online resources that might be useful to you in incorporating these evaluation processes in your strategic planning. So the Beyond Rigor series is the first one here. And that one has PDFs on collecting accurate data, doing appropriate analysis, the right data, and being in a sea of context.

And what I like about this series is that it's very equity focused, particularly looking at folks with disabilities and other equity issues. They're all very short and easy to access. So that might be helpful.

If you're interested in the root cause analysis that I referred to earlier, there's a tool called the five whys. And you have a link to that. In addition, if you're interested in actually trying root cause analysis as part of your planning related to evaluation, you're welcome to contact me for a PDF handout on root cause analysis. It's by McCombs from 2015. So you can just email me and say, I'd like the McCombs handout on root cause analysis, and I'll be happy to send it to you.

The other links that I have here are to the guides that I've been referring to today-- the one by Podems from 2019. These are books. So you would actually have to get the book. But you might be able to get it through your local library or a university or college library as well.

And Wholey, Hatry, and Newcomer published the Handbook of Practical Program Evaluation. The third edition is the older edition. And at this link, you can get the whole thing for free. It's hundreds of pages to look through. Or you can purchase the fourth edition or look for it again in the library as well.

So what about when you're back in the office after this webinar? What are some immediate steps that you can take related to evaluation? We're thinking primarily of the consortium lead here. But your team can definitely pitch in on these. These are starting points on which you can expand later in consortium meetings as you get together, or within your agency, depending on how centralized your approach is. We talked about that last time.

So, first, begin with the end in mind. Think about the potential outcomes from an evaluation and what you want to get out of it. What do you want to be able to recommend in terms of strategic planning? So start at the end. And then just jot down a few ideas. We're keeping this process simple.

Then you want to make a list of what you think the consortium should plan to evaluate. You may have a few ideas already, given the priorities that you identified or will be identifying through that consortium program quality self-assessment that we talked about last week. Next, you want to think about and, hopefully, write down who will benefit from that evaluation and who's going to use the results. You all have some ideas already from the chat that you could refer to.

Fourth, you want to go back to slides 8, 9, and 10 and review them and consider how your consortium could assess the needs. Then you can brainstorm just on your own some of those preliminary evaluation questions, the things that need to be measurable that you all can discuss and define as a group.

Six, you can identify that existing data that we just talked about so much that might be able to address the evaluation questions that you came up with and things that the consortium can analyze systematically. Maybe you're looking at trends across time. Or maybe you're disaggregating by a subgroup of some type. But look at the data that you have.
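As a minimal sketch of disaggregating existing data by a subgroup across time, here is a Python example. The records, program names, and enrollment counts are invented for illustration; the same idea applies to whatever tables or dashboards a consortium already has:

```python
from collections import defaultdict

# Hypothetical enrollment records, as might come from an agency's existing tables
records = [
    {"year": 2021, "program": "ESL", "enrolled": 120},
    {"year": 2021, "program": "ABE", "enrolled": 80},
    {"year": 2022, "program": "ESL", "enrolled": 150},
    {"year": 2022, "program": "ABE", "enrolled": 75},
]

# Disaggregate enrollment by program, then lay out the years to spot trends
by_program = defaultdict(dict)
for r in records:
    by_program[r["program"]][r["year"]] = r["enrolled"]

for program, years in sorted(by_program.items()):
    print(program, years)
```

Even a simple regrouping like this can surface a trend, such as one program growing while another slowly shrinks, that a single combined total would hide.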

Next, consider what new data you need or might need to address your evaluation questions, and ways that you can collect those data while protecting the privacy and rights of the people involved. And then, last but not least, make a list of potential funding sources your consortium can look into. Some of you are already thinking about that. And then consider applying to support your strategic planning. Then you and the consortium can get together to consider these points further and decide the next steps.

So that's what you could do back in the office. I know that's quite a bit for a Friday afternoon. You might not want to do that this afternoon. But those are, I think, some key takeaways, some relatively simple steps. I realize none of this is fully simple. But we're trying to simplify to the extent that we can to save your time and resources.

So that brings me to the end of the presentation piece. We may have some questions that have popped up in the chat. And it's possible that I missed them along the way. So I apologize. But as you're looking at some of the links that just popped up, if you have some further questions, I'm happy to try and answer them.

And also, while you're thinking of questions, we do have the Logic Modeling webinar. It will be the last one that I'm presenting. There are plenty of others coming after me. But for the Logic Modeling one, that will be September 29th at noon, California time.

And my contact information is here. So please feel free to shoot me an email if a question comes up. I'll hold on for a moment. Yes, it is the 29th. That's what's on my calendar. And I'll hold on for a little bit if anybody has any questions. But thank you everyone for participating today.

Mandilee Gonzales: So, Margaret, it looks like we do have a question from Naomi Ennis. And she states, what are some of the most common tools slash software used for evaluation?

Margaret Patterson: That's a fantastic question. So there are lots of things out there. And I'm guessing that you don't want to be spending thousands of dollars on them. So to keep things simple, if you're analyzing quantitative data, you can actually do an amazing amount in Excel, which is going to be available on your computer, I would guess on 99% of our computers.

I use SPSS, which can be quite pricey. Some people use a platform called R, which is free. But it takes quite a bit of training to be able to do statistical analyses.

On the qualitative side, there are a number of platforms out there for coding and then analyzing, pulling together the coded data into themes. The one that I'm using right now is software called Dedoose. And that's spelled-- you know what, let me see if I can put that into the chat so you know how to spell it. It's an unusual spelling. So it's Dedoose, like that.

And what I like about Dedoose is you can put all of the qualitative information in. But if you have some indicators, for example, which county a bit of data came from, or whether you're talking about a program that has a lot of English language learners as opposed to folks with basic skills, whatever demographic information you have, you can put that in there and ask, are people responding with these qualitative data differently on the basis of the demographic information that we have about them? And it's really, really informative for getting at some of those nuances we were talking about.

What I also like about Dedoose is that it costs about $15 a month, and you only pay for the months that you use. So if you're just doing something quick and dirty and can go in, it will only cost the $15. And it's pretty simple and straightforward to use. Again, there are other programs out there. Atlas is a popular one.

There are multiple other ones. NVivo, I think, is another one that people tend to like to use. But those might be more on the pricey side and require some training in order to use.

So I try to keep things simple when processing data. But you can do a lot with Excel. I do all of my graphics in Excel. And you can do a lot with the straightforward programs, like Dedoose, as well.

You can also, I hope, tap into any evaluation resources that you might already have in your districts. I worked for a school district at one point that had an entire evaluation unit. And so, if it's not a huge data run, one of those folks might be willing to help you out so that you don't have to buy the software. But you can get the information that you're looking for.

And they may also be able to give you some free advice. So it could be through your school district. It could be through-- a community college probably has some type of evaluation group as well. It might be the folks that keep track of the institutional research and things like that. And maybe if you bring them some chocolates or offer to have lunch with them, they'll be willing to help you out.

I don't know if we have any other questions coming in. I didn't see anything else.

Mandilee Gonzales: No, I'm not seeing any other questions come in. So you can start the close out. Margaret, thank you so much. And thank you everyone today for participating in the webinar.

Please take a moment to give feedback to Margaret. The evaluation link was posted by my colleague Holly Clark. We will send a copy of the presentation to everybody who has registered. It was also posted in the chat box multiple times. If you didn't have an opportunity to download it, you can see that in your email coming soon. Otherwise, you'll also be able to access it on our Cal Adult Ed website.

And I think with that, we will go ahead and close out. Again, thank you all for joining us. And, Margaret, we look forward to the 29th for your Logic Modeling piece.

Margaret Patterson: All right, we'll see you then. Have a great weekend, everybody.

Mandilee Gonzales: Bye, thank you.