Veronica Parker: Dr. Patterson has 16 years of research administration experience at the state and national levels, ran a statewide research and data analysis consulting business for seven years, and presents extensively around the country. She partners with nonprofit organizations, post-secondary institutions, and state agencies, applying research to support adult educators and learners. She also has many years of experience as an adult educator and local and state administrator. Now, I will turn it over to Dr. Margaret Patterson, who will get us started with today's webinar.
Margaret Patterson: Great. Thank you, Veronica. And welcome, everybody. I'm really glad that you're here. This is the first of three webinars that I have the privilege to lead. In addition to what we're covering today, which is the Consortium Program Quality Self-Assessment, we'll also be looking at Program Evaluation 101, coming up on September 17 at noon Pacific time. And then on September 29 at noon, we'll be looking at Logic Modeling. Those are some pieces that we hope will support your strategic planning efforts, which I know you're all looking forward to doing as time goes along.
Let's get started here. As we get started today, I have a question for you. And I'll ask you to go ahead and put this in the chat if you would. What was the last item that you bought yourself that had excellent quality? We're thinking about quality today. Anything that you can think of that you bought yourself with excellent quality? Ah, a truck. Nice. Ah!
Great variety of answers here. I love it. Running shorts, OK. Those are important to have if you're running, definitely. OK, super. I'm seeing a lot of really comfortable items here. This is important stuff. Oh, glasses, yes. Definitely. And a lot of vehicles. OK, super. Great examples. And of course, those vehicles can be quite expensive, so you hope that they have fairly high quality. Absolutely. Well, good.
By way of overview today, here's what we'll cover this afternoon. We're going to review the components of the self-assessment tool-- the Consortium Program Quality Self-Assessment tool. And I call it the self-assessment tool because it is a mouthful. So I hope you don't mind that abbreviation. We'll review the components of the tool as a start in the evaluation process. And we'll talk about a straightforward way to actually use it in planning.
And then I will share two approaches with you, and some steps that your consortia can consider taking for planning purposes. We will also look at two other planning resources-- or planning assessments, excuse me-- as resources. And then I have some additional resources at the end that may help answer any questions that you have related to the strategic planning process. That's what we're planning to start with today-- this afternoon.
Let's look at the self-assessment as a tool to support the three-year planning process that you're about to engage in. First, it allows you and your consortium members the opportunity to start-- or perhaps continue-- important conversations about quality. Second, the self-assessment can guide you and your fellow consortium members as you develop strategies to enhance the quality that you already have. And third, it can also serve as what we could call a formative assessment-- in other words, a way to assess where your consortium is right now in five different areas, which we'll talk about in just a moment, and to help you collaboratively think ahead to program improvement.
Let's take a look at that first point, starting important conversations about quality. It allows you as consortium members the chance to address questions like, how well are we collaborating? Are we collaborating? But if we are, how well are we doing it? What is the quality of our impact on our communities? And what are some of the strengths and challenges of our consortium? You may be aware of some of these already, but perhaps some of them you haven't considered before.
Let's look-- in this slide and the next two slides, we're going to have some chat opportunities. And our first chat opportunity is around the third bullet. We saw there, what are the strengths and challenges? What I'd like you to think about and put into the chat, if you would, is what do you see as some of the strengths of your consortium? You may or may not have talked about this already. But just from what you do know, what are the strengths of your consortium?
Vision, OK. That's a good one. Some collaborations. Definitely. Support and transparency. Yes. Seeing a lot of things about relationship, working together. Yes, definitely. Strong leadership. Being integrated with workforce, with community colleges. I'm guessing that that's what that acronym is for. Absolutely, being able to work with others. Yes. I'm getting the sense, from reading these, that you have had some of these conversations, which is super.
Next, on the second point, which is developing strategies to enhance quality, the self-assessment can guide you and your fellow consortium members as you develop strategies to enhance the quality that you already have. One question that you might consider addressing is, which of the policies that we have need to be strengthened?
How can we enhance our procedures to meet student needs? Because I saw that that was a really big emphasis in what the California Adult Education Program has been putting out in terms of, how can we make things more equitable for students so that they reach us and participate? And what can we learn from examining our practices that will support their continuous improvement, meaning the continuous improvement of the practices?
Let's have another chat opportunity here. In what ways does your consortium emphasize continuous improvement of practice? We're thinking about continuous improvement here. OK, looking at fiscal and student data. Excellent. Thanks, Mitch. Right. Lots of data reviews, I'm seeing. OK. Leveraging resources. We're going to talk about that too. It's terrific that you all have had the opportunity to look at data so far, and to review it and figure it out and decide what to do with it next. That's great.
The third way that the tool can benefit your strategic planning process is in serving as a formative assessment as I mentioned earlier, and building towards program improvement. Again, we have some questions to guide your discussion. Where does the consortium stand with respect to program quality, is one question. In what ways can our stakeholders help us enhance program quality?
And when we think about the results that could improve consortium programs, have we thought about what those results might look like before we do the self-assessment? That might sound like an odd question. Why would you want to think about that beforehand? And the reason for doing so is, you might have in mind what your purpose is, what your process is, what your use of those results would be before you even start asking yourself all of those questions. It's just a little bit more explicit way of approaching that.
Thinking in advance of results-- this is another chat opportunity coming up here-- what are some ways that you might use the results of the self-assessment for strategic planning? If you go through this process and you look at the self-assessment, how do you think you can use it for strategic planning? I know these are tough questions, but you all are doing great. OK, definitely. Yes, identify gaps. Implement new courses. Yeah, focus. That's a great one. Yeah. Thinking about delivery methods. Right. Strengths and weaknesses are important. Absolutely on student persistence. That's a huge issue. Yes.
Enrollment and persistence. Because you have to get them in the door, and then you have to keep them, right? Exactly. Lots of great ideas in here. And one of the reasons that I ask all of these questions is because I know that you all have given a lot of these issues some thought. And it's really great to hear what everybody else is thinking about in their respective consortium. That's very helpful information to have. I'm really impressed with the number of answers that we're getting here, and the variety. It's a super amount of diversity, so thank you all for sharing those perspectives.
So far, what we've talked about with respect to the quality indicators has sounded a bit abstract. So let's get a little bit more concrete. The quality self-assessment has five key quality indicators. You can see them here: capacity, connection, entry, progress, and completion and transition. And they're rated on a scale from 1 to 5. It's a Likert-type scale, if you're familiar with that terminology. And it goes anywhere from a 1, which is a strong need for improvement, to a 5, meaning exceptionally proficient-- and not only that, it can serve as a model for other consortia. There's quite a nice range in there.
And you and your colleagues in the consortia can start to think, well, where do we fall in these five quality indicators? Are we at a 4 or a 5 in everything? Probably not. In a lot of things, probably-- that wouldn't surprise me. But are there some aspects, for example, of capacity where maybe we're at a 3, which is doing well, but needs a bit more work?
Do we have some indicators where, if we looked at the sub-indicators and we're honest here, we might rate ourselves as a 1 or even a 2? Those are the things that, if we do rate ourselves that way, we want to be sure that we're working on, because there are clearly some signs of challenge there. But it is worth being honest. And that can give you some guidance on what the priorities might be for the strategic plan.
Let's start with capacity. And 1.1 tells us that the consortium maintains effective collaborative processes for planning, implementation, and accountability. And the sub-indicators for that are consortium staffing, making progress toward goals, and processes and procedures for collaboration-- some of the things that you've just been talking about in the chat, all of that collaboration and how that works. Those of you that talked about that quite a bit, maybe you're going to end up being a 3 or a 4 or a 5 in there. I don't know what that will look like, but it's a possibility.
1.2 is, consortium agencies have the leadership, management, and accountability processes necessary to meet community needs for adult education. And the sub-indicators here have to do with knowledge, skills, and abilities-- so KSAs, if you're familiar with that acronym-- to meet the goals and objectives of the California Adult Education Program, resource allocations for agency leadership positions, and agency participation in consortium activities. We've got quite a bit going on there in number 1. And there's more to number 1, to capacity.
We also have 1.3, which involves the resources that promote adult learning-- very important-- and provide high levels of access. That's an important feature, because people that are in need of services need to be able to get to the services. That looks at things like agency staffing for collaboration and having enough classroom and learning space to do so. And then the final sub-indicator for capacity is professional development. And does that provide sufficient opportunities for faculty and staff to turn new knowledge into practice?
Those are the indicators for capacity. And I realize that's very broad. And typically, we're going to be rating ourselves on these sub-indicators-- so in this case, the four that we just talked about. But to try this out, let's look at it in the aggregate, so all of this capacity together. If you had to rate yourself, right now, in your consortium, how would you rate your consortium's capacity? We'll let you put that in the chat.
Would you give yourself a 1, a 2, a 3, a 4, a 5? And you can just go with the number. I see a 4 coming in. OK. Lots of 4s. No pressure here. I can't say that I'm surprised to see some of these numbers because of what you mentioned previously. OK, we have a 2 coming in. So has a few things happening, but could use some improvement there. Great.
Let's continue on to the second area, the second indicator, of connection. And this has two sub-indicators. In the first sub-indicator, the emphasis is on engaging communities of high need through student recruitment, outreach, and use of data to inform recruitment. And I think some of you mentioned that earlier in the chat as we were talking. And then in 2.2, we're looking at a "no wrong door" approach to getting adult learners involved.
And that's measured through things like counselor knowledge of consortium programs and services, counselors referring outside the home campus, wherever that is, agencies having what they're calling program maps here, with aligned opportunities, and the curricula reflecting the opportunity to transition in some way. Those are pretty important ones. This particular indicator only has two sub-indicators, so we'll go ahead and move along.
Number 3 is entry. This indicator has four sub-indicators to begin with. Once the students are recruited, how are they oriented to the program? Which measures are used to inform placement, planning further educational and career needs-- that's super important these days, to plan ahead-- and then continually improving the process? Additional sub-indicators for entry look at individualized educational and support plans and how counseling and wraparound supports are provided. This is not a new emphasis in adult education, but it's definitely one that we're hearing quite a bit about right now-- those wraparound supports and planning ahead for what students need as individuals.
We have a poll opportunity here, and that is to look at indicator 3.4. Yes, here it comes. On a scale from 1 to 5, where 5 is that exceptionally proficient, we're a model for others, and 1 is strong need for improvement, how do you rate yourself on counseling and wraparound supports? Oh, here come the numbers. Quite a bit of 2s and 3s. OK. Some more 3s. We're getting the standard bell-shaped curve here that's coming in. We'll allow another moment or so for people who are still thinking about it.
It looks like we actually have quite a few 3s here. That's the most frequent one. But actually, we're all over. That's good. And something that we haven't talked about yet, but that occurs to me as I see these results here, is that, well, maybe the ones that are exceptionally proficient in this would be willing to share some of their ideas or their approaches to these wraparound supports. That could turn out to be really, really important. Yeah, it looks like about half of you are somewhere right in the middle, and most of the rest are either above that or below that. Oh, OK. It looks like the results might not be showing up. I'm seeing them on my screen, but--
Veronica Parker: They should appear now.
Margaret Patterson: OK, super. OK, great. I'm glad that you can see them. Good. I'll give everybody a moment to look at that. But, yeah, really, this is a very nice distribution for the number of people that we have. This is exactly what I would expect. Let's go ahead and-- well go ahead and close this for now, and we'll move on.
Veronica Parker: It's closed, Margaret. You'll have to minimize it on your screen.
Margaret Patterson: OK, thanks. Great. The fourth quality indicator, progress, looks at alignment and articulation of programs, so how agencies are offering IET programming and how agencies follow up to determine student support needs. And then our final quality indicator, which is completion and transition, emphasizes completing the program, which is super important, and then transitioning to post-secondary and workforce opportunities. 5.1 looks at the effectiveness of those transitions. 5.2 focuses on partnerships that help make transitions happen. And 5.3 focuses on continuous improvement planning and how program effectiveness is measured and evaluated.
Let's take a closer look, in the chat, at 5.3. Think about program effectiveness. In what ways does your consortium work together to evaluate program effectiveness, if you're doing this at all? And I'm sure that some of you are. How is your consortium evaluating program effectiveness? OK, through outcome data-- so performance measurement. That's definitely one way to do it. Comparing to statewide averages.
Student feedback. Oh, that's interesting. I'm curious if that's, like, an evaluation form that you send out to students, or if you actually sit in groups and talk about it, or how you approach that. An exit survey? OK, thank you, Kenneth. That's helpful to know. So Wayne is looking at transitions. We've got some good measures here. There might be more that we can do. And we'll continue to talk about that-- looking at student satisfaction. Absolutely. Those are all important indicators or measures of how things are going.
So far, we have talked about what's in the self-assessment tool. Next, we can look at how the consortium can actually use the tool. And I see two potential approaches, depending on how your consortium structures itself and how used to collaboration your consortium is. And I know in the beginning, a lot of you did mention collaboration, so one of these may work better for you than the other-- or perhaps a hybrid, even-- whatever you think works the best. But the goal here is to identify and focus first on those 16 sub-indicators that we just went through, but particularly, the ones that you rated a 1 or a 2. And then after that, to consider the 3s.
The way the more decentralized approach would work is that people, as individual members of the consortium, could go through these five program quality indicators and then rate each of the sub-indicators, and then turn in the ratings. At that point, somebody in the consortium would take the responsibility to compile and summarize those ratings. The consortium members would get together and meet to review the ratings.
This could very easily be done-- depending on how large the consortium is-- I would think, virtually. But it might take as much as an hour per indicator to go through and try to reach some type of agreement. And then the agencies in the consortium could each make recommendations on one or more related sub-indicators, and then the next steps that the consortium can consider for strategic planning. That's one approach. That's the more decentralized approach.
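To make that compile-and-summarize step a little more concrete, here is a minimal, hypothetical sketch in Python. The agency names, sub-indicator labels, and ratings below are invented purely for illustration-- they are not part of the self-assessment tool or anyone's actual results. The idea is simply to average each sub-indicator's ratings across the member agencies and flag the low ones for discussion at the review meeting.

```python
# Hypothetical example: roll up member agencies' 1-to-5 ratings for a set of
# sub-indicators before the consortium review meeting. All values are made up.
member_ratings = {
    "Agency A": {"1.1": 4, "1.2": 3, "1.3": 2, "1.4": 4},
    "Agency B": {"1.1": 5, "1.2": 3, "1.3": 3, "1.4": 3},
    "Agency C": {"1.1": 3, "1.2": 2, "1.3": 2, "1.4": 4},
}

sub_indicators = sorted(next(iter(member_ratings.values())))
for sub in sub_indicators:
    scores = [ratings[sub] for ratings in member_ratings.values()]
    average = sum(scores) / len(scores)
    # Ratings of 1 or 2 signal a strong need for improvement, so averages
    # below 3 are flagged as likely priorities for the strategic plan.
    flag = "  <-- discuss first" if average < 3 else ""
    print(f"Sub-indicator {sub}: {scores}, average {average:.1f}{flag}")
```

However the consortium chooses to do this-- a spreadsheet works just as well-- the point is that one person compiles the individual ratings into a single summary before the members meet to discuss them.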
A more centralized approach-- and this is working within the consortium itself, with people working more collaboratively-- would be for the members to meet to rate the indicators and the sub-indicators. This is probably going to take an hour or two for each indicator-- to get through all of that and reach some type of agreement on the 16 sub-indicators. At that point, the consortium members would collectively make recommendations on how the 16 turned out. And then cross-agency teams could be selected to determine, what are the priorities and how do we carry out the next steps? So it's a little bit different of approach. And again, you might see different ways of making a hybrid out of that if that's the case.
As a chat opportunity, if you will, I'd like to ask, do you think your consortium would lean more toward a centralized or a decentralized approach to the self-assessment process that we've been talking about? OK, I'm seeing one vote for decentralized. A combination, possibly. And, yeah, it might very well make sense for everybody to do their individual ratings and then say, well, how can we pull this all together? OK, good.
It's nice to have options and to look at it. And some of you want to think about it a little bit more, and that's great. That makes total sense. And it is worth discussing as you go through the process. I realize that this is not easy. My hope is to be able to simplify things to the extent that we can so that your process goes as smoothly as it can.
In continuing to talk about using the tool, regardless of which approach you take, you're going to want to figure out some next steps. And that might involve-- it definitely will involve designating roles, determining responsibilities, and deciding on outcomes. And I'd like to make that as concrete as we can by looking at an example. And we're going to follow that through on the next couple of slides here. Our example in this case is to go back to indicator 3, which as you recall, is entry. And we're looking specifically at sub-indicators 3.1, 3.2, and 3.4.
And if you recall, 3.1 is about orientation. And it's looking specifically at whether the orientation is culturally responsive and promotes self-efficacy and confidence. It's getting at that equity idea that we mentioned a little bit earlier. 3.2 looks at using multiple measures to inform placement, education and career planning, instruction, and continuous improvement. And then 3.4-- just turning my page here. I'm looking, by the way, at the quality indicators themselves, which I believe you all should have a copy of already. 3.4 is, consortium members collaborate in providing proactive counseling and support services to promote persistence and long-term student success.
Imagine, if you will, that in your case, the self-assessment indicates that assessment and placement is going very well. No problem there. But there are some challenges occurring around partnerships, and particularly around high-quality counseling, referrals to partners, and having some type of an alert system if there's a situation where the student is struggling. Then, we might want to think, all right, well, what can the consortium do to analyze and fill gaps in this circumstance?
In terms of designating roles, we might decide that the consortium thinks it's a good idea to collect some stakeholder data. And that might involve having some meetings with students in each agency, or perhaps taking an informal poll of partners on something like Google Forms. The role that we're designating here is, who is best equipped to collect these data?
Next in determining responsibilities, we're going to want to know, all right, we collected the data. Who's going to summarize it? Who has that responsibility? And not only who's collecting it, who's summarizing it-- who's going to make some of these decisions? And then deciding on the outcomes, which recommendations that we get out of this with respect to counseling and partners are going to be the highest priority? Those are some examples for this particular area.
Let's get even a little bit more specific than that in terms of designating roles. Who will lead? The leader, in this case, might be somewhat of an orchestra conductor, if you will, making sure that all of the instruments are well tuned and prepared to make the best sounds possible through the conclusion. Existing data might include the number of referrals to partners, or the number of meetings that occur with counselors.
Stakeholder data might involve meeting with students in each agency, or that informal poll that we talked about that might need to go to partners, counselors, and students-- perhaps on Google Forms. The person tracking time is going to be following up with those gathering the data within a certain number of weeks, because it's going to be a finite process, so that the information is ready for the next consortium meeting, whenever that occurs. And the gathered information might be summarized by one individual, or considered as a group and then written up in meeting notes.
The cross-consortium team or agency, depending on which approach you decide to take, might then realize, oh, there's a priority here-- from the gathered information. And that's ensuring, through follow-up, that students actually get the partner services that they need to stay in the program and complete it. And the team or the agency makes specific recommendations that can be considered for the upcoming strategic plan. That's an example that you might follow through with on this process.
Some specifics for determining responsibility include, when will the agency or the team-- however you approach it-- when will they meet? How often and how long? That's something you might want to decide about ahead of time. What are the project management needs? You probably need a timeline of some type. There's actually a timeline that's already suggested in the guidance document for strategic planning. That might be helpful. You don't have to start from scratch. What are the processes that need to occur? And who's going to follow up, and when will that happen?
And then, of course, as we've already referred to earlier, how will data be analyzed-- the existing data that we have? And if we need to collect any new input, how will that input be collected? We want to make sure that collecting those data can occur in a simple, straightforward manner and that we can analyze them quickly so that we can inform the decisions. Collecting data, and all of the pieces associated with this evaluation process, does not have to be cumbersome. It can be, but it does not have to be if you follow some simple techniques. And we will definitely be talking about those in the program evaluation webinar.
And then deciding on outcomes. A few other considerations occur with respect to outcomes. Perhaps the consortium will decide to share the results through a slide deck of some type or an executive summary. Those are considerations to keep in mind. With respect to determining priorities, in our example, the consortium members involved would need to decide, well, which of those recommendations on counseling and partners are highest priority? Because we can't do them all. So we have to prioritize which ones can actually happen.
At that point, they can make recommendations to the full consortium to consider for entry into the strategic plan.

We have another poll opportunity here. We have a scale of 1 to 5, but this time, 5 is very easy and 1 is very difficult. How easy has it been so far for your consortium to settle on priorities in its work? Quite a range here-- everything from really hard to really easy. And again, not surprising. Prioritizing is not easy for any of us, really, except for the five of you that are saying it is. We still have a few people making up their mind. That's fine. Give it another minute or so. Is everybody able to see this poll so far, at this point? I know the last one was a little bit tough. No, you're not seeing it. OK. What I'm seeing is, we have about half of people at a 3 again, and then about 25% at a 4, and then the rest are either very easy or more towards the difficult and very difficult range. Oh, you can see it now? OK, thanks, Laura. I appreciate it. Everybody can see that. Good. Thank you all for participating in that.
I want to take just a little bit of a pause here before we go on to look at some other resources. And I know we've covered a lot of ground pretty quickly in about 45 minutes, but I just want to pause and see if there are questions that have come up with respect to the indicators or this process that I've outlined here that anybody would like to ask. OK, Crystal had a question. Crystal? I probably have lost track of your question, so let me see if I can--
Veronica Parker: The question is, using this tool, how would you explain rating your own agency versus rating the consortium?
Margaret Patterson: Oh, OK. That's a great question, because it could be very, very different. You as, perhaps, an administrator, or a person working very closely with administrators in your agency, probably have a pretty good handle on how things are. And that's one of the reasons why the collaborative approach might be beneficial. Because you know how you would rate it for your agency, but if you don't have all of the information collected in advance, it might be difficult for you to rate collaboratively otherwise.
I think that's an indicator right there that we need to have that information in advance with respect to the questions. I think there's another question. OK, Mitch is saying that, after agency ratings, they come to a consensus. All right. So yeah. And I think it's very possible-- in fact, it's probably always going to happen this way-- that there are some agencies that are way up on the high end of the scale.
They're in there at a 4 or a 5-- and then there are other agencies within the same consortium that maybe are struggling with that. Then it becomes a matter of, how can we bring everybody along so that the one agency, or two agencies-- however it works out in the consortium-- are not left high and dry, so to speak, in the process. That consensus process might be helpful. Other questions? And I was trying to monitor the chat as we went along, but most of this had to do with our chat opportunities.
Veronica Parker: Yeah, there were no other questions.
Margaret Patterson: OK, great. Well, we will have some time as we go along, at the end, to answer additional questions if they come up. OK, super. Let's go ahead and move on. In the overview, when we started, I promised to share some other resources with you that might help in the planning process. And we have two other assessments that might be useful. And these are the Western Association of Schools and Colleges' Adult School Procedures from earlier this year. And there is a link here to finding those Adult School Procedures. And-- oh, Veronica actually has posted that. OK, great. Thank you.
You can check out that link once you have the PowerPoint, and it'll take you right there. I think I've set it up-- because I had to hunt for it quite a bit, to be honest with you, to find it. But its purpose is to assess progress and evaluate, and to communicate value and efforts to improve. It's pretty closely aligned with what we've been talking about in this self-assessment, the Program Quality Self-Assessment. And this document includes 10 criteria. And they each, of course, have multiple indicators. So it gets a little bit complex. But I will at least show you the criteria.
This is what they cover. Really, it's everything from looking at the mission of the school and learner outcomes, which is very, very similar to what we've just been talking about in the Consortium Program Quality Assessment. But then it also has indicators related to governance, faculty and staff, curriculum, instruction, and so forth, all the way to developing an action plan for continuous improvement. There's quite a bit of opportunity here to take a look at this, if you're interested. It might be informative. I'm not saying it's a requirement. It's just some extra information that's helpful. Oh, OK, so the link in the PowerPoint isn't working.
Veronica Parker: I was just typing in the chat, Margaret, and to all attendees, you have to be in presenter view for the link to work. I just tested it on my device, and it's working. If you go into presenter view, it will work.
Margaret Patterson: Super. That's always helpful to know. As long as we can get along with the technology, we're OK. Another resource for planning is the Continuous Improvement Plan from the California Adult Education Program, the CIP. It's a document that just came out earlier this year, too. These are very, very recent resources, which-- I was very impressed that they exist, to be honest. It's great stuff here. And you can find the Word document for the CIP on the OTAN OAR site. And there's also a video. And I have a link for this-- when you're in presenter mode and can access it-- so that you can actually listen to this video and have that available to you.
The goal, in the CIP, is to consider current relevant data that you have to write up some specific performance goals, and then identify strategies for achieving continuous improvement and learner success. Very much in line with what the Consortium Program Quality Self-Assessment is offering, but this is a little bit more focused on the continuous improvement plan. And those of you that just went through that process-- I understand it was earlier this spring, or maybe late spring-- that might be fairly fresh in your memory, and there might be some information there that's helpful. We're not trying to add to the workload here for self-assessment. It's rather to look at components that are already there that might be informative.
If you look at the data in the appendix of the CIP, you'll find lots of possibilities. Things like a Data Integrity Report, and the NRS tables that we're also familiar with, tables 4 and 4B, that give us the information on learners who have enrolled and how they're doing, what their outcomes are. There's evidently a Persister Report. I'm not familiar with that, but it sounds like it might be specific to California.
It's a great name, though. I like the sound of it. Program implementation survey results through WIOA. All of the information that you get from follow-up surveys with respect to employment, and earnings, and so forth. A lot of good ideas in there to take a look at in terms of data that might be informative to the self-assessment process and strategic planning.
What we have covered so far today is, we've reviewed the components of the self-assessment tool as a start of the evaluation process, and we've talked about a straightforward way to use it in planning. I shared two approaches-- it looks like most folks are looking more at the decentralized approach, or perhaps a hybrid-- and some steps that your consortia can take for planning purposes. And we've looked at two other planning assessments as resources. What I'd like to do now is to look at some additional resources that might be helpful to you in this process.
And we might be able to put some of these in the chat so that, if you would like to go and take a look as we're wrapping up, we can go ahead and do that. The first one is the set of California Adult Education Planning tools. And you can see the link for that there. It's actually a whole menu of planning tools, so you'll have lots of options. I see that Mandalee is going ahead and putting those links in the chat. If you want to go ahead and click on those, that will be fine.
The next one is-- actually, the next two are links to files on cultivating a planning mindset, so some things that you can think about, somewhat related to what we've done today. What are the processes that we need to go through to think about planning? And human-centered design. That's another file that I found was very well put together when I took a look at it. So I thought that might be helpful-- because we are talking about improving things for these adult learners that we're serving. We want to focus on the needs of the learners, and how do we get there? How do we meet those needs? Those, I think, will be informative.
In addition to that, we have a link to the Council on Occupational Education Self-Study Manual. That might be a valuable resource to consider as you go along. And then going back to the California Adult Education Program website, there are also research and practice resources. There's a link for that. And specifically, on that same page, you'll see a link to the program evaluation resources that Hanover Research has pulled together. Those might be useful as you think ahead.
Yeah, I'm glad to hear that, Lydia. I'm glad that you think that those are great resources. Because if you don't have to think through all of these things yourself, if there are some materials that are available to you, it just saves a lot of time and a lot of thought process. I hope that those resources will be valuable to everybody. And I encourage you to go ahead and take a look at them in your consortia. As you decide who's doing what, don't throw it all on one person. Spread out the wealth. And give everybody a chance to participate and help out.
I want to thank everyone for participating today. I'm happy to take additional questions if anybody has any questions. I realize we've had a whole lot of information thrown at us, and it might be something that you want to take back and process with your agencies, or even in your consortium meetings. I know you have until June. Those months will go flying by, as they always do, particularly with the start of a new school year happening here. But I hope that these thoughts are valuable to you in some way.
I encourage you to plan to join us for Evaluation 101 coming up and the logic models webinar. They'll both be later in September, and I look forward to meeting with you all. But I'm happy to stay on and answer any questions or kick around ideas if you'd like to, since we have time. Oh, and thank you for posting the information to register for the next sessions. We will continue to dive into the specifics.
Yes, that's a good question. The Evaluation 101-- program evaluation-- I'm getting pretty specific. I'm just finishing up the development of that particular webinar. But yes, it's very closely linked to the strategic planning process. But at the same time, it talks about program evaluation in general, what the processes are, how systematic it needs to be. And we're going through lots of specific examples in that one. And then the logic models one, it's not required to have a logic model for strategic planning for this particular purpose.
But I hope I can make the case that it helps. Because if you have a logic model, if you've taken the time to think through the components of what you want to accomplish, the activities and the outcomes-- the outputs and the outcomes, and what that all looks like-- it will make asking the evaluation questions and addressing the evaluation questions that much easier. A lot of this is being able to define things, measure things, and then go ahead and get the data, whether you have the data already or you need to collect it. We'll take a look at all of those and go through that.
The question is, is it possible to open the program evaluation document and walk through one section? I'm not sure which program evaluation document you're referring to. Are you referring to the one from Hanover Research? Because we-- I mean, we would have time to take a look at that if there's interest. Oh, the quality self-assessment. OK. Yes. I don't know if you-- let me stop sharing. You've got my contact information here, so if other questions come up-- let me stop sharing my screen. And let me see if I can open up the quality self-assessment and see if we can come up with an example, Janae, that you're particularly interested in hearing more about.
Let's do that. And let me find the-- just one second. Yes. Here we go. I've found it at this point. And let me go ahead and share my screen again. There we go. All right. Now, hopefully, you-- can you see that? I have to turn my chat back on. OK, yes. OK, super. Here it is.
And if you're not familiar with it-- some of you may have been through a three-year strategic planning process earlier. And if you're not familiar with it, this is a good place to look for it. If you can see the URL at the top, it's caladulted.org/DownloadFile/656. And this is it. It's 14 pages. There's quite a lot of detail here, which we did not go into. But you can see the five areas, and it talks about the ratings. It gives you an overview of the 1s through 5s that we talked about.
Here, for example, is the indicator of capacity that we started with. And we talked about number 1.1. But you can see that, within 1.1, which is a sub-indicator, there are specifics to walk you through that you can decide, well, where do we actually fall in each of these sub-indicators? So it's possible that, for example, in 1.1.1, which is talking about consortium management and coordination, that maybe nobody's in charge of that, or perhaps somebody is doing it on a part-time basis, which would be a 3.
Or if you're looking at a 5, then you've got one or more staff who are charged with that. I'm guessing, from the way that this is written, that that's probably a full-time role of some type, but maybe that's not the case. But there's definitely a person who is dedicated to managing and coordinating. These sub-indicators actually give you a lot of play, if you will, to determine, where do we fall as a consortium? And they're specific to the consortium rather than to an individual agency. And that's important, that you're able to do that.
But as you go along here, you'll see that each of these sub-indicators-- here's 1.2 on leadership management and accountability processes. Are those in place? And what does that look like? For example, in 1.2.2, we have resource allocations. For these agency leadership positions that are out there, maybe the resources are not commensurate with what the community needs and the size of the program. In other words, they're doing the best they can with very few resources. That's a 1.
Then we can look all the way at the other end, where we have resource allocations for agency leadership positions that already are commensurate with what the community needs and the size of the program. That's a principal, a dean, whoever it is. And then everything in between. That may not be the clearest example of this. But it is an example of what the rankings indicate. And then you, as a consortium, can figure out, all right, how are we going to reach that consensus? Or are we going to do this all together and say, as a consortium, we see ourselves as a 3 in this one.
And then how does that roll up to 1.2, to 1.1, for example, in terms of capacity? And ultimately, you want one for each-- wherever you see these gray-blue bars, you want a rating for that. And the rating will come from looking at these descriptors underneath each sub-indicator. Does that answer your question? OK, great. These are well worth looking through. And it goes all the way through to the fifth indicator. I'll try not to make everybody dizzy here, but you can see we have 2 and 3, going on to 4, and then 5 is the last one, and it stops there.
Beyond the introduction, the whole bulk of this document is just the indicators and the ratings and how they play out. I think it gives some very good information as far as how you can assess what's going on. Yes. Oh, that's excellent. I'm so glad to hear that, that they were designed by a team. I could tell that not one person sat down and wrote all of these, because that would be really tough, but it's great that, in the process of doing that, there was lots of input occurring. And depending on how you see these as a consortium, you may already have the data to answer the question just like that.
And then in other cases, you may say, oh, we really don't know. We need to ask some questions and get some more information. And that's what we'll be talking about more in the Program Evaluation 101 piece as we go along. The question coming up here-- would you suggest taking an average of each sub-sub-indicator? Both of those approaches could be valid. I think that's really the consortium's call to be able to say, because ultimately, you're going to end up prioritizing.
If you say, well, we've got some 1s here and some 4s there, do we want to take an average of those? Or do we want to say, we as a consortium think that this particular sub-sub-indicator is so important that we want to give it extra weight? That would be a very valid approach, particularly if it's something that meets the emphases that the California Adult Education Program has determined are important moving forward.
So yeah, I think either approach would be valid. But you, as a consortium, I think, want to agree on whatever approach you take and stick with it. And that way, you'll be able to say, yep, we did it systematically, we know what we got, and we did it right from the way we defined what "right" is. These are great questions. Let me see if I can answer-- OK.
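As a quick illustration of the two approaches just described-- a plain average versus giving an important sub-sub-indicator extra weight-- here is a minimal sketch. The ratings and weights are made-up numbers, not anything taken from the tool; the point is only to show how the two methods can produce different rollups, which is why the consortium should agree on one method and apply it consistently.

```python
# Hypothetical 1-to-5 ratings for the sub-sub-indicators under 1.1.
ratings = {"1.1.1": 2, "1.1.2": 4, "1.1.3": 4}
# Invented weights: the consortium has decided 1.1.1 matters most.
weights = {"1.1.1": 2.0, "1.1.2": 1.0, "1.1.3": 1.0}

simple_average = sum(ratings.values()) / len(ratings)
weighted_average = sum(ratings[k] * weights[k] for k in ratings) / sum(weights.values())

print(f"Simple average for 1.1:   {simple_average:.2f}")   # 3.33
print(f"Weighted average for 1.1: {weighted_average:.2f}")  # 3.00
# The extra weight on the low-rated 1.1.1 pulls the rollup down,
# signaling a likely priority area for the strategic plan.
```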
Referring to the question of how to rate your own agency versus your consortium, maybe ask the agency lead to fill out two assessments-- oh, OK. Two assessments-- so one for themselves and one for the consortium as a whole. Yeah. And they might be the same or they might be very, very different. And then as Mitch had mentioned earlier, how do we reach consensus-- if that's a decision we want to make-- how do we reach agreement, at least, as to which of these is important and which we want to follow? Absolutely.
I think as long as you, as a consortium, can decide on the process and say, this is what we think is most important to know-- because maybe you already know that there are some agencies that are struggling in certain areas and some that are way far ahead. Maybe there's a way to reconcile some of that through this process. Maybe there isn't. Maybe you decide that we can't figure it out. So let's make that important to the strategic planning process. Let's figure out how we're going to figure it out. And leave it at that. Yeah.
Emma is saying that they use an independent facilitator to reach a group consensus. OK. Yeah. Not everybody has resources to go outside. And one of the reasons that I will be talking about program evaluation so much next time is that you don't necessarily have to go out and hire an evaluator-- a third-party evaluator-- to get this done. You may have some expertise within your consortium.
Perhaps in one of the adult schools, there's somebody in the district who has an evaluation background and can answer some of your questions. Or maybe there's an administrator who has a lot of experience with evaluation and can answer some of those questions as well. Rely on your own resources. And you may have to figure out what that looks like if you don't have the funds on hand to do that.
One of the things that we'll also be talking about in program evaluation is the possibility of funding to do whatever it is that you ultimately recommend for strategic planning-- there are some options out there that you could take a look at. We'll be talking about those. And I'm going to try to be as specific as I can so that you-- right, you're right. Absolutely, Emma, an evaluator is different than a facilitator. But I guess what I was trying to communicate with that is, sometimes, it makes sense to go outside the consortium if you have the resources to do that. But in some cases, you have the in-house person who can handle that as well.
One of the nice things about using a facilitator, if it's possible, is that that person isn't intimately involved, necessarily, in what's going on in the consortium. And so they can guide everybody and not be focused on, how is this going to affect my agency and my programming? That is definitely an advantage of having that outside facilitator. OK. Well, that's good. That's good, if it's not as expensive as [inaudible]. Yeah. If it's possible and-- there are lots of different ways to get help when you need it. Absolutely.
These are all great questions. I hope that you will take the time to look at these materials. I know that it's going to take some reading. I wouldn't recommend it as nighttime reading, but if you have some time, say, early in the morning, when you're waking up and fresh, you can take a look at some of these things. And if each person takes a piece of it and then you bring it all together, I think you'll find that some of these resources can be really valuable to the process.
We have an evaluation for today. I would really appreciate it if you take a moment and fill that out. And if you have any other questions, my contact information is at the end of the slides, as you saw. Feel free to reach out, and I'll try to be as responsive by email as I can. But I highly encourage you to get involved in the upcoming webinars. And we'll continue to think this through together. Thank you, everyone, for your time and for pitching in today. It's always great to learn from each other. I appreciate that. All right.
Veronica Parker: Thank you so much, Dr. Patterson. We really appreciate this first part to your three-part series. I did add in the chat earlier the next two events with her that are going to be coming up on September the 17th for the CAEP Program Evaluation 101, and then the Logic Modeling on September 27. My colleague, Holly Clark, has also posted an evaluation so we can give Dr. Patterson our feedback. And this also helps us get a really good sense of how we best can support the field. And we do really appreciate everyone's participation, engagement, and questions. And I think that will conclude our webinar for the day. Thank you, again, Margaret, and everyone for joining in.
Margaret Patterson: Thank you. It's great talking with you all.
Veronica Parker: All right. OK. And I'll go ahead and end it now. Thank you.
Margaret Patterson: Sure.