As I stated before, today's webinar is on logic modeling. It's a webinar to support the three-year planning process, and our moderator today is Greg Hill of WestEd. The webinar will be recorded and made available on the California adult education website. I'll be sure to post the URL where you can find the webinar recording, as well as the PowerPoint presentation.

The PowerPoint is available for download at this time. It's located in the resources pod. So, if you look down to the bottom left-hand side of your screen, you'll see a file titled AEP logic modeling. Just select that file, select download, and it will save to wherever your browser downloads files. Please be sure to use that as a guide. You can also jot down questions as you go and ask them of Greg when the time presents itself.

Greg will be answering questions throughout today's webinar, so please type your questions into the chat pod. As the webinar goes on, he will be answering them. If he does not get to your question right away, please be patient; he'll get to it as soon as he can. In addition, there will be time at the end of today's webinar to address any outstanding questions you may have. So, all questions will be answered.

We do take attendance throughout our webinar. So, if you are in the Adobe Connect room and you're experiencing today's webinar with colleagues but only signed in once, please be sure to let us know which colleagues are participating with you, so that we can account for everyone in our attendance. We have to report our attendance to the AEP office, so we want to make sure that we report accurate attendance and that everyone receives the credit they deserve for attending today's webinar.

In addition, if you've logged in and you've used an initial, a consortium name, or an acronym, please also be sure to let us know who you are so we can account for you in our attendance records. I'm checking the chat pod, and I do not see anyone with any technical issues. However, if you do experience technical issues during the webinar, your audio goes out, your screen goes out, or something like that, let us know via the chat pod so that we can address it via private chat.

So, at the bottom of your screen, you'll see a yellow highlighted tab. That's my way of communicating with you and trying to fix whatever issue you may have. So, please be sure to let us know if any issues arise throughout the webinar. Now I will turn it over to Greg, who will get us started with today's webinar. Greg?

Thank you, Veronica. Welcome, everyone. I'm really glad to see so many folks on the webinar today. You'll have to forgive me; as you can probably tell, I'm getting over a little bit of a bug. So if there are points where I sound a little muddle-headed, it's largely because of cold and flu medicine, and please forgive me in advance. If you find that I start talking really fast, please don't hesitate to send a chat to ask me to slow down. Who knows, I might get a burst of energy and then feel terrible, which is usually how this little bug has been.

So, anyway, moving on. Today's webinar is on logic modeling, and more accurately, it's really an introduction to logic modeling within the context of the AEP three-year plan. Right off the bat, I would say that there is great value in learning to develop logic models, and, of course, in developing them in conjunction with partners and stakeholders. I'd also like to preemptively say that if, by the end of this webinar, you feel like it would be valuable to have a facilitated conversation, or a facilitated logic modeling workshop, or something along those lines, please do reach out to TAP and let them know.

In my experience, I've found them to be really productive. They go a long way toward firming up the shared vision of a consortium, and toward clarifying the underlying assumptions and expectations about how it is that a given approach might yield the results you anticipate. OK, moving on.

OK, what is a logic model? For those of you unfamiliar with what a logic model is, it is essentially a systematic and visual way to present and share your understanding of the relationships among the resources you have to operate your program, the activities you plan, and the changes or results you hope to achieve. That's from the Kellogg guide for developing logic models. But simply put, it is a visual expression of how you believe your program is going to work and the results you expect to achieve as a consequence.

If you were to Google logic model examples, like I did here, you will find lots and lots of them. Some of them are highly complex with lots of arrows; some of them are not. There is no single right way to do it, or rather, there are several right ways to develop a logic model. We've chosen this particular model, the one that's in the template, for strategic reasons: we believe it is the best expression of a logic model structure that adheres to a causal chain. In other words, it's a lot easier to actually fill out than some of the other ones.

But, as I was saying, there are lots of complex examples. In the end, though, they all really boil down to a chain of inputs, outputs, and outcomes. Inputs represent, more or less, what you will do or provide, right? That is also inclusive of all the resources you need to do those things, because the model assumes you have access to the resources you need in order to implement your program, project, or initiative, and so on. Outputs are what you expect to see directly as a result. These are the tangible results; products is a good word for them.

They tend to be more or less immediate, and much more measurable than, say, an outcome, which is the higher order change you would expect to follow logically from the fruits of your labor. Typically an outcome concerns changes in a community over the long term. Most often, for larger initiatives, you wouldn't see those changes in under five years, though it depends on the scale of the project.

Additionally, and this is in the logic model template for the three-year plan, it's often valuable to provide additional contextual information. For the AEP logic model, we've included assumptions and external factors, and we'll go into definitions in just a moment. There are other bits of contextual information that some logic models will also incorporate, and as you are developing yours, feel free to experiment with those. Some of them are really presuppositions.

So, for example, you will commonly find a situation, which would be the expression of the need or problem that your logic model and program are looking to address. There are a number of these, but the most common you'll find are assumptions and external factors. Before getting into those definitions, let's step back to that broad-strokes notion of what a logic model is. It can often be helpful to think about a logic model, or at least its components, in terms of a series of if/then statements.

So, basically, start with inputs: if we have the resources and conduct these activities, do these things, right? Then that would yield these direct results or products, and those would be your outputs. Likewise, if your activities yield the outputs you expect, in other words, if all of that works to plan, then we would anticipate that participants and our communities would benefit and change in X, Y, Z ways. Again, these would be much less tangible.
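To make that if/then chain concrete, here is a minimal sketch, in Python, of a logic model represented as a plain data structure. This is purely illustrative and is not part of the AEP template; the class name, the field names, and the little rendering helper are my own hypothetical choices.

```python
# Illustrative sketch only: a logic model as a simple data structure,
# with a helper that renders the causal chain as if/then statements.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LogicModel:
    inputs: List[str] = field(default_factory=list)                  # resources you have or need
    activities: List[str] = field(default_factory=list)              # what you plan to do
    outputs: List[str] = field(default_factory=list)                 # direct, tangible products
    short_term_outcomes: List[str] = field(default_factory=list)     # changes in awareness, attitudes
    intermediate_outcomes: List[str] = field(default_factory=list)   # changes in behavior, policy
    long_term_outcomes: List[str] = field(default_factory=list)      # lasting community-level change
    assumptions: List[str] = field(default_factory=list)
    external_factors: List[str] = field(default_factory=list)

    def as_if_then(self) -> str:
        """Render the causal chain as the if/then statements described above."""
        return (
            f"IF we have {', '.join(self.inputs)} and we {', '.join(self.activities)}, "
            f"THEN we expect {', '.join(self.outputs)}; "
            f"IF those outputs occur, THEN we expect {', '.join(self.short_term_outcomes)}, "
            f"leading to {', '.join(self.intermediate_outcomes)} and, over time, "
            f"{', '.join(self.long_term_outcomes)}."
        )


if __name__ == "__main__":
    # A hypothetical example, not drawn from any real consortium plan.
    model = LogicModel(
        inputs=["instructors", "classroom space"],
        activities=["offer evening ESL classes"],
        outputs=["120 students enrolled"],
        short_term_outcomes=["increased learner confidence"],
        intermediate_outcomes=["learners advance at least one level"],
        long_term_outcomes=["higher regional employment among English learners"],
    )
    print(model.as_if_then())
```

Again, this is just one way to picture the chain; the template itself is simply a table with those same columns.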

Just a quick look at the logic model template. If you haven't downloaded this, FYI, it is located in the AEP template, the big one. It's really just one document, and there are instructions on how to unlock it. Feel free to extract it and put it into Excel, or whatever format you think will make the most sense. Oh, hi, Eileen. Thank you, by the way, for your good wishes and kind words. Anyway, please do download that if you haven't already, and familiarize yourself with it.

So, the components in the AEP logic model are these: inputs, activities, outputs, and then outcomes, which we've broken out into three sections: short term, intermediate, and long term. It looks like, oh, Mila, am I coming through OK now? If not, please do say so. Excellent, thank you, Veronica. Inputs, as I was saying, are the resources you have or need in order to do your work. So this would be students, faculty, facilities, additional funding, computers, textbooks, curriculum: all the stuff that you either have or, again, need, and in many cases perhaps even take for granted that you might need.

And activities are what you plan to do to bring about the change, right? Using all the resources you have. Put differently, they're your strategies. So what are you planning to do? Well, you know, expanding XYZ programs to ABC areas of the county, or developing new programs in ABE and ASE focused on XYZ populations. They are, at the highest level, your core strategies.

Outputs, as we were saying, are the direct products of those activities. Oftentimes, a good way to differentiate between outputs and outcomes is that outputs are direct and tangible. So they would be things like number of students enrolled, percentage of target population reached, things like that. Enrollment is kind of my go-to example of what an output is, and the reason why has to do with how we talk about these things in everyday conversation with our colleagues.

We'll often conflate outcomes with outputs, and assume that students who have enrolled, or are enrolled, in our classes are learning something. Well, that's the idea. But ultimately, enrollment doesn't necessarily mean that any new knowledge is being gained, right? It really just means there is a person in a seat, or money that's been paid, depending. But those outputs are absolutely preconditions for any meaningful outcome.

So if you want to help students develop capacity in financial literacy or mathematics, there has to be a tangible mechanism to do that. They're not going to learn anything unless they're enrolled in classes; they have to be in classes in order to receive the instruction. And then, hopefully, if the instruction is good and the curriculum is strong, you'll see changes in students' attitudes toward the topic and in their knowledge of the topic. That, consequently, would ideally result in changes in behavior, and then, following from that, possibly changes within the community relative to the indicators that initially pointed you to the need to develop new strategies in the first place.

OK, continuing with outcomes. As I was saying, these are the changes or benefits you expect to see as a result. The immediate outcomes are just like they sound: the specific changes in participants' awareness, knowledge, skills, et cetera, a level of functioning that serves as a precondition for intermediate or long term outcomes. Most often you'll find that these immediate, these short term outcomes, take the shape of increased awareness or changes in attitude. That's why I used those examples before.

Intermediate outcomes are very similar. In fact, they can often be conflated, or rather merged, and not in a bad way. But when they are broken out, they tend, not necessarily all of the time, to be focused on changes in behavior, decision making, or policies. So essentially, think about it as a sort of broadening, an inverted triangle or whatnot: starting with something specific, relative to students and their notions of things, and moving outward to how they engage with the world. Or, if you're thinking about outcomes related to formative measures or policies, then it would be the implementation of certain types of policies, or the investigation of initiatives, and so on.

Your long term outcomes are the lasting changes, with organizational, community, or system-level benefits. So, things like improved social conditions, or reduced rates of a health outcome. For our purposes, you might wish to consider the AB104 outcomes as belonging in one of those two categories, intermediate or long term, depending on the size of your program.

But, in the end, if the goal is to increase the number of students with postsecondary credentials, to increase wages, and so on, then we'd want to work backwards from there. So, I'm going to pause just briefly because there's a question from Remel: how do you measure those intermediate outcomes? Do we use an instrument?

You know, most often you measure those outcomes with a formal evaluation. Now, that isn't a requirement for AEP. For lots of initiatives, especially grant-funded activities, there is a built-in evaluation component. But, while it's not required, there's nothing that prevents you from, say, incorporating an evaluation as part of your own three-year planning effort, and I would strongly encourage it.

You know, the logic modeling process, and really the strategic planning process as a whole, only really has value if you can show in the end that, yes, something has changed from a summative perspective. But there's also value from a formative perspective. A good evaluation plan will actually run throughout your project and provide you with valuable, actionable feedback on what you're doing well, areas where you can improve, and so forth.

But typically, once you start getting into the less tangible outcomes, like intermediate and long term, the instruments you might use would be things like surveys and interviews. You'd really have to have a methodology for it.

Don't let that scare you. There are lots and lots of resources out there on how to do that. And not all evaluations are, or need to be, external to an organization. There are reasons to have an external evaluator in some cases, but not in every case. Again, I would encourage you to look into that. What's most important, really, is that in your strategic planning process, by using a logic model and forcing the collective mind of your consortium to fit within its parameters, by the end you've established a shared set of assumptions about how those relationships are expected to work.

I mean, it's not uncommon for me to have a conversation with folks who have a really nifty idea about a program, or several things that they want to do. But oftentimes there is a disconnect that I've seen between "I want to do this program because I want to address this need" and the mechanics of the program, which are never fully mapped out. And so, when all of a sudden folks aren't seeing the responsiveness that they expected, or the changes that they expected to see when they first came up with their idea, they have nothing to go back to to understand why.

Anyway, moving forward. I hope that makes sense. OK, on to assumptions and external factors. Assumptions are just like they sound: beliefs or values that buttress your planned approach. These can range from assumptions about continued funding to assumptions about the value of a particular approach. So, for example, if you were going to do some rapid prototyping, or more accurately, let's say you wanted to implement something related to human centered design, some of your assumptions might simply have to do with notions about the value of human centered design for the participant group you're trying to reach.

Additionally, there are external factors, right? These would be things outside of your control: contextual factors that can impact the success of your project. It's kind of corny, but I like to think of them as interrupters, because that's kind of what they do. So if everything's going to plan, cool. But there are always these things out there that have the capacity to really toss a wrench into the workings of things.

Again, oftentimes loss of funding is one of the big external factors. Obviously, there's a whole wide range of things that could go there. You know, the apocalypse could go in there, but obviously that's at a grain size that would be a little bit absurd. I say that to be a little bit flip, but also to point out that, while on paper a lot of this seems pretty easy to comprehend, once you start digging into it, what you will find is that some of the more difficult aspects of developing a logic model are around identifying the right grain size.

OK, so I'm seeing Romley respond: it would be really nice to see what others do in terms of measuring those immediate outcomes, considering the difficulties in accessing psychological constructs. You know, yeah, I think that's true. And I would encourage folks, if they're incorporating an evaluation plan into their strategic plans, I think that's a great idea. And remember, you can also measure the impact of your initiatives through other means. It sounds like you're really looking at it in terms of the students, which is good.

And generally I would say that, yeah, your focus really should be the student: how well do you know you are meeting the needs you've identified for that population, right? But there are, of course, other indicators. So if you consider longer term outcomes, say increased wages, well, there are measures for that, right? You have the US Census, you have labor data, LMI data, you've got EDD, all that stuff.

Oh, there we go. So, do long term outcomes need to be measurable changes in the community or region? I think so, yeah. I mean, typically they are. Sometimes I've even seen logic models that break out a fourth category: immediate, intermediate, long term, and then impact, which is super lofty. But I suppose you could do that. At least that's how I've been thinking about it. Yeah, the long term outcomes really are meant to be an expression of the vision, right? The vision you have for that community, for your community. Simply put, I could even imagine folks putting down just the AB104 outcomes here and having that work.

And I don't think that would be wrong, necessarily. But, yeah, they should be lofty. In the end, what you're really targeting are, well, I don't want to call them stretch goals exactly, but more or less that's it. So, yeah, Danny, that's exactly right. Although, most often, yes, I wouldn't necessarily limit it to those. But they are certainly bigger than short term outcomes, and much more difficult to measure.

In the template, I think they're broken up by year ranges: immediate is within the first year, intermediate is between one and three years, and long term is between three and five, or so. There are other ways you can consider breaking up those outcomes. I've seen it where some folks, I think the University of Wisconsin, which runs Wisconsin Extension, actually have a number of resources on logic modeling, and their particular approach, I want to say, breaks them up by something like knowledge, action, and then impact. So that's another way of thinking about it.

OK, so all of that may sound novel. But let's look at some examples; I think it'll start to make a lot more sense once we start looking at some of these. I will warn you that not all of them are entirely serious. That doesn't mean they're wrong or bad; it just means that it gets boring sometimes if we don't have a little bit of fun. I would also add that not all of them are related to education, and that was a little bit purposeful.

There are more than enough examples out there for folks to find. What I didn't want to do is inadvertently promote the idea that the examples I might show related to education, particularly adult education, are the way to do it. It's really a process. The value, again, of the logic model is that it helps groups coalesce around their shared vision and around the change they want to see. It's not really so much about the end product, insofar as that stands as an artifact of those processes and, frankly, as a resource that you can go back to as a touch point.

So, the first one. I was poking around a little bit, and I thought a novel example might be good to just get the concept down. I found kind of a silly one, which, oddly enough, was put out by, I forget the agency exactly, but it's something library of medicine. It was pretty clever. This one, in particular, was framed as if the project were Dusk Till Dawn: the idea of serving vampires if they had health problems.

In this case, the inputs would be, well, we need staff, internet access, and garlic to protect ourselves. A computer and projector, I don't know why other than for instruction. Protective stakes, probably, and other pointy things; one has to make sure that one's staff is protected. Activities would be hands-on training on MedlinePlus and PubMed. And, in this case, they're starting a 12-hour health reference hotline for ailing vampires.

Outputs they've titled "reach," and that's not uncommon. In fact, you'll see lots of models where, in that center area where we have outputs, you'll see things like participants, or other similar descriptions. I think there's value in that, and there's certainly value in thinking about outputs in terms of being able to measure them by the impact, or the reach, they're having relative to your populations in need. But in my experience, it can sometimes interrupt the causal chain and create lots of confusion, when outputs aren't outputs, basically, as we've defined them here.

But either way, that's how they're using them. So their reach, or their output, would be that they intend to hold four classes to reach 50% of the internet-savvy vampires, approximately 25% of the overall population. It's pretty silly, but pretty good, in that, notice, it's tangible. They're starting these activities, yadda yadda yadda, and they know what would count as evidence of completing those activities, or of success generally: four classes were held, 50% of the population was reached.

In a real life example, like I said before, XYZ proportion of the population might be enrolled. Or maybe even simpler: 80% of those who enroll in XYZ programs see functional literacy gains within a year, or they move up at least one level, or something. And so it's tangible. It shows that, at a very minimum, people are doing the things that you set out to do, and they are receiving some benefit, at least in the form of the benchmark, interim-type achievements that need to occur in order to see the long range plan come to fruition.

Once we start getting into these outcomes, they start to get broader. But take note of some of the verbs here. So the vampires find the classes to be engaging and relevant to their lives: this is a change in attitude. They demonstrate that they learned how to find resources. This one's a little bit complicated because it's difficult to prove, but that's OK. There are ways to do it, and it's relatively low hanging fruit.

Shifting back, Ronald, to some of your questions about how you measure it: well, if, say, one of your short term outcomes were that students demonstrated command of the English language, or mathematics, and so on, that might be measured by, and there are lots of ways to do this, but one of the indicators might be grade point average. It might be grades, it might be success on tests, things like that. There are other measures.

As we move from left to right, it gets broader. So it's things like these folks using this research for themselves to grow. And then long term, you know, vampires have improved health. So this is huge: they last longer, they're stronger, and they stop killing people. Some of the assumptions I find pretty funny, but they're not wrong. Vampires are willing to share, they want to come in; that's a fair assumption. If you're thinking about starting a program, you're assuming that the population that actually needs it is there.

Or more importantly, actually, you should have done your homework to know whether the need is there. More accurately, the assumption ends up being that they'll actually show up, that your activities recruiting them will be successful. And likewise, in this particular case, since there's certainly danger involved, that you've got librarians who would be willing to teach them, et cetera. And then, of course, there are the external factors, which you can take a look at here, too. These are, again, things that would be outside of your control.

So I'm going to pause quickly. I know that was silly, but does anybody have any questions so far? I've got a couple more examples here for us to look at, so don't fret. Looks like Ryan is typing. OK, will it be mandatory to submit goals in a logic model format? That's a great question.

So, in the AEP template there is really just one goal that's required. And if you're familiar with the phrase "theory of change," I would liken it more to that. It's the sort of overarching goal that you're really looking to achieve over the three-year period. It should be big; it could be akin to your vision or mission. I like the idea of using a theory of change model because it articulates the participants and really tries to be a little more concrete than, say, vision or mission statements. So that's the one goal you have to provide, and that is part of the logic model, sort of.

There are, however, progress indicators that you're expected to provide: three to five progress indicators, which are absolutely objectives. They are goals. And, you know, I wasn't planning to get into that too deeply here, but if you were to think about the role those objectives have relative to the logic model, they would essentially line up to the left. They would say: this is the thing we want to do, this is what we need, these are the activities that will get us there, yadda yadda yadda.

OK, most often your indicators of progress will be drawn, in one form or another, from either your outputs or your short term outcomes, simply because of the time frame and because they're tangible. We do have a three-year planning sequence webinar in January, and the plan is to make that conversation a bigger part of that sequence. So there is another opportunity. Awesome, thanks, Ryan.

Thanks, Eileen. Is that what AEP would like to see in the plan? If you mean the graphic, yes. That's why we have that template, and every consortium is expected to at least complete one. We'll talk more about that in just a minute. I will say this: the template is there for you, and the plan is yours. So if you're finding it really difficult, for example, to differentiate between short term and intermediate outcomes, things like that, I don't see any problem with, say, collapsing some of those, or adding some columns just for your own understanding. Like I said, some logic models include a column for participants, which can be really valuable as a touchstone.

But let us know if that's something you feel like you need to do. You can unlock the template, but we would ask that, generally speaking, the structure remain intact as much as is reasonable or feasible. OK, so we have a couple more examples here, and I'll try to walk you through these as quickly as possible. This one is from the CDC, and I thought it was pretty good in that it corresponds to the structure that we've been talking about.

But I think, more importantly, it does a really nice job of showing how each component leads logically to the next. And it's an interesting expression of a multi-layered logic model. Logic models can be used for a lot of things. They're often asked for in grant applications, but that shouldn't lead you to think that they can't be utilized for larger program or project planning, or strategic planning activities.

And I think this is a good example of how you can reckon with that. So in this case, the resources they're working with are the fitness center and employee input and direction, and then they have some resources they need around promoting or incentivizing use of the fitness center. Moving across from left to right, they really do lead naturally. You have the fitness center. What are the activities?

We're going to provide one-on-one training, physical activity, competitions, et cetera. What will that result in? Increased fitness center use. Notice it's not increased fitness; it's use of the facilities. That's an output. Fitness, which is the broader goal, would fall somewhere in the world of an outcome.

We also have, as inputs, employee input and direction, support from administration, et cetera; activities like email communication, active living classes, et cetera; and outputs like mutual accountability between mass transit personnel and the mass transit district being in place. That kind of verges on an outcome, but it seems to fall within the realm of process and what is practical, so I could imagine it living there. And then when we get into the outcomes, this particular project organizes them around key processes relative to the users, which I think is good.

Cognitive processes: increased understanding, increased problem solving. Or behavioral processes: increased self-confidence. These are all, by the way, good examples of the kinds of things you would commonly see in a logic model that's focused on a particular participant group. And then long term, they go bigger: employees keep using the fitness center, fewer medical claims. So if you're wondering how big is big when it comes to a long term outcome, a reduction in medical claims is a pretty big one.

But anyway, we've got a question from Emma. "Just to clarify, we submit one document on behalf of the consortium?" Oh, I love this. Thank you, Emma. "What level of detail are we required to share in this document? I have eight members. Are we using the weeds, trees, or forest level?"

That's nice. I like that; I'm going to use that. Weeds, trees, or forest. I've never heard that. Well, that's a great question, and that's where we're actually going. Before that, just one more example. You can look at this on your own, but I'll keep it on the screen for a minute.

This one is related to education. It's from an IES guidance document on developing a logic model, and I think it's pretty good. Notice the long-term outcomes: there's just one. You will likely find that as you move from short-term to long-term, you get fewer, or most likely will. But again, I want to highlight this: if you look at some of those short-term outcomes, you're seeing changes in, remember what we talked about, attitudes, knowledge, et cetera.

OK, so to Emma's question, and no doubt the question that's on most of your minds: how do we do this at a consortium level? There are a few ways to do it. I'm going to talk about it in the context of a nested logic model, but that doesn't necessarily mean that you have to do it as a nested logic model. It's really just a way of thinking about what relation your consortium logic model might have to the activities of individual members.

Most commonly, you can organize a logic model for complicated projects by component or by level; you could also say element or item. This is an example from the University of Wisconsin again, where multi-component means you depict the multiple goals that sit within a single comprehensive initiative. So, Emma, to your point, this one is thinking about, at the highest level, a community tobacco control program, and for it there are larger inputs, outputs, and outcomes.

But then all of those would, in some form or another, relate to other sub-initiatives or sub-goals that ladder up to the larger three-year plan. And so that's where we're talking about inputs, activities, outputs. Oftentimes, depending on the grain size, what you would label as an activity at a program level might actually be an input at a consortium level. It just depends.

Alternately, you can think of it in terms of multi-level, where it's really about the functioning of your system. In this case the levels are macro, agency, and program. And this little graphic here, that's the type of thing I mean, right? I have a little note here: this could easily be adapted to consortium, agency, program, or, I think more accurately, consortium, program, agency.
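As a purely illustrative aside, here is a minimal sketch of that nesting or laddering idea in Python: each member agency keeps its own small logic model, and a consortium-level view is assembled by treating selected member-level outputs and activities as consortium-level inputs and strategies. Nothing here comes from the AEP template; the function name, fields, and agency names are hypothetical.

```python
# Hypothetical sketch of "laddering up" member-level logic models into a
# consortium-level view. Which items roll up, and as what, is a judgment call.
from typing import Dict, List


def roll_up(member_models: Dict[str, Dict[str, List[str]]]) -> Dict[str, List[str]]:
    """Combine member-level logic models into one consortium-level view.

    member_models maps an agency name to a dict with "activities" and
    "outputs" lists. Here, member outputs become consortium inputs, and
    member activities become components of consortium-wide strategies.
    """
    consortium: Dict[str, List[str]] = {"inputs": [], "activities": []}
    for agency, model in member_models.items():
        # A member's outputs (e.g. "120 ESL students enrolled") become
        # resources that consortium-level strategies build on.
        consortium["inputs"].extend(f"{agency}: {o}" for o in model.get("outputs", []))
        # A member's activities ladder up into consortium-wide strategies.
        consortium["activities"].extend(f"{agency}: {a}" for a in model.get("activities", []))
    return consortium


if __name__ == "__main__":
    members = {
        "Adult School A": {"activities": ["evening ESL classes"], "outputs": ["120 students enrolled"]},
        "College B": {"activities": ["Spanish-language GED prep"], "outputs": ["2 new course sections"]},
    }
    print(roll_up(members))
```

Again, this is just one way of picturing the relationship; whether you build one consortium model, member models that ladder up, or both, is up to your consortium.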

"AEP has well laid out objectives. Would this be our high level?" That's a great question. So if you're talking about the old objectives, what is it, seamless transitions and so forth, I think that's partly a "how," and I think those can absolutely connect, and they do. They are outcomes. And I'm trying to think of what they are: there's seamless transitions, there's acceleration, and, let's see, what else?

I think those are good outcomes. I don't know that I would organize a logic model around them, although you could; you absolutely could. The only reason I might caution you there is that a lot of people had difficulty interpreting those and got very much into the weeds. But if you have some consensus within your consortium around what those outcomes would look like within your community, and you want to use them as sort of your framing device, I think that's great. I think it'll work really well.

But again, it'll depend on the cohesiveness of your consortium. If you are working together, if you tend to plan together and take a top down approach, I think that would be great. For other consortia, where each agency is a little bit more independent, it may not work quite that way. In that case you may want to instead work with the agencies to develop their own logic models, and then collaborate on how they might ladder up and inform each other. Does that help, Emma?

OK: "If we don't do something standardized for all consortia, how would we be able to check off elements in the annual plan in NOVA? Would they be able to program thousands of goals?" They're not going to do that. No, I see what you mean. No. And Emma, that's great; do that.

So then I have two responses. Baranka, that's a great point: think of the logic model as a visual map of what you're planning to accomplish. That is why we've provided that template. And optimally, you're not making a bunch of changes, but remember, the plan is still your plan. If some of you need additional features in order for it to communicate most clearly to your stakeholders, then you can do that.

But it's not as though, after you submit your plans, you'll be going into your logic models and extracting all sorts of stuff. The only items that may be extracted, and I'm not sure that's true, would be those progress indicators, which absolutely should align to the outputs and the outcomes of your logic model. But those would likely also align, in some form or another, to the strategies you proposed.

All a logic model really does is lay bare the inherent logic of your plan. And again, the example I like to use is the notion that creating a class, and then having students enroll, will yield changes. Well, there's a lot that happens between students enrolling in a class and individuals within your community having higher levels of literacy.

And so, what the logic model does is force you to lay out: OK, if I have this ESL class, how is it that I'm going to connect the provision of instruction to the larger initiatives? And you may find that you have to augment or adjust your program. So, for example, suppose you have seen, by conducting your community needs assessment, that there is an enormous gap in programs; or better yet, forget programs for a moment.

Let's say you have a super high ESL population, and you are importing lots of workers from outside of a 30-mile radius to do farm work, while at the same time you have a large population within that 30-mile radius who are unemployed. And so, as a consortium, you've decided that we need programs to teach ESL, and we need programs to help employ the individuals who are unemployed.

So if you just have an ESL class, it's not going to necessarily follow that, just because people are learning English better, that will expedite their transition into a job, especially if those are skilled jobs. Although, yes, absolutely it will help in some form or another. But if you really want to see that change, and if you're trying to make a concerted effort to really see that population flourish, then you'd have to say you're going to open up an ESL class, and you are also going to coordinate that with workforce training provided by the WIB.

That may also mean that you're going to provide Spanish language GED classes, and maybe it also means that you provide all of those classes within the context of an employer based program. Maybe you come up with all of that beforehand, and that's great. But by going through this process, what I think you'll find is that you'll get lots of creative thinking out of it, and it'll really build consensus.

So I hope that helps. I'm not sure I entirely answered your question, but I hope so. But no, you shouldn't have thousands of goals. OK, so is a logic model a strategic plan? I would say this: it is a visual expression of one, yeah, absolutely. Think of it this way. When you're done, you should be able to carry this logic model around.

And I think, for all of you, when people ask, what do you do? What are you doing? This is it. You should be able to pull that out. And what that also means is that, no, it shouldn't be 42 pages long. Honestly, a logic model is one page, maybe two, and even that is excessive.

So it's really a one page thing that summarizes at a high level, but with sufficient specificity to clearly delineate what you're doing and how you expect the changes you say you want will actually come about as a result of your plan. Does that help? I imagine that some of you were thinking these were going to be like 30 pages long or something, in which case I'm terribly sorry; I should have clarified that first.

So probably the best thing about this webinar, other than the plane flying over my office and the bird, are these resources. The Kellogg Foundation Logic Model Development Guide presents the model, more or less, that the AEP logic model was based on. It is, to my mind, probably the best there is out there: clear, comprehensive, easy to use.

I want to say the logic model tools provided by the CDC would be a second, though it's actually third on the list for me. They follow similar structures. Then you have UWEX Online, which is University of Wisconsin Extension. They actually have an entire course on logic modeling, and they have lots of great resources. Some of the graphics that we used in this webinar are drawn from there as well.

Their concepts diverge slightly from what we're doing, but if you went through the course, you'd be great and you'd make a great logic model despite that slight variation. And I would say the same thing is true of the NRS resource. The UWEX and the NRS resources use the same models. And for the NRS guide, I put the link to where it lives on the AEP site.

So, anyway. The last webinar went over by about 30 minutes, so I made a concerted effort to make sure we didn't go over this time, and we have a few extra minutes here. I would like to reiterate that TAP is available to help facilitate some of these conversations if you ask, and we'd really encourage you to do so. Are there any questions or concerns that folks have, or perhaps questions I haven't responded to that you'd like me to respond to? If so, I'm sorry I missed them.

Well, thanks, Baranka. Thanks, Emma. OK, so with that, Veronica, did you want to take us out, as it were?

Yes, I will, Greg. Thank you very much, and thank you all very much for participating in today's webinar. I posted the URL where all of the three-year planning archived webinars and resources are located, so please be sure to check out that page if you need to go back and revisit some of the past webinars.

The logic model webinar will be uploaded this afternoon, so you'll be able to review that as well and share it with colleagues. Next week, on Friday, December the 7th, there will be an HCD third cohort webinar, a launch webinar. I'll post the URL for that, so if your consortium or members of your consortium would like to participate in the third HCD cohort, you are more than welcome to do so, and please attend the launch webinar.

We will take a break for the holidays, and when we come back after the first of the year, we will resume with our three-year planning webinar mini-series, which will start, I believe, on January 16. I'll post the URL for all of those upcoming webinars, so please be sure to check them out. I will close the webinar now, and when I do, please be sure to complete the evaluation and let Greg know what you thought about today's webinar, and whether there are any additional needs you have related to three-year planning or other requirements for the AEP program. Thank you all very much and have a great weekend.