Jay Wright: OK. Hello, everybody. Hopefully, everybody can hear me OK. I'll blow up my screen now. OK, hopefully, everybody can see the blown up screen, and everybody can hear me just fine. And you can see, it's a power-packed agenda.
Everybody is probably wincing as they're looking at this now because it's, how can you fit this much goofy text on one slide? Cut it in half. And it's probably still way, way, way, way, way too much.
What we're going to try to do is talk a little bit about both quantitative and qualitative. You see the words there. So we'll spend the first half talking quantitative, looking at some TE reports. Again, the purpose here-- we just have an hour and a half-- is not to get into the gory TE details. That's like six or eight hours we don't have, but there's about 10 or 12 suggestions, all of which revolve around TE reports.
So the idea will be not to necessarily give you step by step how to follow every step in TE or how to follow every step in your agency's data system, but to just give suggestions on reports you can use, goals you can set, approaches you might want to use to isolate key areas of your data. And then, we'll use the same general approach for qualitative. We'll review some information about performance and persistence that my hunch is most of you have seen many, many times, looking at agency and student level strategies that, again, looking at the participant list, I know many of you have heard about many, many times. But we'll start looking at some things you can do at your agency and your consortium.
Also, again, in the name of ideas, some of them will seem like wonderful ideas, some of them probably not so much. But if we give you 10 or 15 quantitative ideas related to TE reports and so on, and roughly the same number of strategies you can implement with your students and staff, there you go. You have something. Hopefully, you'll find a few good nuggets in both sections as we go.
The other thing I'll bring up is there's actually three handouts. This PowerPoint you see is where we'll spend 99% of our time. If we have time at the end, which I have to admit I'm pessimistic about, there is some information that relates a little bit more to what I'm talking about Thursday actually than today. However, my understanding is I have an hour and a half today and just an hour Thursday. So there is one that I probably will have a tough time getting into Thursday, so I'm throwing it out there for right now.
And then there's another one that really goes specific on barriers to employment and ad hoc reporting. We'll talk a little bit about those concepts here. But there's one that gets a little deeper, you might say, in the TE weeds than what we have time for now.
The last thing I'll bring up before I dive in is there will be more detailed sessions at the CAEP Summit. There's a panel discussion. Again, from looking at you, a couple of you might even have been panelists before, but a lot of you have been to the panel discussions we've hosted.
The panelists do an overwhelmingly outstanding job in talking about this goal setting concept using TE incorporating students, incorporating staff. So there's a really good session that we'll be doing at the summit with the panel on some superstar ways of looking at this. There's a session I am doing where I'll dig a little bit deeper than I do today into TE and crisis-- and so on. And there's a couple others where we're looking at rallying TE data managers and administrators. So there's a bunch of sessions there.
The final thing I'll say is Veronica and I just had a discussion yesterday on what strategies we need to look at at CAEP TAP for all the various ways of looking at goal setting. So if any of this looks especially interesting to you, or I guess if anything looks especially God-awful, keep that in mind. Let me know. Let Veronica know because we are looking at what do we need to really map out as the specific training ideas probably for January, February. There's a lot of little nuggets we're talking about here that probably fit into that.
So without further ado, let's start. So, starting agonizingly enough with NRS table 4-- this is just table setting, I realize it's a CAEP training. But we're talking a lot about the concepts of NRS performance and persistence. So we need to do a quick overview of table 4 and 4B, just to table set, and give the basic definitions.
So here's a screenshot of NRS table 4. A lot of you have seen this already. Hopefully, my understanding is, this is an advanced session, so this one should be familiar to almost all of you. This is what California and all states use for federal reporting.
This is the one a lot of people would say is the big one because it's the one that relates to measurable skill gain. From our point of view, specifically, it captures all the students that make gains from pre-test to post-test. It also has information about high school equivalency and high school diploma.
I'm not going to get into too much gory detail here, but the way it's set up is we've got the six levels for ABE, six levels for ESL. The pre-test is what puts the student in one of these levels. The gain between pre and post is what determines whether they make a gain. You can see columns E and F basically relate to measurable skill gain and the high school diploma and equivalency, respectively.
So columns E and F is where we're looking for our payoff. That's where we're looking to show student success. We've got the percentage shown in columns I and M. That's basically looking, at each level, at all the students that qualified for that level.
What percentage of students at each level made that measurable skill gain? We're looking at that percentage. This table is for WIOA II funded agencies. However, there is a version of this table for CAEP. It's in the CAEP reports as well. I think there's an example coming up.
So if you don't mind, hold that question for about 15 or 20 minutes. It is one of the examples. This specifically is for NRS, but there is a carbon copy we use for CAEP that basically mimics the table exactly. The difference, though, is the one for CAEP will incorporate Workforce Prep and CTE and so on, where obviously the one for NRS does not.
OK-- yeah, well, for this one, that will actually-- at the consortium level, probably not for this NRS report. You would want to have agency level access, again, at a specific agency for the NRS. If you're just looking at the CAEP side, then you would be able to access this. Again, this is more for concept, just to give everybody the basic idea of what we're talking about with NRS performance and persistence.
So, Mitch, you'd have access to CAEP. NRS, though, you would need to have agency level access for one of your particular agencies. Whether the pre-test is required is a longer answer than you think, Kelly.
In general, the short answer is yes, a pre-test is required to get on the NRS tables, period. The caveat that makes it a little longer than you're expecting is because of COVID. There's that federal memo from May of 2020 where they changed that. To get gains, you must have the pre-test, still. But just to get on the tables, period, since May of 2020, you can self-report learners into these levels by using that self-reported level in TE.
And again, that was a change the feds came up with just over a year ago, given all the obvious issues with being able to test everybody during COVID. I have not seen any updates to that since May 2020. So as far as I know, that caveat still is the same as what it was from a year ago.
So hopefully-- that was a lot of questions at once. Hopefully, that got everybody's question. But again, just to keep it so I can move along: table 4 is what we use to measure performance.
Then we have table 4B, which is looking more at persistence. We have the same 12 federal levels, but this is just looking at only those that complete a pre/post pair. So we're looking at just those that have a pre/post pair to better measure classroom performance.
When we're looking at federal performance overall, we must use table 4, that is, we're incorporating everybody that qualifies for federal reporting. From an instructional point of view, a lot of people would say the only fair way to do it is look at only those that complete a pre/post pair so we can more fairly evaluate instruction. So table 4B gives you the same data, but only looking at those that complete the pair.
And then, the other one I wanted to just point out is the NRS Persister report. Again, this is another one used for federal reporting, but it's comparing table 4 to 4B. That is, it's looking at the percentage of students on table 4 that also make it onto 4B. Looking at it more simply, of everybody who qualifies for federal reporting at each level, what percentage of students stuck around long enough, or persisted long enough, to at least complete a pre-test and a post-test? That gives us that persister rate.
On column D of the report, that is, table 4, we can look at all 12 levels and look at performance level by level. With the persister report, we can look at those same 12 levels and look at pre/post test persistence level by level. So I bring this up just to set the baseline.
When we're looking overall at NRS performance and NRS persistence, this is what we mean. We're looking at those that make gains on NRS table 4. And of those that qualify for that, we're looking at how many at least complete a pre-test and a post-test.
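Just to make that persister arithmetic concrete-- this is a minimal sketch with made-up level names and counts, not anything pulled from a real TE export-- the persister rate at each level is just the table 4B count divided by the table 4 count:

```python
# Hypothetical counts, invented for illustration only.
table4_enrollees = {"ABE Intermediate Low": 120, "ESL Level 3": 250}    # qualify for federal reporting
table4b_enrollees = {"ABE Intermediate Low": 78, "ESL Level 3": 190}    # completed a pre/post pair

for level, total in table4_enrollees.items():
    persisters = table4b_enrollees.get(level, 0)
    rate = persisters / total if total else 0.0
    print(f"{level}: {persisters} of {total} persisted, or {rate:.0%}")
```

So in this made-up example, the ESL Level 3 persister rate would come out to 76%, and ABE Intermediate Low to 65%.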
This is just one of many ways to look at it. But when we're looking at the one way for federal reporting, this is what it is. So I'm just going to give-- I'm going to stop there.
Quick sanity check. That was a lot. There were a lot of questions. I think I got them all, but let's quickly do a sanity check here.
Everybody hanging in and for the three or four of you that had questions, especially, does that wrap it up? Does everybody feel like they've got their foot on the rail at least to get started?
[sighs]
OK, not seeing too much. I'll have to assume no news is good news. OK, foot is on the rail. Thank you, Maryanne. You're giving me the lifeline. I'll move forward.
So we're in a CAEP training, so let's transition now from the basic definitions for NRS. Now, we're going to go ahead and move on and look at it in terms of CAEP reporting. So here's that CAEP summary.
We've talked a lot, and again, most of you have been to a lot of these sessions, I know. So you've heard us talk about the CAEP summary in terms of those three sections-- left-hand section, middle section, right-hand section. For the record, this is the left-hand section. We're looking at literacy gains, pre/post.
Again, this is getting the information directly from NRS table 4. That's another reason why I started there. But again, it's looking at the enrollees in column B. That's basically the same concept as what we looked at in column B for table 4.
Column C, that's basically giving us the number of enrollees from table 4B. That is, of those who qualify in each program, how many of those stuck around long enough to complete a pre-test and a post-test? And then, column D, that's looking at that performance metric of everybody in each program that has the pre and post. How many of those actually made enough of a gain to achieve an EFL gain on table 4?
The red flags. OK, that's a little bit the whole presentation. I guess that's about an eight-hour answer, so I'm not going to do that off the cuff. But I'll just say we're going to have 8 or 10 or 12 quantitative suggestions that all get at your question in different ways but by no means exhaust it. But I feel when we get to quantitative and we rattle those off, that's going to basically give you a few ideas related to that question.
So, again, literacy gains for pre/post. This is summarizing our pre/post information and, maybe more specifically, summarizing it as it relates to NRS reporting. The consortium-- yeah, you can look at the CAEP summary with that consortium point of view. Again, you can't drill down, but yes, you do have access to the CAEP summary.
OK, looking at next, we've got the middle section with outcomes. So this is where CAEP starts veering a little bit from federal reporting, where one big outcome for CAEP reporting is pre-post, but we've got several other categories. We've talked about immigrant integration, and then, again, those six areas of AB 104. We have those other literacy gains, like occupational skills gain and workforce prep milestone. We've got secondary, post-secondary employment, wages, and transition.
So this is looking at those other areas in which students can achieve outcomes. So this requires a lot of the same information that was required in the left-hand section. The big difference here, like we've said many times, is the left-hand section requires that qualifying pre-test. The middle section does not relate to testing, so it doesn't require anything as it relates to testing.
And then, over here on the right-hand section, we have the CAEP services. The bar is a lot lower here. You don't need hours. You don't need demographics to qualify here. You don't even need class or program enrollment.
So this is a way in which we can capture all of those other students that maybe don't stick around long enough to enroll in a class, that really don't enroll in one of your official CAEP programs. So number one, this captures all of these students that might be receiving short-term services. At the same time, it's a way to capture some of those students that maybe didn't have anything to do with services but were looking at enrolling in a class or program and didn't really stick around long enough to do so. So it looks at people who enroll in services and/or those fly-by-nighters where you did do something instead of nothing, but maybe not enough to get 12 hours or complete an entry record or whatever.
OK, then the final thing that I wanted to just overview is the CAEP Data Integrity Report. I'm going to get more into this in a couple of the quantitative examples, and we'll get into the basic definition Thursday. But you can look at the CAEP DIR.
There's also these new CAEP Hours Reports. One is called CAEP Enrollees by Hours. The other is called CAEP Service Enrollees by Hours. I'll just apologize in advance that those titles are pretty obnoxiously similar, but they're both basically ways to inventory your learners by what we're calling the three buckets. We've started to make a lot bigger deal about the three buckets because NOVA looks like it's going to be forcing this issue a lot when you're doing the CAEP data and goal setting in NOVA.
I won't be getting into that. Yeah, I don't think we'll be exhausting the subject because it's morphing. But yes, we'll be looking at some of those examples here very shortly.
So for one example, your question comes at a good time. This is what these new reports are for: one big state priority is being able to look at those three buckets. That's probably my word, not NOVA's. But in NOVA, it's looking at the students that have 12 or more hours, that is, those that are participants, versus those that are in 1 to 11 hours, versus those that are services only students with zero hours. So you've got the participants versus adults served versus services only.
You can see here, it gives those buckets 12 or more hours, 1 to 11, and 0. It seems innocuous, but it is a way in which CAEP is looking at goal setting. That is, ideally, you're going to be moving as many students as possible from the zero hours bucket to the 1 to 11 hours bucket.
Similarly, you're going to want to be moving people from 1 to 11 hours to 12 hours, that is, those students that might show up for short-term services. Maybe it's just students that are kicking the tires on some of your adult ed classes-- a lot of different reasons they might have for doing so. But you're looking to capture more students and get them enrolled in instruction and get them to do the 12 or more hours. So they're enrolling in your class so they can start moving toward career pathways and so on. So this is breaking it down into those buckets in a little bit more detail.
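Just to make the three buckets concrete, here's a minimal sketch of that classification-- the 12-hour cutoff follows what was just described, but the student IDs and hour counts are invented for illustration, not real data:

```python
# Invented sample data: total instructional hours per student.
student_hours = {"S001": 0, "S002": 4, "S003": 15, "S004": 0, "S005": 22, "S006": 11}

buckets = {"participants (12+ hours)": 0, "adults served (1-11 hours)": 0, "services only (0 hours)": 0}
for hours in student_hours.values():
    if hours >= 12:
        buckets["participants (12+ hours)"] += 1
    elif hours >= 1:
        buckets["adults served (1-11 hours)"] += 1
    else:
        buckets["services only (0 hours)"] += 1

for bucket, count in buckets.items():
    print(f"{bucket}: {count}")
```

The goal setting NOVA is pushing is essentially about moving students up through those buckets over time-- from zero hours to at least some hours, and from 1 to 11 hours to 12 or more.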
The Enrollees by Hours report focuses in on those, again, that are in that services section. When we look at those with services, a lot of those are presumed to be those that don't have any hours. But a lot of times, it might be students that have some hours, or maybe students that even have 12 or more hours but just didn't enroll in an official CAEP class. So what this is doing is getting into a little bit more detail so you can see why students might be in that right-hand section instead of the middle section, where you can get more information to figure out exactly how many hours they have, exactly what level of effort it's going to take with each student to move them into more detailed instruction or more detailed enrollment.
The Service Enrollees by Hours report has the same basic purpose. But instead of looking at those three buckets of hours, it's looking at the specific services received. So it's turning that other report inside out, looking at that same right-hand services section that we're looking to improve upon. But instead of detailing it by the three buckets, it's looking at those that actually received supportive, training, and transition services, respectively. And then, once it determines which specific services are involved, it details it.
So where one report is assuming, probably, hey, we're not really doing services, this one is detailing it so we can look and see what it is we did that brought those students in, so we can move forward just like in that other example.
OK, I'm going to do another quick sanity check here. Hopefully, those new hours reports at least made sense-- that was a little bit of a quick transition. So they can be either one. By self-reported, I'll just assume you mean manually inputting it into TE. So yes, you can manually input it into TE.
A lot of you import services from a third-party system. It's not technically on the update form, but it is on the supplemental update form. So I got to say not many people do bubbling and scanning overall. The list that does bubbling and scanning for this is much smaller because it's a special form.
So I'll just say, if you really want to technically, there is a form that has this, but again, it's a separate form, so you don't do it that often. I'll just say, for services, probably the majority, bring it in from another system. But there is a way to type it in manually. There's also a way to bubble and scan if you want. Hopefully, that's your question, Crystal.
So getting into the quantitative and qualitative. Again, the idea here is to just give you an overview at high level to equip you with some ideas. There's lots of detailed trainings here that we could do to hone in on just one or two ideas and really get into the weeds. But for now, we're looking at several ideas that you might consider for goal setting using a quantitative approach and then roughly the same number of ideas using a qualitative approach with the theme across everything being goal setting and different ways of attacking goal setting and program evaluation.
So we'll look at quantitative. The process is the same. It's just what you're dealing with.
So on the quantitative side, we have some of those targeted data reports. A couple of you already brought up some of them, so we're going to use a few of them, somewhat randomly. These are just suggestions-- there's lots more, of course, you can use. But the ones you've been suggesting are some of the examples that we'll go through.
So we're looking at using some reports and looking at the neediest areas and then looking at hot spots. That is, we look at a report. We know maybe there's some programs where we're doing well, other programs, we're not doing well.
If it's looking at classes, there always are going to be some classes that are doing well, some other classes that are not doing well. So it's a generic concept, but I really do think it's worth reinforcing that those hot spots are always what you're looking at when you're looking at any data report, whether it's NRS, CAEP, payment points, assessment to instruction, and so on. As you're looking for those hot spots, that is, you're looking for where you find those really strong areas and those really weak areas and figuring out which classes are strong and which ones are weak, which programs, which students, which federal levels, and so on, and then rinse and repeat. Go to those targeted areas, look at those areas of strength and areas of need, start looking for areas of commonality.
When you find those particular strong and weak areas, we'll try to reinforce that concept with the examples. And then, toward the end, we'll look at qualitative examples, where we'll talk more about feedback from staff, feedback from students. What are some things we can put in place to establish clear channels of communication?
And then a lot of that rinse and repeat, where once we figure that out, how are we going to go ahead and identify our strengths and needs from a qualitative point of view? What can we really build on at our agency or our consortium to adopt and really have everybody follow an area that we already know is strong or the other way? What's an area where we know where we're weak? And where do we need to build in order to get a little bit better? And then, overall, we're developing that culture of data, whether we're looking at it from a quantitative or a qualitative point of view.
So here, we're looking at how to align with the continuous improvement plan. That's the CIP for WIOA II, to be clear. But we've talked a lot more in CAEP land about how we're looking as much as possible to align that step with that CAEP goal setting.
A lot of the concepts that we brought forward with that continuous improvement plan are basically the same concepts we're looking at with CAEP goal setting. So for one, a good way to kill two birds with one stone is to go back-- if we're WIOA II, go back and look at some of the things you looked at with your continuous improvement plan. Maybe not all of them will be one to one, apples to apples, things you can also do with CAEP, but it's been pretty above board that if you can keep some commonality or consistency with your CIP, that's officially something you're encouraged to do. At the same time, that would be a way to make it easier.
So just some ideas there: we're looking at NRS performance goals, ways you can align with your WIOA and CAEP partners, ways you might want to align with your latest site visits or your CDE federal program monitoring. There's ways to look at special programs like IET, and lots of leadership project resources-- CALPRO, OTAN, WestEd, and so on.
So back to quantitative, this is what we just talked about. It's a placeholder to try to get into some specifics. So I just talked about it. So I'm going to move on.
We've got some different ways within CAEP reporting to look at this concept of goal setting. So these are just ideas-- there's many, many other things you could do, but these are the specific ideas we'll touch on. The first is to look at enrollment and participant criteria.
We put that one in bold because it's very clear that's one that everybody is going to have to address whether you like it or not. Some of that relates to the fact that that's one place where TE and COMIS are basically the same. We're finding more and more that they tend to be different more often than they're the same, quite frankly. But one place we know it's basically the same is when we're just looking at enrollment and that concept of participant versus adults served. So we'll look at it that way.
I'll just say, for now, I don't think there's any state level benchmarks that I know of. I don't think we have Carolyn or Gary or anybody like that here. If anybody like that is here, you could chime in. But sorry, I got to say, that's a better question probably for Gary or Carolyn.
In any case, we'll look at enrollment and participant criteria first because we know that's something everybody is going to address. There's ways we can evaluate our pre and post-test results from a CAEP lens. We'll look at targeting CAEP-related outcomes. We'll look at using the CAEP Data Integrity Report.
Another suggestion is we'll look at it from a straight NRS point of view. For some of you, that'll be a great approach, others, not so much. But certainly, we're going to put NRS federal reporting related activities on the table.
Another thing you might want to consider is looking at special populations, that is, looking at individuals with specific barriers to employment. Maybe looking at that concept of equity that lots of people have been talking about, or maybe, just in your region, there's a specific population-- maybe a specific ethnicity. Maybe it's individuals with disabilities. Maybe it's a special workforce population. But there's all kinds of ways of looking at special populations, sometimes from a consortium point of view to make sure you're going hand in hand as a consortium instead of agency by agency.
And then, this last one is generic, but another one is defining goals with external stakeholders. Sometimes, that might be just looking at it as a consortium between college and K12s. Sometimes, that might be measuring yourself with WIOA partners. Sometimes, that might be looking at it with programs like IET, where you're relating your agency information to employers, things like that.
So the first one we're looking at is enrollment criteria. So this is going back to that new report we just mentioned. So we've table-set it because we are looking at this as an example.
So in this case, we're saying, of everybody that shows up on the CAEP summary, how many students actually make it into a CAEP instructional program? So we're looking at these arrows and saying, OK, we've got this number of enrollees total. We've got this number in column G that have the 12 or more hours.
So we can combine those other two and say, OK, we've got 609 with 12 or more hours. We've got about 246 that don't. So you've got these numbers in each bucket-- the 12 or more hours, the 1 to 11, the 0. So there's different ways you could use this simple report and say, OK, we want a certain percentage of that 148 to get at least some hours. Instead of being services only, we want to move people from columns H and I into column G and get them enrolled in an official CAEP program.
These are just dummy numbers here. But you can look at your agency numbers in this report, or the consortium numbers, and figure out maybe some specific programs where you're looking strong and other programs where you may need to improve, and just look at moving more students to have 12 or more hours. Maybe ESL, like in this example, is looking good. Some other programs are not looking as good, so we want to move some students into that column G. Lots of different ways to look at it, but this is just using enrollment criteria to get at least 12 hours-- to move them from that adults served bucket to the participants bucket, like we've been talking about.
This is another way of asking the same question: of everybody that we have in CAEP reporting overall, how many make it into a CAEP program? Again, the same basic concept. We're looking at having official enrollees rather than services only.
OK, more participant criteria. This is just looking at, of those who enroll in a CAEP program, how many stay long enough to accrue at least 12 hours? So, just so it makes a little bit better sense, let me go back to the previous slide. I'm sorry.
So go into this one. You could do it either way. You could say, of services only, how many have at least one hour of instruction? Or you can say, of those that have at least some hours of instruction, how many persist at least long enough to have 12 hours and qualify for federal reporting?
I'll bring those up. Those are both very innocuous goals. But I'll just say that basically outlines what you're all going to be required to do: you're going to be able to show that you're moving people from zero hours to at least some hours, and you're also going to be able to show you're moving them from less than 12 hours to at least 12 or more hours. That's the universe of everybody there, where you can look at enrollment criteria to monitor and get more hours of instruction globally.
OK, I'll just say any questions? I'm trying to just look at it overview. I know I'm getting into weeds no matter what I do, so I'm just going to stop. Does that at least make sense to everybody?
I'm not sure if this is going anywhere or whether Jay is just talking to a brick wall. So just for sanity, let me stop. All right, thank you-- a fraction of you is at least saying yes.
All right, sorry, I'm being a little bit of a pill here. But I can't quite tell whether any of this is landing or not. OK, thank you. All right, you're giving me the warm and cozy, if nothing else.
So the next topic relates to pre and post-testing. So again, we're just looking at eight different ideas. Some of these ideas will be good for you, not all. So some of you might really want to focus on testing, depending on what consortium you're in and what programs you feature.
This might be an obvious no-brainer for some of you. It may not be your bread and butter for others. But one thing you might want to look at is setting goals for pre/post test results. That's what we're doing for federal reporting.
So this is looking at the pre/post federal reporting way. But instead of looking at NRS, it's adapting it to the CAEP summary. The bottom line difference is this incorporates any testing you might be doing with non-WIOA programs like Workforce Prep and CTE and so on.
So this all hinges on that left-hand literacy gains section, obviously, of the CAEP summary, so we look at those three columns. We've looked at this before-- the number of enrollees that qualify for federal reporting in column B. Of those that qualify, how many complete a pre/post pair? That's column C. And of those that complete a pair, how many actually made enough of a gain to complete a level for federal reporting? That's column D.
So you can see those numbers in those three rows in this sample. So one thing we can do is look at the number who have a pre/post pair. That would be persistence. That would be one way of doing goal setting. Or we can look at the number who make an actual gain. That would be a way to look at performance.
So again, we're looking at that literacy gains section. To get the persistence rate, we're simply doing column C divided by column B. So we can simply do that to get persistence. I'll just note 70% is my informal rule of thumb for good persistence. Obviously, last year, that's a little different, but I'll stick with 70% from a historical point of view.
The other thing you can do is monitor performance. That would be column D divided by column B. So again, if we want persistence, we're just asking, in column C, what percentage in each program completed a pre/post pair?
For performance, we're looking at column D. What percentage of students in each program actually made that gain between pre and post-test? So again, this is the basic way of doing it for pre and post testing.
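As a rough sketch of those two calculations-- the column letters follow the CAEP summary as just described, but the program names and counts below are dummy numbers, not real agency data:

```python
# Dummy counts per program, modeled on CAEP summary columns B, C, and D.
programs = {
    # program: (column B enrollees, column C completed a pre/post pair, column D achieved a gain)
    "ESL": (1200, 850, 500),
    "ABE": (400, 260, 140),
}

for name, (enrollees, paired, gained) in programs.items():
    persistence = paired / enrollees   # column C divided by column B
    performance = gained / enrollees   # column D divided by column B
    print(f"{name}: persistence {persistence:.0%}, performance {performance:.0%}")
```

With these made-up numbers, ESL would show about 71% persistence, which clears that informal 70% rule of thumb, and ABE about 65%, which doesn't.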
I'll say, this is the way we've looked at it from a CAEP point of view because, number one, it uses the CAEP summary instead of an NRS report. So it's easier for you to access. Number two, it's incorporating all seven CAEP programs. So if you're really looking at it fairly from your CAEP programs, you'd want to look at everybody in CAEP reporting, not just your WIOA II students.
So I'll say again, that's my second example. Everybody still clear on this one? I'm going to go example by example here. But still it only-- so sorry, what do you mean by that route?
Speaker 2: I just mean that EFL gains can only get us so far with measuring performance and persistence. That's not-- that's just a side note. The tables tell me that--
Jay Wright: OK, meaning that they might need to accomplish more than just making a gain or getting a post-test in.
Speaker 2: Correct. Correct. Yeah. Yeah.
Jay Wright: OK. Got you. OK, I'll just say, yeah. That's just one way of measuring it. So we've had two ways of measuring it, I guess, from a CAEP point of view-- one based on enrollment, one based on pre/post.
So the third thing we're bringing up is just targeting outcomes. That's probably more for performance than persistence. But this relates to that middle section of the CAEP summary. Again, we've got that one new one we've talked about for a year or so-- the I-3, that's using the EL Civics COAAPs for immigrant integration. And then those same tried and true columns G through L-- 1, 2, 3, 4, 5, 6 columns-- representing each of those six areas of AB 104 that we've talked about for quite a while.
I'm going to stop right there and do a bird walk for a minute. That's an area where probably it would have been better if I did my Thursday presentation first and this one second. But just the way it is, my Thursday presentation is the one that gets at the 101-level issues, so it's a little bit the other way around. We'll talk more about AB 104 on Thursday, just FYI.
But again, we've got those six areas that the California legislature has required us to follow for adult education. Those six areas are columns G through L. Those are the areas that we're monitoring for student outcomes.
So in terms of using those outcomes-- sorry, my slide is sticking-- we want to go ahead and just take a look and see if we can tally those numbers for each one of those columns. So we pared it down just to look at a couple to keep it a little easier, so we can look at specific outcomes.
So in this example, just for the sake of argument, we're looking at that column-- the ESL learners who passed an I-3. We're keeping it simple and just assuming that for immigrant integration, we're focusing on ESL. That's not going to be true all the time, but for this example, we'll say it is.
So we'll just say, yeah, we've got the 2,200 ESL learners. About 1,000 of them have made some kind of I-3 outcome. That is, about 1,000 of our 2,200 ESL learners have passed at least one co-app. So we can just go ahead and take that.
And you can see my math is wrong, but everything else is right. I'm not sure where I got the 67-- new math, I guess, so everything's right here except for the math. But you can just look at-- you know what, I don't know why that math is wrong.
But anyway, I can visualize it, but I can't calculate it, I guess. But in any case, well, that's a good reason why we'll update the PowerPoint after the presentation. But we can just look. We might want to look at it as ESL only. We might want to look at it for all students, but we can simply keep it super simple here.
The percentage of students by program-- how many made an I-3 outcome? We're just saying, OK, right now, we're right about 50%, give or take a little. So, hey, in a year, we want to be at 65%. I'm just making up the numbers.
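For what it's worth, here's a quick check of the arithmetic being described, using the rounded counts from the example (roughly 1,000 I-3 outcomes out of about 2,200 ESL learners):

```python
# Rounded counts from the example on the slide.
esl_learners = 2200
i3_outcomes = 1000

print(f"I-3 outcome rate: {i3_outcomes / esl_learners:.0%}")   # prints roughly 45%
```

So the rate works out to roughly 45%, which lines up with the "right about 50%, give or take" reading, not the 67 that ended up on the slide.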
To get here on the CAEP outcomes section, it's only looking at those with 12 or more hours. So the CAEP outcomes section is using the same criteria as that left-hand pre/post test section-- that is, the same criteria as NRS reporting-- except it's completely ignoring everything related to pre and post testing. Hopefully, that makes sense, Crystal.
So, yeah. In order for it to be an official CAEP outcome, they need to have at least 12 hours of instruction. I'll just say that's one that was a big issue originally. But I think that one was resolved four or five years ago, where here, everybody agrees that it needs to be a participant with 12 or more hours of instruction.
I'm being a little loose here, but in general, I think the consensus was that in order for you to really report it as an outcome achieved by adult education, we're basically saying, well, 12 hours isn't going to be perfect in every situation, but it's a good loosey-goosey cutoff. For us to say, yeah, the reason for that wonderful outcome is because of adult education, that 12 hours is the cutoff where we can safely say, yeah, we're claiming that outcome.
A big reason why the student did that is because they enrolled in adult ed. Right, it's only those with-- I'm not sure, there's a little double negative in your question, Eric, but back to Crystal's question: everybody who shows up in that outcomes section must have 12 or more hours of instruction. I think that's what you're saying, Eric, but you can confirm or deny. To you, Eric, yes, it requires 12 hours. OK, so, Eric, Crystal, hopefully that answers your questions.
But again, we can just look at an individual outcome-- that's one way to check it. The thing that I think somehow got lost in translation here is another thing you could do, that we've brought up in the past: you could look at it for an individual outcome, or you could target the number that made any outcome. So another way of looking at it is you can look at that total number of outcomes. You could just aggregate the numbers here, looking at columns F through L, aggregating that total number of outcomes, and comparing that to the number of enrollees to say, OK, our average enrollee made more than one outcome, or fewer. We can look at it that way as well.
OK, so let me see. Moving away from outcomes and looking at the DIR, there is an update that I know some of you are looking for. We haven't quite gotten there yet with doing the updated targets using the end of year numbers. So I'll just say, there is a link that we did in April that has this updated up through the third quarter of the '20/'21 year. Sorry-- for right now, that's still the most recent version we have.
In mid-October, after we finish the matter of the deadline to the feds and all that, and getting the data to CDE for WIOA II and to the CAEP office, we'll have to get all that done before we can follow up and finish things like the DIR numbers. But we should have it by mid-October. You can look, and there's a document that gives you the statewide averages for all 27 items on the CAEP DIR, so you can compare your agency or consortium percentages to those averages. I'll just say, if you want something recent, you could compare it to third quarter of '20/'21, or you might want to compare it to end of year for '18/'19 or '19/'20.
I'll just say, as an aside-- throw that link-- this link. OK, hang on just a minute then. OK, hang on. OK, you have a meet-- OK, hang on. It's not wanting to go away from full screen. OK, hold on. OK, there we go.
All right, anyway-- so again, this has the 10 quarters. What I'll bring up is, I'm making a lot of excuses here for why we don't have that updated document, obviously. But one point that is relevant is, when you get those end of year numbers from '20/'21, quite frankly, all of you probably are going to be looking to improve mightily on most of the numbers we have from '20/'21.
I'll just stop here and just sanity check everybody. Do we all understand? OK, you did it again. OK, hang on just a minute. Why don't we-- well, we'll do this at the end of the presentation. Sorry.
I'm just going to wait. That was a little more time than I have. Anyway, maybe if one of you would be volunteering to do it. That might be a lot better than me pasting it into the chat.
In any case, where was I? So does everybody know-- sorry, I'm talking about way too many things at once. In any case, does everybody understand off the cuff-- without lots of detailed explanation-- why the end of year numbers from '20/'21 may be a lot different from the numbers we're hoping to get here in '21/'22? Let me start with that, and that's going to get me back on track.
Does everybody know what I'm referring to there? Or do I need to dive into that for a minute? You can chat or talk. Does everybody understand why we probably don't want to take the '20/'21 end of year numbers too seriously as it pertains to goals we're setting for the '21/'22 year? I'll just say, does everybody know what I'm talking about right now?
OK, there we go. That's our Corina. Thank you. All right, somebody does.
Right, so '20/'21, we know, is probably the worst year we've had in a long time. If we have performance in '21/'22 like we did in '20/'21, we know things are turning really, really, really, really sour. So in a lot of ways, I've got to say, I'm not sure if it's fair to say, yeah, let's just use '18/'19, because there's a lot of reasons why maybe we're not going to be quite as good as we were in '18/'19. That might be a little bit of a reach.
But we're certainly hoping we're a lot closer to '18/'19 than we are to '20/'21. That's another really good discussion topic that we probably don't have time for now. But I'll just say, take it with a grain of salt. We've got all of these quarters of statewide DIR performance '18/'19, '19/'20, and '20/'21.
But take it with a grain of salt. We're not quite sure what this one's going to be. We definitely want to avoid just saying, oh, we'll improve on '20/'21. There's a term I use called setting ourselves up for failure. You probably heard me say it.
So this is the exact opposite of that, where we might be a little too close to setting ourselves up for guaranteed success. If all we're doing is improving on last year, we probably want to do a little bit better than just improving on last year. Sorry to be a downer there in a way, but Ryan, Todd, Karina, you're all bringing up good points to that.
OK, so here's just an example where we're just using the '18/'19. I might be setting the bar too high here, but hey-- in this example, we'll just say we're at 34.3% of students with less than 12 hours of instruction in the '18/'19 year. So we're saying, yeah, we're going to improve from 34% to 24%.
So in this case, we're saying, yeah, we're going to use that '18/'19 number because that's really the metric that we want to improve upon as the last year that wasn't affected by COVID. Again, just an example. Easy way to do an example.
OK, so there's four examples we've given you. The fifth one is saying, hey, let's just use straight NRS. For those of you that are mostly WIOA II agencies, you might want to just take advantage of NRS reporting-- things like the CASAS Data Portal that give you a lot of additional resources.
So hey, if you have a lot of NRS agencies, you might want to take advantage of the fact that you can use the Data Portal-- a lot of more user friendly ways to look at data, a lot of ways to drill down into these different programs. So in this example, we're just using the Data Portal to generate our agency level information up through '19/'20. So we're just cutting to the chase and looking at federal levels.
Again, we're using NRS as our mechanism, so we're looking for those NRS areas of strength versus NRS areas where we need to improve. So we're looking at our data historically. If we use the Data Portal, it makes it a lot easier to make those comparisons.
So we're looking here at ASE Low as an example, where this agency might need improvement. We're saying that because our level of performance is overwhelmingly lower than the state average and the state goal. So again, we're cutting to the chase here just to give ideas. So in this example, we're saying we're going to make baby steps here. We're not so sure that we can make a lot of improvement, so we're just saying we're moving from 22% to 25%.
Well, in this case, we're using NRS, so there's two issues here. Yes-- good catch. I'm talking about too many things at once. So one layer is, I'm using '18/'19 on purpose.
I'll just say, that's me being a little tricky, but I'm doing that because I'm saying, hey, because of COVID, I'm not so sure we want to compare ourselves to '20/'21. I'm not really sure we want to compare ourselves to '19/'20 either because we know those two years are outlier years based on COVID. So for the sake of argument, I'm using '18/'19. I'm saying, yeah, we're going to set the bar a little high.
We know we're not 100% out of COVID. We know it definitely ain't normal yet. But in my example, I'm saying, well, let's set the bar like it's normal because hey, we're a high bar setter not a low bar setter.
If you're saying, hey, wait a minute, I ain't no high bar setter, I'm a lot more comfortable being a low bar setter, then you could use '19/'20. You could use '20/'21, depending on what's in the Data Portal.
If you use the Data Portal and use NRS right as we speak now, it's up through '19/'20. So everything through '19/'20 is posted on the CASAS Data Portal. If you're using the CASAS Data Portal, we probably won't have '20/'21 up until early to mid-January. And that's just our timeline every year with the Data Portal. We've got to deal with things like reporting to the feds and the HSE data match first before we can really finalize things and get it on the Data Portal.
If, on the other hand, just to give a different example, you want to use the CAEP DIR, that we should have ready by early to mid-October. If you're saying, let's use '20/'21 and make it a reset, we're not worried about all that high bar/low bar nonsense, we just want to use the most recent year-- for things like the CAEP DIR, we will have that available a lot sooner.
If you're just comparing it to what you did last year, well, then, you can do '20/'21 for everything. Those first couple of examples were like that, in my opinion-- in the first two or three examples, we were using comparisons based more on what your agency's own numbers said, not anything statewide like with the Data Portal.
So sorry, there's way too many answers. But I think all those answers in a way related to the question. So it was a good question if there's that many answers. I'll just say, Ahmed, if you beg to differ, push back now if you need to. That's the end of my answer.
So anyway, if you want to use NRS, you can use '18/'19 or '19/'20 for right here and right now. If we're using NRS, we're restricting it to just those that qualify for WIOA II. And, of course, we're really getting into that NRS nitty gritty with those 12 NRS levels.
Again, we're not required to get into NRS nitty gritty when it comes to CAEP goal setting, when it comes to NOVA. But certainly, that's one very good way or one very good option that you might want to consider using, again, one of eight ideas we're bringing up in the session. OK, let's see here. Just to move along here, this was your question.
I think this was your question, Emma. But if it was somebody else's question, my apologies. Here's CAEP table 4. Or maybe it was yours, Kelly. I'm not sure.
But anyway, we have what are called the CAEP tables in TE. Those are included in the same CAEP Tables option on the TE menu where we generate the CAEP summary. Generally, you don't generate them a lot, because the CAEP tables are by default set up for you to generate the CAEP summary, but not those additional six or eight extra tables.
But if you want to look at your CAEP data in the format similar to how we have it set up for NRS tables, the answer is go ahead knock yourselves out. You go ahead and open up those CAEP tables, and you can run CAEP tables 1, 2, 3, 4, and 4B. I think it's just those for right now, but it's basically running reports, using the formatting that's exactly the same as the NRS tables, but including all seven CAEP instructional programs, whereas, of course, NRS, you're limiting it just to those that qualify for WIOA II.
So I'll just say, does that make sense? Again, what I'm saying here, Emma, is looking at it globally, trying to reset. To be clear, nobody needs to follow 100% of these directions, but what we're doing is we're saying, for goal setting, you've got a lot of options here. That's the way it's set up for this goal setting. Some of that is because, hey, CAEP rules want to give you as managers maximum flexibility to establish goals in a manner that is relevant to your local consortium and your local agency.
The other is, of course, we've got two different systems. Some of you are reporting in TE. Some of you are reporting in MIS. Sometimes, that's a never the "twain shall meet" issue. So there's also a necessity here at the state level to make sure you can set goals using both systems.
So whether you say to-may-to or to-mah-to, there's not much in the way of prescriptive things that everybody is required to do. There's obviously much, much, much, much more presented in terms of examples you can consider as maybe being relevant to your agency. But ultimately, it leaves you, the manager, needing to decide which of these metrics are going to be relevant to your local area and which of these aren't.
So the good news is, you have lots and lots of choice. You have lots and lots of flexibility. The bad news is, you have lots and lots of choice. You have lots and lots of flexibility. I think that's where your questions are emanating, is the paradox of good news, you have lots of choice and good news, you have lots of flexibility, quote unquote, "smiley emoji" "smiley emoji" "smiley emoji."
So back to the feature presentation. You can run tables 1 through 4 in your CAEP tables, formatted exactly like NRS, but incorporating all your CAEP students from programs like CTE and Workforce Prep that, if you run it the NRS way, obviously have a 0% chance of showing up. So here, we're saying we can run NRS-style performance goals but incorporate things like CTE students that might be doing pre and post-testing, or CTE students that might be enrolled in special programs, or maybe looking at demographic percentages for all CAEP students rather than just NRS-- just to give some specific examples.
So I'm going to stop, and move to my next suggestion. There's lots of good suggestions from Emma and Ute here. But let's just reset because I've talked about at least five things at once in the last couple of minutes.
So Reset button. Everybody still with me? Any questions? Anything else to be brought up here?
You're having some good dialogue with each other. But just to make sure, the feature presentation, is it completely gone off the rails? Anybody willing to tell me whether it has or hasn't as we move to the next suggestion?
OK, thank you, Karina. Karim, not off the rails. OK, let's not go any further than that. But at least, we're not off the rails.
So my next suggestion is OK, we've given NRS, DIR, CAEP reporting. Another way to look at goals might be to look at it in terms of special populations. So obviously, we've talked a lot about collecting variables such as barriers to employment. Another example would be equity.
I'll also add, one of the supplemental handouts for which we do not have time that might be a training all unto itself to do here in a couple of months is there's a PowerPoint that focuses on barriers to employment and ad hoc NRS cross tabs. I know that title sounds exceedingly tantalizing, but it's a 9 or 10-slide PowerPoint that gets a little deeper in the TE weeds than we have time to do right now. So here, we're talking about special programs.
We're saying, let's look at barriers to employment. So we're keeping the-- because we're looking at all barriers to employment, the obvious necessity here is the metric needs to be super, super simple. So in this case, we're just saying, do they have measurable skill gain or don't they?
So we're looking at all, roughly what, 14 different barriers, and giving an item count for all the different barriers to employment-- how many made an MSG? How many didn't? So just so you can understand the example, we'll look at, let's see, low levels of literacy-- that's a nice easy one to see over there to the right.
So we've got 2,400 or so that have low level of literacy, 1,165 had an MSG, 1,200 didn't. So that's what? About 45% yes, 55% no. So this gives us hey, for a specific barrier to employment, maybe we want to look at two or three different barriers at once, maybe we want to aggregate them all together.
But this is a report we can run where we can maybe look at some goals by barrier, where instead of looking at our agency overall, we're looking at our consortium overall. We feel pretty good about our overall performance, but we know when we're looking at English language learners we're weaker, or if we look at individuals with disabilities we're weaker, or if we look at individuals with low income we're weaker. I'm just throwing out examples to see what sticks, but this is a way to look at it by barriers to employment, which I would suggest is another roundabout way to look at that concept of equity that you've heard lots of people talk about.
If you feel equity is an issue, one real low hanging fruit way to look at equity in a meaningful way is to say, yeah, we've got this level of performance overall at the agency and consortium level. But when we look at the specific barriers, we know the levels of performance are much lower. So let's target some specific barriers to employment and try to get those individuals with barriers to perform at roughly an equal level as everybody else. That's what we're saying for this example.
So here we're getting into the TE weeds. I'm not going to dig into that. But if you look at that supplemental PowerPoint, it looks at this where you can look at this report and set it up to make sure you're running it correctly.
Here's another example. This is basically a more involved example, where we've got barriers to employment on one axis. We've got all CAEP programs on the other axis. So now, we can really look at more specific numbers.
So if we want to look at individuals with disabilities in career and technical education, the answer is go ahead, knock yourself out. If we want to look at low income learners that are just in ESL, whatever-- this allows us to get much more specific, where we're looking at that issue of equity and being able to really reinforce the point more specifically.
This is the same data I just talked about with a more specific example. So in this case, we're saying, individuals with disabilities will improve from that 34%, attaining MSGs to 40% in the next program year. Maybe we want to say we're going to serve more students. That would be another way of doing it.
Maybe we're looking and saying-- the issue is individuals with disabilities are performing worse than everybody else, so we want to make sure they perform better. Maybe in some cases, their data looks the same, but we realize, hey, we're not exactly an accessible program. That's pretty pitiful.
We're just not serving many students with disabilities. We need to serve more learners with disabilities. That would be yet another way that we could use this report.
OK, a final example here is looking at consortium level reporting. We'll look at consortium level reports a little bit more on Thursday. But a lot of you have asked questions about it. A lot of you, I know, already have that consortium level access.
The good news is that allows you to look at all agencies in your consortium. The bad news, like we've said over the years, we go out of our way to not allow drill down if you're in the consortium level. But you can set goals at the consortium level rather than the agency level.
So here's an example, just using that same type of example with the DIR. You can now run a consortium level DIR with this level of access. So maybe we're just doing a simple DIR item across all agencies in our consortium, looking at an obvious one like 12 or more hours of instruction.
Another one might be using our CAEP summary, looking at that pre/post persistence of 70% or more. The hint I'll add here is, for the examples I'm using at the consortium level, I'm on purpose using really, really obvious examples.
In my opinion, if we're doing consortium level, it's probably best to start with obvious examples. So in this example, I'm looking at persistence rate, I'm looking at 12 or more hours of instruction-- that is, I'm looking at specific data points that we talk about ad nauseam. I'm not looking at super detail-oriented data points, where you probably do need to drill down if you're going to give yourself a chance.
OK, so big transition here. Of course, I have limited time, but let's do a big time sanity check here. Is everybody hanging in there? I want to talk a little qualitative.
I think people are expecting more quantitative, so the intent was to spend more time on quantitative than qualitative. That said, I did want to spend 15 minutes on qualitative.
But before I do, I've talked about a million things at once. I've jammed 8 or 10 hours into an hour and a half as usual. But let's just make sure: is everybody staying sane? Where are we on the sanity meter?
Where is that explodometer? That is, is our head the same size as it was an hour ago? Or is it bubbling up, it's going to explode any minute now? Insane in the membrane. It's going to explode.
So some of these cameras are going to splatter all over because your head is exploding. OK, after four hours of info, it's only been an hour and 10 minutes, but who's counting? Ready for some qualitative?
There you go. That was a good answer. Outstanding job, Michele.
So, other considerations. Again, you can see that here, and there will be more on that in more trainings. But this is to say there are lots of other ways you can look at it; it doesn't have to be from a TE report.
So here, we're going back to what we talked about quantitatively and connecting it to the qualitative. We're looking at developing that sort of data. I don't want to belabor this because I've been talking about it for the last few years. But we've talked about how sometimes performance is good and other times it's bad, and sometimes persistence is good and other times it's bad.
This is just looking at those quadrants. Again, we don't need help if everything is good. It's really unusual to have performance good when persistence is bad, so we're focusing on times when performance and persistence are both bad. That means we need to really look at cleaning up our data, getting more students pre- and post-tested, and then moving forward with improving instruction.
Other times, performance might be low, but our persistence is actually good. If so, then we probably need to start looking at things like using assessments to improve instruction and looking at our lesson planning and curriculum; that is, improving instruction.
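Here's a minimal sketch of that quadrant logic, using made-up program-level numbers and thresholds rather than real TE report output, just to show how each program lands in one of the four boxes.

```python
# Sketch of the performance/persistence quadrants: each program lands in one of
# four boxes depending on whether each measure clears a locally chosen threshold.
# Thresholds and program numbers are illustrative only.

PERFORMANCE_THRESHOLD = 45.0   # e.g., percent attaining MSGs
PERSISTENCE_THRESHOLD = 70.0   # e.g., percent with a pre/post pair

programs = {
    "ESL": {"performance": 52.0, "persistence": 75.0},
    "ABE": {"performance": 38.0, "persistence": 72.0},
    "ASE": {"performance": 33.0, "persistence": 55.0},
}

def quadrant(perf_ok, persist_ok):
    if perf_ok and persist_ok:
        return "both good: keep doing what you're doing"
    if not perf_ok and persist_ok:
        return "low performance, good persistence: focus on instruction"
    if not perf_ok and not persist_ok:
        return "both low: clean up data, get more pre/post testing, then instruction"
    return "good performance, low persistence: unusual; check data quality"

for name, p in programs.items():
    label = quadrant(p["performance"] >= PERFORMANCE_THRESHOLD,
                     p["persistence"] >= PERSISTENCE_THRESHOLD)
    print(f"{name}: {label}")
```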
OK, so I don't want to read off this laundry list, but this is something I've talked about for a long time. To save time, I'm just going to say we have agency-level strategies and student-level strategies to improve persistence, and we also have agency-level strategies and student-level strategies for improving performance.
So when we look at agency-level strategies for persistence, generally we're talking about setting up that calendar for testing, having a local assessment policy in place, and basically being mindful about how we're going to go about doing testing all year long. A big thing in the last couple of years is, are we going to do remote testing? Are we going to get coverage so we can bring students in to test? That's obviously what a lot of you have done. But again, we're looking at how we're going to get all of our students tested, how we're going to schedule makeup days for students that are absent, et cetera, et cetera.
When we look at student-level strategies, we're looking more at the class level, where we're tracking individual students' attendance, we're having our teachers follow up on individual students that disappear, we're making sure that our scheduling is conducive to our students' needs, and we're rewarding students for good attendance, completing that pre/post pair, and so on. Sticking with the student level but looking at performance, we're looking at things like test-taking skills, recognizing our superstar students, evaluating what makes them tick that others aren't doing, and sharing the practices of our superstar students with other students, so our lower-performing students start to perform more like superstars, so to speak.
When we're looking at agency-level strategies for performance, we're looking overall at those test results reports in TE: what types of students are doing well and which aren't, which classes are doing well and which are not, which programs might be doing better than others, which teachers might be doing better than others. So here, you might say we're looking at our superstar teachers and trying to have their practices replicated across all classes in all programs. We're also looking at aligning our instruction to our regional priorities; that is, is there that career pathway being put forth for our students, or is everybody just moving forward on their own?
So back to the qualitative strategies here, honing in on these concepts and getting into details on some of the highlights I just talked about: we have clear channels of communication, we have a local assessment policy, we have regular data reviews with staff, we have short-term services, and we have targeted instruction. So what I've done here, just so you know what the heck I'm talking about, is take those endless laundry lists that we could talk about for hours on end and tease out some of the highlights, as well as some of those consistent themes we're seeing in our student-level and our agency-level strategies. We're going to flesh out some of these strategies that rear their heads no matter which type of strategies we're talking about.
So let's start with local assessment policy. That is, we're all required to develop our own local assessment policy. That is, at the beginning of each program year, we meet as a local agency team, and we develop that strategy at the agency-level for how we're going to go ahead and do our pre and post-testing.
So the big piece is that's where we're developing that testing calendar with our specific dates of when we're testing our students. Sometimes, it's full weeks, other times it's individual days. Sometimes, we're itemizing it out by class. Sometimes, it's itemized out by instructional program.
We're scheduling those makeup days for students who are absent. We're providing all the information on test security and staff responsibility. The big thing here is that fourth bullet in my opinion, where we're also identifying staff responsibilities. We're establishing those clear channels of communication.
Which staff are responsible for test security? Which staff are responsible for disseminating those TE reports? Who's responsible for reviewing the data results? Who's responsible for reviewing things like lesson planning and making sure it relates to instruction?
We're looking at things like orientation. How are we dealing with hours between tests? How are we dealing with enrollment and retention, distance learning, training, all those things? I'm not going to dig too much more here, but there are a lot of things we're stipulating as a staff, what we're communicating, and how we're divvying out this level of responsibility.
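Just to make the idea tangible, here's a minimal sketch of how a team might capture pieces of a local assessment policy, a testing calendar plus named responsibilities, as simple structured data and run a quick completeness check. The dates, class names, and roles are placeholders, not a required format.

```python
# Sketch of parts of a local assessment policy captured as simple structured data:
# a testing calendar plus staff responsibilities. Dates, classes, and roles are
# placeholders for illustration, not a prescribed format.

assessment_policy = {
    "testing_windows": [
        {"class": "ESL Beginning AM", "pretest": "2024-09-09", "posttest": "2024-12-02"},
        {"class": "ABE Level 2", "pretest": "2024-09-11", "posttest": ""},
    ],
    "makeup_days": ["2024-12-09", "2024-12-11"],
    "responsibilities": {
        "test_security": "Assessment coordinator",
        "te_reports": "Data manager",
        "data_review": "Program administrator",
        "lesson_plan_alignment": "Lead teacher",
    },
}

# Quick check a team might run: does every class have both a pre- and a post-test date?
for window in assessment_policy["testing_windows"]:
    complete = bool(window["pretest"]) and bool(window["posttest"])
    status = "scheduled" if complete else "missing a date"
    print(f"{window['class']}: {status}")
```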
Next, we're looking at regular data reviews. This is one that really gets fleshed out well in those performance goals panel discussions we have. I'm sure a lot of you have attended one. We're going to have another panel discussion here in October at the Summit, and we have a couple of staff members, Carol, Rhoda, Atoyebi, and Lita, who are the ones that come to mind right off the top of my head.
They talk a lot about establishing a data dialogue process locally at their agency. That is, they're engaging their staff. They're engaging their students.
I'd say, long story short here, you can't really do Field of Dreams here. This is not "build it and they will come." You've got to go out of your way to engage all of your different staff in your data dialogue process. You've got to go out of your way to engage your students as well.
So you're going out of your way to engage your students with your local TE data, and then you're looking at a cross-section of staff: you've got a couple of teachers, you include yourselves as administrators, you bring forth your TE data managers, and you bring in students as well.
So go ahead and get them into that data dialogue process with regular meetings. You give participants the opportunity to voice concerns. You're basically giving them an opportunity to give you feedback, and in exchange, you're getting their input on what types of strategies you want to use to start developing this data dialogue.
Will all the suggestions you hear be perfect? Will all the suggestions you give your students and staff be well-received? Absolutely not. But even if you come up with a dumb idea after dumb idea after dumb idea, I'd suggest you're still accomplishing a lot.
Not the way I really wanted to put it, but either way, even if your ideas are not so great, the idea here is to establish that data dialogue process. Get all of your staff from all walks of life engaged in the data dialogue process. Get your students engaged in that dialogue process.
By establishing that process, that's really where you're getting, again, I'll use that cliche, "foot on the rail." Once you get the foot on the rail with the data dialogue process, that's step 1. Then once you get that dialogue process started, you can move forward with all of your more nit-picky issues, where you know you need to improve and so on. Again, use that SMART goals approach with those staff and student discussions to review key data points and start working your way toward improvement.
Another thing to consider is short-term services, where you're going to find those gaps in your data. Hey, ABE and ASE are making great learning gains, but ESL isn't. Hey, my learners in ESL and CTE are getting jobs, but my ABE learners aren't. Or, hey, my high-level ESL students are really struggling, but my low-level ESL students are doing great. I'm just throwing out examples.
Sometimes, you can explain it away in your data really easily, but other times, you can't. So sometimes, you might need to look at services. Maybe the student needs some services to make them whole. Here's the big list of things that you might need to address.
But long story short, sometimes your students need help through services; sometimes simply making your instruction better may not be the right answer. For the times when it isn't, you might need to look at services. Sometimes that's supportive services, where you're basically looking at individual personal needs. Other times, that might be training and transition services, where you're dealing more with professional needs. Again, when you see those gaps, sometimes you might need to meet them in terms of services.
So the final thing I wanted to bring up is targeted instruction. Some of that is going to be done by that data dialogue process I talked about two or three slides ago. Some of this has to do with those test results reports in TE.
Sorry, a million things here in this session. We'll have this a little bit more compartmentalized at the summit. That's what this is telling me, just so you know.
But in any case, do you all know what I'm talking about when I'm talking about those TE test results reports? That is, those reports in TE that enable you to use your assessment results to inform instruction. Does everybody know what I'm talking about when I reference that, yes or no?
OK, thank you. That's a yes. Everybody else? Diana, OK, so at least a couple of you do. I think everybody is getting tired of these questions. But in any case, a few of you do.
So again, these are the competency reports, content standards reports, individual skills profiles, and so on. You can run them at the agency level, by class, by program, or by individual student. So you can look at placement level against their assigned class.
Again, this third bullet, that is, the hot spots concept, is my preference, where I think at the end of the day, you want to look at these reports by class and student. You want to find specific classes that are doing really well. Find others that are really struggling.
Do the same thing at the student-level. Find two or three superstar classes and correspondingly superstar teachers. Find two or three superstar students. At the same time, find a few students in classes that are really struggling. See what makes the outliers different from everybody else.
What are your superstar students and classes doing that others aren't? At the same time, what type of classes and what type of students are really, really struggling? Then align those expectations: move the ones that are not doing as well back up toward the norm, and try to move the norm up toward those that are really doing well. Again, that's the hot spots concept, looking at good hot spots as well as not-so-good hot spots.
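For that hot spots idea, here's a rough sketch, again using made-up class-level numbers rather than an actual TE test results report, of flagging classes that sit well above or below the agency norm.

```python
# Sketch of the "hot spots" idea: flag classes well above or below the agency norm
# on some performance measure. Class names and rates are made up for illustration.

from statistics import mean, pstdev

class_rates = {
    "ESL Beginning AM": 62.0,
    "ESL Beginning PM": 41.0,
    "ABE Level 2": 55.0,
    "ASE Evening": 28.0,
    "CTE Office Skills": 70.0,
}

norm = mean(class_rates.values())
spread = pstdev(class_rates.values())

for name, rate in sorted(class_rates.items(), key=lambda kv: kv[1], reverse=True):
    if rate >= norm + spread:
        tag = "good hot spot: see what this class and teacher are doing"
    elif rate <= norm - spread:
        tag = "struggling hot spot: dig in and work back toward the norm"
    else:
        tag = "near the norm"
    print(f"{name}: {rate:.0f}% ({tag})")
```

The same approach works at the student level: swap in student names and rates to find the superstar students and the students who are struggling.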
And then here are lots of resources: again, those TE reports, lots of things on the CASAS website, that part of the CASAS website labeled Curriculum Management and Instruction, the EL Civics website, Quick Search, and so on.
So OK, way too much there. Four minutes left. So I'll just say, any questions? You can swing a million different ways here.
What I'll also bring up, though, is that, again, this was purposely trying to provide ideas. If there are any ideas you like but, gee, you need a lot more information to really know, I'd be happy to hear that. That's where I'll latch on for future training. That's where I'll know which of these details I really need to single out and dig into a lot better than I did today.
So any questions? Pin drop? You could hear a pin drop now. It's time for an airplane to fly overhead over where I am, I guess, right? So anything at all?
Lots of info. Yes, I know. Probably set the bar a little too high.
All right, so I'll turn it over. Do I leave all this up, or do I need to leave, or what's going on with the PowerPoint? So Veronica or Mandalay, I'm not sure which of you needs to take over now to finish up.
Veronica Parker: Thank you, Jay. I will take it from here. And thank you all very much for participating in today's session. To reiterate what Jay said, we definitely are looking to you all for your feedback in terms of how we can continue this process of learning and more closely align it to the three-year plan.
I have received some feedback in the chat, and then we also have those outstanding questions that we will send to the CAEP office and, hopefully, provide some clarification. But again, definitely let us know, especially in the evaluation. But if you want to send a simple email, that would be great as we are working with Jay to identify some other training opportunities.
So in terms of the question-- excuse me. In terms of the PowerPoint, we're going to need about a week or so to remediate the PowerPoint. So once that is made available, we will send it.
Annabelle, I understand that everything is time-sensitive. We do have about a three-week turnaround time when it comes to the remediation of the recordings. So as soon as we get those back, we will be sending them to everyone. I don't know how you all feel about receiving individual emails with recordings and PowerPoints, or if you would just like one Google folder with a repository of everything that we have presented on the three-year plan and the directors event since it's all--
Jay Wright: And, Veronica?
Veronica Parker: Uh-huh.
Jay Wright: One last thing. I'm just seeing some questions come through. So I'll give Emma the credit here, because I think her question is what set this alarm off, or whatever. What Emma said is, hey, I'll call you with questions. That might be a good approach.
I do think there's no way to do this without purposely staying out of the weeds because there's so much. But what Emma brought up is, hey, I'll call you with my specific questions. So I'll just encourage all of you to do that, because I think every individual here, every consortium here, is going to be a little different.
So I'll just say, hey, if you want to get deep in the weeds, what Emma suggests might be the right approach. Those of you that want to get a little deeper in the weeds, shoot me an email. We can schedule a half hour or 45 minutes to just get on the phone and talk this out in a little more detail, just you and me if you like.
It's not a requirement, but for a lot of you, I can see that being as useful or more useful than a training: hey, we need to get together one on one and talk about what you've got going, et cetera. OK, sorry, Veronica. I'll stop.
Veronica Parker: OK, great. So I'm seeing the Google folder. So that's great. We will create a Google folder and send that out to everyone, and be sure to add all of the resources that are being shared, as well as the recordings as we receive them back, so that you all have one main point of reference for all of these materials. Again, if there's-- MJ, thank you for making yourself available.
If there are any additional things that we can do as a TAP team, please let us know, and we are more than willing to do so. And for those of you who have not registered for tomorrow's session, please take a moment to register. It will be from 12:00 to 1:30, and WestEd will be going over LaunchBoard MIS data and presenting the new fact sheets that will, hopefully, be helpful to you all as you are working through your three-year plan.
So again, any questions that come up please be sure to contact TAP. And we look forward to seeing everyone tomorrow and for the rest of the week. Thank you all so much, and have a great day. And thank you, Jay.
Jay Wright: Thank you.
Veronica Parker: And the rest of the TAP team, thank you all.
Jay Wright: All right.