Jay Wright: OK, hello. I'll go ahead and blow up the PowerPoint. I reckon, while I do that, I'll just say hi. Everybody hear me OK, see the screen OK, all that good kind of stuff?
Audience: Yes.
Jay Wright: Are we A-OK on that sort of thing? So hey, here's your proof in the pudding. Hopefully, this title and date matches your expectations. So a lot of you-- you look like you're certainly, mostly the coalition of the willing, usual suspects. You decide which cliche you'd like to be.
But a lot of people I've seen before in these webinars, so as such, you know kind of [humming while flicking tongue] like my cliches, whatever. So anyway, and I also like my silly questions to start. So the silly question to start this one is, how many of you were in the workshop we did a week ago on March 31st? We kind of put this one together as a little bit of a two-part series.
I mean, of course, it's really just two parts of a bigger than that series that a lot of you have been doing for the last three years. I admit that. But we did set this up with CAEP TAP to have two sessions specific to CAEP and so on. So maybe I should have answered it the other way-- maybe I should have asked it the other way.
It's not a big deal either way. But how many of you were not at that one? Maybe just about all of you were. I haven't had any of you say no yet. Anybody here not, maybe? Not a big deal either way, but just curious because of these first couple of bullets-- OK, thank you for being brave, Angela. So we'll start by recapping a couple of key points from the last one.
OK, we're splitting up now. All right, so anyway, last week we did bring up some new CAEP reports. Hopefully tomorrow morning at the network meeting, I'll be able to speak to this a little bit. But in any case, we talked a little bit about, hey, here are some new reports we either have available now or will be available very soon because I hadn't done anything with CAEP TAP in a while as of last week.
And then, the bulk of it, of course, was that second bullet where we brought up again those CAEP goals, the metrics for CAEP goals. That is the two mandatory metrics for consortia, the two mandatory metrics for agencies, and then the 10 optional metrics for agencies. So we went over all that. We've done that many times before, I think.
But what we did is we went over all of those goals and outlined, OK, here are some suggestions: in TE, reports you could use, other features you could use to directly address that goal, and also some ways that you might measure it. So we'll recap that at first. And then another form of recap, this we didn't really do a week ago.
But again, you're a coalition of the willing sort of group, it looks like. So my guess is most of you have heard this in other workshops, but we'll relate a little bit what we have going on for CAEP to those NRS performance goals that we've talked lots about the last two or three years. If you're not NRS, it's not important that you know Table 4 like the back of your hand or anything, but there is a basic concept of looking at what goals we have statewide, looking at the averages we have statewide, and using that as kind of a process to get started with being able to identify your own strengths as well as, of course, identifying your own needs.
And there's that next bullet right afterwards. It's anticipating what I'm going to say, I guess, but we'll talk about how we can do that using data. And we'll get really into brass tacks with improving persistence and performance.
What's not on the agenda that I also kind of want to introduce is, last time, again, a lot of you that were there, we talked about a lot of quantitative methods, some of my own goofy terms, just some general processes you might use when you're in your data, in TE, in other systems.
What are some just general processes you might want to use? As a data analyzer, so to speak, what are questions you want to be asking yourself every step of the way? A lot of real basic Sesame Street-like concepts of hey, which of these is not like the others? I thought about this over the last week, and I think a lot of that is really-- the underlying issue is a lot of the things you're doing are really the exact opposite of rocket science.
Sometimes data gets falsely perceived that way. But a lot of ways, if you not only dismiss all the rocket-science aspects of it, but take it that extra step and say, not only is this not rocket science, but it's the exact opposite, it's really Sesame Street-- which of these are not like the others, who do we need to talk to to find out what's different, and all that-- that's the sort of question we're trying to set it up as. It's really just asking basic questions.
So we'll really hone in on persistence and performance this time. And then the other thing is we'll kind of start introducing, by agency and student-level solutions, some suggestions for how you might want to engage your teachers and other staff, how you might want to engage your students as we get more data.
We already know we have data in TE, but we know that there is probably a 99.9% chance TE is not going to solve all our problems for us. We also, of course, need to engage our students more and engage our staff more, and simultaneous to all this data digging, there's a lot of stuff in our interactions that we also need to consider. And then we'll kind of tie it up with some of these CAEP reports issues and some even more detailed suggestions at the end.
So sorry, that's way too much time on the agenda. I guess I'll just sew it up by saying, hopefully, that's more or less like what you were expecting to hear today. If not, I encourage you to bring it up in chat or verbally-- hey, there's still enough time to adjust if there's something we're missing that really needs to not be missed.
OK, so here this slide is definitely from last week. We kind of gave an update because it's been a while since we've done so, at least from a CAEP TAP point of view. So we do have some new reports out, and others coming soon. But we showed this consortium manager menu because that really has kind of been the bulk of our new efforts, at least as far as CAEP.
So old news, but just to summarize if you haven't heard: that I-3 summary, that is, those reports that relate to EL Civics and to immigration indicators, or I-3, we now have for CAEP agencies at the consortium manager level in addition to just the agency level like we did before.
We also have two new reports, one called Student Enrollment by ZIP, the other by ZIP and Demographics. Those are, for the consortium manager, ways of looking at data by ZIP code. So Student Enrollment by ZIP, I think, is the one that will be really helpful for everybody. And it's kind of matching what was requested, that is, hey, as a manager, we know that we have some communities that are well represented, others that are less represented.
So maybe we want to be able to see where our students are coming from, partially to help marketing efforts and get more students in our doorway, partially, more for the reason of this workshop, to evaluate performance and try to figure out where our positive outcomes are coming from and where they're not.
So for marketing and performance-related reasons, it was thought, hey, if we can really figure out where our students are coming from and what's represented, that will really help us. So Student Enrollment by ZIP gives you enrollment counts by ZIP code and by city, and then that second one there on the list goes into a little bit gorier detail.
So for some of you, it will be about the most wonderful thing you can possibly imagine. For some of you, it may be way TMI. But it basically takes all of those factors I just described and adds the third dimension of demographics, so you can also-- OK, I'm not sure. I'm not quite sure about that one.
If it was the one, the second one with demographics-- you know what? Send me something that I can check out. I will say the second one is super busy. It's got three dimensions, not two. It does the same thing, enrollment by ZIP code, but further organizes it by male, female, race, ethnicity, et cetera. So maybe-- I mean, I wonder, maybe it's a too-many-pages issue.
I'll just say I'll show you a goofy screenshot, if you want, where I can get it to work, but it does give me gazillions of pages, and you end up needing to put it at like 10% to really see everything because it's so detailed. I'll just say that's a known issue, then-- and I'll just say, if you have any logic suggestions, we're all ears.
But it's literally so much information that we've had trouble really getting a handle on it and getting it to fit within one page. So to date, we figured out a way to get all that wonderful data there, but we haven't figured out a way to get the data in a neat and tidy arrangement, so that could be it.
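For anyone who wants to picture what those two ZIP reports are tallying, here's a minimal sketch with made-up records; the field names and values are illustrative only, not TE's actual data layout.

```python
from collections import Counter

# Hypothetical enrollment records; field names are illustrative, not TE's actual schema.
enrollments = [
    {"zip": "92111", "city": "San Diego", "gender": "F", "ethnicity": "Hispanic/Latino"},
    {"zip": "92111", "city": "San Diego", "gender": "M", "ethnicity": "White"},
    {"zip": "91950", "city": "National City", "gender": "F", "ethnicity": "Asian"},
]

# Two dimensions: enrollment counts by ZIP code and city (like Student Enrollment by ZIP).
by_zip = Counter((e["zip"], e["city"]) for e in enrollments)

# Three dimensions: the same counts further broken out by demographics (the busier report).
by_zip_demo = Counter((e["zip"], e["city"], e["gender"], e["ethnicity"]) for e in enrollments)

for (zip_code, city), count in sorted(by_zip.items()):
    print(f"{zip_code} ({city}): {count} enrollees")
print(dict(by_zip_demo))
```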
But anyway, two new reports there, and then there's the butterfly hours, which is very similar to those other newer hours reports we have, but it uses those stages of a butterfly. Some of you love that, others not so much. But that's the one that's not there yet. It should be available in the next couple of weeks.
OK, and then this is just a reset. I'm not going to dig into this as much as last week. But we showed this and then we gave the longer list. That is the two mandatory consortium metrics, two mandatory agency metrics, and 10 optional agency-level metrics. Bottom line is-- well, obviously, the ones that are mandatory are mandatory.
For the optional ones, CAEP has kind of gone way out of its way to say, it's optional, which ones you choose. It's also optional, which data system you use. So I think it was intended to be LaunchBoard, but lots of things have come up there so it's now optional. You can use LaunchBoard if you want.
If you'd rather use TE, go ahead, use TE. That was kind of a point of last week's workshop, was to give you more options using TE. If you'd rather use your K-12 system or your college system or whatever, that, too, is also fine.
OK, and then now, I'm kind of sidestepping here and making it a little too much like NRS, perhaps. But I can see most of you are WIOA II agencies, but not all of you. But are you all kind of familiar with the Data Portal?
I'm not saying you know all the Table 4 stuff or you know everything about the Data Portal, but are you all at least familiar with the fact that it exists and what it does? As I make the request here in the chat, I'm going to let you input in the chat on whether you feel like you know what we're talking about here with the Data Portal, yes or no.
So most of you are saying yes as far as I can tell. OK, thank you for being honest, Karen. OK, a couple of you are no. I'm not going to get into it. I'll say, first off, if you don't know much about this and you really want to know the mechanics, I don't have time to do that today, but there is a workshop I have scheduled for 11:00 AM Monday. It's kind of a repeat of sessions I did a few times in March. I'll do it again Monday, the 11th at 11:00 AM.
And that will be one where we'll go through the mechanics of how to access the link I have on the slide. And we'll go into the Data Portal. We'll talk a little bit more about Table 4 and 4B so you know that part of it a little better, and we'll also go through all these mechanical steps of describing these three drop-down boxes and kind of, quote, unquote, teach you how to get through and follow the steps and drum up your data using the Data Portal.
For now, I'll just say, here's a screenshot. You can access this information at the link on this slide. And this is what we've had for the federally funded agencies for a very long time to look at annual data. We've made a big stinking deal about this recently because we've been a little late, but better late than never.
We put up the most recent 2020-21 data about a month ago. We got up the Table 4 performance info. And as Chris notes, just a few days ago, we got the data from the Employment and Earnings survey, so you can look at that.
And I've already promised that when we do this workshop on Monday, I will spend a few minutes making sure we go through all the steps of the E&E survey data given that that's new, and it does seem like there's definitely people that feel pretty good about the Table 4 side, but less good about the survey data. So we'll definitely try to give it some good airplay on Monday.
But that said, this is the Portal we have for NRS data. If you are a federally funded agency, here's where you can get really good information. It's not required, but if you ask us at CASAS what's the best way to get this information and use previous performance information as a way to establish goals, we, at CASAS, would definitely say the Data Portal is your best bet-- more terra firma, probably a little easier to work with than going into TE or into other data systems.
This is where you can get that info. The catch, though, of course, is that it's part of our WIOA II contracts. So that means it really necessarily needs to cover just the federal reporting agencies. So if you don't have that federal grant, your agency may not be in here. I admit that. But it will still be usable.
If you're not WIOA, I will go on to say, as I continue along with this, that at a bare minimum, when it comes to making things like screenshots for PowerPoint slides, the Data Portal tends to be a lot more user friendly than TE or any other data system because, as you can see, we can get the year-wide results.
We can get several years of information on the same screen. So it's super duper better for things like screenshots. So just know, I'm using this for several screenshots from this point forward as a way to show you data in a more comprehensive way, where anybody can kind of look and say, yeah, I can see some strengths in this example. I can also see some weaknesses in this example.
Sorry, I'll pause here and just say, does that make any sense at all? I'm fearing that this might be two or three minutes you'll never get back. I'm just listening to myself talk, and I'm confusing myself. Maybe everybody else is in a lot better shape than me, though, from this level of response. I'll take that as a, hey, don't worry about it. We're OK. Thank you for the feedback.
OK, so moving right along here. So here is an example from the Data Portal. Again, skipping steps, but the point here is we have our NRS levels that we use for federal reporting, six levels for ABE, six levels for ESL, like we do for the Fed tables. And we can drum up our agency-level data and compare it to the state average, compare it to the state goals.
So we're kind of making it easy and cutting to the chase by giving you some fresh out of the box examples. So here, we're using a fictitious agency from a few years ago that has some real obvious strengths and real obvious weaknesses.
And cutting to the chase, where we've got red arrow, green arrow, the red arrow to the left is giving us an example where we're not doing well, ASE Low. Hopefully, everybody can see this clearly from the screenshot. Clearly, ASE Low is an area where we're way below the state average and we're way below the state goal, so that's an area where we may need improvement. I'll stop again and just say, does everybody see, in this fictitious example, where we really need to make a lot of improvement in ASE Low based on what I'm showing you?
Thanks, Isabelle, Connie, OK, everybody's playing ball. Thank you for doing so. OK, so on the other hand, over there to the right, we have this green arrow, beginning-level ESL. That's an area where we're above the average and above the goal. So hey, congratulations, that's an area where we can safely say we're doing well, pat ourselves on the back, whatever.
Does everybody else, though, see where I'm coming from, where I'm saying that's an area that looks good? So we've got one area of strength, one area of weakness.
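Just to spell out the comparison behind those red and green arrows, here's a minimal sketch in Python; the rates and benchmarks are made up for illustration, not the actual Data Portal numbers or official state goals.

```python
# Hypothetical percentages; the real numbers come from the Data Portal screenshot.
state_goal = {"ASE Low": 0.46, "ESL Beginning Low": 0.45}
state_average = {"ASE Low": 0.44, "ESL Beginning Low": 0.43}
agency_rate = {"ASE Low": 0.28, "ESL Beginning Low": 0.52}

for level, rate in agency_rate.items():
    if rate >= state_goal[level] and rate >= state_average[level]:
        status = "strength: at or above both the state goal and the state average"
    elif rate < state_goal[level] and rate < state_average[level]:
        status = "needs improvement: below both the state goal and the state average"
    else:
        status = "mixed: above one benchmark but below the other"
    print(f"{level}: {rate:.0%} -- {status}")
```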
And so I'll move along here. And so if you were in the workshop last week, we had a bunch of slides that looked like this, Jay just being goofy Jay, I guess, trying to mix it up a little bit. But I'll just say, hey, we now know the basics. I cut to the chase, obviously, in saying, hey, we identified an area of strength, we identified an area of weakness, so we really need to start focusing on those areas of improvement. So we need to target our data to the areas of need, so in this case, we're looking at ASE Low to start.
Once we kind of start focusing on those areas where we really need to improve, we need to start looking big picture for agency-level strategies, and maybe more detailed picture to find specific student-level strategies to the areas we identify as being areas in which we need to improve.
So I'm going to double back before moving forward. We looked at that small detailed screenshot that showed ASE Low as an area for improvement and beginning-level ESL as an area of strength. I'm just going to take a step back and show, hey, here's that same ASE Low. It's on the right-hand side this time.
But we'll reiterate that ASE Low is a weak area. And now, we'll just try to find another area. So we're just going to quickly cut to the chase again and identify ABE Intermediate High as another area where we're not doing so well. I'll just point out-- here's a little broader screenshot, but it gives the same news for ABE Intermediate High that we already shared for ASE Low. That is another area where we're well below the state average, another area where we're well below the state goal, and it might represent an area in which we need to improve.
OK, so we're still hanging in there, right? Did everybody agree or see how those were two areas where we clearly were way below average? Sorry, I keep asking all these innocuous questions. But I am shuffleboarding on you a lot here. So good to make sure-- all right, thank you. Everybody is being cooperative.
So now, we've got another blown-up slide. To be clear, you can see by the header, it says Evaluating Persistence. So this screenshot looks a lot like a couple of these others we've shown the last couple of minutes. But it's different data. We're still on the CASAS Data Portal. But we're using the persistence section.
So we're comparing persistence data, not performance. That is, we're just looking to see how well we're doing in terms of getting pre/post spans. So if you haven't heard this a million times-- some of you have, some of you haven't-- what we were looking at before is performance. That is, we're evaluating our high-performing and not so high-performing levels.
When we use the term "performance," we're talking about what percentage of students do the test-- not only do the testing, but make enough of a gain between the pre-test and post-test to make a gain on the federal charts. That's what all these percentages are concerned with.
When we move out of performance and into persistence, now we're not talking about their scores, we're just talking simply about the percentage of students that, at least, stick around long enough or quote, unquote, persist long enough, to, at least, complete a pre-test and the post-test.
So what I'll just say-- I've said for a really long time doing Tech Assist and doing all these trainings, whatever, is once we start looking at these federal performance levels, we can quickly identify areas of strength and areas of need, a really, really good first step that doesn't solve our problem, but at least allows us to start prioritizing and start making decisions about ways in which we can start effecting improvement is comparing those low- performing areas with our persistence rate, and kind of take this as a first step.
So that's what we're doing here. So from the previous slide, we already showed ABE Intermediate High is an area of weakness. ASE Low is another area of weakness. So we're looking at those same two areas that we already know, for sure, are areas of weakness and comparing it to that persistence rate.
So you can see another red arrow and another green arrow. Over there to the left, ABE Intermediate High, the red arrow, you might say, is confirming our suspicion. This area of low performance is also an area of low persistence. That is, in ABE Intermediate High, we're not doing a good job getting pairs.
You can see that our persistence rate is below the statewide average, and below that average by a pretty large margin. On the other hand, we have a green arrow here for ASE Low where, kind of surprisingly, the persistence rate is OK or actually pretty good. You can see for ASE Low that persistence rate is quite a bit above the state average.
So in this case, that persistence rate is probably not the explanation we're looking for as to why our performance is low. We already know our performance is low, but we'll need to look somewhere else other than the persistence rate to find the cause. By that green arrow, we know persistence is not really the explanation in the case of ASE Low.
So I'll stop yet again. Does everybody see why pre/post persistence is explaining away things for ABE Intermediate High, but it's not explaining things for ASE Low? We can see that by looking at the persistence rate and kind of measuring that off this example.
OK, everybody looks good on this one. So I'm going to sidestep here. What I just did is reiterate that NRS federal reporting explanation we've gone over many, many times. But this is a CAEP training, so we know we need to move this over to CAEP. So I'll just say a lot of these things that we talk about related to Table 4 and NRS Persister, we can use the CAEP Summary if we're a CAEP-only agency and use some fields on the CAEP Summary to get this information.
It may not be exactly the same, but there's definitely ways we can use the CAEP Summary and get basically the same information. Certainly, we can get enough information from the CAEP Summary, more than enough information, to at least make that basic determination as to whether persistence is a problem, yes or no, whether performance is a problem, yes or no.
And so in particular, we're talking about using the pre/post section over to the left. Back to NRS reporting, and a little bit to your comment, Janae, we talked about performance and persistence. But I did kind of conveniently skip over the part that says when we're looking at persistence from an NRS federal reporting point of view, we're always looking at pre/post testing.
When we use the word "persistence," we're using that to say, is the student sticking around? There's definitely metrics you can use beyond pre- and post-testing to successfully measure persistence. That said, for NRS reporting, they're always defining persistence by pre/post pairs, not by attendance hours or other measures we've talked about a lot.
So again, we want to make sure it's apples to apples with federal reporting. We use the pre/post section of our Literacy Gains section of the CAEP summary. That is these left-hand three columns we're showing on the screen. So again, that column B, Enrollees-- again, those are the number of enrollees on our CAEP summary that also qualify for the NRS tables.
That is, they have demographics. They have 12 hours. And they also have that qualifying pre-test. And then, we've got the Enrollees with Pre/Post column, C. That's the column that speaks to the persistence, again, of those enrollees that make Table 4. These are those enrollees that have a pre-test and also have a post-test.
And then over there in column D, that's the smaller subset. Those also have a pre and post, but not only that, they have scores on their pre- and post-test that suggest they made that gain based on NRS reporting. Those are the ones that successfully made that level.
So to measure persistence, we can simply compare column C on the CAEP Summary to column B. That is, of those enrollees qualified for federal reporting with the hours, demographics, and qualifying pre-test, how many of those completed a pre/post? We can calculate that easily, simply by doing column C divided by column B. That gets us our persistence rate.
We can do it by program. We can do it for our whole agency. We can do it lots of different ways. I'll just say this is kind of the pre-COVID feedback, but I'll leave it on here, depending on where you might be setting goals. Before COVID, I used to always say there's not really a number I can give you for performance because that performance rate really is all over the map.
And it's not even all over the map for ABE and ESL. All of those different levels of ESL have different rates. All of those different levels of ABE, ASE also have different rates. So there is no number I can give you. I just don't give you one because it's got 100% chance of being misleading.
Persistence though is way more straightforward. I've said, yeah, good ballpark figure. 70% persistence, that's our California rate for almost 10 years now. So if you have 70% of your students or better completing that pre/post pair, in general, that's what I, anyway, would say means you're on the right track. So if that helps, there you go.
And then for CAEP performance, same math here. But we're looking at column D rather than C, that is, the EFL Gains Achieved. That is, we're looking at not only those that complete a pre/post pair, but those that make a good enough score between the pre-test and post-test to move up to the next level. So that column D, as in delta, divided by B, as in bravo, would give us our performance.
So I'll just stop right here. If you are NRS-funded, you could use the Data Portal, probably easier. But I bring all this up to say, hey, if you're not NRS-funded, or maybe you are, but you really want to incorporate non-NRS programs like CTE or Workforce Prep or whatever, then you can use this CAEP report and get to the bottom of it using these equations.
Again, it's the CAEP Summary, so that's the one I showed you here at first. I showed the screenshot of the whole report. This is the one you'd use. And then the next subsequent slides were blown-up screenshots of that same report. Okey-doke.
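To make that column math concrete, here's a minimal sketch with made-up counts; the column letters just mirror the B, C, and D columns of the CAEP Summary described above, and the numbers are hypothetical.

```python
# Hypothetical counts pulled off one CAEP Summary row; the column letters match the slide.
enrollees_b = 200        # column B: NRS-qualified enrollees (demographics, 12+ hours, pre-test)
pre_post_pairs_c = 150   # column C: enrollees with both a pre-test and a post-test
efl_gains_d = 90         # column D: enrollees whose pre/post scores completed a level gain

persistence_rate = pre_post_pairs_c / enrollees_b   # C divided by B
performance_rate = efl_gains_d / enrollees_b        # D divided by B

print(f"Persistence: {persistence_rate:.0%}")  # 75%, above the ~70% statewide ballpark
print(f"Performance: {performance_rate:.0%}")  # 45%
```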
All right, so we've identified the issue with persistence. We've identified the issue with performance. And now, we've also kind of correlated it to some CAEP-specific suggestions. So here's where we go crazy with all these questions. So this is what we looked at a week ago, I think: we had one related to getting enough testing, where we kind of talked generically about what if we're low on hours, what if we're low on demographics, et cetera.
What are some things we should start looking at? So we looked at that for testing, but I think the one we looked at last week was just general what if we're not doing a good job of getting students tested. This one is a little more specific to pre/post performance. So we were talking about some of those questions, look at it longitudinally.
That is, when we look at our data and we see a problem, we should be-- this is doubling back to a little bit of review from last week-- that is, if we know we've got a problem, one question we always want to ask is, look at that longitudinal data issue. That is, we know we have a problem right now in 2022.
You always want to ask yourself, is this a new problem where we've always done well, and all of a sudden in 2022, we went from good to bad? Or is this an age-old problem, where we've always kind of run lousy, and it's been a problem for 5 or 10 or 20 straight years? I'm not saying one is good and one is bad.
I'm just saying knowing the answer to that question is very important. If you know you have a problem, you want to know whether it's an ongoing problem or a newly developing problem. That's what this beige question is. Hey, is this something where we've always done well? Or is it something that's always been a problem?
This next one-- and to also reiterate, we talked about how you always want to look for those kinds of outliers, back to those Sesame Street questions, which of these are not like the others? So if we know we have a problem with performance, hey, we know we've got a problem specifically with high-level ABE and ASE.
We want to look and focus our data just on those higher levels of ABE, like this example, just our classes related to ASE in this example. And then, look for outliers in that small focus group. So we're looking at our ASE data. We know ASE overall is way closer to lousy than good. So we expect most of our ASE data to be lousy because we've already figured out that's our problem area.
ASE is the area where we're lousy. If we narrow it and look at ASE, what we're looking for is, hey, where are our big fires coming from? We expect ASE to be below average. But there'll probably be a few areas that are way below average. So we want to find those areas that are not just bad but are really overwhelmingly stinking bad and put a lot of focus on that.
At the same time, we'll have those diamonds in the rough. That is, even though we know ASE is lousy, here are some specific classes, or specific teachers, or specific groups of students that are doing really well, even though they're an area where most others are doing lousy. Look for those diamonds in the rough that are doing well when others are doing lousy.
Look for those really bad areas that are doing way worse than others. So that's another question you want to look at. You also want to look at some of those specific levels, hey, are there some specific areas where we know there are some specific levels in ABE or ESL that are worse? And then, the green ones are kind of those what-ifs.
So the what-if here is, hey, what if every student just made one more point of gain on the post-test? Would that make a big difference in our performance data? Or hey, it's way worse than that. We need to make big gains, not just little gains. But some of those things in the margins sometimes can surprise you and make a much bigger difference than you would expect. So these are the sort of things we're talking about to be thinking about when you're looking at good data as well as when you're looking at bad data.
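To show what one of those what-ifs might look like as a quick calculation, here's a minimal sketch; the post-test scores and the level cut score are invented for illustration, and it ignores the pre-test side of the gain rule for simplicity. The real level boundaries come from the CASAS/NRS scale-score charts.

```python
# What-if sketch: how many students would complete a level gain if every post-test
# were one point higher? Scores and the cut score below are made up for illustration.
next_level_cut = 236
post_tests = [228, 235, 236, 231, 235, 240, 234]

current_gains = sum(score >= next_level_cut for score in post_tests)
one_more_point = sum(score + 1 >= next_level_cut for score in post_tests)

print(f"Gains now: {current_gains} of {len(post_tests)}")
print(f"Gains with one more point each: {one_more_point} of {len(post_tests)}")
```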
I'll just stop here. Is this making any sense at all with all these captions coming out? Yes or no? Sorry, I got to be a little bit of a fussbudget on this one to check. OK, and I see some new yeses, so that's encouraging.
OK, so these are good questions for performance. So here's another set of questions, very similar process, but different specific questions. These are questions you'd ask if you know your persistence rate is weak. So just like we did with performance, well, hey, what's our historic track record? Hey, we're doing really lousy capturing pairs now. So an example might be the COVID question. In this case, maybe we were great in 2016, '17, '18.
And now we're way closer to lousy. If so, we'd obviously want to say, hey, maybe that's a COVID issue. We were testing everybody great in 2018. COVID hit. And now we're not doing so well. Maybe it's simply a COVID casualty. Ask yourself that question. Maybe it's something else. Either way, start figuring out what it is that's made you do worse now where you are doing great before and move forward.
There's no right or wrong answer, just knowing what correctly explains your situation, just like we talked about with performance. Maybe there's that random class or random teacher that's doing a spectacularly good job with this amidst everybody else doing spectacularly lousy. What is that one teacher or one class or one group of students doing that clearly everybody else isn't? Figure that sort of stuff out.
And then the what-ifs: what about those low-performing areas that we know we need to fix, are they also the ones that have low pre/post test persistence? If it's a match-- what I'm really getting at is this question in bold at the bottom. That is, can we improve our performance if we really just put our focus on improving persistence?
I'll stop right there. Does everybody understand that question? Because I feel like it paraphrases a lot of what we talk about when we talk about this goal set. Sure, so what does it mean? Anybody want to dig in and explain? Anybody want to take care of my 'splainin' for me?
What do we mean by that? Anybody brave want to reiterate that verbally or by chat? Yeah, get them to the test. Right. So I'll-- OK, nobody wants to be brave.
Audience: Yeah, I can do it.
Jay Wright: Do it. Please do.
Audience: I want to try. So if you make sure that the students are coming constantly and they're focusing on the classes, you can assure, in my point of view, that the performance is going to be better because they are getting into classes. So if the persistence is good, the students are going to improve. And it's going to show in the test.
Jay Wright: Right. OK, that's good. And I see what you're saying, Connie. Yeah, that's a good part of it. And I'll just say beyond just that, it's kind of a little bit of, hey, I'm not looking-- but a little bit of-- you see the motion I'm making where, hey, we're just sort of dismissing the problem because it's mission accomplished.
Can we be like that with our performance if we just say, you know what, we kind of feel like our student learning is fine, but we're not necessarily getting everybody tested? So if we sort of ignore the learning part but just focus on our data cleaning and our pre- and post-testing, will that alone kind of bump up our performance, too, because we know our persistence rate is low?
So can we really just safely ignore bothering our teachers for now and just focus on our data and our testing rate? Is it safe to say our persistence is such a big part of this problem that if we just focus on the grunt work like that, we'll probably, by osmosis, improve our performance because our data and our testing rate is so bad that that really is where the bulk of our low performance is coming from?
So back to the, hey, if we just focus on data and getting more pre/post pairs, we'll probably automatically improve performance because we've already figured out that that's such a big part of explaining why we're performing low in the first place. What I'll say is sometimes we have it easy, and we can kind of safely brush it off and say, if we just do that, we probably will spike up.
Other times, we'll be more like that example in ASE Low, where we know we're exceedingly and overwhelmingly lousy in terms of performance. But what do you know? We're actually doing a good job getting everybody tested. So we know that's not really going to help in that area. So sometimes it really will do this, and other times not. Go ahead, Dana.
Dana Galloway: No, you're probably going to say this. But once you rule that out, then you need to go on and figure out what's going on in your classrooms or what's going on with your curriculum. And I'm sure that's what you are going to say next. But you know that's--
Jay Wright: What do you think? What does everybody else think? You think I'm going to say that? Yes or no?
Dana Galloway: Yeah.
Veronica Parker: You think that's on the itinerary soon?
Dana Galloway: Yeah, so that's the hard part.
Jay Wright: OK, sorry, I had to get other people involved just to be cute, but yeah, sure enough. That's it. Yes, we will be talking about that. And this is kind of a segue to say, yeah, sometimes we can focus on fundamentals. And a lot of times, just that focus on fundamentals alone, as long as you use some grit or elbow grease, we'll probably be successful.
Other times, it might be more nuanced, and then we need to look a little deeper, et cetera. So now, we'll double back. Here's a slide that we just showed about 10, 15 minutes ago-- back to the, hey, we know where we're weak. So let's compare it to persistence-- one comparison where, yeah, persistence is definitely the problem, another one where it really is not the problem, and we can rule it out and look at those other issues, absolutely.
And so a couple of people have been asking me about this. I'm not sure if it's anybody here, but that little goofy grid. So this is where that grid comes in, where this is just, for me, where I like-- you know, hey, we know performance is going to be good or bad.
We know that persistence is also either going to be good or bad. So that allows us to oversimplify things a little bit and put ourselves into one of these four quadrants, where we're good in both, bad in both, or perhaps good in one, and not so good in the other. I'll just say what I like to focus on here more is the left side of the equation.
If we're in the upper right, and we're good in both, well, hey, congratulations. Maybe we don't need any of this. If we're in the lower right, where performance is good, but persistence isn't, I'll just say, hey, maybe it's mathematically possible. But it's very, very rare. So rare that I don't really talk about it much because it's just unusual to be a high-performing agency and have low persistence at the same time.
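As a rough illustration of sorting levels into that two-by-two grid, here's a small sketch; the benchmark values and level rates are made up for illustration, not official targets.

```python
# Classify each level by whether its performance and persistence clear hypothetical benchmarks.
PERFORMANCE_BENCHMARK = 0.44   # e.g., the state average for that level (made up here)
PERSISTENCE_BENCHMARK = 0.70   # the ~70% statewide pre/post ballpark mentioned earlier

levels = {
    "ABE Intermediate High": {"performance": 0.30, "persistence": 0.55},
    "ASE Low":               {"performance": 0.28, "persistence": 0.78},
    "ESL Beginning Low":     {"performance": 0.52, "persistence": 0.81},
}

for name, rates in levels.items():
    good_perf = rates["performance"] >= PERFORMANCE_BENCHMARK
    good_pers = rates["persistence"] >= PERSISTENCE_BENCHMARK
    if good_perf and good_pers:
        quadrant = "good in both: keep doing what you're doing"
    elif not good_perf and not good_pers:
        quadrant = "low in both: focus on fundamentals (data cleanup, pre/post rate)"
    elif not good_perf and good_pers:
        quadrant = "low performance, good persistence: look at instruction and the classroom"
    else:
        quadrant = "good performance, low persistence: rare, but check the testing process"
    print(f"{name}: {quadrant}")
```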
So we'll focus on the other two. The first one is, obviously, hey, we're low on both. That one's self-explanatory. Of course, we need help, then. So that's that example where we know we're low-performing. We cross-checked with persistence. And yeah, our pre/post rate is also low. So that's the one where we'd stay focused on the fundamentals.
Where, if we're low in both, that suggests, for now, we really just need to work on those basic issues of, hey, if we've got data that needs cleaning up, demographics, hours of instruction, whatever, clean up those demographics and clean up those hours of instruction. And then at the same time, of course, really put the focus on getting all of the students, overall or in that specific level, pre- and post-tested.
So these are, kind of, in my opinion-- you're hearing from CASAS, so, of course, I'm going to say this-- what I would consider the two most basic fundamentals: data cleanup and pre- and post-test rate. Focus on that. I mean, you can certainly look at other things, but this theory suggests you really need to focus on fundamentals, get those fundamentals up to snuff.
And then, cliches I've used in other workshops, once you get the fundamentals up to snuff, that's where you can get a chance to rinse and repeat, house of mirrors, all that stuff we've talked about in other workshops, where, hey, we know our data is clean. We fixed those missing demographics. We've added our hours.
So most people are over 12 hours of instruction or whatever. We've got 80-plus percent pre- and post-tested. So let's run the reports again and see what performance looks like now that we've cleaned up our data and got more students pre- and post-tested. The idea is, hey, we do another evaluation, and hey, all of a sudden, what do you know, our performance is overwhelmingly and exponentially better. Well, hey, a little elbow grease fixed the problem. If, hey, we did this and it's a little better, but it's still way closer to lousy than good, then we need to look at those other areas.
OK, then the other one is, hey, our performance is low, but our pre/post persistence is actually pretty good. So hey, that suggests we're already doing well with the basic data collection requirements. You literally can't do very well with persistence and have your fundamentals be bad. If your fundamentals are weak, by definition, that means your persistence is going to be low. So by determining your persistence is good, that suggests you're doing well with these fundamentals.
But our performance, like you can see from the graphic, is still low. So we need to address other areas. Usually, that means we need to focus on instruction and the class. So now, that brings up another series of questions, where we looked at our levels. We've identified strengths. We've identified weaknesses.
We've kind of rinsed and repeated and taken it to the second step, where we've evaluated our weak levels with persistence. We've identified which of those weak levels have weak persistence as well. We've identified which of those weak levels have good persistence. We've kind of split that up and kind of divided it accordingly.
We've done another round. Now, we've kind of honed in on those with good persistence and not so good persistence. So now we need to, A, use those data, bring in those data questions we talked about, and B, start looking at our staff, specifically teachers, of course, and the students that are involved in those weaker areas we've identified, get their feedback, figure out what's going on from their point of view. That's kind of what we need to do if we really want to roll up our sleeves and get this fixed.
So we're going to step back. This is the data side. This looks suspiciously familiar from a week ago. In fact, I think this was a slide from a week ago, maybe minus a few word changes or whatever. But this is what we talked about with the word, "troubleshoot," where we're in our data troubleshooting things with TE reports or other data submissions.
These are our kind of, quote, unquote, "quantitative data" related suggestions of ways to troubleshoot our problem areas. To reiterate again, longitudinal data, good idea, look at the history. It's not always going to say the same thing for you. But just knowing what your history is, at the bare minimum, puts you in a much better position to effectively solve these problems.
That is, there's no exact formula I can give you for, quote, unquote, "good history or bad history." But just knowing, hey, this is an area where we've been superstars for 20 years. Now, all of a sudden in 2019, 2021, we're a basket case. What the heck happened? Was it COVID? Was it some other development?
That's a whole different story, in my opinion, than, hey, yeah, we've been terrible about this every year since 1999. We can never figure out our elbow from our earhole in this area for whatever stinking reason. I'll just say scenario A and scenario B, neither one is better or worse than the other.
But you might suggest a completely separate set of questions for the, hey, we've been bad at this since forever, versus, hey, we've been great at this. Why are we suddenly bad instead of good? Does everybody at least follow what I'm talking about with these type of questions?
OK, thank you, Connie. I knew I could count on you to [inaudible]. OK, so there you go. Then these are my goofy terms. I didn't invent them. But I make a big deal about saying these are my terms just because I really don't want to artificially or falsely give you definitions as if they're ones everybody knows, or artificially project my terms as the terms you need to use.
I don't want to misrepresent myself, so that's why I say that. But these are terms that I like, so that's why I'm giving them to you. So by hotspots, I just mean those areas-- hey, we know that we're really weak in ABE. What are some hotspots within our ABE data, where we know ABE is a problem?
Where are the hotspots? Where are these big areas that are causing the problems in ABE coming from? Maybe it's a class. Maybe it's a teacher. Maybe it's a specific student or group of students. Maybe it's some specific level, hard to say what. But find out, where is the really bad data coming from?
You're not going to neatly tie it up in a bow usually, but you can usually find some specific teachers or classes or students that are worse than others, that do kind of give you some information about where you really need to focus your efforts to fix the problem.
Conversely, we have diamonds in the rough. That's just a specific variation of hotspot, in my opinion. But it's a good hotspot, so to speak-- same thing. We know we're lousy in ABE. We're looking for that random ABE teacher, ABE class, or those ABE students that are really putting in superstar-level effort, even though we know ABE is a big problem.
What are those superstar teachers and/or superstar students doing that everybody else clearly isn't? What do we need to find from those teachers or students where they might be a big help to everybody else that isn't doing so great? And then again, those what-ifs, asking those questions.
What if we just got a few more hours in for every student? What if the students just made a point or two more gain on a post-test? What if we were able to move a few of these students out of ABE and into ASE? What if we fixed whatever it might be? All kinds of things we could ask based on the scenario.
And then that neighbors one is another one. That was the one we were talking about, kind of right side of the tracks, wrong side of the tracks, where, hey, maybe ASE is really lousy, but we get to low-level ABE and all of a sudden it's good again. It doesn't seem like it should be that different. It seems like it should be really similar results, but it isn't.
What's going on with these neighbors? Hey, we've got this blighted area here with really bad data. But hey, just a couple blocks down, there's this really high-end development going on. Why is there that big of a difference two blocks away? You'd think everything would be super-duper nice, or everything would be blighted. How is it that they're so close together and so different? That's another good way to do it.
So here we're putting it in pink, saying here are things you really need to be doing, questions you need to ask yourself. But I'll just say we're using this different color to say that what we brought up in the last slide, in my opinion, kind of relates to questions you should be asking yourself about your data.
These are questions you should be asking yourself more about your agency and who's involved with this data. So here, we're going to say, hey, who were the specific teachers, specific classes, specific students affected by this NRS level or CAEP program? Who are the specific staff and the specific students we might need to target to get more information?
What I like to say is, hone in on those areas that are unexpectedly strong or those areas that are quite a bit worse than others. Hone in on those, just my opinion. That's where you might get the most bang for your buck in terms of getting good information from students or getting good information from teachers.
That is, focus on the extreme good and extreme bad. That's where you might get teachers or students to give you some ammunition, where, hey, the ones that are doing good can share their best practices that you can figure out ways to share with everybody else. The ones that are really bad will give you the other information, maybe the stuff you don't want to hear, but they will be the ones most likely to give you the information you need to hear as far as what might be going wrong in these areas.
Those are obviously things that might be going right everywhere else. And then find out from those teachers or students or other staff, what would be their suggestions for improving? Instead of, hey, what will it take to get you to drive away in this car, it's, hey, what would it take to get you in the lab to complete testing?
What on earth do we need to do? Get their feedback. Let your students tell you what it is you would need to do to get them there. Maybe that's the simple answer to your problem. What do we need to do in the classroom that will start making you perform better? What is it that you need from the classroom that you're not getting that we need to get there pronto to make sure that you're getting what you need, so you start doing more testing or, in some cases, just do better in terms of results? And so on.
OK, so wait. Did I go the wrong-- oh, no, I didn't. I'm backing up. So I'll stop here. You can see I surprised myself. I'm just backing up to reset. So in that vein, I went off on a major tangent. I pontificated like a you-know-what there. Is everybody still with me? Yay or nay? I'm backing up a step here. So let's make sure-- thank you, Connie, Janae. OK, everybody saying everything's fine.
OK, so this is just a repeat slide. So OK, sermon over. So we'll back up and say, OK, what we're going to do is, OK, we've got those issues where we need to really work on persistence, where they're both low, and others where one is good and one is not. So we're just backing up a step and saying, yeah, we're doing basic data cleanup.
We're also focusing on getting more pre/post pairs. So now, here is where the fire hose comes out, where we're just giving you way TMI with these suggestions. So here's some suggestions to improve persistence at your agency or your consortia. Some of these are bound to be helpful. Some of them are bound to not be helpful.
So it's just a fire hose worth of suggestions so you have lots of options. So one way to look at it is to look at simple roster reports. So you can see some suggestions. Next Assigned Test, Student Gains, Student Test Summary, these are all just TE reports that provide that simple roster format, a list of students by program or by class.
Here is a list of students, ID, name, test they took, date they took it, form they completed, test score, et cetera. So a nice easy grid-by-grid. Here's the students that have a pre-test and a post-test. Here's the ones that don't. That's the most obvious easy solution. If you're not really getting much out of evaluating pre- and post-test, maybe the alternative is you need to dig deep and start looking at instructional hours.
There's those attendance hours reports and instructional hours reports we have. Maybe what you need to do at your agency is evaluate hours more than pre- and post-testing. If so, those are the reports you'd want to use. I'll also recommend some of these new reports, the Enrollees by Hours and Service Enrollees by Hours.
Those aren't really new. I think those are ones we added last summer, so about nine or 10 months old. But if you're looking at-- especially if you're trying to troubleshoot that 12-or-more-hours issue, it might be really useful to look at Enrollees by Hours or Service Enrollees by Hours. That'll allow you to evaluate it by those three buckets.
That is, how many students have 0 versus how many students have 1 to 11 versus how many students have 12 or more. That might be another resource you could use. And then the butterfly hours, that's just a different method of looking at the exact same data. And then maybe looking at enrollees by zip code, focusing on it regionally might be another way that you can find out who's been performing well and who hasn't.
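Here's a quick sketch of tallying those same three hours buckets yourself; the hours list is made up for illustration, not pulled from a real TE export.

```python
# Tally enrollees into the three buckets the hours reports describe: 0, 1-11, and 12+ hours.
hours = [0, 4, 15, 0, 22, 9, 30, 11, 12, 0, 45]   # hypothetical attendance hours per enrollee

buckets = {"0 hours": 0, "1-11 hours": 0, "12+ hours": 0}
for h in hours:
    if h == 0:
        buckets["0 hours"] += 1
    elif h < 12:
        buckets["1-11 hours"] += 1
    else:
        buckets["12+ hours"] += 1

print(buckets)   # e.g., {'0 hours': 3, '1-11 hours': 3, '12+ hours': 5}
```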
Here's another big laundry list for you related to persistence. That first one, I'll just say, is probably about looking at other reports outside of state and federal accountability. Now, we're just kind of getting into brass tacks and giving you accountability report suggestions. So obviously, you can use the DIR.
Here are some specific DIR items that I'd suggest might be more useful than others for persistence. If you choose just one DIR item, the one I'd say is item 10a. That's the item that simply shows you, here is the number of students that have a pre-test but not a post-test. That's obviously what you're looking at.
So that would be the DIR item applicable here. Then there's CAEP reports, again, reiterating a little bit. The NRS monitor is the one I'd suggest. It sounds like a funny title for a CAEP report. But I put it in this category because if you're using reports like the CAEP Summary or like some of those other CAEP reports, you know that you've got drilldown options and you also have right-click options.
The right-click option that shows up, I believe, for pretty much every CAEP report imaginable is that NRS monitor option. That's the one that I always like to say you can use if you know you have a problem, but you really don't know what the problem is, I always say right click and drill down to NRS monitor. The NRS monitor is that supplemental report that basically tells you everything you ever wanted to know and way more.
So if you don't know what the problem is, it's good to have that TMI from the NRS monitor. It gives you everything you need to know about that subset of data so that you can kind of decide, gee, is it hours? Is it demographics? Is it getting the testing in in the first place? Is it more nuanced than that? A good way to get a general flavor for that is by right-clicking and drilling down to NRS Monitor.
OK, so here are these crazy questions again. So again, we're focusing on persistence for now. So here's that same hotspots question. This applies no matter what we're focused on. But again, we're looking at pre/post pairs, who is it that's doing better than others or worse than others in terms of pre/post pairs?
That, at least, from a data point of view is where we want to put our focus. Why is this one little subset so much better or so much worse than everybody else? And so here we're getting into kind of more qualitative with interactions with students and interactions with staff where we're starting to ask those kind of questions now, not just data questions.
So here's a couple there. And I don't know if it's purple or pink. But hey, it's that color. So hey, what's the difference in the feedback we're receiving? We looked at those diamonds in the rough that are doing well, sent out some questions to teachers and students, looked at those other hotspots that are doing beyond terrible, got feedback from those teachers and those students.
What's the difference between what they're saying and what everybody else is saying? Again, back to that basic Sesame Street analysis, what is different from them that doesn't match what everybody else says? We can look at that from the higher performing classes perspective as well as the lower performing classes perspective.
That will probably generate two laundry lists. What is it that's different that they're saying that makes them better or worse than everybody else? Then at a deeper level, from your teachers and students, are your teachers receiving regular communication from your agency? Probably that means you. So this is where you might need to put yourselves on report.
Do your teachers know exactly what all those who's-on-first issues are? Are they clear about their responsibilities for testing? Are they clear about what their responsibilities are versus what your responsibilities are as administrator versus what your responsibilities are as a Data Manager?
Is everybody clear these are the duties the administrator needs to do, these other things are what the Data Manager needs to do, these other things still are what the teacher needs to do? Is everybody, but especially teachers, clear about how all those responsibilities are divvied up, especially those responsibilities related to pre- and post-testing and relating to collecting student attendance hours?
Are teachers clear about what they need to do versus what somebody else's job? Are students, in turn, receiving feedback from teachers about attendance and about their responsibilities as a student for completing requirements? So again, do teachers get good communication from you? Do students, in turn, get good communications from their teachers?
Again, answer that question from a basic yes/no perspective. And then relate it to that other question right above it, where the different information you might be getting from the higher-performing classes obviously suggests that, in your higher-performing classes, teachers and students are clear, and in your lower-performing classes, they're not clear.
So look to see if maybe some of those trends emerge. Right, student goal setting is another one. We'll be getting there soon. So keep that in mind. It's coming to a theater near you, just like Dana's question, you might say. OK, so is everybody clear about these questions on this goofy slide? Yes or no? Thank you, Dana. Thank you, Jesse.
OK, so we're switching gears here. So we gave those suggestions a few slides ago, the myriad of questions that we just went through related to issues with persistence. So now, we're going to switch gears again and move it back over to when performance is low but persistence is OK or actually pretty good.
So here are some accountability reports more angled for performance improvement, so the DIR, obviously, a good one for that one, too. Here's a couple of specific items. In particular, 10b is the one I would select for that, so almost but not the same as the other one. 10a, to be clear, is the DIR item that specifies, has a pre-test, but does not have a post-test.
10b is the one that point-blank shows you, has a pre-test and also has a post-test, but did not make a level gain. So could it be more crystal clear in terms of what you need to find out to improve performance? Again, lots of ones you could use, but 10b is the one I'd vote for if you pick just one DIR item.
Here's that NRS monitor for basically the same reasons I suggested a couple of slides ago. Again, if you know you have a problem, but you're a little unsure what that is, right-click and drill down to NRS monitor, so you get all that wonderful information. So you have all the information you need to start figuring out what the stinking problem is.
CAEP outcomes is another one where, especially if you're looking at performance improvement in programs outside the NRS program-- so maybe you're focused on workforce prep, or adults with disabilities, or CTE, that is, you're looking at stuff that doesn't relate to NRS-- you might want to use CAEP outcomes instead of pre/post reports to look at performance based on outcomes they may or may not be achieving rather than pre/post.
I-3 summary, another good one for that, probably more specific for ESL, but maybe for other programs if you're using I-3 or EL Civics for programs outside of ESL. But that's another one that directly relates to CAEP that might be another good one to use for performance.
If you're a little unsure about pre/post data, that would be an area where, hey, you can measure it based on those COAAPs you're using to measure immigrant integration indicators, if you've got something like that going. Just saying that might be another one you could use to measure performance improvement.
OK, and sorry. I keep unwittingly opening up the wrong window as I move along with these slides. OK, so anyway, here are some more suggestions for performance improvement. That first slide was kind of, hey, here are some accountability reports that get at this. When we talk performance improvement, like we said a few slides ago, that's less related to data fundamentals and more related to improving things in the classroom.
So by definition, that means we might want to look at test results reports rather than accountability reports. So in our test results reports area, we have those reports related to content standards and college and career readiness. We have those CASAS competency reports, individual skills profile, and so on.
I'll just say, how many of you use those test results? I'd be interested in the yes/no. If your answer is yes, I'd kind of be interested in knowing, are you using competency reports, content standard reports, or both? Just curious. If you're not using them at all, I'd also be interested in knowing. OK. OK, all of them, competency-- OK, so mostly, you're saying, yes, you do. Kind of mixed bag on exactly what-- OK, that's good to know.
Audience: Jay?
Jay Wright: Yes.
Audience: We actually use all three of the reports, particularly for our special ed students. And our teacher includes them in the IEP meetings.
Jay Wright: OK, great. But are you doing POWER, or no?
Audience: Am I doing-- no, we aren't doing power. We're doing goals.
Jay Wright: OK, so you have higher-level adults with disabilities?
Audience: Right.
Jay Wright: OK, all right, this is useful. So I'll just say these are obviously tailored to the classroom. If your data fundamentals are good, but your pre/post results are not, then this is where we would suggest you might want to go first. Align the items, so you can use your assessment results to inform instruction.
So these three reports are a good head start. There are more detailed ones in the TE test results menu as well. But these are good ones for starters. It sounds like a lot of you are using them already. The other thing I've been bringing up is the section of our website called Curriculum Management and Instruction.
Some of you were probably in on that little mini presentation from about a month ago, where I brought this up. And even some of our people that know CASAS better than I do said, hey, they've never really been here. Or maybe they'd been here, but not in years and years and years. Even people that read this stuff really weren't familiar with this web page.
I've got to say, way more people are not familiar with this than familiar. So I'll just say here is a screenshot of our illustrious CASAS website. You can kind of see the page I'm talking about here. And you can see it's in our section called Product Overviews. And then drilling down more, it's Product Overviews, then Curriculum Management and Instruction.
The Curriculum Management and Instruction section of our website is the one we frequently refer to as the instructional section. That's the section of our website that has everything you want to know-- everything you ever wanted to know and more about content standards, everything you wanted to know and more about CASAS competencies, skill levels, et cetera.
All of that stuff is here. If you want to know more, this is a reservoir of information. And again, even the CASAS superstars seemingly really don't know much, if anything, about this for whatever reason. But this is a big fountain of information that's there for you, where you can get more information.
So for starters, I'll just point out Content Standards gives you more on that. Competencies-- you can download the competency list, get more there. The College and Career Readiness gives a lot of information about common core and how it aligns to that. Skill Descriptors-- I'll just say, if you're like Connie and you're looking at goal-setting for some of those other programs like adults with disabilities, sometimes using skill descriptors is good.
I'll just say another one in that line is the Low Level Literacy modules. I dare say these Low Level Literacy curriculum modules might be the single most underutilized CASAS resource out there. Nobody ever takes the bait when I mention these. They've been around for 10 years. They were developed specifically for adults with disabilities.
But it's a lot of curriculum modules that rely on pictures and symbols. That is, it goes way, way out of its way to avoid words for students that may not be very good at reading. So you can use pictures and symbols instead of words to convey all the basic concepts, again, developed for adults with disabilities.
If you're like Connie, well, obviously, it will be useful. But it's also really, really, really useful if you have low-level ESL or low-level ABE. Hint, hint-- if a lot of you are looking to bring them back and looking at those low-level ESL students who are rusty because they kind of abandoned you during COVID and now they're coming back again, now that you're opening up again, this might be especially useful for students like that that I guarantee are probably going to be a lot rustier than they were in late 2019, early 2020.
You might use these low-level literacy modules-- a really good way to get a foot on the rail for students who have very low-level reading skills. And again, it's a whole series of things you can use that really rely on pictures and symbols. If you're looking at follow-up for those students just overall, a lot of great resources, I'd say, collectively.
Not everybody, but collectively, those low-level students, whether it's ABE or ESL, typically are the ones that the average agency has by far, and I mean by far, the most difficulty coming up with learning materials that are worth much of anything for. So I really think these low-level modules address an area where just about everybody has a need.
Quick Search is another one. Not as-- it's just as useful, probably more useful, but I'd say not as underutilized. Way more people are clueless than knowledgeable on this. But at least a few of you are familiar with Quick Search, as opposed to those low-level literacy modules where nobody is. But that's where you can go online and find more materials.
That's what I like to call the CASAS cookie-cutter module, where you use reports like the competency reports or the content standard reports to find student areas of strength and student areas of need. You focus on those class or student areas of need, which specific competencies or which specific content standards do we know to be an area of weakness and an area we might need to improve for this class?
So we can search by competency or content standard, and it will spit out all kinds of instructional materials that we know do an effective job of addressing the competency in question or the content standard in question. Right.
And so what I'll just say to your point, Jesse-- sorry I'm late here noticing it, but that's why we would say Quick Search is good. This is a way to make sure that the textbooks or instructional materials you're using directly relate to those content standards or competencies that you've already noticed might be areas in which you need to improve. So thank you, Jesse. That was right on with what I'm trying to spit out.
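To make that workflow concrete, here is a minimal sketch of the "find your areas of need" step that feeds a Quick Search query, assuming a hypothetical export of item-level results with made-up columns (competency, items_correct, items_attempted); it is an illustration, not a CASAS or TE feature:

```python
# Illustration only: rank competencies by percent correct from a
# hypothetical item-level export, to decide what to look up in Quick Search.
import pandas as pd

results = pd.read_csv("competency_results.csv")  # hypothetical export

by_comp = results.groupby("competency")[["items_correct", "items_attempted"]].sum()
by_comp["pct_correct"] = by_comp["items_correct"] / by_comp["items_attempted"]

# The five weakest competencies for this class: candidates for a Quick Search
# query to find instructional materials that target them.
print(by_comp.sort_values("pct_correct").head(5))
```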
OK, and then this is just the slide from that presentation a month ago. We've talked about curriculum management and instruction off and on over the years, but this was-- I think it was at a TE meeting. So if you're more TE related, this is just looking at those areas on the website and trying to relate it to more TE-specific issues.
So mostly, we talk about it in terms of instruction. Here's a way to talk about it in terms of TE reports, where if you're trying to figure out, well, hey, this is all nice and wonderful, but how do I relate it to what I'm doing day to day in TE? How can I parlay this information that you're saying is so gosh darn wonderful and start using it when I'm digging into TE reports and actually fixing stuff?
These are just some TE-related suggestions that relate to some of these sections on the website. OK, and yet another series of crazy questions here. So in this case, instead of focusing on improving persistence, now we're asking the same questions but focusing on performance.
So the first one is kind of the same one we've talked about. Well, we're looking at it in terms of gain, not just pre/post persistence. But hey, which specific classes or teachers or groups of students are doing noticeably better or worse in terms of making gains between pre and post? So it's the same diamonds in the rough, the same hotspots idea, but focusing on the test score in this case, not just whether they have a pre/post pair.
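One rough way to picture that diamonds-in-the-rough versus hotspots scan for performance, sketched against a hypothetical export of matched pre/post pairs with made-up columns (class_id, teacher, pretest_score, posttest_score); it is not a TE report, just the idea of ranking classes by average gain:

```python
# Illustration only: rank classes/teachers by average scale-score gain.
import pandas as pd

df = pd.read_csv("pre_post_pairs.csv")  # hypothetical paired-test export
df["gain"] = df["posttest_score"] - df["pretest_score"]

by_class = df.groupby(["class_id", "teacher"])["gain"].agg(["mean", "count"])

# Diamonds in the rough: the highest average gains
print(by_class.sort_values("mean", ascending=False).head(5))

# Hotspots: the lowest average gains
print(by_class.sort_values("mean").head(5))
```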
And then taking it again, is there a difference in feedback in terms of what the higher- and lower-performing classes are saying in particular? Is there maybe a difference between what the teachers are saying and what the students are saying? That's obviously applicable across the board, but in particular applicable to this performance issue.
And then questions you might ask yourself are, are teachers empowered to identify and employ practices for student learning? Are you letting teachers do their job and figure out what their students need in terms of learning? And then at the same time, are your teachers able to relate class activities to goals and assessment results? And are they able to relate their assessment results to instruction?
So same thing from the teacher point of view, are they getting what they need from you? At the same time, are the students getting what they need from the teachers? Look at it both directions. You probably can't answer that in a vacuum. But if you go to these specific teachers and students and ask them what they think, theoretically, you should be in much better position to answer these questions once you get their feedback.
And then more fire hose stuff here. I copied and modified some things from some recent trainings in that NRS Performance Goals series. So we've got four slides for the different areas we might address. So hey, for starters, here's just some ABE, ASE specific stuff. Some of these might be great suggestions. Some of them might be lousy.
But again, I'm just giving you the fire hose here. If you know ABE, ASE is where you need to improve, hey, here are some ways you might dig into your data some more. Or again, as I'm saying, hey, we know this is a problem, so we might divide it into three buckets: ABE versus HSE versus high school diploma.
Or maybe if you're not that detailed, divide it up into two buckets, one for ASE and one for ABE. Compare just your people in GED or HiSET versus those that are in diploma. Break it down between those that did their pre/post in math versus those that did reading. Look at those who scored really high and may have gotten off the map because they got a pre-test of 268 or something like that.
Look at your outcomes and compare it to high school diploma credits. Compare it to HSE subsections. If you're a big HSE agency, see if there's any difference between those that are working on HiSET and those that are working on a GED. If it's mostly diploma, see if anything comes up by high school diploma subject.
Lots of others you can do. But again, it's just breaking it down into different areas of this program so you can see if any trends come up, see if you can kind of force the issue and shake out a few more "which of these things is not like the other" moments. Hey, I didn't realize it, but the students that are working on completing the HiSET are all doing a really great job, but the ones working on GED really are struggling. Why is that?
I mean, I'm just pulling that out of thin air. That shouldn't happen. But hey, what if it does happen? Well, hey, we've got a trend. We have something we might need to sift out and see, wait, maybe there's something where we are aligning it to the HiSET really well, but for whatever weird reason, we're not aligning our instruction to the GED as well as we're doing the HiSET. Hey, there's a big issue we might need to look at. Maybe that explains some of the reasons why we're doing well overall, but really lousy in those two levels of ASE. I'm just making that up.
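For that bucket idea, a minimal sketch along these lines, assuming a hypothetical export with made-up columns (program_bucket, test_subject, pretest_score, posttest_score), shows how a "which of these things is not like the other" table might be built; the bucket labels are placeholders, not official program codes:

```python
# Illustration only: average gain by program bucket and test subject.
import pandas as pd

df = pd.read_csv("abe_ase_pairs.csv")  # hypothetical export
df["gain"] = df["posttest_score"] - df["pretest_score"]

summary = df.pivot_table(
    values="gain",
    index="program_bucket",   # e.g. "ABE", "GED", "HiSET", "Diploma" (placeholders)
    columns="test_subject",   # e.g. "Math", "Reading"
    aggfunc=["mean", "count"],
)
print(summary)
```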
For ESL, divide it into buckets. Similarly, we could divide it into EL Civics or non-EL Civics. Divide it by EL Civics focus area. Break it down. Look at those who were in IET or IELCE, doing some kind of workforce training. Compare those that are pre/posting in listening versus reading.
Those ESL students also have an issue with scoring high, so see if there's any issue with that. Look at pre/post versus COAAPs if you've got ESL students that are not doing well on their tests. Are they also having trouble with COAAPs? Or are they struggling with testing, but you look at their COAAPs and they're the greatest thing since sliced bread? Knowing that is useful.
Compare by demographics. This you can do for all programs. But I'll just say I hear this a lot more related to ESL than other programs, where, hey, in our community or in our region, our ESL students are from this specific ethnic group, this specific socioeconomic group. As such, they tend to do really well over here or really struggle over here.
Make sure you're aware of what demographic factors are important in your region. Compare that-- I'll just use the one that I hear all the time, where certain demographics are better educated than other demographics in our region. So hey, certain demographic groups seem to do really well on CASAS testing even though they don't know a lick of English, say. So look at that. Is that relevant to your region? Is that affecting your test scores? That might be one that you might want to look at based on region.
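The same bucket idea works for ESL and demographics; here is a sketch under the assumption of a hypothetical export with made-up columns (demographic_group, el_civics_flag, made_gain), purely to show the shape of the comparison:

```python
# Illustration only: rate of students making a level gain, by demographic
# group and EL Civics participation.
import pandas as pd

df = pd.read_csv("esl_pairs.csv")  # hypothetical export
df["made_gain"] = df["made_gain"].astype(int)  # treat True/False as 1/0

rates = df.pivot_table(
    values="made_gain",
    index="demographic_group",
    columns="el_civics_flag",
    aggfunc="mean",
)
print(rates.round(2))
```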
Then for CTE, what if we had more CTE outcomes or more hours? Break it down by CTE-specific program: pre-apprenticeship, short-term CTE, workforce prep. Compare and contrast. Look at occupation. Hey, our CTE data is great. But hey, these students that are in HVAC, boy, their data is terrible.
Why is HVAC so lousy, but our other CTE programs seem to be all right? Why are some good and others not? Compare IET that way. Collaboration with Title I, with the AJCC, is a big factor. Again, this applies for everybody. But on the CTE side, how is our collaboration doing? How are we hooking up?
What's going on with our CTE program, with our other regional priorities, whether it's CAEP consortia, WIOA partners, whatever. Maybe those demographics issues relate to CTE sort of like they do with ESL. And then, here, we're calling it special programs. So if you're like Connie and you're aligning it to adults with disabilities, the age-old example here is, if you're NRS, and you find out that ABE level 1 is your worst level, a lot of times, my first question is going to be, do you have an adults-with-disabilities program?
Because usually strengths and weaknesses in ABE 1 typically have way less to do with ABE and way more to do with adults with disabilities just because that's usually the way it is. So a lot of times special programs can affect certain levels or certain programs more than others.
So look at things like that. Compare special programs, barriers to employment. This day and age, see if there's a difference with hybrid or HyFlex versus face-to-face. Maybe there's a difference with students that have workforce connections. Another one that we haven't really looked at much but might be relevant is to look at students receiving services versus those not receiving services, seeing if that has any impact. Lots of things you might look at.
And then systems in place-- I'll kind of hurry up here. But again, we're getting that feedback, feedback from those affected, looking to figure out, hey, orientation, enrollment, attendance policy, testing strategy, curriculum and lesson planning, new short-term services we might need to add. Those are the sorts of solutions we're looking at by getting feedback from those affected students and affected staff.
And then here's where we're doing all these wonderful things. We're sharing feedback. We're building a culture of data. We're making sure we're getting our students involved. We're providing 360-degree feedback, hooking up with our partners, all sorts of things there to make sure we're moving forward to develop that culture of data.
It's not on the slide, but I'll just say, if you're interested in these, quote, unquote, higher-level strategies beyond what we're talking about, there are a couple of panel discussions planned. It's not really with TAP, but it certainly relates to this. One is on April 19, one is on April 26. I think they're both at 1:00 PM. We'll have a couple of panels.
The one that we're doing on the 19th is student focused, the one we're doing on the 26th is more staff focused. But I have some superstar panelists that have some really good practices in terms of different ways they better engage their students-- that will be one of them-- and then another set of panelists that will talk about ways in which they better engage their staff.
OK, and I think that's it. I think I'm a little bit-- OK, two minutes over. Not that bad. I thought I got the two-minute warning about 10 minutes ago. Maybe that was done as a premature ejection there to make sure we got out of here on time. Who knows? That's for you to know and me to find out, I guess. But anyway, thank you very much. Take it away, Holly or Veronica.
Veronica Parker: All right. Thank you, Jay. And I'm not seeing any questions in the chat, just some thank-yous. So thank you all very much for participating this afternoon. In the chat, we have posted a link to upcoming professional development opportunities, as well as a link to the evaluation.
It only takes about a minute or two to complete the evaluation. We are looking at fall professional development. And so we want to ensure that we are meeting you all's needs when we are planning our professional development, and one way you can let us know what the needs are is through that evaluation. So definitely take about two minutes to complete that evaluation. We would greatly appreciate it.
Other than that, again, I'm not seeing any questions, so we'll go ahead and close. Thank you all very much for your time and your participation and have a great afternoon.
Audience: Thank you.
Veronica Parker: All right, thank you everyone.