
Rethinking How We Assess Students

This is a recording of the EdTech Innovate session, "Out With the Old (Assessment) in With the New (Assessment): Rethinking How We Assess Students", which took place at the Schools & Academies Show London on 1st May 2024.

This discussion focused on key topics such as:

πŸ“ Exploring the shift from traditional assessment models to contemporary strategies that align with evolving educational needs

πŸ“ Considering new perspectives on evaluating students as part of the ongoing reassessment of assessment methods

πŸ“ Does AI encourage cheating or are we assessing students on the wrong aspects?

πŸ“ Tailoring assessments to better evaluate the skills of today’s students

πŸ“ The impact digital assessment will have on school infrastructure

Panelists include:

  • Miles Berry, Professor of Computing Education and Chair, University of Roehampton School of Education and National Centre for Computing Education academic board
  • Shahneila Saeed, Head of Education, ukie.org.uk
  • Alex Scharaschkin, Executive Director of Assessment Research & Innovation, AQA
  • Graham Macaulay, Director of Strategic Partnerships, Leo Academy Trust

Check out the session for free here! πŸ‘‡


Transcript

Shahneila Saeed 0:01

Good afternoon everyone and welcome back to our session. This session is "Out With the Old (Assessment) in With the New (Assessment): Rethinking How We Assess Students". We have a panel discussion, and I'm joined by a wonderful group of experts who I will introduce you to. We have Graham Macaulay, Director of Strategic Partnerships at LEO Academy Trust. We have Alex Scharaschkin, Executive Director of Assessment Research and Innovation at AQA. And we have Professor Miles Berry, Professor of Computing Education at the University of Roehampton School of Education. Thank you very much for joining me. Thank you. This is such an important topic in today's world. If it's okay with yourselves, I'd like to just dive straight in with the first question, rather than giving a preamble, if that's all right. Let's get to the meat of the conversation right from the outset. So my first opening question to each of you, and so Miles you might kick us off on this, if that's okay, what is the changing role of assessment in children's education, and why do we assess students at all? 

Professor Miles Berry  1:16  
Oh my goodness. Where to start with this? So you're all, I suspect, familiar with this whole formative, summative thing that's going on with assessment, and I think that gives us a good way of framing our understanding of the purpose of assessment. Summative assessment seems to carry on pretty much as it always has. Yes, we may see some changes, we may see some use of adaptive testing in there, and the specifications kind of move every now and then, but the purpose of summative assessment seems to remain broadly speaking as is: what have the children learned, and can we demonstrate that to future organisations, employers, universities, wherever? Whereas formative assessment is that information for the learner themselves as to what I've figured out so far and what I still need to work on, and the information, crucially, for the teacher as to how best to adapt their teaching, to respond to where the learner is. I think we see so much shift in there, making that a much more dynamic, a much more immediate process: that feedback, that snapshot of where the child is there and then in the lesson. Think of it, if you will, as the GPS of where the learner is right now, and the ability for us to do the sat nav thing: this is the best path to get to the next destination. The thing I would like to add into the mix here, alongside your summative and formative, is something going on in my head about constructive assessment: that we learn through the process of making things for others to see. That can be a story or an essay for teachers to mark; that can be creating something to show to a much more public audience. That sense of assessment as learning, rather than merely assessment of or assessment for learning.

Shahneila Saeed  3:07  
 I love that, assessment as learning. Alex, straight over to you. 

Alex Scharaschkin  3:10  
Thank you. No, I think that's an excellent point to pick up on, actually, because obviously I come from AQA, which many of you will know as an exam board and hence a purveyor of summative assessments. But the purpose of those summative assessments, in my view anyway, is essentially to provide us with evidence to make promises. When we issue a qualification certificate, really what we're doing is making a promise about the candidate and saying, "This student has achieved a certain level of competence". They've got a Grade 4 in GCSE maths, or a Grade A in A-level French, or whatever it might be. And we need to have a way that is seen to be as fair as it can be, and as valid as it can be, to do that job. But I would agree, in terms of the formative, summative thing, that actually that's a very unusual and kind of special thing that you sometimes have to do for that purpose of providing a qualification, hopefully a passport, for a learner to move on to the next stage of education. But most of the value in assessment really is in the classroom. Because I think assessment, in that "assessment as learning" sense and the formative sense, is part of teaching. Every interaction that you have with a student, whether it's observing them, talking to them, asking a question, you're getting some sense, some information, some feedback, and then you're perhaps taking some action, or making a decision, or doing something a bit differently. And I would regard that as essentially assessment in its wide sense. So I think really in that formative space and that assessment-as-learning space, and I'm sure we'll come on to technology in due course, that's where we've got some really interesting things we can do: things that, I hope you'll agree, can help us do what we couldn't do a few years ago.

Shahneila Saeed  5:05  
Graham?

Graham Macaulay  5:06  
It's always reassuring when you're on a panel, isn't it, when other people say what you're thinking? Because I was sitting there thinking, "Oh no, how am I going to word this?", but I didn't need to. So that's always a pleasant surprise. Coming from a trust with primary-age children, I think the fundamentals of assessment are definitely similar. I don't think it matters whether you're four, or, actually, I've got my own child that's 15 months: it doesn't matter whether you're 15 months or 15 years, I think the fundamentals of assessment are pretty similar, right? And so I was reflecting on what that means for our staff. If I said to a teacher in one of our schools, "What's the purpose of assessment?", I'm actually not sure what they'd say. Because deep down, I really believe that formative assessment is crucial in terms of informing us and acting on it, and my colleagues here have articulated that really, really well. But actually, what does that look like on the ground? And I'd argue that that's the most important thing for us to consider here: how is the way that we enact our assessment policies, systems, processes, procedures and so on real for our children? And so part of the question was about change, and fundamentally, I'm not sure whether the purpose of assessment has really changed. Ultimately, I still think we're trying to do the same thing. Yes, we might have tech solutions for that, and yes, we might have different frameworks, and it was a while since I did my GCSEs, but yes, we've got all of that stuff and the regulation around it. But fundamentally, at the heart of it, I think it's to level up, to accelerate the learning that we provide our students, whether they're Poppy, my daughter that's 15 months, or Joe Bloggs that's 18. I don't think it really matters.

Shahneila Saeed  6:52  
Thank you. Thank you. I think it's worth reflecting on what we currently have, because we started talking about moving forward, but just to think about where we are now: I think it is perhaps fair to say that traditional assessment models are beginning to face, or do face, some criticism. In your opinion, what are the key challenges that they present? But is there an argument for us sticking with the same thing? There might not be, I don't know, but that's why I'm asking.

Professor Miles Berry  7:27  
Okay, is this to me first? 

Shahneila Saeed  7:31  
Yeah!

Professor Miles Berry  7:31  
So firstly, let me reject the premise in your question: are they really that criticised? There are certainly faults with the way that we do some aspects of summative assessment at the moment. I don't know if any of you walked this way into the ExCeL Centre, but there are those huge rooms that I think UCL and King's College London have got here, for bringing their students to the ExCeL Centre to put them behind desks and make them do written papers. Now, in 2024, this is still how universities are doing at least some aspects of their assessment. So they clearly think this is the best way to assess something. What are they measuring there? They're assessing, I think, some aspect of memory: the ability of a student to remember things under very tightly controlled, monitored, supervised conditions. This is pretty much what we do in the Key Stage 2 SATs, where they're put into the school hall. It's pretty much what we do for GCSEs and A levels. So we've not changed yet, and we have had the opportunity to do so, to move away from that, because it is really good at assessing "can you remember this stuff?" Can you apply this, possibly, in a novel context? And there's this idea that memory is the residue of thought: because we're assessing memory in a reasonably reliable way there, perhaps what we're assessing is the thinking that took place before. Doing well in those exam rooms down there, doing well in your GCSEs, doing well in your SATs, probably means that you've had to think long and hard about these things when you were at school. But there's loads of stuff this doesn't get at. It doesn't seem to be a particularly good way of assessing how well you work with a team of people. You don't see the King's and the UCL students being told, "Oh, just put the tables round, let's tackle this together". That is going to be really important to them when they get their degrees. It's really important for the folks leaving school. It's probably really important for the folks leaving primary school.
But somehow the SATs, GCSEs and university finals don't measure that ability to work well with other people, or the ability to use your new technological superpowers: to use the information which Google and ChatGPT and all of the others provide us with, and the ways of thinking that come with using these tools. Traditional paper-based supervised exams don't get to that. So I don't think we reject what we're doing so far, because it is a reliable way of testing what you remember, but we add to it things which would be equally reliable measures of the other stuff, which does seem to really matter in education.

Shahneila Saeed  10:14  
Thank you. Alex?

Alex Scharaschkin  10:16  
Yeah, I would agree with pretty much all of that. I think it's quite useful to make a distinction, perhaps, between what is assessed and how it is assessed. So there's the content, the construct, the skills or understanding or knowledge, whatever it is that we're trying to elicit, and then the how: what is the procedure, what is the methodology we use to do that assessment? I mean, again, I'm at an exam board, and of course, as we know, GCSEs and A levels are largely assessed through written exams. That's the government of the day's preference. Actually, it's not a necessary condition of being an exam board that everything has to be assessed through exams. In the 1980s and 90s, certainly in the 90s, the largest GCSE English specification which we ran was 100% coursework-assessed, for example. When Michael Gove and then Nick Gibb came in, they had a particular view, not only about the content, about what should be in the curriculum and what skills, knowledge and understanding are important to equip young people with during their time at school. They had some very, very strong views about that, but they also had views about how you should assess it, and this sense that it all should be done through written exams, which, despite being a provider of these things, I'm kind of agnostic about: are they always the best way of assessing the thing you want to assess? Sometimes they are actually helpful, and in some circumstances you do need to test memory. Doctors, perhaps, need to know some things without having to look them up. But other times, of course, you don't need to test that.
And a realistic test of, say, how someone might write a letter in a business context would be, well, of course, what you'd do is have a look and see what someone else has done. You take a template, you use something you did before; that's what everyone would actually do. And if you want to be authentic, perhaps that's how we should design those assessments: think about what the content, what the construct is, and then think about the best way of assessing it.

Graham Macaulay  12:27  
I'm going to tell a story. I think it's the primary practitioner in me that I just can't shake off, despite my best intentions. We were doing a piece of work that was linked to tech, but that's kind of incidental to this conversation. And we were talking about assessment, about what we assess, with a group of practitioners: class teachers, phase leaders, set heads, assistant heads, all those school-based professionals. And they were rattling off to me beautifully what the end of Key Stage 2 Writing Assessment Framework looked like, and what this document says, and what that document says. And of course, this is all end-of-Key-Stage-2 orientated, but the concept, I'm sure, is transferable. And Dr Aubrey-Smith was with us, and she posed a challenge: surely, we should be assessing what we value, right? That makes absolute sense. Let's focus on what we really value; let's focus on what's really important. I don't know about you, but I don't really care if a child can spell "cupboard" in their end of Key Stage 2 SATs, partly because I'm not sure I can spell it, in all honesty, but also because, ultimately, that's not what we value. And so I guess my thinking here links into what we've already said. It's not so much about how we assess, because on a personal level, I don't think that's that difficult to change; I appreciate that we need system leadership, we need direction from government, but actually how we do it is quite procedural. I think what's far more significant is the "what": ultimately, what do we value? What ultimately is important? And I don't want to get into a philosophical debate about whether a child ever needs to be able to write, whether handwriting matters, or times tables, all of these sorts of things, because every one of us in the room has our own personal beliefs on that.
But I think there is definitely a place to think about, what do we really value? And you know what? Let's focus on that. 

Alex Scharaschkin  14:18  
Can I just pick up on that point about valuing? Because I completely agree. When you think about the term "valid" in relation to assessment, we often have this mantra that the most important thing in assessment is validity, and of course, validity is about what we value. The question is, who is the "we"? And you're quite right that part of what makes it a very fascinating area of research is this contestability of what is the right thing to value, and how you reconcile the different views of different parts of society on that.

Shahneila Saeed  14:55  
I think this discussion on assessing what we value is really interesting. So for the next question, I'd like to dive a little bit deeper into that, into what we're assessing. Are we still assessing the same things? I mean, Miles, you've talked about that knowledge-driven memory recall, historically, as the measure of what we used to assess. With changing times and needs, there is probably an argument to be made that, to a certain extent, we still need to be assessing a little bit of memory-driven knowledge recall at some points, but do we need to start thinking about other things as well? And how might we best do that? I know this has been touched on in some of the earlier questions, but just diving a little bit deeper on this point: are we assessing the same things? Do we need to be assessing differently? Do we need to be assessing different things? And are there any ideal strategies for how we might do that? Four questions in one. Who would like to start? Graham, would you like to?

Graham Macaulay  16:07  
I think, ultimately, it depends who you speak to. On one hand, you could talk to academics and policy-makers who would say that we've considerably redesigned assessment, and that what we're assessing is different and much broader, particularly in the primary environment. But I'm not sure it is. I fundamentally think we're still assessing much of a muchness. We might have changed layouts and formatting, and we might have changed the structures, but fundamentally, we're still asking our 11-year-olds at the end of KS2 the same kind of stuff. I haven't got access to this year's SATs papers, for any of you that are wondering, but my gut feeling is I could have a pretty good guess at what might come up in some shape or form. And for me, that raises the question: does that create a culture of just focusing on what is pretty likely to come up? The chances are, and sorry, this is very primary-orientated now, I apologise, the chances are there's going to be long multiplication in that arithmetic paper, so guess what? Focus on teaching that, because you've got a mark there waiting. So I suppose what I'm anecdotally saying here is that, deep down, I'm not sure we've really changed the assessment content or focus enough. And I'd really love to hear, and be energised to find, that perhaps this isn't the same in GCSEs and so on, but I think we've got a pretty flat curriculum, and I think we've got to do something about it. And, not to repeat what's already been said, but it comes back to measuring what we value. I think about the future for children, and yes, they need to know some knowledge, I'm not suggesting they don't. Actually, I think we're talking about AI later, so perhaps let's not go down that route yet, but that's going to change assessment. And what do we really want our children to have?
What are those, those core fundamental skills our children, our students, our learners, need to have? What do they need to thrive in life?

Shahneila Saeed  18:09  
We will come back to AI in the next question. That's fine! Alex?

Alex Scharaschkin  18:14  
I mean, it's interesting. I'm lucky enough to have in my team at AQA someone who looks after the exam board's archive, going back to 1902. And he's very interested in it, so every Thursday we get something emailed round called Throwback Thursday. It's like, here's an image of the 1926 O-level sewing and domestic science paper, or something. It's really interesting to see what is the same, in broad terms, and what is different. It's also a kind of cultural history thing, actually; it's very interesting in that sense. What are the assumptions and the implicit stuff embedded in all of that? But broadly, I would say, in terms of what is assessed, if we're looking at the 16-18 school-leaving stuff as well as the primary, what we have now is essentially a very traditional curriculum. That is what the government of the day has determined should be assessed. And of course, whatever we then assess in our summative tests at the end of that time is what will be taught, for better or worse. I mean, people talk about teaching to the test, and about that impoverishing classroom practice and so on. It would be very nice to have tests that were worth teaching to, wouldn't it? We have to work with what we've got. But also, back to the point that different people will have different views about what it is that we should be encouraging people to learn, and about how we should be supporting teachers in helping people to flourish. We all probably have different views on that, I guess.

Professor Miles Berry  20:03  
So I was going to tell a story, and I will tell a story, but let me just spin off that: I have an O level in computer studies. Is there anybody else here? Excellent. Thank you, sir. The paper they made me sit back in the 1980s, and it was what I chose to do, I could have had an O level in biology, but no, I chose computer studies, that paper is very, very similar to what my trainee teachers are preparing their pupils to sit later this month or early next month; I forget exactly when the date of the exam falls. With one exception: for my O level in computer studies, and I suspect for yours too, sir, you had to write some code. There was a programming project as part of that. Oh, if only we could bring that back. Well, okay, I know the large language models would just go off and write the code now, but that was an important experience for me. Let me do the story thing now. So I spent 18 years working across four different schools, had a fabulous time as a school teacher, then made the move into higher education, where most of my job now is training the next generation of outstanding teachers. One of the big differences between higher education and schools is that we get to write our own exams. It is fabulously liberating. It's not that we could just make it as easy as we want; there are people who check on this. For teacher training, I tell my students right at the start of the course: no matter what everybody else tells you about all of these expectations and directed tasks, there are five things you have to do. One is turning up, and seriously, that really does matter. There's a diligence, an attendance, a professionalism thing there: we are held as a profession to that high standard. Year 5 will not, by and large, just be able to get on with some quiet reading if the teacher isn't there. Then there is: can you do the job? And half of the credits for the PGCE are actually practice-based.
And those of you with QTS will remember, oh, the lever arch files; how many lever arch files did you have to produce? It's so much simpler now, because it comes down to the judgment of the trainee's mentor: you have met the Teachers' Standards, which are prescribed by the state, with no flexibility, or you have not. There's no 78% of a qualified teacher. Surely that's good enough at the moment: you've got to be able to do the job, and it's very, very hard for anybody to fake that. I think that is because the thing we value most for qualified teachers is, well, can you teach? The other thing we do, though, is set them some essays to write, and we would very much like it to be them who write the essays; more about that conversation later, I'm sure. Why? Because teachers have to be able to write essays? No, that's not it at all. It's because, by writing the essay, they think long and hard about the ideas of education, of pedagogy, of curriculum, of assessment, all of that. And I don't think it is simply about being able to do the job; I think it is about having some sort of background, hinterland understanding of how learning, how education, even works. I don't know that we've got that quite right, whether the best way to assess that understanding is by Harvard-referenced academic essays, but it's the best I've got at the moment, alongside the turning up and being able to do the job.

Shahneila Saeed  23:41  
This all goes back to what you were saying earlier: it's the process. You're right, as you said, the topic of the essay is perhaps less important than the process through which they actually produce it, and getting them to think.

Professor Miles Berry  24:04  
I do not set them the essay titles because I want to know the answer. The essay is about them thinking.

Shahneila Saeed  24:14  
And helping them develop those skills which they are going to need when they get their QTS and are qualified. I think that translates across to when we're looking at both current and future forms of assessment. Now, all of you have touched upon AI, and you've patiently waited for my next question, so I'm going to ask it; we can't not ask it. Briefly: you'll meet teachers that are really excited by the opportunities that AI brings, and our previous panelists were very excited, looking at the developments and what they bring. But there are also a significant number of teachers that are quite fearful about what this means for the future. I was just wondering about your thoughts collectively on this. So Miles, did you want to kick off?

Professor Miles Berry  25:09  
I am worried about the future of coursework. It's so tempting for folks doing the A-level NEAs, and some of the GCSE NEAs, where they are left alone, to ask the AI for some ideas: "How could I write this a little bit more precisely?", or "How could I articulate that?", or "Could you write me this section?" The temptation is very, very real for sixth-form students and for GCSE students. And schools are now responsible for informing the awarding organisations that the work is the candidate's own, without a huge amount of professional expertise in how I can even tell that it is the student's own. You know, what do we do at Roehampton? If it's beautifully written and beautifully referenced, then I am immediately suspicious that it's written by a machine rather than my trainee teacher. Sorry, I shouldn't have said that out loud. What we do around non-exam assessment is hard, and I fear that we end up going back to that very traditional "let's bring them into a room and turn off the internet and confiscate their devices and make them do it under supervised conditions", even when it's a computer science programming project or an art practical or portfolio and so on, because the temptation is so very real. I think we're missing a trick. As we move forward, for the validity of the assessment, we ought to start thinking about not just what can you do on your own, which still will matter, but what can you do when you're allowed to use all of the tools and all of the people that you have around you, acknowledging that and saying, "I used it in this sort of way and that sort of way, and look at how skilled I have become at that". Can you find this information out on the internet? Can you find this information in journal articles or books? Surely some of that is about: can you prompt well? Can you check that the response you've got from the AI makes sense? Do you know why it is rubbish at maths?
All of those sorts of things become part of any sort of valid assessment as we move forward. There are people on the panel who are much, much more expert at knowing how to do that, I suspect.

Alex Scharaschkin  27:36  
Well, on AI: I think, absolutely, in the immediate term coursework is probably the big issue when it comes to school summative assessment. And as I touched on, we actually have much less coursework now than we used to; many subjects don't have any anymore. But for those that do, clearly one of the biggest topics of discussion is what the advent of generative AI means for how we can assure ourselves of what students have done with respect to coursework. And to your point about validity, my thought is: at the moment we're trying to work out some short-term fixes and give guidance and then see how that works; we're all moving on together on this. But of course, these tasks and these approaches were all designed before these large language model generative AI tools existed, and I think, in the longer term, we really have to fundamentally go back and ask: what are we trying to do with this coursework? Are we, rather lazily, just setting this as an essay because that's how we tend to assess things, when actually that's not going to be very helpful now? Or are we trying to elicit particular, specific skills, and should we be looking at other ways of eliciting those? When you think about end-point assessments for apprenticeships, for example, often that's in the form of an interview: sitting down with someone, show me how you do this; what would you do here; show me how you'd use the AI to do this. If that's what we're interested in assessing, then we should be looking at ways to do it which recognise that, in those circumstances, AI tools are among the toolkit that students have. And I think, in the more medium term, there's an interesting question.
Maybe this is going to be a bit nerdy and theoretical now, but it's about this notion of marking and assessment as being something to do with numbers and quantification, with measuring how much of something someone has. Because our system is not all multiple-choice tests; people are actually producing things, writing essays, doing constructed-response-type tasks, and the whole approach to dealing with those kinds of linguistic artefacts, the way in which they are valued, is going to be different, I think, in 15 years' time. It's not necessarily that we're trying to use them to measure an amount of something. Already, with large language models, we can provide feedback, and we can classify those kinds of artefacts, yes, not always perfectly well, but using natural language, and recognising that often, where the endeavour is linguistic, where that is what the student is doing, the value that comes back is not so much a number or a point on a scale. Maybe you do occasionally want to grade them for a class, for a qualification or something, but it's more around: what do you learn from this, and how is it expressed back? And I think that's a really interesting development which I'm interested in pursuing further.

Graham Macaulay  30:52  
I think there's a danger that when we talk about AI, not just in the assessment conversation but generally, we default to talking about generative AI: your Bards, your Geminis, your ChatGPTs. That is just one part of AI, and this isn't a hierarchical comment at all; different examples of AI have different purposes. I'm actually not going to talk about the gen AI stuff, because I think we've heard some really great ideas about that. But AI to inform teachers, to give pupils feedback immediately, to do something to help overcome the teacher workload and retention crisis that we've got in this country, should, in my opinion, definitely be given some thought. And it comes back to that formative and summative assessment piece, doesn't it? What are we looking to achieve here? What is the end goal, and therefore what's the best way to get there? There may be AI involved, and guess what, there might not be AI involved, and that's also absolutely fine. My last point, just to keep this really concise, comes back to measuring what we value: if what we're assessing learners on is something that ChatGPT or Bard could output, then that might be okay, but it might also not be okay. Let's be really purposeful about assessment: what is it we're trying to achieve, and what's the best way to do that?

Shahneila Saeed  32:29  
There is so much there, and I have so many questions, not just the ones I wrote, but new ones coming up in my head all the time as you speak. But I'm conscious of time, and I'm conscious that I'm not the only one in this room with dozens of questions flooding in for the three of you. So can we turn to audience questions? We have our first hand up already. Nicely timed, sir.

Audience Member 1  32:55  
I agree with Graham. I think we're limiting this to narrow AI. When we think about more advanced AI, what could it do? Would the need for summative assessment potentially be replaced completely, because AI will analyze the data for a student from the point of birth through to the point of graduation, looking at so many metrics that we are not currently able to assess? Will it change assessment in that way, so that there is no longer a need for summative assessment at all?

Professor Miles Berry  33:35  
You said a very interesting thing earlier about credentialing: that we have this particular 16-year-old, and your colleagues form a judgment that "they are Grade 4 in maths". Tell me about the things they can do. Tell me what they're actually good at, and tell me what they can't do. If they've got a Grade 9, I've got a pretty good idea that they're reasonably competent at anything on that spec. With a Grade 4, I don't know: is it that they're really good at derivatives, or that they're really good at statistics and probability? So I think there is something interesting about taking the output and getting the machine to give us an accurate profile of where the learner is. I come back to my sat nav analogy: tell me where they are in this complex, almost infinitely-dimensional space. How good are they at grammatical terms? How good are they at writing a story? Give me a detailed profile of the learner. The scary stuff for us as educators, I think, comes when the machines start using that profile: deciding that, for this well-motivated child, this is the best learning object for them to interact with next. It feels very, very different from what we normally do in the classroom. I can listen to arguments of the form "this is more effective at making sure they master this content", possibly. But is it more effective at teaching them how to get on with other people, at giving them an interest in things they're not already interested in, at showing them how to be a really well-rounded, balanced human being? I think it's a long time before AI tutoring software is going to be able to do anything around that. And what I value is not merely how good they are at arithmetic and statistics and all of that. Talk to me about their motivation, talk to me about their character, talk to me about their personality.
Those are the things I don't really want the machine writing in my daughter's school report.

Shahneila Saeed  36:00  
Absolutely. Should we go to another question?

Audience Member 2  36:06  
So going back to the assessment part of it: anybody here will agree with me that each learner is different. And I think we need to start thinking about adaptive education: do an assessment to see where that particular learner is, like a GPS, and then shift them in the direction where they need to be. So I think there's a new concept here: each assessment should generate a path for that learner's education, because everybody's on their own individual journey. What are your thoughts on that? Is that something that can be put together?

Alex Scharaschkin  36:47  
I think, certainly, the notion of adaptive assessment has been around for a long time. At AQA, in Wales and in Scotland, we do run some adaptive tests in the traditional sense for schools, looking at literacy, or reading comprehension really, and some numeracy skills. The way those assessments work, and the way a lot of computer-based adaptive testing works now, is using item response theory, a statistical model that is very simple in principle. You administer an item to a student, see whether they get it right or wrong, and estimate a parameter which is supposed to reflect their ability, whatever that means, as a single number. Then, on the basis of whether that number is high or low, you give another item which is supposedly easier or harder, and you carry on doing this until you end up with an estimate of the student's ability, having routed them through the possible items in a way that is, supposedly, more suited to their ability. Now, that was a fairly crude way of doing it. I think the AI stuff is very interesting in terms of how you can personalize learning journeys through a domain much more flexibly and much more realistically, without limiting yourself to right-or-wrong answers and notions like "a difficult item is one that only the clever students get right", which becomes a bit reductive. With AI and large language models, there's a lot to be explored there, and I think it's really interesting. Finally, though, I think the place to be looking at this is really in the formative space.
There are still questions about adaptive testing for summative purposes, for awarding a qualification, simply on the grounds that if people have all done different versions of a test, how do we assure ourselves that they've all been treated fairly? But in the formative space, absolutely.
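The item-response-theory loop Alex describes (administer an item, score it, update a single ability number, pick the next item by difficulty) can be sketched in a few lines. This is a hypothetical one-parameter (Rasch) simulation for illustration only, not AQA's actual implementation; the function names, the simulated student, and the crude update rule are all assumptions of the sketch rather than anything from the session:

```python
import math
import random

def p_correct(ability, difficulty):
    """Rasch model: probability that a student of the given ability
    answers an item of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def adaptive_test(true_ability, item_bank, n_items=10, step=0.5):
    """Run a crude adaptive test against a simulated student.

    Start with an ability estimate of 0, repeatedly pick the unused item
    whose difficulty is closest to the current estimate, simulate the
    student's response, and nudge the estimate towards the observed
    outcome (a stochastic-gradient stand-in for full maximum-likelihood
    estimation).
    """
    estimate = 0.0
    unused = list(item_bank)  # each item is just a difficulty value here
    for _ in range(min(n_items, len(unused))):
        item = min(unused, key=lambda d: abs(d - estimate))  # best-matched item
        unused.remove(item)
        observed = 1.0 if random.random() < p_correct(true_ability, item) else 0.0
        expected = p_correct(estimate, item)
        estimate += step * (observed - expected)  # up if correct, down if not
    return estimate
```

The reductiveness Alex mentions is visible in the code: the whole of a student's performance is compressed into one scalar, `estimate`, which is exactly the limitation he suggests richer, language-model-based feedback might move beyond.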

Graham Macaulay  39:00  
And I think, conceptually, maybe that isn't a new concept at all; actually, we've been doing it for a long time in education. What has perhaps changed slightly is how we do it, the sophistication of it. But it comes back to Miles's point: that deep-down pedagogical question of what ultimately makes a learning experience happen hasn't really changed. It's just that now we're doing it, and I think we're a bit more cautious, through the lens of technology.

Shahneila Saeed  39:35  
Just to, I guess, dive in very quickly. It might be a bit tangential, but you're right that the technology itself isn't new; in some form or another, it's been there a while. I'm Head of Education for the video games trade body, which is an exciting role that lets me work with some amazing companies. And if you look at video games, people have long been able to have their own individual journey through a game. We can both play Sea of Thieves or Minecraft or Roblox, or any game title you can think of, and no two experiences will be identical, because my response to stimuli within the game decides what happens next. And it's the AI that's doing that: it scores you up, it levels you, all of those things. So in some respects the technology is there; it's about harnessing it for the power of education. I wish I could share names, but I do know there are some companies working on exactly that.

Professor Miles Berry  40:42  
Two things. Firstly, you know this in practice, in your lessons: those wonderful occasions when the lesson doesn't get to the end point you had planned because of what happens during it. That is the real version of this gamified learning: you have a very good profile of each of the kids in your class, or of the class as a whole, and you take them in a different direction, because that's what they're interested in, or because we need to do it this way rather than that way. And you're good at that. The machines may get there, but I don't think they're quite there yet. Secondly, this whole business of bringing everyone into a particular place at a particular time and testing them: okay, I can see there is a place for it, but there are other really high-stakes things that we do not do that way. None of you is allowed to drive simply because you sat a written exam on a particular date. Okay, there is a written exam now, but it's the practical that matters, and you take the practical when you are ready for it, and you have another go if you didn't pass the first time, or the third. The Scouts and the Guides and the Brownies and the Cubs and the Rainbows and the Beavers all do the same thing: there are badges with clear criteria, you get the badge when you are ready for it, and you provide the evidence. Or music exams: again, it's a very, what is the word I'm looking for, reliable, valid form of assessment. Yes, an examiner will come on a particular day, but they're not doing the whole cohort all at the same time; you take the exam when you're ready for it. And when somebody tells me they've got Grade 8 violin, I know what that means, in a way that Grade 4 maths doesn't yet, for me.

Shahneila Saeed  42:30  
I think we've got time for one more question, very quickly. I don't know who put their hand up first; maybe this gentleman there. I thought it was this one, but I could be wrong.

Audience Member 3  42:45  
Thank you, I found that fascinating. I did have one for your Throwback Thursday, but we can talk about that afterwards. I had a question, really, about the application of AI. I teach marketing at the University of Bedfordshire, a widening-access university, and I've seen a lot of examples where completely different data sets are brought together. I don't know if you saw the OVO Energy ad that took Met Office data and UK energy data and showed how much renewable energy was being used that day; a lot of agencies are bringing media and creative together that haven't been together for decades. So I wonder whether there's something in assessment around bringing together disparate things, which might be quite interesting for solving, say, SDG issues or overcoming EDI challenges. We're very much focused on AI as a single thing, and there are challenges, but actually there are lots of ways we can bring difficult things together, and maybe that might be interesting for students. I wonder what your reflections would be on that.

Professor Miles Berry  43:50  
I didn't catch all of that; could somebody give me a quick summary? Sorry.

Shahneila Saeed  44:01  
It was difficult to hear, I have to apologize; I think there's a sound issue. But I believe you were talking about AI measuring different things and bringing different elements together, right?

Alex Scharaschkin  44:19  
I mean, I guess the scope to join up data sources is much bigger now, as you say, whether that's with AI or with more traditional data-warehousing techniques. I think that's particularly helpful when you're looking at assessment results and trying to draw conclusions, for example about disadvantaged students versus students who've had a particular experience, and we now also discover that these kids did this over here, because we've joined the data. When you're in the social policy space, or looking to interpret the results you've got from your assessment procedures, I think there's a lot of value in that. And on the assessment side, for things like fairness, we obviously try to make sure our assessments are inclusive and don't unintentionally discriminate against, or are biased against, particular groups of students. But that's very hard: you can describe groups in all kinds of ways, and the more information you can get about the population you're testing, the more chance you have of getting some intelligence to help you understand whether there is potential bias or differential functioning in your assessment that you didn't want.

Professor Miles Berry  45:52  
I think what I'd say is, I don't think it's an assessment thing; I think it's a curriculum thing. So much of education is organized around individual subjects, and by the time students are in secondary school, making the connections between those subjects is something schools find it very hard to carve out time for. I suspect your experience at university level is similar. So there's real value in building in the opportunity for kids to start seeing things as part of a larger whole. And there is always a mistake at events like this of seeing education purely in terms of what happens in institutions, in schools and in universities. Look at the way kids are playing these amazing games that your colleagues are making. Look at the way kids are teaching themselves things which are not on any school's curriculum, by watching videos, by connecting to other experts in the field, by having a go at things. That chance to make really authentic, organic connections is, I think, tremendously powerful. That's probably not an answer to the question.

Shahneila Saeed  46:58  
Yeah, and those children are doing that in their own time, because it's fun, not because somebody's telling them to, which is probably one of the fundamental points. I know there are probably half a dozen more questions in the audience, and certainly half a dozen more in my brain, but I will have to wrap it up there. Can I just say thank you very, very much for such an amazing discussion. Thank you to all three of you; that was brilliant. Our next session is in 10 minutes' time, and it will be asking the question: "Game of Phones: Should students be allowed phones in the classroom?" See you in 10 minutes.