Some thoughts on Edwards & Clinton (2018).


Some thoughts on Edwards, M.R. & Clinton, M.E. (2018) A study exploring the impact of lecture capture availability and lecture capture usage on student attendance and attainment. High Educ. https://doi.org/10.1007/s10734-018-0275-9

I’ve been sent this article enough times that I thought it might be worth doing a blog on my thoughts on it. A disclaimer before I start: if you follow me on Twitter you’ll know that I am generally extremely positive about lecture capture. You can find my work on this here (Nordmann et al., 2017, https://psyarxiv.com/fd3yj/) and here (Nordmann & McGeorge, 2018, https://psyarxiv.com/ux29v/), so feel free to interpret everything that comes next in light of that.

The study uses a matched sample from two different cohorts of the same course to determine the effect that the introduction of lecture capture had on attendance and attainment, both in terms of availability and actual usage. Edwards and Clinton find that the introduction of lecture capture reduced attendance, and that lecture capture usage was not related to attainment and does not make up for low attendance, given how strongly attendance is related to achievement.

Things I liked about this paper
  • They use in-class attendance data and data from the lecture capture server rather than self-reports. Lots of studies on lecture capture have used self-reports for both attendance and LC usage, which is clearly problematic, so this is great to see.
  • I think that the distinction between availability and usage is novel, interesting, and important. Previous studies (e.g., Leadbeater et al., 2013) have suggested that lecture capture availability is enough to stop students attending the lecture but that they then don’t use the recordings so I think that teasing apart mere availability from usage was a very good design choice.
  • The way that the authors frame their conclusions is to be applauded:
“Importantly, there is a strong case for clearly communicating to students the danger of an over-reliance on using recorded content and the potential negative impact that low lecture attendance could have on their attainment. In the majority of cases, students would not be able to use lecture capture to compensate for severe lecture absence using recorded content and the current study can serve as useful evidence to help educate students of the potential impact of low attendance; it is important to clearly communicate that the idea of binge-viewing lecture capture content during revision period can make up for severe absence is likely to be misguided.”
  • Rather than using their results to call for bans on lecture capture, they highlight that their work can be used as an example of why attendance is so important regardless of LC availability, and this fits with the argument of Nordmann and McGeorge (2018) that you need to ensure that appropriate guidance is provided. This needs to be highlighted because it’s been almost completely ignored on Twitter, where the conclusion being drawn is that this study is a reason not to provide LC at all.
  • It’s not as strong as I’d like (see my aforementioned favourable leanings towards lecture capture) but they do acknowledge that there are lots of variables that potentially affect what impact lecture capture has:
“We also need to recognise that although the current study may be representative of a typical quantitative research methods cohort in the UK, the impact of lecture capture may differ across taught subjects and institutional contexts and this may limit our ability to generalise to a broader base of students. It is possible that intrinsic motivation to study the topic and intellectual curiosity may differ across subjects which means the impact of lecture capture might be subject dependent (see O’Callaghan et al. 2017).”
  • This paper, like most scientific research, is another brick in the wall of what we know about lecture capture, rather than conclusive proof that lecture capture is evil, a point I’m going to come back to. And again, I applaud the authors for the way they’ve framed this, because the binary view that’s being expressed on Twitter doesn’t really reflect what they wrote.
  • The paper, if not the data and the code, is open access.

Things I do not like about this paper
  • There are no exact p-values reported. This is particularly annoying because…
  • …there is no mention of any multiple comparison correction, which means that if you want to write a blog about this and calculate what the adjusted p-values are, life is more difficult than it should be, particularly if the person writing the blog hasn’t quite got to grips with writing functions in R yet. My correction of choice these days is Bonferroni-Holm; however, in the interests of me not spending hours on this blog, if we went with a good old blunt Bonferroni correction the relationship between lecture capture availability and attendance in Table 1 would no longer be significant. (There’s a quick R sketch of what I mean at the end of this list.)
  • In addition to the correlations, there are a number of regression models constructed on different DVs and there’s no mention of any consideration of the alpha level, or indeed any real explanation as to why, for example, both exam grade and final grade were tested. Which leads me to the issue that…
  • …there is a lack of information about the course and the assessments. In my institution the workshops and coursework tend to be somewhat removed from the lecture content. Obviously it all feeds into your knowledge of the subject, but the specific lecture content is only assessed through the final exam, so in Nordmann et al. (2017) that’s what we used as the DV, because the overall course grade takes into account e.g. workshop attendance (which tends to be near ceiling) and thus isn’t relevant to lecture capture. This might not be a problem in this study because it might all be related; the point is that there’s not enough information for me to understand whether or not this is a problem, and that’s a problem because it feeds into my earlier point about not being sure why certain analyses were conducted. I think that the lack of clear justification combined with the lack of correction for multiple comparisons reduces the value of the analyses.
  • Buckets are bad for your health, regardless of whether you’re a teenager or a continuous variable. This paper isn’t alone in doing this, it’s a plague that afflicts the lecture capture literature, but I do wish everyone would stop it (there’s a toy sketch of what I mean just after the quote below):
“We grouped the students into three lecture capture viewing behaviour profiles: (1) no substantive viewings (66 students/41.3%), (2) viewed between one and five times (46 students/28.7%) and (3) viewed lecture capture more than five times (48 students/30%).” And from page 14: “To examine the lack of interaction between lecture capture usage and attendance in more detail, we grouped the students into three profiles of weeks 4–11 lecture attendance behaviours: a group that never attended lectures (30%), a group that attended between one and four lectures (41.9%), and one that attended more than 50% of lectures (28.1%).”
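
To make concrete what that grouping throws away, here’s a toy sketch in R. The data are entirely made up (views and grade are simulated variables, nothing to do with the authors’ dataset); the point is only the contrast between a continuous predictor and its bucketed version.

    # Toy data, purely illustrative - not the authors' data or analysis
    set.seed(42)
    n     <- 160
    views <- rpois(n, lambda = 4)                   # number of lecture capture viewings
    grade <- 50 + 1.5 * views + rnorm(n, sd = 10)   # made-up attainment measure

    # The paper's style of grouping: 0 viewings, 1-5 viewings, more than 5
    view_group <- cut(views, breaks = c(-Inf, 0, 5, Inf),
                      labels = c("none", "1-5", "6+"))

    summary(lm(grade ~ views))        # continuous: uses all the information in the variable
    summary(lm(grade ~ view_group))   # bucketed: two dummy codes, within-group differences lost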

  • Why not just keep them as continuous variables? A student who only attended 1/8 lectures probably shouldn’t be put in the same group as a student who attended 4/8 lectures when a student who attended 5/8 is in a different group. Added to that, the effects of lecture capture throughout the literature are generally small (more on that later), so by bucketing the data you’re likely not getting a good idea of what’s really going on. It’s also unclear whether the “Analyses incorporating lecture capture viewings” use the bucketed lecture capture usage data or the total number of views.
  • The figure that accompanies the bucketed data is another problem. I’ve turned into one of those wankers that does all their figures in ggplot2, but that aside, there’s not enough information in the figure to understand what’s going on: there’s no measure of variability. I’m quite sure that because of the small Ns it would look much messier with error bars, but such is life. The sketch below is roughly the kind of plot I’d want.
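
A minimal ggplot2 sketch of the kind of figure I mean. The numbers are invented summary values (group means and standard errors I made up for illustration), not anything taken from the paper; the point is just showing uncertainty alongside the group means.

    library(ggplot2)

    # Invented summary data purely to illustrate the format of the plot
    df <- data.frame(
      attendance_group = rep(c("never", "1-4 lectures", ">50%"), each = 3),
      usage_group      = rep(c("no viewings", "1-5 viewings", "6+ viewings"), times = 3),
      grade            = c(48, 52, 58, 55, 57, 60, 62, 63, 64),   # made-up means
      se               = c(4, 3.5, 3, 3, 2.5, 2.5, 2, 2, 2)       # made-up standard errors
    )

    ggplot(df, aes(x = attendance_group, y = grade,
                   colour = usage_group, group = usage_group)) +
      geom_point(position = position_dodge(width = 0.3)) +
      geom_errorbar(aes(ymin = grade - 1.96 * se, ymax = grade + 1.96 * se),
                    width = 0.2, position = position_dodge(width = 0.3)) +
      labs(x = "Lecture attendance (weeks 4-11)", y = "Attainment",
           colour = "Lecture capture usage")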

  • The authors’ figure looks very interesting: those students who never attended but watched a lot of lecture capture appear to be doing better than those who went to half the lectures and watched a lot of lecture capture. I want to know more about this and it’s never really unpacked.
  • The regression model predicting attainment increases from 41% variance explained to 43% when lecture capture availability is included. The negative correlation between lecture capture availability and lecture attendance is -.197, which is less than 4% of shared variance. The regression model predicting attendance increases from 8% variance explained with GPA and gender to 11.4% with lecture capture availability included. These effects might be significant (without adjusting for multiple comparisons, at least) but they’re very weak and this isn’t really acknowledged.
  • Given that attendance is a strong and reliable predictor of achievement, it was an interesting choice to enter it into the regression models after adding lecture capture, rather than before or simultaneously.
  • Gender is included as a predictor without any theoretical justification. Additionally, Table 1 reports a mean and SD for gender, which is, again, interesting.
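
Coming back to the multiple comparisons point from the start of this list, here’s a minimal sketch in R of what I mean. Because the paper doesn’t report exact p-values, the values below are placeholders; this illustrates the procedure rather than re-analysing the study.

    # Placeholder p-values standing in for a family of tests (the paper doesn't
    # report exact values, so these are illustrative only)
    p <- c(.004, .012, .03, .041, .20)

    p.adjust(p, method = "holm")        # Bonferroni-Holm, my preferred correction
    p.adjust(p, method = "bonferroni")  # blunt Bonferroni

    # With a family of five tests, a raw p of .03 becomes .15 under Bonferroni,
    # i.e. no longer below .05
    p.adjust(p, method = "bonferroni")[3]
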
If you’re reading this and you’re the authors of this study, I imagine right now you think I’m a total asshat. Please know that my main issue with this paper isn’t actually anything in the paper. I think that this is another brick in the wall: it’s a study that has strengths and weaknesses and, like any good Reviewer 2, I’d say it’s not the study I would have run and it’s not the write-up I would have done, but it contributes to our knowledge base.

My real big issue and the reason I’m writing this blog is the way it’s been used on Twitter. I've seen multiple tweets using the paper as proof that we shouldn’t use lecture capture, ignoring that there are more studies that don’t find an impact on attendance than do, ignoring that there have been positive effects of lecture capture found, and ignoring what Edwards and Clinton state themselves, which is that the impact lecture capture has is highly variable depending on subject and context.


Most importantly, it’s ignoring the key takeaway from this paper, which is that lecture capture can have ill effects, particularly if you don’t provide any guidance, and that we should use studies like this to help show our students what to do.

I’ve started ranting so I might as well finish. If you are that bothered about attendance decreasing, if you genuinely believe that attending lectures is crucial, why not monitor attendance and make lectures compulsory regardless of lecture capture availability? If you don’t provide lecture capture and your lectures aren’t compulsory and you don’t take attendance, what do you think is happening to the students who aren’t attending? Is attendance at your lectures compulsory? If not, why not? The literature suggests a benefit of supplementary use of lecture capture when used in a manner that promotes deep learning, so if your concern is truly your students’ education then the answer is compulsory lectures with supplementary lecture capture and clear study guidance. Actually, if we were all that concerned about learning we’d probably not be giving didactic lectures at all, but that’s another story for another day.



