InsightBlog

Another digital revolution: The GenAI disruption

Prof. Leanne Williams
November 20, 2024

Prof. Leanne Williams is a professor at the University of Warwick in the School of Life Sciences, and Academic Lead here at LearnSci. In this article, she shares her thoughts on the impact and future developments of AI use in higher education.

Generative Artificial Intelligence (GenAI), with its ability to generate human-quality text, code and images, has quickly become integrated into many aspects of society, even without us fully understanding the potential risks. In education, its impact is particularly profound, challenging traditional notions of academic integrity and prompting a re-evaluation of teaching and assessment practices. The initial response to GenAI in many institutions was characterised by fear and uncertainty, leading to hasty policy changes and a focus on detection and prevention. An emerging ‘enhance’ rather than ‘restrict’ approach is starting to recognise the potential of GenAI to enrich learning and empower both students and educators.

As an EdTech company, we at LearnSci have a vested interest in staying abreast of, if not in front of, trends in educational strategies and digital innovation. We hold a clear view of the STEM sector educational landscape, and support those who push the boundaries of digital innovation and use technology-enhanced learning to its full potential. So, how does AI impact us as a company, our products and our partners?

AI and digital learning: What about LearnSci?

Despite the concerns raised above regarding academic integrity, we are confident that the technologies we offer our partners are resilient to the challenges presented by the use of GenAI. Our two key technologies offer students the opportunity to explore the value of getting it wrong to get it right, and provide learning efficiency with repeatability to consolidate learning and remedy error.

Our digital learning resources are designed to foster "learning by doing" and critical thinking. LabSims, for example, require students to explore practical techniques and data generation. However far these technologies advance, the in-person, hands-on development of students’ technical and practical skills can never be replaced by AI.

Similarly, Smart Worksheets prioritise data acquisition, analysis and evaluation, promoting higher-order thinking and a deeper understanding of scientific concepts. The assessment strategy in which Smart Worksheets are embedded significantly affects their efficacy. For me, the best asset in any teaching repertoire is low-stakes formative learning and skills assessment. When Smart Worksheets are used formatively, students aren’t ‘pressured’ into focusing on a final output or grade at the expense of the learning process itself. In this instance, students are less likely to feel the need to use AI to assist them.

The strengths of summative Smart Worksheet use are unique learner experiences, parity of feedback and grading, and, of course, no marking burden. However, high-stakes assessment brings with it the concern of students using AI. Smart Worksheets are subject to less risk than alternative modes of assessment, owing to their spiral structure of increasingly complex questions linked to unique, randomised data. Providing the opportunity to complete Smart Worksheets in a dedicated, controlled environment can therefore reduce risk whilst still providing unique user experiences.


The future of AI in higher education

I hold a positive view of the potential for AI. It’s here and being used, so we must work at embedding it (or at least accepting it) into teaching and learning strategies. We must, however, put some of the onus on students to take responsibility for their own learning and for developing their ethical literacy. We must also trust that the vast majority of students are here to learn and better themselves, not to ‘cheat the system’. We shouldn’t inhibit the exploration of generative and AI-assisted technologies; however, this exploration must be done thoughtfully, ethically, and in collaboration with wider key stakeholders such as schools, the higher education sector, and employers.

One example of this is the recent Lovelace-Hodgkin Symposium on AI ethics in Glasgow. It drew together an impressive list of research and teaching academics and representatives from key organisations, including the Centre for Data Science and AI, the Alan Turing Institute, Children’s Parliament, the Scottish AI Alliance, and the Scottish Parliament. The common thread of the event was cross-sector collaboration and informed policy generation. Future cohorts of students will arrive at university having used AI extensively in school, expecting university to upskill them further in the latest technologies, including AI, not least to enable them to thrive in an increasingly competitive jobs market.

I am most excited by how GenAI, adaptive learning platforms and assistive technologies could help level the playing field and narrow awarding gaps for students. My priority has always been to challenge processes and policies that I feel are intrinsically discriminatory against marginalised students. Those who navigate the pressures of higher education whilst managing disability, specific learning requirements and neurodiversity are often disadvantaged, even with associated reasonable adjustments.

My vision for the future is that students will be offered more personalised learning experiences, with adaptive content, pacing and modes of assessment. I believe that the emerging generations of AI technologies could change the way we support students with autism, varied and specific learning requirements, ADHD and, not least, physical disabilities, who often experience the least access to learning environments. Focus assistance, social and emotional learning support, communication aids for non-verbal and minimally verbal students, sign language recognition and translation for deaf and hearing-impaired students, image and object recognition and navigation assistance for blind and visually impaired students, and telepresence robots would truly revolutionise accessible education.

Recommendations for higher education institutions

If GenAI is here to stay, then its ethical implications in education are significant and require careful consideration. Concerns about environmental impact, copyright infringement, data privacy and the potential for bias in AI-generated content must be addressed through transparent policies and responsible development practices.

Furthermore, the use of GenAI raises important questions about the future of work and the skills that students will need. Higher education institutions must prioritise the development of digital literacy, critical thinking and ethical reasoning skills to prepare students for this new reality.

Below are a few recommendations for higher education institutions that I believe represent a step in the right direction on the use of AI.

  • Develop clear AI policies: Institutions should establish clear guidelines for the ethical use of AI in teaching, learning and assessment, and should authorise and provide access to approved applications to support reasonable adjustments.
  • Promote digital literacy: Integrate digital literacy and critical thinking skills into the curriculum to empower students to engage responsibly with AI.
  • Embrace innovative assessments: Shift towards authentic assessments that emphasise process, application and higher-order thinking skills.
  • Invest in faculty development: Provide faculty with the training and resources necessary to effectively integrate AI into their teaching.
  • Foster collaboration: Encourage collaboration between schools, EdTech companies, higher education institutions and employers to develop and implement aligned and meaningful AI skill development.

If you’d like to discuss any topics in this article further, get in touch with us at community@learnsci.com and we’d be happy to continue the conversation.