Affective Analytics in Learning Environments

To my librarianship readers: This is the third in a series of reflections on readings from the text for a course I am taking on learning analytics. While it may not be directly related to librarianship or library assessment, I am hoping to learn where library analytics and learning analytics overlap.

D’Mello, Sidney K. and Emily Jensen. 2022. Chapter 12: Emotional Learning Analytics. In Handbook of Learning Analytics, 2nd edition, C. Lang, G. Siemens, A. F. Wise, D. Gašević, and A. Merceron (Eds.). Society of Learning Analytics Research (SoLAR). DOI: 10.18608/hla22.012

ABSTRACT

This chapter discusses the ubiquity and importance of emotion to learning. It argues substantial progress can be made by coupling discovery-oriented, data-driven, analytic methods of learning analytics and educational data mining with theoretical advances and methodologies from the affective and learning sciences. Core, emerging, and future themes of research at the intersection of these areas are discussed.
Keywords: affect, affective science, affective computing, educational data mining, learning analytics

Human learning (unlike machine learning) does not take place in a vacuum; it is heavily influenced by environmental and internal factors, including the emotions of the learner. This chapter highlighted a myriad of experiments and systems designed to identify the features of the face, the body, and written or spoken language that are most likely associated with specific emotions. Traditional methods of detection have relied heavily on human judgment, following standard protocols such as the Baker-Rodrigo Observation Method Protocol (BROMP). More recently, supervised machine learning models, trained on data sets derived from such protocols, have proven to be more efficient and nearly as accurate. Parallel development of detection methods based on textual traces (spoken or written) and on bodily signals (notably eye gaze and body movement) has resulted in systems that can detect states such as boredom, confusion, and satisfaction or a sense of achievement.
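
To make the supervised approach concrete, here is a minimal sketch in Python using scikit-learn. The feature names, label set, and toy data are my own illustrative assumptions, not anything published in the chapter; the essential pattern is that human-coded observations (such as BROMP labels) are joined to interaction-log features, and a classifier learns to predict the affective state from the logs alone.

```python
# A minimal, hypothetical sketch of the supervised approach described above:
# human observers label short time windows with affective states, those labels
# are joined to interaction-log features for the same windows, and a classifier
# learns to predict the state from the logs alone. Feature names, labels, and
# data here are invented for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical training data: one row per observed 20-second window.
data = pd.DataFrame({
    "seconds_since_last_action": [2, 45, 5, 60, 3, 50, 4, 55],
    "hint_requests":             [0,  2, 0,  3, 1,  2, 0,  1],
    "errors_in_window":          [0,  1, 0,  2, 3,  0, 1,  2],
    "label": ["engaged", "bored", "engaged", "bored",
              "confused", "bored", "engaged", "confused"],
})

X = data.drop(columns="label")
y = data["label"]

# Train and cross-validate a simple classifier against the human-coded labels.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=2)
print("Mean agreement with human coders:", scores.mean())

# Once trained, the model can label new interaction windows without an observer present.
clf.fit(X, y)
print(clf.predict(pd.DataFrame({"seconds_since_last_action": [40],
                                "hint_requests": [2],
                                "errors_in_window": [1]})))
```

Once validated against the human coders, such a model can run continuously over new interaction logs, which is what makes the machine learning approach so much more efficient than live observation.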

Such methods have been applied in classroom settings, notably as intelligent online tutoring systems, collaborative learning systems, and online classroom management dashboards. These tutoring systems can detect the occurrence of a limited set of emotions and behaviors associated with being on- or off-task. The systems can then react to these detections, providing responses meant to reassure and re-connect a learner who may be disengaging due to negative emotions. Often these responses are programmed to be empathic, even mirroring the emotion(s) detected, with the expectation that the learner will re-engage with the system.

While these systems appear to be improving at detecting these emotions, many other emotions impact learning. Affective states such as envy, jealousy, pride, guilt, and shame may be more difficult to detect: they are negatively perceived in many cultures and thus more likely to be hidden by the learner. Experiments that record occurrences of these emotions alongside physiological measurements could help identify their facial and bodily markers, which could in turn lead to applications in which these emotions are addressed empathically to support the emotional growth of the learner.

I was concerned about the idea of a dashboard for an instructor’s use that detects and highlights the emotions of specific students. My initial concern was that this could erode, or prevent the development of, teachers’ own skill at reading their students’ emotions. I understand that it would be useful to have little flags above each child’s physical head denoting their level of engagement or lack of understanding, but I wonder whether we may be becoming too dependent on such notifications.

Coming from a field outside of education and the learning sciences, I found much of the information presented in this chapter new. It was useful to learn of the general research on, and understanding of, the impact of emotion on learning, particularly its signaling, evaluative, and modulating functions. This idea is not unlike lessons I am learning about the psychology behind mindfulness and communication, notably how emotions are indicators of needs being met or not being met (signaling).

This chapter was a whirlwind tour of the basic research behind, and the progressive development of, affective computing in learning environments. I was disappointed that my literature searching turned up so little research or application of these ideas in librarianship. There are many opportunities for librarians to integrate affective assessment of communications in training sessions, reference interviews, automated “virtual reference” using chatbots, and formal discussion forums.

Visual analysis of OSTP’s OA policy

This guest post from Christos Petrou, seen in the Scholarly Kitchen, is interesting and inspiring – not quite as much for the content (although I did learn much) as for the analyses. While I have been analyzing data for quite a long time, I consider myself too old-school. Whenever I see analysis that uses color, charts, graphs, and tables in an insightful way, I am always inspired.

https://scholarlykitchen.sspnet.org/2022/09/13/guest-post-quantifying-the-impact-of-the-ostp-policy/

Christos projects that the OSTP policy (announced in a memorandum issued last month) could result in as many as 132,000 articles being made freely available to read (“unpaywalled”). The new policy is meant to extend the trend of making the outputs of federally funded research free to read by removing previous exemptions and restrictions – for example, eliminating embargo periods and extending the directive to all grants, not just large ones. [Rick Anderson provides a very interesting analysis of the text of the memo, noting that its tone is more recommendation than real directive.]

Christos uses clean charts and graphs that invite the reader in rather than repel. They provide just the right amount of detail for the purpose and use fonts that are clear and readable (a real problem with Tableau’s default fonts). They support the analysis, and the analysis does not merely regurgitate the charts.

Now, back to the content – Christos indicates that this policy (if implemented as directed) could open up about 16% of all scholarly papers produced in the U.S., pushing up the worldwide total by over 3 full percentage points. But a key insight Christos offers is how much of this opening would be in the more reputable journals. While the journals themselves have provided hybrid OA access, allowing authors to choose whether or not to pay to publish (and thereby have their articles free to read), American authors have been resistant to taking that step, even when funding for article processing charges (APCs) is available from their grants.

As with any policy, the effects will not be evenly distributed – most funding is in healthcare and the biological sciences, so those subjects will be most affected, while there is very little federal funding of research in the arts and humanities. Furthermore, publications in engineering and technology will be less impacted because of the dominance of Chinese researchers in those fields (see Christos’ analysis of China’s “Beall List”).

This was just the inspiration I needed as I start a new year of analyses projects.

Learning Analytics Reflection 2: Writing Analytics

[To my librarianship readers: this is the second in a series of reflection pieces required for completion of a course in Applications of Artificial Intelligence for Learning Analytics. Although it might not be directly relevant to academic librarianship or library assessment, I do point out opportunities librarians could take, particularly those who directly support instruction in writing or who collaborate with writing labs and tutoring services.]

Gibson, Andrew and Antonette Shibani. 2022. Natural Language Processing – Writing Analytics. In Handbook of Learning Analytics, 2nd edition, Charles Lang, George Siemens, Alyssa Friend Wise, Dragan Gašević, and Agathe Merceron (Eds.). SoLAR, Vancouver, BC. DOI: 10.18608/hla22.010

ABSTRACT

Writing analytics uses computational techniques to analyse written texts for the purposes of improving learning. This chapter provides an introduction to writing analytics, through the discussion of linguistic and domain orientations to the analysis of writing, and descriptive and evaluative intentions for the analytics. The chapter highlights the importance of the relationship between writing analytics and good pedagogy, contending that for writing analytics to positively impact learning, actionability must be considered in the design process. Limitations of writing analytics are also discussed, highlighting areas of concern for future research.
Keywords: Writing Analytics, natural language processing, NLP, linguistics, pedagogy, feedback

Reflections on Writing Analytics

One specific application of artificial intelligence methods in learning analytics is the evaluation of writing and the development of writing ability. These methods are generally centered on natural language processing (NLP) techniques, although they can be combined with other methods, such as social network analysis, to provide a more comprehensive picture.

As mentioned in my first reflection, I learned of the difference between “analytics for learning” and “analytics of learning”. Similarly, one key point I took from this chapter was the difference between “learning to write” and “writing to learn”. The focus of learning to write is more on the technical aspects of writing – the syntactic rules, the vocabulary, the styles appropriate to the context – while the focus of writing to learn is more on the content and the ability to express important concepts and knowledge. These approaches drive the kinds of measures and feedback and, by extension, the specific NLP techniques and methods, with computational linguistics more commonly used for learning-to-write analytics, and machine learning classifiers and topic modeling more commonly used for the writing-to-learn approach.
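
As a rough illustration of the writing-to-learn side, here is a small topic-modeling sketch using scikit-learn’s latent Dirichlet allocation. The sample texts and parameter choices are invented for the example; a real system would work over full drafts rather than single sentences.

```python
# Illustrative sketch only: topic modeling of short student reflections, the kind
# of technique the chapter associates with "writing to learn" analytics.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "photosynthesis converts light energy into chemical energy in the chloroplast",
    "the chloroplast uses light to produce glucose and oxygen",
    "supply and demand determine market price in a competitive economy",
    "when demand rises faster than supply the price of the good increases",
]

vectorizer = CountVectorizer(stop_words="english")
term_matrix = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(term_matrix)

# Show the top words per topic; an instructor could check whether the expected
# course concepts actually surface in students' writing.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```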

A second key takeaway from this chapter for me was the distinction between the purposes of writing analytics systems – to describe or to evaluate – which is similar to the purposes of statistical analyses: descriptive and inferential. Descriptive writing analytics provides summarized data about a written artefact, often presented visually, as information to the writer or the teacher. Generally quantitative, this information is meant to give clues regarding the quality of the writing, but it is up to the viewer to make any meaning from it; its value is limited because the feedback is not actionable. Evaluative writing analytics, conversely, applies human judgment (codified in the program) regarding the quality of the writing in its context, usually (but not always) providing actionable feedback to the writer or the teacher. An exception is the application of evaluative WA to summative, high-stakes assessments, which has very little value beyond “support(ing) performative agendas.” Automated writing systems can be very useful in helping learners develop skills by providing actionable feedback during the writing process – not only regarding errors or problems, but ways to improve – effectively “closing the learning analytics loop.”
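
The distinction can be illustrated with a hypothetical sketch (not modeled on any particular tool): the first function only reports measurements and leaves interpretation to the reader, while the second applies codified judgments and returns actionable suggestions.

```python
# Hypothetical sketch of the descriptive vs. evaluative distinction discussed above.
# Descriptive: report measurements and leave interpretation to the reader.
# Evaluative: apply codified judgments and return actionable feedback.
import re

def describe(text: str) -> dict:
    """Descriptive analytics: surface statistics only, no judgment."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

def evaluate(text: str) -> list[str]:
    """Evaluative analytics: codified rules that yield actionable suggestions."""
    feedback = []
    stats = describe(text)
    if stats["avg_sentence_length"] > 30:
        feedback.append("Your sentences average over 30 words; try splitting the longest ones.")
    if not re.search(r"\b(for example|for instance|such as)\b", text, re.IGNORECASE):
        feedback.append("Consider adding a concrete example to support your claim.")
    return feedback or ["No issues flagged by these (very simple) rules."]

draft = "Writing analytics can describe or evaluate a draft. It depends on the design intent."
print(describe(draft))
print(evaluate(draft))
```

The difference is the design intention, not necessarily the underlying NLP: the same measurements can feed either purpose.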

Key to the effectiveness of writing analytics systems built on artificial intelligence techniques is that they be informed by, and applied in conjunction with, good pedagogy. The traditional pattern of educational technology development is not surprising to me – the technology is developed first, independently, and then applications are sought. This is a pattern seen in many contexts, including librarianship. A better approach includes two components: participatory design, with experts from both technology and education or pedagogy involved in the development of WA systems, and pragmatic inquiry, in which the practical applications of the research and development are prioritized.

I was impressed by the evaluative writing analytics approaches that seemed to be of the most value or utility, notably AcaWriter and Research Writing Tutor. However, these appear to be valuable in a limited set of contexts, notably academic expository writing, which follows standard norms of structure and organization. That limitation may be acceptable, though: only after mastering the basic techniques of any skill is one able to effectively “break the rules” or diverge from the norms to become a true master of that skill.

While the primary computational methods for writing analytics involve natural language processing, the units of analysis vary from individual words to entire documents. Learning analytics systems linked into learning management systems could incorporate these tools by reading the drafts students submit and providing feedback either in real time (during the writing process) or shortly after submission, supplemented with insights provided by the class instructor.

As providers of information resources, which are largely textual in nature, libraries have long supported the teaching of writing from both approaches (learning to write and writing to learn). Indeed, some of the strongest collaborations between librarians and faculty are for the first-year English courses that focus on reading and expository writing. Many libraries host writing labs and tutors within their facilities, providing not only space but, in many instances, resources. It would be useful for library administrators to consider evaluating and recommending a selection of AI writing analytics applications to extend the reach of these services.

Announcing: The LibParlor Podcast! — The Librarian Parlor

Announcing the launch of our new open peer reviewed podcast.


I admit, I’m not a podcast devotee, but this looks intriguing. The first episode will be released next Friday, September 16th. And, they are seeking co-hosts and guests!

Collection Liberatory* Librarianship

I saw the posting quoted below on CODE4LIB (which I only recently re-subscribed to for a one-time purpose but decided to stay on for a while, and I’m glad I did). Wow, what a concept – “liberatory”, as in “liberated”. A much better word than “discovered” or “uncovered” or even “revealed”. I wish this were coming out a year from now – we might have something to contribute after our DEI collection assessment.

CFP: Edited Collection Liberatory* Librarianship: Case Studies of Information Professionals Supporting Justice*, due Nov. 30

Editors:
Dr. Laurie Taylor (University of Florida, USA)
Dr. Shamin Renwick (University of the West Indies, St. Augustine, Trinidad and Tobago)
Brian Keith (University of Florida, USA)
 
Background:
In this volume to be published by the American Library Association, we seek to explore what is “liberatory librarianship,” using liberatory to mean serving to liberate or set free and using “librarianship” capaciously, to include all information professionals, including archivists, museum professionals, and others who may or may not identify as librarians.
 
Liberatory librarianship involves the application of the skills, knowledge, abilities, professional ethics, and personal commitment to justice and the leveraging of the systems and resources of libraries to support the work of underrepresented, minoritized, and/or marginalized people to increase freedom, justice, community, and broader awareness.

 In this volume, we want to address questions like:
 – How can librarianship be liberatory?
– How is library capacity and expertise used to increase freedom, justice, and community?
– What is your story of being a liberatory librarian?
– What story of liberatory librarianship inspired you in your work?
– In 2020 many librarians were shocked by tragic racially based events and motivated to become more focused on social justice work – how has that translated into library work?
 
We seek stories of liberatory librarianship so that collectively we can learn from impactful luminaries, who too often are unknown and their work unspoken.  In this volume, we seek to define, recognize, and foster liberatory librarianship by bringing together many voices sharing the stories of this work.
 
For what we hope is the first of many volumes, we seek:
– Practical stories to inspire us to think about our work and inform it, not opinion pieces
– Stories based on information professionals doing something
– Stories of stalwarts and champions who have forged progress in this area
– Autobiographical entries are welcomed
– Stories from across the world
– Entries in English (the stories may depict work undertaken in other languages)
– Cases are expected to follow practices of reciprocity and community, and so are expected to engage and return to the community. Community members should be afforded the opportunity to review and comment. For example, if the story of liberatory librarianship includes work with a particular community, will a member of that community be a contributor to the piece?
– For essays where the person profiled is alive and available, the book process will include inviting that person to take part and incorporating their perspective so that their voice is shared within the entry. As with all of the essays, these will share stories of specific work and of people working in the spirit of liberatory librarianship.

The editors expect to include approximately:
– 10 long-form profiles (3,000-4,000 words)
– 15 short-form profiles (under 350 words)

We will select based on the importance of sharing hidden stories, the representativeness of the stories, and each story’s ability to educate, inform, and inspire.

This volume will complement recent scholarship on liberatory archives and justice in libraries, known by many terms, as with Michelle Caswell’s Urgent Archives (Routledge 2021) and Sofia Y. Leung and Jorge R. López-McKnight’s Knowledge Justice: Disrupting Library and Information Studies through Critical Race Theory (MIT 2021). This book will parallel the collection edited by Shameka Dalton, Yvonne J. Chandler, Vicente E. Garces, Dennis C. Kim-Prieto, Carol Avery Nicholson, and Michele A. L. Villagran, Celebrating Diversity: A Legacy of Minority Leadership in the American Association of Law Libraries, Second Edition (Hein 2018), which offers a thematic overview with specific stories of excellence and impact. This volume shares a methodology with grounded theory, narratology, and feminist practices, as with books like Sherry Turkle’s Evocative Objects (MIT 2011). In the telling of specific stories that speak to greater truths, the essays in this volume will illuminate complexity through accessible, readable, and engaging stories.
 
As a collected set of stories of the profession, this volume will be of interest to those working in librarianship, defined broadly, as well as to faculty and students in information science and museum studies programs.
 
Please send the following to laurien@ufl.edu by November 30, 2022:
– Name(s)
– Email(s) for all
– 100-250 word bio of the author(s), which may include links
– For a short form (under 350 words), please submit the full piece
– For a longer form (3,000-4,000 words), please submit the full piece or a 250-500 word proposal
 
For submissions:
– Please use Chicago Manual of Style, 17th edition.
– Photos, images, or artwork should be saved in separate electronic files (each photo, image, etc. as a separate file). Indicate their placement with an all-caps comment in the manuscript, immediately following the paragraph that includes the reference to the figure, table, or box, for example:
   INSERT FIGURE 6.3 APPROXIMATELY HERE.

The editors will respond by December 5, 2022.
For longer form, final submissions will be due February 15, 2023.

Ranganathan’s Fourth Law: “Save the time of the reader” — The Faithful Librarian Blog

As noted in an earlier entry, this blog will aim to look at Ranganathan’s (1931) five laws of library science through a Christian lens. We already looked at the first law: books are for use, the second: books are for all, every reader his or her book, and the third: books are for all, or, […]


Dr. S.R.R. through a Christian lens

As you can guess from my blog title (not to mention my page and category devoted to librarianship’s “saint”), I’m a sucker for posts about Dr. Ranganathan and his Five Laws of Library Science. Sure, there are those who quibble with the choice of words (are they more “principles” than “laws”, and is it really a “science”?), but these five statements continue to serve as the essence of librarianship. I do not know how spiritual Dr. Ranganathan was, but I would hazard a guess that he would have appreciated Garrett Trott’s suggestion of providing space or a place for contemplative study, reflection, meditation, and, yes, prayer.

Shooting down the ‘one-shots’

If you haven’t noticed yet, the latest issue of College & Research Libraries “turns a critical eye to the one-shot instruction model.” The special issue is filled with articles ranging from a meta-analysis (which finds a modest but consistent positive association between one-shots and student outcomes, especially when the outcomes are skills-based) to a timely metaphor with vaccinations to a clever “re-enactment of a one-shot,” which would be even funnier if it weren’t so painfully familiar.

Despite the findings of the meta-analysis – which the authors admit has a number of methodological limitations due to the dearth of studies and the lack of standard measures of academic performance – the issue was an all-out assault on this lowly form of library instruction. And rightly so: librarians have a particular disdain for this service for a number of reasons, well detailed in the many articles, notably:

  • being expected to cram so much into such a short session (which belittles our content)
  • being called upon usually with very short notice (which belittles our professionalism)
  • not being involved in the development of the lesson (which belittles our educational role)
  • reducing our contributions to the academic experience to 50 minutes a semester, at most

The authors are true to the academic sense of the word “critical” – many apply critical theory methods to the history of this service, particularly in the context of the neoliberalism of academia, its role in supporting systemic racism, the burdens it places on librarian practitioners, and the influence of the feminization of instruction on its dominance as an expected service. Some solutions are proposed, most of which share similar qualities of slowing down, collaborating with (albeit reluctant) faculty, and “rebuilding from the ground up.”

This…is…amazing. I don’t think I’ve seen such a frank, critical review of a single staple of librarianship. But I’m concerned that we are preaching heartily and full-throatedly to the choir. Who, outside of LIS, reads C&RL? How will these articles get into the hands of those faculty members who request, with two days’ notice, that one-hour session on using “databases”? How do we impart these vital connections between neoliberalism and the “pedagogies of the practical” to academic administration? What can convince faculty to share their power in the classroom?

Recognizing the import of this special issue and wanting to build on whatever momentum these articles fuel, ACRL will be hosting two webinar sessions:

I’ll be interested to see, hear and read responses from librarians, library administrators, and librarianship researchers to these ideas. Could this spur frank and bold discussions within individual libraries, resulting in fundamental and systemic changes to information literacy and library use instruction? Or are we spinning our wheels, whistling into the wind? Will our sermon echo in the chambers of libraries, as we continue service as usual, grumbling along the way, and delivering our spiel with a smile?

Reflections on Analytics of and for Collaborative Learning

[To my librarianship readers: this is an assignment for my Applications of Artificial Intelligence in Learning Analytics course, providing my reflections on a chapter from our key text, Handbook of Learning Analytics, 2nd edition, from the Society for Learning Analytics Research (SoLAR). Despite the seemingly forced nature of this post, I hope you are as intrigued by the topic of learning analytics and its potential utility for librarianship as I am.]

Collaboration – or teamwork, in human resources parlance – is a key soft skill that is increasingly being incorporated into educational settings, from K-12 schools through graduate and professional programs, most commonly in North American and European countries. With the advent of online collaborative systems, starting in the last quarter of the twentieth century, the collection and analysis of data – particularly traces, or ambient data generated behind the scenes – has made studying human collaborative behavior easier and more efficient. The explosion in the use of these systems, notably MS Teams and Zoom, during the recent COVID-19 shutdowns created a greater need to understand how people learn and work together and to improve the tools for bringing out the best in these collaborations. From my reading of the chapter of the Handbook of Learning Analytics on this topic, Chapter 9: Learning Analytics for Understanding and Supporting Collaboration, I came away with three key points: the difference between analytics of and analytics for collaboration, the two dimensions of analytics for collaboration, and the challenges and future directions of analytics related to collaborative learning.

There are key differences between analytics conducted of (or about) collaborative learning and analytics conducted for collaborative learning. Analytics of collaborative learning attempts to understand how humans learn together, or collaboratively. Such studies may focus on changes in understanding or cognition at the individual level, while others examine the social or cultural aspects of interpersonal communication and cooperation. This latter, intersubjective framework commonly uses mediational objects to study social and linguistic interactions and to analyze the changes over time, or trajectories, of these objects. Conversely, analytics for collaboration is the action-oriented application of analytics, with feedback provided in a timely manner to the participants for the purpose of improving collaborative learning experiences, usually measured by improvements in understanding. While the purposes of these two categories differ, they also complement each other, with knowledge gained from each informing the other.

The Handbook categorizes analytics for collaboration along two distinct dimensions: the distribution of power or control, and the level of integration within the collaborative process. Systems may allocate more or less control to the human participants or to the system itself. Partner systems are those that give greater control to the humans, while regulator systems manage the collaborative process more actively, using artificial intelligence to monitor progress and provide a more inclusive experience for participants.

The application of analytic methods to the study of collaboration, and to systems built to enable collaborative learning, has been quite successful. Such methods include machine learning, social network analysis, topic modeling, and artificial intelligence for smart chatbots that interact with human participants. There have been notable challenges, however, including the lack of theoretical foundations in learning analytics (LA) studies, the relatively small scale of LA studies, and the delicate nature of human agency within human-computer interactions. Because much of the research for and of collaboration and collaborative learning has been conducted within many different fields, learning analytics in general is more multidisciplinary than interdisciplinary. This work has traditionally been conducted within the silos of each field or discipline, resulting in a lack of strong theoretical foundations. By bringing researchers together into formal and informal channels for sharing and collaboration, the field could begin to unify, resulting in new theories and systems of analytics of its own. Another limitation of LA is the relatively small scale of its studies, which creates difficulties with generalizability and reproducibility. The Big Data generated by widely adopted learning management and collaboration systems provides an opportunity to overcome this limitation. But it remains a challenge for LA researchers to tread carefully around the nature of human-computer interactions, recognizing and ensuring that humans retain and exercise agency in their own actions and responsibilities.
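
As a toy illustration of the social network analysis mentioned above (my own sketch, not drawn from the chapter), a reply graph built from discussion-forum or chat logs can be analyzed with centrality measures to see who dominates an exchange and who sits at its periphery.

```python
# Toy sketch of social network analysis applied to collaboration traces.
# The reply log is invented; a real system would pull it from a forum or chat tool.
import networkx as nx

# Each tuple is (author_of_reply, author_being_replied_to).
replies = [
    ("ana", "ben"), ("ana", "chio"), ("ben", "ana"),
    ("ana", "ben"), ("dee", "ana"), ("ben", "chio"),
]

G = nx.DiGraph()
for source, target in replies:
    # Weight edges by how often one participant responds to another.
    if G.has_edge(source, target):
        G[source][target]["weight"] += 1
    else:
        G.add_edge(source, target, weight=1)

# Degree centrality highlights who is most (and least) connected in the exchange;
# an instructor dashboard might flag very low values for a nudge toward participation.
for person, score in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{person}: {score:.2f}")
```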

The lack of integration of research and theory did surprise me. While I understand that learning analytics is a relatively young field, I had expected it to be more developed and unified by now. There are few scholarly or professional organizations and journals that specifically cover learning analytics. Searching the literature on these topics returns results from a wide range of subject areas, which makes researching published studies and staying current difficult. As a professional aside, it also makes developing collections for libraries supporting learning analytics – whether as an academic program or for application within the institution – difficult and costly.

Some of the examples of studies, and particularly of applications of analytics for collaborative learning or collaborative work, really impressed me. I was particularly interested in the methods used to keep Wikipedia content well maintained, notably the ClueBot that automatically identifies content vandalism and reverts it. I was also impressed by the example of “sociometric badges,” which use patterns of interaction among the members of a group to maintain a balance of participation.
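
The idea behind automated vandalism patrol can be sketched with a drastically simplified, hypothetical heuristic; the real ClueBot is far more sophisticated than this, but the basic pattern, as I understand it, is scoring an edit and reverting when the score crosses a threshold.

```python
# Drastically simplified, hypothetical sketch of scoring an edit and reverting it
# when it looks like vandalism. Not how ClueBot actually works; wordlists and
# thresholds here are placeholders.
import re

PROFANITY = {"spamword", "junk"}          # placeholder wordlist
SHOUTING = re.compile(r"[A-Z]{10,}")      # long runs of capital letters

def vandalism_score(old_text: str, new_text: str) -> float:
    """Score an edit; higher means more likely vandalism."""
    score = 0.0
    added = set(new_text.lower().split()) - set(old_text.lower().split())
    score += 2.0 * len(added & PROFANITY)
    if SHOUTING.search(new_text):
        score += 1.0
    if len(new_text) < 0.2 * len(old_text):   # large unexplained deletion
        score += 1.5
    return score

def review_edit(old_text: str, new_text: str, threshold: float = 1.5) -> str:
    """Revert (return the old text) when the score crosses the threshold."""
    return old_text if vandalism_score(old_text, new_text) >= threshold else new_text

article = "Learning analytics studies how learners interact with digital environments."
vandalized = "SPAMWORD SPAMWORD SPAMWORD"
print(review_edit(article, vandalized) == article)  # True: the edit is reverted
```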

This chapter gave me a good overview of the methods, applications, challenges, and opportunities of analytics for and of collaborative learning. I’m interested in considering the opportunities for applications in librarianship.

Welcoming myself and my readers (if any) back

It has been a while since I last posted to my blog. August 6, 2017 to be exact. To be honest, it was hard work and I just didn’t think anybody was listening. Of course, it’s not like I disappeared. Way too much has happened to review in a single post. But that’s not why I’m here now.

I’ve been spurred to reactivate this channel by course assignments. Last year, I enrolled in my third master’s degree program, this time in data analytics. It has been enlightening – so many new techniques and methods. The course with the reflective writing assignment is in learning analytics, an area from which I have been quite surprised to find librarianship appears to have allowed itself to be purposely side-lined. Although I had been using some data at the institutional level to assess potential information needs, my first real introduction to learning analytics was with the Prioritizing Privacy Project (https://prioritizingprivacy.org/), an initiative that I regularly and highly recommend to my colleagues. This intensive course provided a really good overview of the field of learning analytics and opened my eyes to the missed opportunities libraries and librarians have had to work more closely with those who collect the data, for the purpose of making student data privacy, well, a priority, rather than turning our backs on the whole idea. FYI – they are organizing another cohort at this time for the Fall of 2022 (Curriculum and application form).

While my first year of the UNT MS in Advanced Data Analytics program covered methods of statistical analysis, artificial intelligence, machine learning, “Big Data”, and visualization, I’m starting the second year focusing on my area of specialization – learning analytics. The field is actually not as big nor as well developed as I thought it would be. There is only one journal devoted to it (the Journal of Learning Analytics), and only 152 items in our library’s collection contain the phrase “learning analytics” (OK, that’s not a strong indicator of collection coverage). But advances are being made and put into action every year, and librarians are missing out on becoming key players in developing appropriate protections for student data privacy and in utilizing data and information to actually help students, particularly those with the greatest needs.

This is also the first full-semester course I’ve taken since starting this program, which has otherwise been divided into 8-week sessions. It requires both discussions and reflective writings, and the instructor recommended creating a blog. Rather than start from scratch, I decided to revamp Libraries are For Use and see if I can keep it going again. I may play with ways to make it more appealing and invite guest posters, particularly students who need some writing experience.

So, I’m welcoming myself back to blogging, and welcoming any readers who return. I’ll try to make it more interesting.
