Libraries and Patron Data…it’s complicated

As I progress through a course on learning analytics, I’ve been digging deeper into the relationship that librarians have with patron data, and I’ve determined that, like so many troubled relationships, it’s complicated. While the core value of patron confidentiality is now considered unquestioned and, indeed, inviolate, it is not as long-standing as some may believe. It was added to the ALA’s “Library Bill of Rights and Code of Ethics” in 1939, nine years after the initial release, at a time marked by the rise of Nazism and notable instances of government surveillance (Witt 2017). Though more than 80 years old, this particular ethic had some troubling beginnings and has taken serious knocks from both external forces and from within the profession itself.

I also found the placement of this particular code within the current ALA Code of Ethics interesting – it follows customer service and intellectual freedom. Indeed, there are a number of ways that customer service has trumped privacy over the years. Consider circulation practices of yore, such as check-out cards bearing the handwritten signatures of every patron who had borrowed a title, easily accessible to anyone perusing the shelves. What would the implications be for the borrowers of a particular title if, say, an explosion in a lab appeared suspicious?

But I digress…my point is that customer service is the number one ethical obligation, but it is not meant to come at the expense of the other codes. Indeed, those other codes appear to constrain that #1 – guardrails, if you will. But how far should these constraints go?

The Prioritizing Privacy project, led by Lisa Hinchliffe and Kyle Jones, in which I have taken part, provides an opportunity for librarians to explore their own concepts and test their own assumptions about privacy and library services, particularly relating to learning and learning analytics. Taking a critical approach to both privacy and Big Data in academia (particularly higher education), the extended workshop brings together individuals at many points along the spectrum of beliefs and attitudes. Particularly intriguing to me were the discussions that brought out the disadvantages of libraries and librarians standing on the sidelines of this issue – not only for the institution of librarianship but, most notably, for the students themselves, who lose services that could help them succeed in their own goals. The key to ensuring the ethical application of such services is respecting students’ agency, an aspect that even the most ardent opponents of learning analytics can accept as a path to providing that “highest customer service” while still protecting the “patron’s right to privacy and confidentiality”.

I’m still exploring the privacy-analytics issue and have collected what I consider five foundational or key articles that explore the historical development of, and current struggles librarianship has with, privacy (see References below). Witt, as I indicated earlier, provides a look at the intrigues, influences, and events which brought about the current ethos of patron confidentiality and privacy protection within the profession. Of particular note is Witt’s suggestion that the development of the codes of ethics in the 1930s was less about the values of the profession and more about the development of librarianship as a profession itself (Witt, pg. 648). Regardless of the motivations, the codes were developed organically from statements in published literature and responses to surveys of ALA members, reflecting the overall opinion of librarians – notably that reference transactions should be considered private and that libraries should take steps to protect the privacy of the information patrons seek (ibid., 649).

Campbell & Cowan’s “Paradox of Privacy” examines the privacy-and-service dilemma through the lens of LGBTQ patrons (note: the authors specifically denote the “Q” as “Questioning”, which figures prominently in their discourse). They argue that the need for inquiry and the need for privacy are necessarily integrated, particularly in such a personal journey as the development of a young person’s identity. The title comes from the authors’ statement that “open inquiry requires the protection of secrets” (Campbell & Cowan, pg. 496). After setting the context of this paradox, they argue for “the need for privacy” and explore the individual’s disclosure dilemma – disclosure can be risky, and it can be healing. Indeed, privacy within inquiry is imperative for the development of identity, but libraries have had mixed success balancing the competing needs of managing inventory and protecting privacy. They describe how self-checkout systems have been shown to increase use of LGBTQ material, contrasted with the increase in libraries collecting (or allowing third-party systems to collect) “big data” surveillance. Campbell & Cowan summarize Garret Keizer’s treatment and definition of privacy – “‘a creaturely resistance to being used against one’s will’” – including Keizer’s observations about privacy and libraries:

  • Surveillance—monitoring what people are reading and sharing private information about them—becomes a form of using other people.
  • Privacy consists of…individual’s power to modulate the extent of his or her self-revelation in specific circumstances.
  • The library occupies a position of significant though paradoxical importance: its status as a public place makes it an ideal place in which to experience genuine privacy.

My takeaway from this slightly-below-surface-level look at libraries and privacy is that librarianship’s relationship with patron privacy is complicated. We value a patron’s right not only to access quality information on any and all subjects, but also their right to keep that activity private. However, we also value service and easy access to information, and methods to improve these services can conflict with the value of privacy. But there are ways to reduce this conflict, mostly by returning control of information to the patron. Transparency and honesty about how the information will be used, providing patrons access to their stored information, enabling patrons to opt in and out of these services, and simply letting go of the need to control information and resources are the solutions I have found to have the greatest potential for retaining trust in libraries.

References

Asher, Andrew. “Risks, Benefits, and User Privacy: Evaluating the Ethics of Library Data.” Chap. 4.2 in Protecting Patron Privacy: A LITA Guide, edited by Bobbi Newman and Bonnie Tijerina. Rowman & Littlefield Publishers, 2017.

Campbell, D. Grant, and Scott R. Cowan. “The Paradox of Privacy: Revisiting a Core Library Value in an Age of Big Data and Linked Data.” Library Trends 64, no. 3 (2016): 492-511. https://doi.org/10.1353/lib.2016.0006.

Jones, Kyle M. L., and Lisa Janicke Hinchliffe. “New Methods, New Needs: Preparing Academic Library Practitioners to Address Ethical Issues Associated with Learning Analytics.” Paper presented at the Annual Meeting of the Association for Library and Information Science Education (ALISE), 2020.

Witt, Steve. “The Evolution of Privacy within the American Library Association, 1906-2002.” Library Trends 65, no. 4 (2017): 639-57. https://doi.org/10.1353/lib.2017.0022.

Zimmer, Michael, and Bonnie Tijerina. “Foundations of Privacy in Libraries.” Chap. 2 in Protecting Patron Privacy: A LITA Guide, edited by Bobbi Newman and Bonnie Tijerina. Rowman & Littlefield Publishers, 2017.

Learning Analytics: Learning and Growing Pains

After a jaunt through some analytical tools and methods, my course on learning analytics has returned to the path of overview and reflection. The subjects of this post are two different methods or approaches with some common criticisms and challenges. As a very young and inherently interdisciplinary field, learning analytics is still maturing, much like a young apprentice who has been introduced to the tools of the trade but has not yet developed the expertise necessary to deliver quality work. The use of network analysis in learning environments (including social network analysis methods) and the development of multimodal learning analytics systems are good examples of such limitations. Most of the studies utilizing these methods lack theoretical foundations or contexts, as well as standard definitions and methods. Because the methods were originally developed outside the field of educational research, they often require expertise, technological or methodological, not normally available to those with an interest in studying learning. Finally, these methods illustrate the need for multiple approaches to adequately examine the complexities of learning and cognitive processes.

The readings about these methods (chapters 4 and 6 from The Handbook of Learning Analytics; see Lang, et al., 2022) were particularly critical of the state of research put forward by those in the field. Specific criticisms include studies that provide “limited insight” and “disjointed empirical findings” that are “difficult to synthesize” (for social network analyses, Lang, et al., pg. 38), as well as a “lack of homogeneity” in methodological approaches, with “each study (using) different approaches”, resulting in “complete diversity” of research (regarding multimodal learning analytics, Lang, et al., pg. 60). The key problem identified by Oleksandra Poquet and Srećko Joksimović, authors of the critique of learning analytics studies utilizing social network analysis, has been “the naive adoption of network analysis” without making the network methodologies explicit or carefully considering the assumptions behind those methods. This has resulted in a “cacophony of network approaches” that are indecipherable and provide disjointed results. Similar criticism, albeit not quite as harsh, comes from Xavier Ochoa, author of the chapter on multimodal learning analytics (MmLA), who characterized this specialty as lacking standards and best practices due in no small part to the youth and interdisciplinarity of the field (Lang, et al., pg. 61).

Both specialties, like the entire field of learning analytics, require expertise outside the traditional field of education, particularly in technology and data analytics (e.g., machine learning and artificial intelligence). The methods of multimodal learning analytics inherently require substantial knowledge and technical capability to integrate a multitude of sensors into a network that captures trace data simultaneously. Then there are the technologies needed to assimilate or “fuse” the disparate data so that it can be processed and analyzed using artificial intelligence methods (Lang, et al., pg. 62). These methods rest on assumptions and parameters that, when not fully understood, can lead to the “limited insight” and “disjointed empirical findings” that Poquet and Joksimović warned about. The authors noted that network analysis as a methodology shifts the resulting networks from “relationships of data” to “relationships of constructs” (Lang, et al., pg. 40). In these theoretical models, attention needs to be paid to proper construction of the network model through selection of ties or nodes based on theory, and to using the theoretical framework to interpret the model, rather than explaining the model through the data relationships alone (Lang, et al., pg. 41).
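To make that last point a little more concrete, here is a minimal sketch of theory-driven network construction in Python, assuming the networkx library and entirely made-up forum data; the tie-threshold rule below simply stands in for whatever a researcher’s actual theoretical framework would dictate.

```python
# A minimal sketch, assuming hypothetical forum data, of theory-driven
# network construction: under the (assumed) framework, a tie is meaningful
# only after repeated exchange, so one-off replies are filtered out rather
# than treated as relationships.
from collections import Counter
import networkx as nx

# (replier, original_poster) pairs from a hypothetical course forum
replies = [
    ("ana", "ben"), ("ben", "ana"), ("ana", "ben"),
    ("cam", "dee"), ("dee", "cam"),
    ("eli", "ana"),  # a one-off reply: not a tie under this rule
]

# Count exchanges between each pair, regardless of direction
pair_counts = Counter(frozenset(pair) for pair in replies)

G = nx.Graph()
for pair, count in pair_counts.items():
    if count >= 2:  # tie threshold chosen on theoretical, not data, grounds
        G.add_edge(*pair, weight=count)

print(G.edges(data=True))
```

The same data could just as easily yield a network where every reply is an edge; the point of the methodology view is that the choice between the two is a theoretical decision, not a technical default.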

These two methods – multimodal learning analytics, of course, but also network analysis for learning analytics – demonstrate the need for multiple methods to effectively discern cognitive and learning processes. This is inherent in multimodal LA, since the method is based on capturing data from an array of modes of communication through a variety of media or channels. The method also requires that data be gathered in mixed-media learning environments and then analyzed simultaneously in order to provide near real-time feedback to students or teachers. Similarly, network analysis alone is rarely enough to explain cognitive processes or learning behaviors, nor are single dimensions of communication enough for network analyses to discern valid ties. The field is ripe for exploring more complex network models, incorporating time, space, and multiple dimensions of communication (Lang, et al., pg. 43).

I was most impressed by the sheer amount of technology that could be (and has been) used for multimodal learning analytics. The example provided, a presentation rehearsal system that gives students feedback on their presentation skills, was impressive for its extensive coverage of audio and visual data capture, and even more so for its analysis of the captured data (see figure below). The system included not only cameras and microphones, but also software to track eye gaze and posture, analyze speech volume, detect filled pauses (“ums” and “ahs”), and process all the data simultaneously to give the student presenter near real-time feedback.

Figure: Layout of the multimodal system for oral presentation feedback, showing the pico projector (front and back), Raspberry Pi sensor, microphone, camera, Intel NUC, and acoustic foams (Ochoa and Dominguez, 2020).
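As a taste of what just one strand of such a pipeline involves, here is a minimal sketch of framewise loudness analysis, assuming the librosa audio library and a hypothetical local recording “rehearsal.wav”; a full system like Ochoa and Dominguez’s fuses many such streams at once.

```python
# A minimal sketch of one strand of a multimodal pipeline: framewise
# loudness (RMS energy) of a rehearsal recording. The file name and the
# quiet-speech threshold are assumptions for illustration.
import librosa
import numpy as np

y, sr = librosa.load("rehearsal.wav", sr=None)  # hypothetical recording
rms = librosa.feature.rms(y=y)[0]               # loudness per audio frame

# Flag frames well below the speaker's typical loudness; a real system
# would aggregate these into feedback such as "project your voice here"
quiet = rms < 0.5 * np.median(rms)
print(f"{quiet.mean():.0%} of frames fall below the quiet-speech threshold")
```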

Finally, one of the many new points I learned from these readings was the distinction between network analysis as a method and network analysis as a methodology. Network analysis as a method produces a network model based solely on the relationships in the data provided. While such models can be used to select features for more complex models and to categorize ties or nodes, they are inherently naïve or arbitrary and lack theoretical context; they cannot, then, represent a theoretical construct. Network analysis as a methodology requires such a theoretical framework, and with it, care and attention paid to the selection of model parameters, descriptive metrics, and statistical techniques. The concept of comparing network models against null models was especially intriguing. Much like traditional null hypothesis testing, comparing network models requires “random networks simulated using hypothesized generative rules” – that is, a model of a specific learning behavior under chance conditions. Models of observed behavior can then be compared against these “null models” to identify deviations.
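To illustrate, here is a minimal sketch of a null-model comparison with networkx, again on made-up data; the generative rule used here (“ties form uniformly at random”) is the simplest possible stand-in for a genuine hypothesized rule.

```python
# A minimal sketch of null-model comparison; the observed edges are made
# up, and the null networks share only the observed size and density.
import networkx as nx

# Observed network: who worked with whom in a (hypothetical) course
observed = nx.Graph([
    ("ana", "ben"), ("ben", "cam"), ("cam", "ana"),
    ("ana", "dee"), ("dee", "eli"), ("eli", "ana"),
])
obs_clustering = nx.average_clustering(observed)

# Null models: 1,000 random networks simulated under the hypothesized rule
n, m = observed.number_of_nodes(), observed.number_of_edges()
null = [
    nx.average_clustering(nx.gnm_random_graph(n, m, seed=s))
    for s in range(1000)
]

# Empirical p-value: how often chance alone yields clustering this high;
# a small value suggests the observed cohesion deviates from the null
p = sum(c >= obs_clustering for c in null) / len(null)
print(f"observed clustering = {obs_clustering:.3f}, p = {p:.3f}")
```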

References

Lang, C., et al. (2022). Handbook of Learning Analytics. Vancouver, BC, Society for Learning Analytics Research (SoLAR).

Ochoa, X. and F. Dominguez (2020). “Controlled evaluation of a multimodal system to improve oral presentation skills in a real learning setting.” British Journal of Educational Technology 51(5): 1615-1630.

Integrating machine learning in the classroom

Throughout the first third of my course on learning analytics, we have been reading about methods of using artificial intelligence to evaluate or study human learning and aspects of cognition or cognitive development, such as collaboration and motivation. As we move into the applications and the relevant technologies, we are also learning how these methods could be introduced to students themselves as part of the curriculum. An instructional video, produced by Samantha Norris, a doctoral student in the UNT Department of Learning Technologies, provides an example of introducing machine learning to students, including an opportunity to see the “learning” in action using Teachable Machine, Google’s interactive system for demonstrating machine learning.

I was impressed by how effectively the lesson plan brought together so many learning goals, or “student expectations” (standards from the Texas Essential Knowledge and Skills (TEKS)), in a one- or two-period lesson. Ms. Norris laid out a fully developed lesson plan, complete with a “hook” (to capture the students’ interest), an introduction and explanation (kept simple but effective), and time for hands-on experience building and testing the model. I was also impressed with the demonstration of Teachable Machine. Samantha not only demonstrated how to integrate the tool into a lesson plan, but also offered ideas on how to improve the demonstration for more successful results. This looks like a very useful tool for providing hands-on experience with machine learning, not only in the K-12 learning environment, but also in certain undergraduate courses, especially those whose primary topic is not the technology itself but rather its cultural, psychological, or sociological aspects.
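Teachable Machine itself runs in the browser, but the workflow it teaches – gather labeled examples, train, then test on something new – is easy to mirror in code. Here is a rough Python analogue using scikit-learn, with made-up two-number “images” standing in for real webcam data.

```python
# A rough analogue of the Teachable Machine workflow (label, train, test),
# using scikit-learn; the features and labels below are invented.
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training examples: [avg_brightness, avg_redness]
X_train = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.3, 0.8]]
y_train = ["thumbs up", "thumbs up", "thumbs down", "thumbs down"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# The "testing the model" step students perform in class
print(model.predict([[0.85, 0.25]]))  # -> ['thumbs up']
```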

This application of machine learning as educational content in its own right differs from the earlier examples of using machine learning and other analytical methods to understand or improve learning. I wonder whether such a lesson could lead students to develop methods for the express purpose of understanding or improving their own learning experiences, not unlike machine learning “robots” developing new AI methods themselves.

Affective Analytics in Learning Environments

To my librarianship readers: This is the third of a series of reflections on readings from a text for a course that I am taking on learning analytics. While it may not be directly related to librarianship or library assessment, I am hoping to learn of the opportunities where library analytics and learning analytics overlap.

D’Mello, Sidney K. and Emily Jensen. 2022. Chapter 12: Emotional Learning Analytics. In, Handbook of Learning Analytics, C. Lang, G. Siemens, A. F. Wise, D. Gašević and A. Merceron, eds. Society for Learning Analytics Research (SoLAR). DOI: 10.18608/hla22.012

ABSTRACT

This chapter discusses the ubiquity and importance of emotion to learning. It argues substantial progress can be made by coupling discovery-oriented, data-driven, analytic methods of learning analytics and educational data mining with theoretical advances and methodologies from the affective and learning sciences. Core, emerging, and future themes of research at the intersection of these areas are discussed.
Keywords: affect, affective science, affective computing, educational data mining, learning analytics

Human learning (versus machine learning) does not take place in a vacuum; it is heavily influenced by environmental and internal factors, including the learner’s emotions. This chapter highlighted a myriad of experiments and systems designed to identify the characteristics of physical features, or of written or spoken text, that are most likely associated with specific emotions. Traditional methods of detection have relied heavily on human judgment, following standard protocols such as the Baker-Rodrigo Observation Method Protocol (BROMP). More recently, supervised machine learning methods, trained on data sets derived from such protocols, have proven to be more efficient and nearly as accurate. Parallel development of detection methods based on textual (aural or written) traces as well as bodily or physical traces (notably eye gaze and body movement) has resulted in systems that can detect emotions such as boredom, confusion, and satisfaction or a sense of achievement.
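Here is a minimal sketch of that supervised-learning setup, assuming scikit-learn and entirely made-up observation data: human-coded affect labels (in the spirit of BROMP-style field coding) paired with simple interaction features, used to train an automatic detector.

```python
# A minimal sketch of training an affect detector on human-coded labels.
# The features, labels, and window counts are all invented for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical features per observation window:
# [seconds_idle, actions_per_min, errors_per_min]
X = [
    [40, 1, 0], [35, 2, 1], [50, 1, 0],   # coded "bored" by an observer
    [5, 9, 4],  [8, 11, 5], [6, 10, 3],   # coded "confused"
    [4, 8, 0],  [6, 7, 1],  [5, 9, 0],    # coded "engaged"
]
y = ["bored"] * 3 + ["confused"] * 3 + ["engaged"] * 3

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())  # rough accuracy estimate
```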

Such methods have been applied in classroom settings, notably in intelligent tutoring systems, collaborative learning systems, and online classroom management dashboards. The intelligent tutoring systems can detect the occurrence of a limited set of emotions and behaviors associated with being on- or off-task. The systems can then react to these detections, providing responses meant to reassure and re-connect a learner who may be disengaging due to negative emotions. Often these responses are programmed to be empathic, even mirroring the emotion(s) detected, with the expectation that the learner will re-engage with the system.

While these systems appear to be improving at detecting these emotions, there are many other emotions that impact learning. Affective states such as envy, jealousy, pride, guilt, and shame may be more difficult to detect, as they are often perceived negatively in many cultures and thus more likely to be hidden by the learner. Experiments recording the occurrences of these emotions, combined with physiological detection, could help identify facial characteristics, which could in turn lead to applications in which these emotions are empathically addressed to assist the emotional growth of the learner.

I was concerned about the idea of a dashboard for instructors’ use that detects and highlights the emotions of specific students. My initial concern was that this could lead to the erosion, or the lack of development, of this skill in teachers themselves. I understand that it would be most useful to have little flags above each child’s physical head denoting engagement level or lack of understanding, but I wonder if we may be becoming too dependent on such notifications.

Coming from a field outside of education and the learning sciences, I found much of the information presented in this chapter new to me. It was useful to learn of the general research on, and understanding of, the impact of emotion on learning, particularly its signaling, evaluative, and modulating functions. This idea is not unlike lessons I am learning about the psychology behind mindfulness and communication, notably how emotions are indicators of needs being met or not being met (signaling).

This chapter was a whirlwind tour of the basic research behind, and the progressive development of, affective computing in learning environments. I was disappointed by how little research or application my literature searching turned up in the writings of librarianship. There are many opportunities for librarians to integrate affective assessment of communications into training sessions, reference interviews, automated “virtual reference” chatbots, and formal discussion forums.

Visual analysis of OSTP’s OA policy

This guest post from Christos Petrou, as seen in The Scholarly Kitchen, is interesting and inspiring – not so much for the content (although I did learn much) as for the analyses. While I have been analyzing data for quite a long time, I consider myself too old-school. Whenever I see analysis that uses color, charts, graphs, and tables in an insightful way, I am always inspired.

https://scholarlykitchen.sspnet.org/2022/09/13/guest-post-quantifying-the-impact-of-the-ostp-policy/

Christos projects that the OSTP policy (announced in a memorandum issued last month) could result in as many as 132,000 articles being made freely available to read (“unpaywalled”). The new policy is meant to extend the trend of making the outputs of federally funded research free to read, eliminating previous exemptions and restrictions, such as embargo periods, and extending the directive to all grants, not just large ones. [Rick Anderson provides a very interesting analysis of the text of the memo, noting that its tone is more recommendation than real directive.]

Christos uses clean charts and graphs that invite the reader in rather than repelling them. They provide just the right amount of detail for the purpose, using fonts that are clear and readable (a real problem with Tableau’s default fonts). They support the analysis, and the analysis does not regurgitate the charts.

Now, back to the content – Christos indicates that this policy (if implemented as directed) could open up about 16% of all scholarly papers produced in the U.S., pushing up the worldwide total by over 3 full percentage points. But a key insight Christos offers is how much of this opening would occur in the more reputable journals. While those journals have provided hybrid OA options, allowing authors to choose whether or not to pay to publish (and thereby have their articles free to read), American authors have been resistant to taking that step, even when funding for article processing charges (APCs) is available from their grants.

As with any policy, the effects will not be evenly distributed – most funding is in healthcare and the biological sciences, so those subjects will be most affected, while there is very little federal funding of research in the arts and humanities. Furthermore, publications in engineering and technology will be less impacted because of the dominance of Chinese researchers in those fields (see Christos’ analysis of China’s “Beall List”).

This was just the inspiration I needed as I start a new year of analysis projects.

Learning Analytics Reflection 2: Writing Analytics

[To my librarianship readers: this is the second of the reflection pieces required for completion of a course in Applications of Artificial Intelligence for Learning Analytics. Although it might not be directly relevant to academic librarianship or library assessment, I do point out opportunities librarians could take, particularly those who directly support instruction in writing or who collaborate with writing labs and tutoring services.]

Gibson, Andrew and Antonette Shibani. 2022. Natural Language Processing – Writing Analytics. In, Handbook of Learning Analytics, 2nd edition. Charles Lang, George Siemens, Alyssa Friend Wise, Dragan Gašević, Agathe Merceron (Eds.). SoLAR, Vancouver, BC. DOI: 10.18608/hla22.010

ABSTRACT

Writing analytics uses computational techniques to analyse written texts for the purposes of improving learning. This chapter provides an introduction to writing analytics, through the discussion of linguistic and domain orientations to the analysis of writing, and descriptive and evaluative intentions for the analytics. The chapter highlights the importance of the relationship between writing analytics and good pedagogy, contending that for writing analytics to positively impact learning, actionability must be considered in the design process. Limitations of writing analytics are also discussed, highlighting areas of concern for future research.
Keywords: Writing Analytics, natural language processing, NLP, linguistics, pedagogy, feedback

Reflections on Writing Analytics

One specific application of artificial intelligence methods in learning analytics is the evaluation of writing and the development of writing ability. These methods are generally centered on natural language processing (NLP) techniques, although they can be combined with other methods, such as social network analysis, to provide a more comprehensive picture.

As mentioned in my first reflection, I learned of the difference between “analytics for learning” and “analytics of learning”. Similarly, one key point I learned from this chapter was the difference between “learning to write” and “writing to learn”. The focus of learning to write is more on the technical aspects of writing – the syntactic rules, the vocabulary, the styles appropriate for the context – while the focus of writing to learn is more on the content and the ability to express important concepts and knowledge. These approaches drive the kinds of measures and feedback and, by extension, the kinds of specific NLP techniques and methods, with computational linguistics more commonly used for learning-to-write analytics and machine learning classifiers and topic modeling more commonly used for the writing-to-learn approach.
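As a small illustration of the writing-to-learn side, here is a minimal sketch of topic modeling using scikit-learn’s LDA implementation, run over made-up student drafts; a real system would work on far more text and tune the number of topics carefully.

```python
# A minimal sketch of topic modeling over (invented) student drafts, to
# see which concepts the writing actually covers.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

drafts = [
    "photosynthesis converts light energy into chemical energy in chloroplasts",
    "chloroplasts capture light and store energy as glucose",
    "cellular respiration releases energy from glucose in mitochondria",
    "mitochondria break down glucose to release stored energy",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(drafts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")  # the concepts each draft cluster emphasizes
```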

A second key takeaway from this chapter for me was the distinction between the purposes of writing analytics systems – to describe or to evaluate – which is similar to the two purposes of statistical analyses, descriptive and inferential. Descriptive writing analytics provides data about a written artefact, summarized and often presented in visual form, as information for the writer or the teacher. Generally quantitative, this information is meant to give clues about the quality of the writing, but it is up to the viewer to make any meaning from it; its value is limited because the feedback is not actionable. Evaluative writing analytics, conversely, applies human judgment (codified in the program) about the quality of the writing in context, usually (but not always) providing actionable feedback to the writer or the teacher. An exception is the application of evaluative WA to summative evaluations for high-stakes assessments, which have very little value beyond “support(ing) performative agendas.” Automated writing systems can be very useful in helping learners develop skills by providing actionable feedback during the writing process – not only about errors or problems, but about ways to improve – effectively “closing the learning analytics loop.”
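To show how modest the descriptive end can be, here is a minimal sketch of descriptive writing analytics over a made-up draft: surface measures reported without judgment, leaving the meaning-making (and the actionability problem just noted) entirely to the reader.

```python
# A minimal sketch of descriptive writing analytics: surface measures of a
# draft, reported without any evaluation. The draft text is invented.
import re

draft = "Writing analytics can describe a text. It cannot say if the text is good."

words = re.findall(r"[A-Za-z']+", draft)
sentences = [s for s in re.split(r"[.!?]+", draft) if s.strip()]

print(f"words: {len(words)}")
print(f"sentences: {len(sentences)}")
print(f"avg words/sentence: {len(words) / len(sentences):.1f}")
print(f"avg word length: {sum(map(len, words)) / len(words):.1f}")
```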

Key to the effectiveness of writing analytics systems using artificial intelligence techniques is that they need to be informed by and applied in conjunction with good pedagogy. The pattern of traditional development of educational technology is not surprising to me – that is, the technology is developed first independently and then applications are sought. This is a pattern seen in many contexts, including librarianship. A better approach includes two components: participatory design, with experts from both technology and education or pedagogy involved in the development of WA systems, and pragmatic inquiry, in which the practical applications of the research and development are prioritized.

I was impressed by the evaluative writing analytics approaches that seemed to be of most value or utility, notably AcaWriter and Research Writing Tutor. However, these appear to be valuable in a limited set of contexts, notably academic expository writing, which follows standard norms of structure and organization. After mastering the basic techniques of any skill, one is able to effectively “break the rules” or diverge from the norms to become a true master of that skill – a divergence that systems tied to standard structures may not be able to accommodate.

While the primary computational methods for writing analytics involve natural language processing, the units of analysis vary from individual words to entire documents. Learning analytics systems linked into learning management systems could incorporate these tools by reading the drafts students submit and providing feedback either in real time (during the writing process) or shortly after submission, supplemented with insights provided by the class instructor.

As providers of information resources, which are largely textual in nature, libraries have long supported the teaching of writing from both approaches (learning to write and writing to learn). Indeed, some of the strongest collaborations between librarians and faculty are around first-year English courses, which focus on reading and expository writing. Many libraries host writing labs and tutors within their facilities, providing not only space but, in many instances, resources. It would be useful for library administrators to consider evaluating and recommending a selection of AI writing analytics applications to extend the reach of these services.

Announcing: The LibParlor Podcast! — The Librarian Parlor

Announcing the launch of our new open peer reviewed podcast.


I admit, I’m not a podcast devotee, but this looks intriguing. The first episode will be released next Friday, September 16th. And, they are seeking co-hosts and guests!

Collection Liberatory* Librarianship

I saw the posting quoted below on CODE4LIB (which I only recently re-subscribed to for a one-time purpose, but I decided to stay on for a while, and I’m glad I did). Wow, what a concept – “liberatory”, as in “liberated”. A much better word than “discovered” or “uncovered” or even “revealed”. I wish this were being released a year from now – we might have something to contribute after our DEI collection assessment.

CFP: Edited Collection Liberatory* Librarianship: Case Studies of Information Professionals Supporting Justice*, due Nov. 30

Editors:
Dr. Laurie Taylor (University of Florida, USA)
Dr. Shamin Renwick (University of the West Indies, St. Augustine, Trinidad and Tobago)
Brian Keith (University of Florida, USA)
 
Background:
In this volume to be published by the American Library Association, we seek to explore what is “liberatory librarianship,” using liberatory to mean serving to liberate or set free and using “librarianship” capaciously, to include all information professionals, including archivists, museum professionals, and others who may or may not identify as librarians.
 
Liberatory librarianship involves the application of the skills, knowledge, abilities, professional ethics, and personal commitment to justice and the leveraging of the systems and resources of libraries to support the work of underrepresented, minoritized, and/or marginalized people to increase freedom, justice, community, and broader awareness.

 In this volume, we want to address questions like:
 – How can librarianship be liberatory?
– How is library capacity and expertise used to increase freedom, justice, and community?
– What is your story of being a liberatory librarian?
– Tell us the story of liberatory librarianship that inspired you in your work?
– In 2020 many librarians were shocked by tragic racially based events and motivated to become more focused on social justice work – how has that translated into library work?
 
We seek stories of liberatory librarianship so that collectively we can learn from impactful luminaries, who too often are unknown and their work unspoken.  In this volume, we seek to define, recognize, and foster liberatory librarianship by bringing together many voices sharing the stories of this work.
 
For what we hope is the first of many volumes, we seek:
– Practical stories to inspire us to think about our work and inform it, not opinion pieces
– Stories based on information professionals doing something
– Stories of stalwarts and champions who have forged progress in this area
– Autobiographical entries are welcomed
– Stories from across the world
– Entries in English (the stories may depict work undertaken in other languages)
– Cases are expected to follow practices of reciprocity and community, and so are expected to engage and return to the community. Community members should be afforded the opportunity to review and comment. For example, if the story of liberatory librarianship includes work with a particular community, will a member of that community be a contributor to the piece?
– For essays where the person is alive and available, the book process will include inviting the person to take part and incorporating their perspective to share their voice (incorporated into the entries). As with all of the essays, these will share stories of specific work and of people working in the spirit of liberatory librarianship.

The editors expect to include approximately:
– 10 long-form profiles (3,000-4,000 words)
– 15 short-form profiles (under 350 words)

We will select based on the importance of sharing hidden stories, representativeness of the stories, and the ability of each story in terms of how they can educate, inform, and inspire. 

This volume will complement recent scholarship on liberatory archives and justice in libraries, known by many terms, as with Michelle Caswell’s Urgent Archives (Routledge 2021) and Sofia Y. Leung and Jorge R. López-McKnight’s Knowledge Justice: Disrupting Library and Information Studies through Critical Race Theory (MIT 2021). This book will parallel the collection edited by Shameka Dalton, Yvonne J. Chandler, Vicente E. Garces, Dennis C. Kim-Prieto, Carol Avery Nicholson, and Michele A. L. Villagran, Celebrating Diversity: A Legacy of Minority Leadership in the American Association of Law Libraries, Second Edition (Hein 2018), which offers a thematic overview with specific stories of excellence and impact. This volume shares a methodology with grounded theory, narratology, and feminist practices, as with books like Sherry Turkle’s Evocative Objects (MIT 2011). In the telling of specific stories that speak to greater truths, the essays in this volume will illuminate complexity through accessible, readable, and engaging stories.
 
As a collected set of stories of the profession, this volume will be of interest to those working in librarianship, defined broadly, as well as to faculty and students in information science and museum studies programs.
 
Please send the following to laurien@ufl.edu by November 30, 2022:
– Name(s)
– Email(s) for all
– 100-250 word bio of the author(s), which may include links
– For a short form (under 350 words), please submit the full piece
– For a longer form (3,000-4,000 words), please submit the full piece or a 250-500 word proposal
 
For submissions:
– Please use Chicago Manual of Style, 17th edition.
– Photos, images, or artwork should be saved in separate electronic files (each photo, image, etc. as a separate file). Indicate their placement with an all-caps comment in the manuscript, immediately following the paragraph that includes the reference to the figure, table, or box, for example:
   INSERT FIGURE 6.3 APPROXIMATELY HERE.

The editors will respond by December 5, 2022.
For longer form, final submissions will be due February 15, 2023.

Ranganathan’s Fourth Law: “Save the time of the reader” — The Faithful Librarian Blog

As noted in an earlier entry, this blog will aim to look at Ranganathan’s (1931) five laws of library science through a Christian lens. We already looked at the first law: books are for use, the second: books are for all, every reader his or her book, and the third: books are for all, or, […]


Dr. S.R.R. through a Christian lens

As you can guess from my blog title (not to mention my page and category devoted to the “saint” of librarianship), I’m a sucker for posts about Dr. Ranganathan and his Five Laws of Library Science. Sure, there are those who quibble with the choice of words (are they more “principles” than “laws”, and is it really a “science”?), but these five statements continue to serve as the essence of librarianship. I do not know how spiritual Dr. Ranganathan was, but I would hazard a guess that he would have appreciated Garrett Trott’s suggestion of providing space or a place for contemplative study, reflection, meditation, and, yes, prayer.
