Room 42 is where practitioners and academics meet to share knowledge about breaking research. In this episode, Dr. Amanda Licastro explains how an edited collection about composition and big data led to a unique approach to peer review and insights into how engaged authors can shape future scholarship.
Season 2, Episode 1 | 47 min
Transcript
[00:00:11.370] - Liz Fraley
And good morning, everyone, welcome to Room 42. I'm Liz Fraley from Single Sourcing Solutions. I'm your moderator.
[00:00:17.340] - Liz Fraley
This is Janice Summers from TC Camp. She's our interviewer. And welcome to Dr. Amanda Licastro, today's guest in Room 42.
[00:00:24.540] - Liz Fraley
Dr. Licastro has a doctorate in English and recently moved from her position as an Assistant Professor to take on the role as Emerging and Digital Literacy Designer at the University of Pennsylvania.
[00:00:37.050] - Liz Fraley
Her research explores that intersection between technology and writing, including book history, dystopian literature, and digital humanities with a focus in multimodal composition and extended reality.
[00:00:52.860] - Liz Fraley
Amanda serves as the Director of Pedagogical Initiatives on the Book Traces Project and is the co-founder of the Journal of Interactive Technology and Pedagogy and the Writing Studies Tree. Her publications include articles in Kairos, Digital Pedagogy and in the Humanities, Hybrid Pedagogy, and Communication Design Quarterly, as well as chapters in Digital Reading and Writing in Composition Studies, and Critical Digital Pedagogy.
[00:01:20.070] - Liz Fraley
Today, Amanda is here to help us start answering the question: What happens when you bring transparency to big data and peer review? Welcome.
[00:01:31.310] - Janice Summers
Yeah, welcome, Amanda. We're very excited to have you here.
[00:01:35.270] - Amanda Licastro
Thank you, both so much.
[00:01:36.710] - Janice Summers
There's so much that we can talk about, but I want to dive in and talk about your book that's coming out. And you can find it already for advance ordering, just so everybody knows: Composition and Big Data. So tell us about this book. What got you... You and Ben are co-editors of this book? Yes.
[00:02:01.790] - Amanda Licastro
Yes, Ben Miller and myself. We both went to graduate school together at The Graduate Center, CUNY, which is in New York City, right across from the Empire State Building. It was one of the first programs to really focus on digital humanities. At the time I started my PhD in 2009, very few programs were doing DH seriously.
[00:02:24.560] - Amanda Licastro
So Ben and I were very fortunate to meet in that program. And we both decided to combine our interests in composition and rhetoric and DH, which is perhaps even more rare than digital humanities alone.
[00:02:38.150] - Amanda Licastro
So, yes, that's where we met. And we have been working on projects together ever since. Being in the program together is where we founded the Writing Studies Tree, and this seemed like a natural extension of our dissertation work.
[00:02:52.750] - Janice Summers
So stupid question. I'm notorious for asking stupid questions. So what is big data?
[00:02:59.140] - Amanda Licastro
That's a great question, and it's actually a pretty controversial question, because different industries have different sizes, different categories of-
[00:03:10.550] - Janice Summers
Thank you for bringing that up. That was... Yeah.
[00:03:13.240] - Amanda Licastro
Yes. So I mean, when you're talking about Amazon, their big data is a lot bigger than what we usually deal with in the humanities. When we're talking about the humanities, we're usually talking about fairly manageable data sets.
[00:03:27.400] - Amanda Licastro
What makes composition interesting is that a lot of times, when we're talking about composition and rhetoric or technical communication research, we're talking about data that comes from programmatic assessment, so it can get fairly large. If you're talking about a large public university assessing all of their writing-intensive courses, you're talking about hundreds of thousands of documents.
[00:03:51.700] - Amanda Licastro
And then when you're talking about archival research, where you're perhaps looking at decades' worth of documents, again, you're getting into those real big data numbers.
[00:04:03.550] - Amanda Licastro
However, traditionally in composition and the humanities in general, we're looking at more like hundreds of documents or texts, especially, I think, as a proof of concept. You might look at a chunk of data to dig into, to do some work, to find some conclusions, and then expand from there and add every year to prove that there's actually a pattern happening.
[00:04:31.510] - Liz Fraley
What kind of patterns, what kinds of things is someone who's doing composition and big data looking for? What do you find?
[00:04:40.670] - Amanda Licastro
That's a great question. And I think it varies tremendously depending on the researcher, which is why our edited collection, Composition and Big Data, actually has four sections, because all four of those sections are very different data-driven analyses.
[00:04:55.840] - Amanda Licastro
And my work is actually looking at student writing. I did a study for my dissertation of a decade's worth of ePortfolios. An ePortfolio is your traditional writing portfolio where students are producing and revising writing over time that's collected and presented as a process-driven presentation of their work.
[00:05:20.200] - Amanda Licastro
I had over 3,000 of them, and because they were electronic, all archived on WordPress, which is a very easy-to-use content management system, I was able to data mine, collect, and study them.
[00:05:36.310] - Amanda Licastro
So with those 3,000 ePortfolios, which contained multitudes of student writing, what I was looking for is the difference between multimodal writing in the humanities and arts versus multimodal writing in the sciences and technology. So how do students write using images, videos, tags, a taxonomy or a folksonomy that grew out of the affordances of the website?
[00:06:08.080] - Amanda Licastro
Were they using infographics, data visualizations? Were they using the affordances of interactivity? Did they have comments open? Were they annotating each other's work in various ways? Did they have discussions happening in these online spaces?
[00:06:25.630] - Amanda Licastro
So I was really looking at how students write, but also at how the language of the instructors' assignments led to more or less of that kind of engagement. I looked at low-stakes assignments, those that we don't put a lot of weight on in grading in terms of assessment.
[00:06:47.890] - Amanda Licastro
And I also looked at high-stakes assignments, those final, more research-driven, long-term writing projects. And I compared the assignments, the actual assignment sheets, the language the instructors gave, to the end results.
[00:07:04.210] - Amanda Licastro
Needless to say, this was definitely a mixed-methods research project. It was qualitative and quantitative and relied heavily on my disciplinary knowledge of pedagogical scaffolding, the very idea of low- and high-stakes assignments, but also different approaches to teaching writing across the disciplines.
[00:07:31.150] - Amanda Licastro
I was fortunate enough to be an instructional technology fellow in my graduate program, so I worked with professors across all disciplines to help them integrate technology into their courses. So I had some sense of the differences across the disciplines and how they structured their assignments.
[00:07:51.100] - Janice Summers
Well, I imagine you would need to, in order... Because, I mean, the data is going to speak to you. But in order to hear the data, for lack of a better phrase, you need to have some kind of knowledge to know how to interpret it, right?
[00:08:06.340] - Amanda Licastro
Absolutely. This is perhaps why a Master's in data analytics is one of the fastest-growing areas of higher education, because data itself is not very valuable. It's how we interpret it.
[00:08:21.670] - Janice Summers
And it could also be the sheer mass of information.
[00:08:24.400] - Amanda Licastro
Also, data doesn't actually exist in and of itself. We have to curate, comb, refine, and organize that data, which is definitely one of the most difficult steps of data-driven research. You have to decide what you're collecting, how you're collecting it, how you're presenting that data, what you're taking out and what you're leaving in, which is an incredibly critical decision. And we know that what is left out is sometimes actually more revelatory than what is left in.
[00:08:58.030] - Amanda Licastro
For more on that, I definitely recommend Data Feminism by Lauren Klein and Catherine D'Ignazio, where they really look at the inherent bias in what data is and is not collected.
[00:09:09.240] - Janice Summers
Okay, I was just going to ask: those who are doing analysis, are they following the scientific method and full disclosure, right?
[00:09:19.270] - Amanda Licastro
Yeah, the entire last section of Composition and Big Data is about the ethics of-
[00:09:26.890] - Amanda Licastro
Big data research.
[00:09:26.890] - Janice Summers
Like that whole section, I was like, oh, I want to read that section... I can't wait... Well, the whole book looks great, but section 4 looks particularly interesting.
[00:09:36.550] - Amanda Licastro
We had really phenomenal authors to work with. We were incredibly impressed with the submissions that we got, and the ones that actually made it into the final collection are the cream of the crop.
[00:09:48.160] - Amanda Licastro
But after receiving all the initial submissions, we actually asked that every single author and every single chapter have a section on ethics, no matter whether it was in the ethics section or not, addressing: What did you learn? What would you do differently? What do you acknowledge as being missing or broken or partial? Because all data-driven studies are those things. They're all partial. They all have elements that are missing. They all have interpretive elements that reveal our own bias as researchers.
[00:10:28.030] - Amanda Licastro
So the bias of our institution, or of the location where we scraped the data. Obviously, one of our chapters deals with data from The New York Times, for example. The New York Times data is going to be a very biased data set in and of itself.
[00:10:43.390] - Amanda Licastro
Because it's only what was published in The New York Times, right?
[00:10:47.620] - Amanda Licastro
So every single chapter of this collection addresses that very clearly, and I believe they really speak to each other in that way. But then that last section focuses on it in a way that asks how we, as a discipline, can ask and answer some of these larger questions about not only the ethics of designing data studies and doing the work of data-driven research, but also how it's applied. If you look at some of the wonderful work in The Digital Black Atlantic that just came out, for example, and in more global DH studies, you're going to see a lot of this: not just how the data is collected and interpreted, but then, what are the implications?
[00:11:42.150] - Amanda Licastro
How is it applied?
[00:11:43.470] - Amanda Licastro
When we learn something about how students write, we're then going to take action based on what we've learned. And who is impacted by those actions, and how are those actions affecting groups differently? There are really serious consequences there. And this is, of course, true in all disciplines and all areas of our lives right now.
[00:12:09.080] - Amanda Licastro
But I think it's very critical for us to think long term about the implications of data-driven research and what we're claiming is true and what actions come out of those claims.
[00:12:23.480] - Amanda Licastro
So we did actually apply to the 4Cs conference this year to continue those conversations, because, as most of you know, publishing in academia is slow; it takes a while. And so much has happened since the conclusion of the writing of these chapters. We wanted to expand and extend these conversations into some things we've learned in the last couple of years.
[00:12:46.750] - Janice Summers
Well, the subject itself is ever evolving and growing. It's an organic thing. It shouldn't just be stagnant, like once and done, right?
[00:12:56.440] - Janice Summers
Society changes. We all change. We learn new things. We apply new things.
[00:13:01.450] - Amanda Licastro
And our access to data and tools to analyze the data are constantly evolving as well. And they're becoming easier to use.
[00:13:08.770] - Amanda Licastro
One of my favorite acronyms is WYSIWYG: what you see is what you get. Before, you had to have a lot of technical knowledge to do text mining and text analysis work. And now, with tools like Voyant, really anyone can do it. All you need is a browser and access to the web. You dump some words in and it's going to do some analytics for you.
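As a rough illustration, not Voyant itself, just the kind of word-frequency analytics such browser tools automate, here is a minimal sketch in Python (the sample sentence is invented for the example):

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """Tokenize a text and return its most common words: the kind
    of summary a browser-based tool like Voyant produces instantly."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

sample = "Data is not valuable by itself; interpreting data makes data valuable."
print(word_frequencies(sample, top_n=3))
```

The point of the tools Dr. Licastro describes is that this step, tokenizing, counting, visualizing, no longer requires writing any code at all.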
[00:13:32.140] - Amanda Licastro
And also programs like Gephi, and the proliferation, the near ubiquity, of Qualtrics and other statistical modeling programs that most universities now subscribe to. It means that lots of people can do this work versus very few.
[00:13:51.070] - Amanda Licastro
And the first section of our text actually talks about how to teach students to do this work.
[00:13:57.970] - Amanda Licastro
And that's something that I think is really interesting: it has now become simple enough to access, in terms of both the cost and the technical knowledge, that we can teach this in first-year writing courses and in introduction to digital humanities courses at the undergraduate level.
[00:14:18.460] - Amanda Licastro
And then at the graduate level, you can really work some pretty serious data-driven research into graduate studies as well.
[00:14:26.380] - Janice Summers
Well, I like that first section, because it's like being taught how to drive a car. It's like, okay, we're like kids with a bunch of stuff we have access to. So, okay, we'll teach you how to do this the correct way, the responsible way.
[00:14:44.920] - Janice Summers
The whole thing about surveillance and mining data. Okay, well, let's be responsible about how we do it, and there are methods and techniques.
[00:14:53.780] - Amanda Licastro
Yes. And let me tell you. I am on a strategic team at the University of Pennsylvania that is looking to build comprehensive programming around data-driven research.
[00:15:08.840] - Amanda Licastro
So we did an external scan of both our peer and benchmark institutions, but also any institution that has a digital humanities center or data-driven research center. And we looked specifically for what programming, formal or informal instruction, and resources universities were providing around data literacy. And there's not as much as one would think.
[00:15:39.260] - Amanda Licastro
Data literacy is incredibly important to everyone's ability to navigate our current world. And there's not a lot of formal instruction. There are not a lot of resources outside of those blanket statements or parameters where a university will say: this is how we collect your data, this is how we use your data, or this is how we suggest you protect research that is done while you're part of the university community.
[00:16:09.680] - Amanda Licastro
But there's not a lot on personal data literacy. How do I protect the data that's on my mobile device? How do I protect the data that's being collected right now, as I'm on this Zoom call with you, and how is that being used?
[00:16:25.370] - Amanda Licastro
And we really start this collection, in our introduction, talking about that: how we are constantly being data mined in every aspect of our lives, and so are our students, and so are our colleagues, and so is everyone existing in our world right now. And how the ability to read, manage, protect, and analyze that data is now an essential literacy for all of us, whether we like it or not.
[00:16:58.390] - Janice Summers
Yeah, I really do think that the Internet should come with training. I mean, really, because that's what happens: as soon as you plug in to anything that's electronic, the surveillance starts and will happen forever.
[00:17:12.100] - Amanda Licastro
At the CUNY Graduate Center, we had a lot of students who were very active in political protesting. Especially when I was in graduate school, the Occupy Wall Street movement was happening literally blocks from the Graduate Center.
[00:17:31.450] - Amanda Licastro
So we had crypto parties where students would teach other students how to encrypt their data, how to turn off the settings on your cell phone that were collecting geospatial information, how to use different IP addresses, et cetera, when you were doing certain activist work.
[00:17:52.570] - Amanda Licastro
And I think we were rare as a graduate program in doing that, because of our location and proximity to some of these global protests. But I think we will start seeing more of that kind of literacy happening at all levels of education, hopefully all the way down. I think most 13-year-olds have cell phones now; I think Pew Research said something like 80 percent of 13-year-olds have cell phones. So even at that level, there should be data literacy built into the curriculum.
[00:18:31.480] - Amanda Licastro
I'm going to recommend one more text: Gregory Donovan from Fordham University. His book is called Canaries in the Data Mine. It is about youth and their use of social media platforms, and how we can see indicator lights, the canaries, in that young population, of what is potentially problematic in data collection and online spaces. Super fascinating work.
[00:19:02.830] - Amanda Licastro
He also did his graduate work with Ben and myself. He was part of our cohort, and we were really very much teaching each other these skills, because, again, DH was so new, so very, very new, in 2009 when we were all at the Graduate Center together.
[00:19:20.740] - Amanda Licastro
So Gregory Donovan, Micki Kaufman, Chris Alen Sula, Ben Miller, Jill Belli, and myself, we were all kind of in a brave new world, pun intended, of doing this data work for our dissertations. And Greg Donovan's book just came out; I very highly recommend that work.
[00:19:46.320] - Janice Summers
But in academia, we're not looking at nefarious ways of using this data, are we?
[00:19:57.660] - Amanda Licastro
Perhaps the three of us aren't. But I wouldn't say that about, for example, exam proctoring surveillance technologies. I can name several for-profit products that, even though they're educational technologies, are absolutely using data in ways whose ethics I would question.
[00:20:21.850] - Janice Summers
So when we talk about transparency, oftentimes we don't know what's being surveilled and what's happening with that data, right? We as individuals.
[00:20:37.120] - Amanda Licastro
Yes, we as individuals don't. And even if you are designing your own research study, you may need to make decisions that will weaken the efficacy of your study in order to prioritize responsible research.
[00:20:59.510] - Amanda Licastro
For example, in the study I did for my dissertation research, I told you that I looked at student writing. I actually made the choice to eliminate gender and racial designations from those students. That obviously weakens the claims I could make. I was missing data. I extracted data purposefully.
[00:21:25.900] - Amanda Licastro
These are critical decisions made about the data that's collected, decisions the people who are providing the data don't have any say in. If I had chosen to collect the students' gender, race, religion, and all sorts of other identifying information, I could have made claims about those students, and they wouldn't have had a say, other than the fact that they opted into letting me use their work for the study.
[00:22:02.220] - Janice Summers
Right. Okay, but when we talk about transparency, I think there's a safety zone here. Because I don't think gathering big data from a lot of different areas is going to change. But I think one thing can change, because conclusions are drawn from this data analysis, and they force an argument one way or the other based on this data.
[00:22:31.330] - Janice Summers
And I think that one of the safe zones for data collection is demanding transparency. I want to know: how did you come to that conclusion, right?
[00:22:42.900] - Amanda Licastro
Yes. In composition and rhetoric, there is a very clear position, a very clear argument, for actually publishing your data along with the research conclusions.
[00:22:56.180] - Amanda Licastro
So you write the formal academic article that does the analysis work and presents your interpretation, but then you actually provide access to the data set as well. There's a very famous controversy in DH where this happened: a researcher did provide their data set, and then a different researcher ran the same study as presented in the text and got different conclusions.
[00:23:22.870] - Amanda Licastro
This is what Richard Haswell called RAD research: replicable, aggregable, and data-supported research. Because if you run the same data analysis twice and get different results, there might be a flaw in either your procedure or your data.
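The replication check behind that idea can be sketched as a toy example (the word-count "study" and the data here are invented purely for illustration; they stand in for any published data set and procedure):

```python
from collections import Counter

def run_study(documents):
    """A stand-in for any data-driven analysis: count word
    occurrences across a corpus of documents."""
    counts = Counter()
    for doc in documents:
        counts.update(doc.lower().split())
    return counts

# A published data set lets a second researcher rerun the study...
data = ["students revise drafts", "students annotate drafts"]
first_run = run_study(data)
second_run = run_study(data)

# ...and check whether the results replicate: same data, same
# procedure, same conclusions. A mismatch signals a flaw somewhere.
assert first_run == second_run
print(first_run["students"])  # prints 2
```

The interesting cases, like the DH controversy above, are the ones where this assertion fails even though both researchers believed they were running the same study.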
[00:23:38.920] - Amanda Licastro
So I think that this is a really important new dimension, and something that is actually very, very difficult to facilitate. One of our chapters is about rhetoric IO, which was a platform to present the data of researchers in composition and rhetoric.
[00:24:00.160] - Amanda Licastro
There have been many different ad hoc platforms organized through which to collect, present, and archive data. But as of right now, if you're publishing in a traditional academic journal or press, there's no place to put your data. Your book doesn't come with a little USB for the data or something like that. And even if there's a companion website, it often doesn't have that data.
[00:24:29.350] - Amanda Licastro
So you're going to see a lot of open institutional repositories. At CUNY, it's called Academic Works. My dissertation and my dissertation data are on that platform.
[00:24:43.120] - Amanda Licastro
Humanities Commons, which was originally through the MLA and is now being directed by Kathleen Fitzpatrick at Michigan State University, is another great place. But there's going to be an increasing number of repositories for data, not just the publications, but the data itself.
[00:25:02.500] - Janice Summers
The data itself?
[00:25:04.060] - Janice Summers
And I think that's an important evolution that we need to have, right? Back to the scientific method, sorry, my psychology training. You state your method and a synopsis of your conclusion, and you supply all the supporting data, because you're drawing a conclusion, which is possibly interpretive. Maybe somebody running it is not going to get the same result, but if they run it, they can see how you came to your conclusion. And if you're wrong, or if a modification needs to happen, you can modify. Do you think, I think, tell me what you think: we have to separate ourselves from "I'm absolutely right."
[00:25:51.300] - Amanda Licastro
Oh, yes, absolutely. We're offering an opinion or interpretation. It is, of course, based on evidence. We have our evidence, and we've done the research, the historical review, the literature review, to contextualize that evidence. But all of these studies are still limited based on context, location, access, et cetera.
[00:26:16.620] - Amanda Licastro
So one reason, for example, that I did put my dissertation data in our institutional repository is that someone could then take that CUNY-based data, CUNY students working in New York City in an honors program, which is a very skewed set of data, and compare it to their data from their community college in Arkansas, or their state school in California, or their institution of higher education anywhere in the world.
[00:26:51.900] - Amanda Licastro
So it allows us to make comparisons and to increase our data sets by including others that have already been prepared. But it also, again, allows a lot of experimentation in the classroom.
[00:27:04.440] - Amanda Licastro
One of the greatest barriers to doing data-driven work is: what data are my students going to play with? Where's my sandbox of data that my students get to explore?
[00:27:15.180] - Amanda Licastro
And by making our data open access, we invite those kinds of pedagogical forays into data analysis, because there's a data set ready to use.
[00:27:28.690] - Janice Summers
Right, and I think we should be accustomed to always challenging past findings, because things change, we grow. Times change. Societies change. Thoughts change. So always challenge it and bring it up. But if you don't see it, how can you reach it? So having that open access, I think, is a great evolution.
[00:27:55.610] - Amanda Licastro
Absolutely. And one of our chapters, for example, is looking at the WPA listserv from, I think, 2017 to 2020. A very small couple of years, obviously, so that data could then be compared to past and future chunks of the same data set, in different times, collected in different ways. And that's also important, because most often one researcher doing one study is going to select a chunk of data and not look at all that's available.
[00:28:33.950] - Amanda Licastro
Again, going back to The New York Times: you might look at New York Times articles in these two sections published in these five years. And that's going to be a massive amount of data.
[00:28:42.890] - Janice Summers
Yeah, huge data.
[00:28:43.910] - Amanda Licastro
But don't forget all of The New York Times, their entire historical archive. That's almost too big for an individual researcher, and even for a small research team.
[00:28:56.120] - Amanda Licastro
There's actually one more thing I'd really like to talk about, and that's collaboration. I think that we're going to see more and more collaborative research in the humanities, because this research is best done through what Cathy Davidson calls collaboration by difference.
[00:29:16.520] - Amanda Licastro
So you're specifically collaborating with people that are unlike you, who have a different perspective than you, so that you're not reading your data through tunnel vision. You don't have that blindness to what only you see; instead, you have a lot of different eyes on the data and a lot of different interpretations.
[00:29:37.070] - Janice Summers
And here's the thing, too. It takes you away from that bias, right? Because we're all human, we have human bias. To say that you don't have it is incorrect. You do. Everybody does.
[00:29:48.800] - Janice Summers
And when you have a lot of people involved, or other people involved, you're seeing things from different prisms and in different ways that you wouldn't have thought of, which are very revealing and interesting. And it can take you places that you, on your own, can't go.
[00:30:06.930] - Amanda Licastro
The classic metaphor that Cathy Davidson uses, and I believe this is from her book, Now You See It: she talks about the gorilla experiment, where there's a group of students passing a basketball. Half of them are in black shirts, half of them are in white shirts, and they pass the basketball back and forth, and you just count how many times they pass the basketball.
[00:30:29.300] - Amanda Licastro
And in the middle of the study, a gorilla, a person in a gorilla suit, walks through the people who are playing basketball. And at the end, the researcher asks: how many people saw the gorilla?
[00:30:40.790] - Amanda Licastro
And of course, the majority of people did not see the gorilla, because they were so focused on counting the passes.
[00:30:46.820] - Amanda Licastro
So this, I believe Cathy Davidson calls it attention blindness: it's when you're paying attention to one thing. And when you're interpreting data, you are. You're looking for something. You have a question. You have a-
[00:30:58.880] - Janice Summers
You have a question on your mind. Yeah.
[00:31:00.980] - Amanda Licastro
That's what you're looking for, and you miss the gorillas that might be walking through your data, right? But if you have a team of researchers with those different perspectives, someone is going to see the gorilla. And that's really important.
[00:31:16.130] - Amanda Licastro
And that's why, despite maybe their previous experiences with group work, all of my students do collaborative assignments in my courses. I strongly believe in graduate student cohorts and in normalizing collaborative work in graduate school.
[00:31:32.630] - Amanda Licastro
I mean, the Writing Studies Tree never would have happened if it weren't for Sondra Perl enabling us in her course to work together as a team of 13 students to build that platform.
[00:31:44.650] - Amanda Licastro
The collection is certainly an exercise in collaboration, but if we really want to do serious data-driven work, it needs to be collaborative and modeled more on the sciences, where it is the norm to have multi-authored papers and lab-style work.
[00:32:02.300] - Janice Summers
Yeah. And something interesting, just for the practitioners out there, those not in academia: this is part of the dynamic when you bring in a consultant from outside. It's that mindset shift: you don't think of your consultant as doing a task; your consultant is there to help you be better and do better.
[00:32:23.690] - Amanda Licastro
And to see what you're missing.
[00:32:24.830] - Janice Summers
And to see what you're missing. One of the things Liz is phenomenally good at is asking questions, like that one. So just a little plug for why you bring people in from outside.
[00:32:38.840] - Janice Summers
And that's why, earlier, when you said people who aren't in my circle: it's really important for all of us to push outside of our circle, because there's a richness that comes, and they're going to find the gorilla. I'm going to have to use that now: find the gorilla. Do you have a gorilla hunter?
[00:32:56.750] - Amanda Licastro
Right. The thing is that we are actually accustomed to doing this in the humanities through peer review. You don't publish anything without having at least two other sets of eyes look at everything you do.
[00:33:12.690] - Amanda Licastro
And we do train students to do this, from that first-year writing course all the way through their graduate work, where they have three or four members on their dissertation committee. You always have multiple people read something before you put it out into the world.
[00:33:27.500] - Amanda Licastro
So why wouldn't we make this practice more visible? Because that's very invisible labor, incredibly invisible labor. Your peer reviewers aren't written anywhere in that final text. They're not credited anywhere in that final text. We don't even put our reviews on our CVs in academia.
[00:33:45.200] - Amanda Licastro
But if we normalize making that kind of work more visible, especially in asking someone to review your data set and review your methods, much like they do in the sciences, that is something the humanities could actually bring to the sciences, by adopting those processes and talking about how they could be more ethical and more critical.
[00:34:11.960] - Janice Summers
Absolutely. Because I think here's the thing. A lot of practitioners... Well, I should say, I run into teams that aren't familiar with peer review. And it's one of the first questions we always ask: What's your peer review process? And we want to cry when we hear that they don't have one.
[00:34:30.590] - Janice Summers
But peer review is not a threatening thing, it's actually a good thing, because it makes you stronger. And when it comes to the science field, I think what they miss is that, from a humanities perspective, we're taught peer review is going to make you stronger, is going to make you better. They're going to find the gorillas for me. But in science, sometimes it's like, well, no, I have to be right.
[00:34:57.640] - Amanda Licastro
Yeah. I don't think they're necessarily right. And in the humanities, we're not really searching for the right answer, but rather hoping to extend or expand a conversation, enter a conversation, continue a conversation. And that's what this collection, Composition and Big Data, is really about: starting a conversation. Data-driven work has been done since the '70s, since Janet Emig and Sondra Perl were hand-coding with highlighters on printed-out transcripts.
[00:35:33.320] - Amanda Licastro
But there's almost a renaissance of data-driven work that has been happening since the early 2000s, especially in the field of computers and writing. If you look at the Computers and Writing conference proceedings and the publications in that field, like Kairos, Enculturation, et cetera, you're going to see a lot of data-driven work starting in the 2000s and increasing now, which for me means that we do need to train students in peer review, in how to peer review data-driven work.
[00:36:08.810] - Amanda Licastro
We need to start that experimentation process earlier. And we do need to make sure that these interdisciplinary conversations are happening.
[00:36:18.620] - Janice Summers
Yes. And I think... Sorry.
[00:36:23.240] - Liz Fraley
Peer review is hard for a lot... It's hard to go through, especially if you think of yourself as part of the product. We had Traci Nathans-Kelly just last week. She says she's teaching people to separate the self from the professional deliverable. And your journal and your approach to this edited collection opened all of that up and changed things.
[00:36:49.290] - Amanda Licastro
Yes, which is intimidating, I think, to more traditionally trained academics. The Journal of Interactive Technology and Pedagogy, which we started at The CUNY Graduate Center in 2011, is now international. We have an international team of editors. It's entirely open peer review.
[00:37:08.490] - Amanda Licastro
So when you submit to our journal, you are told who will be reviewing your piece and you work with them more in a mentorship model where the scholars in your field will be entering into the conversation about how to strengthen your work.
[00:37:27.690] - Amanda Licastro
We did this specifically so that the journal will be welcoming to graduate students, to early career academics, but also to alternative academics and librarians and independent researchers and people outside of maybe the traditional hierarchy of higher education to make not only the product we're putting out more diverse but also the voices that count in academia more diverse.
[00:37:54.210] - Amanda Licastro
We really want to signal boost some of those voices that get left out of more traditional academic publishing, because we know that they have a lot to offer to these conversations that are happening, especially about technology and more cutting edge emerging research in these areas.
[00:38:12.670] - Amanda Licastro
Now, that's very scary, right? When you as a researcher see that these two maybe senior scholars, maybe rock star names that you read and admire are going to be offering you criticism. That's intimidating.
[00:38:31.080] - Amanda Licastro
But we really focus on trying to offer that constructive criticism and really make the work stronger, which is ultimately everyone's goal. You as the author, the reviewers, and us as the journal editors, we all want the work to be the best it can be, the strongest product it can be. In terms of this collection...
[00:38:57.090] - Amanda Licastro
I think we took a radical approach, we did. In most edited collections, all the authors work in isolation by themselves; they produce the chapter and submit it to the editor or editors. A lot of edited collections still have a single editor; in our case there are two. The editors make some suggestions, you revise, and then you get blind peer reviewers from the press who offer more suggestions.
[00:39:30.690] - Amanda Licastro
But again, there's a lot of invisible labor. It's isolated labor that's very disconnected. And the authors in an edited collection might not even know who else is in that collection until it appears on their doorstep. They don't know who else is on the table of contents with them until it's actually complete.
[00:39:59.010] - Amanda Licastro
What I think sometimes happens then is that you end up with chapters in the collection that don't speak to each other. You have a multitude of voices having individual conversations rather than a discussion with a thread that ties them all together.
[00:40:17.400] - Amanda Licastro
So after reading all of the chapters and getting initial feedback from the press, we asked the authors to read each other's chapters. Now, we didn't ask all 17 authors to have to read all 17 chapters. That would be a lot of work.
[00:40:32.190] - Janice Summers
That's too much.
[00:40:33.050] - Amanda Licastro
Yeah, too much labor to ask-
[00:40:35.910] - Janice Summers
If you're reviewing, you can't do a quality peer review of 17.
[00:40:40.440] - Amanda Licastro
So instead, we strategically partnered them with chapters we thought had clear connections, and we thought actually maybe should be citing each other.
[00:40:51.570] - Amanda Licastro
So we thought like, oh, this chapter should really be citing this chapter. Or these two authors are really speaking to each other but don't know it yet.
[00:41:01.670] - Amanda Licastro
So we partnered them very thoughtfully in hopes of creating dialog between the chapters. And it honestly worked so much better than we even imagined.
[00:41:12.900] - Janice Summers
And it probably lightened the load for you and Ben.
[00:41:15.690] - Amanda Licastro
Well, we still edited it all, every single time.
[00:41:17.940] - Janice Summers
Yeah, but I mean-
[00:41:19.970] - Amanda Licastro
And what it did was, now almost all the chapters do cite each other. You're going to see the names appearing again and again. So there's a very clear dialog between all of the sections. But also, the response from the authors was glee and delight and joy. They loved reading each other's work.
[00:41:39.210] - Janice Summers
It got them excited to be a part of the collection. They felt pride in the collection they were part of because of all the exciting work their peers were doing within the same binding.
[00:41:55.740] - Amanda Licastro
And also, when the entire collection was done and it was sent out for external peer review, the external peer reviewers were blind. So I wish I could give them credit. Thank you, blind peer reviewers.
[00:42:11.310] - Amanda Licastro
They said, this is so cohesive. This collection really goes together so well because these authors were working together collaboratively.
[00:42:23.100] - Janice Summers
And I think that's great. Because, believe it or not, our time is up with you at this meeting.
[00:42:28.380] - Liz Fraley
But I don't want to go yet. But this is a very important point for practitioners.
[00:42:33.180] - Janice Summers
Yeah, I really-
[00:42:34.170] - Liz Fraley
Practitioners, they all know each other, they all work together. So it is open peer review if they take advantage of it in the right way.
[00:42:43.800] - Janice Summers
Yeah. So one of the things I was going to say, and this is a perfect place to end our conversation, because of what you've shown through this book. I mean, everybody should go get the book, it's absolutely phenomenal.
[00:42:58.140] - Janice Summers
And I like Section 4. And Sections 1, 2, and 3 are good, too. Trust me, I've seen the table of contents. I haven't gotten the book yet.
[00:43:06.840] - Amanda Licastro
You have the introduction, though, which has...
[00:43:10.560] - Janice Summers
But it's not all the chapters, right? There are some really good authors in this. But one of the things that I think everyone should take home, practitioners, students, professors, everybody: collaboration, collaboration, collaboration.
[00:43:24.030] - Janice Summers
Peer review is not about criticizing and tearing down, it is about building up. And what is that saying, a rising tide lifts all boats, right? So it's that whole thing: collectively, if we collaborate, we're going to be stronger.
[00:43:41.850] - Janice Summers
And if we are brave enough to open-source our data process and how we do things, it will help us become stronger. If you're doing data mining and researching, it just makes you stronger.
[00:43:56.370] - Janice Summers
Sure, somebody might come along and draw a different conclusion. Okay, take a look at that. We have to separate the self from our work, like Liz pointed out. So I want to have-
[00:44:09.270] - Amanda Licastro
I think it makes the work we're producing, and the plethora of publications we can produce, stronger when there is dispute. I think the richest areas of research are the ones that are the most contentious, the most controversial, those that emerge in conversations that don't have clear answers yet, where we don't know where the methods are going, where they're going to lead us, and how they're going to develop over the next couple of years.
[00:44:43.290] - Amanda Licastro
If we don't have those tough conversations now, and if we don't actively disagree with each other and have these debates about what the right way to do this work is, then that's the really scary part.
[00:44:59.010] - Amanda Licastro
If we don't actively engage in this peer review to make sure that we're doing this in a way that is ethical and responsible, and again, thinking through the actions on the other side, how it's going to impact our field, our students, and the institutions we're providing the interpretation for. That's the real fear here: if we don't invite those diverse perspectives and that open, transparent, collegial, friendly, and respectful debate, that constructive criticism, then we are opening ourselves up to colonizing and to supporting the power structures that are already in place, structures we actively want to reinterpret and reimagine.
[00:45:59.850] - Janice Summers
Well, Amanda, I hope to have you back again. This has been just a fascinating conversation. I really enjoyed talking with you. And I can't wait. The book comes out in September, right?
[00:46:09.820] - Amanda Licastro
Yeah, it's very early September. It's available for preorder now. I think we might have one of our authors in the audience, so hi.
[00:46:16.170] - Amanda Licastro
And I just want to say thank you so much to Ben Miller, to our wonderful mentors, and to all of the contributing authors to this collection, because really it wouldn't be anything without them. Hi, Chen! Thank you so much for coming. And yes, please, all the credit goes to all of these wonderful people.
In this episode...
The forthcoming edited collection Composition and Big Data, published by the University of Pittsburgh Press (available September 2021) and co-edited by Amanda Licastro and Ben Miller, intentionally brings together researchers working at the intersections of Digital Humanities and Writing Studies, two groups that rarely find themselves working together. The unique approach to peer review they engaged in with the contributing authors created a radical model of collaboration and cooperation that crossed boundaries, knocked down barriers, and yielded astounding results. In this episode of Room 42, learn how big data is shaping our scholarship, what we need to do now to prepare, and how a collaborative collection of authors can highlight the ethical and practical considerations of applying data analytics to the field of Composition and Rhetoric.
Dr. Amanda Licastro has a doctorate in English and recently moved from her position as an Assistant Professor to take on a role as the Emerging and Digital Literacy Designer at the University of Pennsylvania. Her research explores the intersection of technology and writing, including book history, dystopian literature, and digital humanities, with a focus on multimodal composition and Extended Reality. Amanda serves as the Director of Pedagogical Initiatives of the Book Traces project and is co-founder of the Journal of Interactive Technology and Pedagogy and the Writing Studies Tree. Publications include articles in Kairos, Digital Pedagogy in the Humanities, Hybrid Pedagogy, and Communication Design Quarterly, as well as chapters in Digital Reading and Writing in Composition Studies, and Critical Digital Pedagogy.
Hosts & Guests
Digitocentrism website: http://digitocentrism.com/