11  Concordancing

Author: Vladimir Buskin

Affiliation: Catholic University of Eichstätt-Ingolstadt

11.1 Suggested reading

In-depth introduction to concordancing with R:

Schweinberger (2024)

Natural Language Processing (NLP) with quanteda:

Benoit et al. (2018)

Online reference

On corpus-linguistic theory:

Wulff and Baker (2020)

Lange and Leuckert (2020)

McEnery, Xiao, and Tono (2006)

11.2 Preparation

Script

You can find the full R script associated with this unit here.

Working directory

In order for R to recognise the data, it is crucial to set up the working directory correctly. The steps below describe the point-and-click route in RStudio; a programmatic alternative is sketched after the list.

  1. Make sure your R-script and the corpus (e.g., ‘ICE-GB’) are stored in the same folder on your computer.

  2. In RStudio, go to the Files pane (usually in the bottom-right corner) and navigate to the location of your script. Alternatively, you can click on the three dots ... and use the file browser instead.

  3. Once you’re in the correct folder, click on the blue ⚙️ icon.

  4. Select Set As Working Directory. This action will update your working directory to the folder where the file is located.
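
If you prefer a programmatic route, the working directory can also be checked and set from the console with Base R's getwd() and setwd(). The path below is a placeholder; replace it with the folder that actually contains your script and the corpus.

# Check which folder R is currently using
getwd()

# Set the working directory (placeholder path; adjust to your own folder)
setwd("C:/Users/yourname/corpus-project")

# The corpus file should now be listed here
list.files()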

In addition, make sure you have installed quanteda. Load it at the beginning of your script:

library(quanteda) # Package for Natural Language Processing in R
library(lattice) # for dotplots
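
If quanteda (or lattice) is not yet installed on your machine, install it once from CRAN before loading it:

# One-time installation from CRAN (only needed if the packages are missing)
install.packages(c("quanteda", "lattice"))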

To load a corpus object into R, place it in your working directory and read it into your working environment with readRDS().1

1 The ICE_GB.RDS file you’ve been provided with has been pre-processed and saved in this specific format for practical reasons.

# Load corpus from directory
ICE_GB <- readRDS("ICE_GB.RDS")

    If you encounter any error messages at this stage, ensure you followed steps 1 and 2 in the callout box above.

    11.3 Concordancing

    A core task in corpus-linguistic research involves finding occurrences of a single word or multi-word sequence in the corpus. Lange & Leuckert (2020: 55) explain that specialised software typically “provide[s] the surrounding context as well as the name of the file in which the word could be identified.” Inspecting the context is particularly important in comparative research, as it may be indicative of distinct usage patterns.

    11.3.1 Simple queries

    To obtain such a keyword in context (KWIC) in R, we use the kwic() function. We supply the corpus as well as the keyword we’re interested in:

    query1 <- kwic(ICE_GB, "eat")

    The output in query1 contains concordance lines that list all occurrences of the keyword, including the document, context to the left, the keyword itself, and the context to the right. The final column reiterates our search expression.

    head(query1)
Keyword-in-context with 6 matches.
  [ICE_GB/S1A-006.txt, 785]            So I' d rather | eat | beforehand just to avoid uh
 [ICE_GB/S1A-009.txt, 1198]               I must <, > | eat | them < ICE-GB:S1A-009#71:
  [ICE_GB/S1A-010.txt, 958]          to <, > actually | eat | it for one' s
  [ICE_GB/S1A-018.txt, 455]  order one first and then | eat | it and then sort of
  [ICE_GB/S1A-018.txt, 498]   A > The bargain hunting | eat | < ICE-GB:S1A-018#29: 1
 [ICE_GB/S1A-023.txt, 1853]       B > Oh name please | eat | something <,, >

    For a full screen display of the KWIC data frame, try View():

    View(query1)
    docname from to pre keyword post pattern
    ICE_GB/S1A-006.txt 785 785 So I ' d rather eat beforehand just to avoid uh eat
    ICE_GB/S1A-009.txt 1198 1198 I must < , > eat them < ICE-GB:S1A-009 #71 : eat
    ICE_GB/S1A-010.txt 958 958 to < , > actually eat it for one ' s eat
    ICE_GB/S1A-018.txt 455 455 order one first and then eat it and then sort of eat
    ICE_GB/S1A-018.txt 498 498 A > The bargain hunting eat < ICE-GB:S1A-018 #29 : 1 eat
    ICE_GB/S1A-023.txt 1853 1853 B > Oh name please eat something < , , > eat

    11.3.2 Multi-word queries

    If the search expression exceeds a single word, we need to mark it as a multi-word sequence by means of the phrase() function. For instance, if we were interested in the pattern eat a, we’d have to adjust the code as follows:

    query2 <- kwic(ICE_GB, phrase("eat a"))
    View(query2)
    docname from to pre keyword post pattern
    ICE_GB/S1A-059.txt 2230 2231 1 : B > I eat a < , > very balanced eat a
    ICE_GB/W2B-014.txt 1045 1046 : 1 > We can't eat a lot of Welsh or Scottish eat a
    ICE_GB/W2B-022.txt 589 590 have few labour-saving devices , eat a diet low in protein , eat a

    11.3.3 Multiple simultaneous queries

A major advantage of quanteda over traditional corpus software is that we can query a corpus for several keywords at the same time. Say we need our output to contain hits for eat, drink, and sleep: instead of a single keyword, we supply a character vector containing the strings of interest.

    query3 <- kwic(ICE_GB, c("eat", "drink", "sleep"))
    View(query3)
    docname from to pre keyword post pattern
    ICE_GB/S1A-006.txt 785 785 So I ' d rather eat beforehand just to avoid uh eat
    ICE_GB/S1A-009.txt 869 869 : A > Do you drink quite a lot of it drink
    ICE_GB/S1A-009.txt 1198 1198 I must < , > eat them < ICE-GB:S1A-009 #71 : eat
    ICE_GB/S1A-010.txt 958 958 to < , > actually eat it for one ' s eat
    ICE_GB/S1A-014.txt 3262 3262 you were advised not to drink water in Leningrad because they drink
    ICE_GB/S1A-016.txt 3290 3290 > I couldn't I couldn't sleep if I didn't read < sleep

    11.3.4 Window size

    Some studies require more detailed examination of the preceding or following context of the keyword. We can easily adjust the window size to suit our needs:

query4 <- kwic(ICE_GB, "eat", window = 20)
View(query4)
    docname from to pre keyword post pattern
    ICE_GB/S1A-006.txt 785 785 #49 : 1 : A > Yeah < ICE-GB:S1A-006 #50 : 1 : A > So I ' d rather eat beforehand just to avoid uh < , , > any problems there < ICE-GB:S1A-006 #51 : 1 : B > eat
    ICE_GB/S1A-009.txt 1198 1198 < , > in in the summer < ICE-GB:S1A-009 #70 : 1 : A > I must < , > eat them < ICE-GB:S1A-009 #71 : 1 : A > Yes < ICE-GB:S1A-009 #72 : 1 : B > You ought eat
    ICE_GB/S1A-010.txt 958 958 1 : B > You know I mean it would seem to be squandering it to < , > actually eat it for one ' s own enjoyment < , , > < ICE-GB:S1A-010 #49 : 1 : A > Mm eat
    ICE_GB/S1A-018.txt 455 455 s so < ICE-GB:S1A-018 #27 : 1 : A > What you should do is order one first and then eat it and then sort of carry on from there < laughter > < , > by which time you wouldn't eat
    ICE_GB/S1A-018.txt 498 498 second anyway so < laugh > < , > < ICE-GB:S1A-018 #28 : 1 : A > The bargain hunting eat < ICE-GB:S1A-018 #29 : 1 : B > So all right what did I have < ICE-GB:S1A-018 #30 : 1 eat
    ICE_GB/S1A-023.txt 1853 1853 > I can't bear it < , , > < ICE-GB:S1A-023 #121 : 1 : B > Oh name please eat something < , , > < ICE-GB:S1A-023 #122 : 1 : A > Oh actually Dad asked me if < eat

    11.3.5 Saving your output

    You can store your results in a spreadsheet file just as described in the unit on importing and exporting data.

    • Microsoft Excel (.xlsx)
    library(writexl) # required for writing files to MS Excel
    
    write_xlsx(query1, "myresults1.xlsx")
    • LibreOffice (.csv)
    write.csv(query1, "myresults1.csv")

    As soon as you have annotated your data, you can load .xlsx files back into R with read_xlsx() from the readxl package and .csv files using the Base R function read.csv().
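
For example, assuming you kept the file names used above (myresults1.xlsx and myresults1.csv), re-importing the annotated files might look like this:

library(readxl) # for reading MS Excel files

# Read the annotated Excel file back into R
annotated_xlsx <- read_xlsx("myresults1.xlsx")

# Alternatively, read the .csv version with Base R
annotated_csv <- read.csv("myresults1.csv")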

    11.4 Characterising the output

Recall our initial query for eat, whose output we stored in query1:

    docname from to pre keyword post pattern
    ICE_GB/S1A-006.txt 785 785 So I ' d rather eat beforehand just to avoid uh eat
    ICE_GB/S1A-009.txt 1198 1198 I must < , > eat them < ICE-GB:S1A-009 #71 : eat
    ICE_GB/S1A-010.txt 958 958 to < , > actually eat it for one ' s eat
    ICE_GB/S1A-018.txt 455 455 order one first and then eat it and then sort of eat
    ICE_GB/S1A-018.txt 498 498 A > The bargain hunting eat < ICE-GB:S1A-018 #29 : 1 eat
    ICE_GB/S1A-023.txt 1853 1853 B > Oh name please eat something < , , > eat

    First, we may be interested in obtaining some general information on our results, such as …

    • … how many tokens (= individual hits) does the query return?

    The nrow() function counts the number of rows in a data frame — these always correspond to the number of observations in our sample (here: 53).

    nrow(query1)
    [1] 53
    • … how many types (= distinct hits) does the query return?

    Apparently, there are 52 counts of eat in lower case and 1 in upper case. Their sum corresponds to our 53 observations in total.

    table(query1$keyword)
    
    eat Eat 
     52   1 
    • … how is the keyword distributed across corpus files?

    This question relates to the notion of dispersion: Is a keyword spread relatively evenly across corpus files or does it only occur in specific ones?

    # Frequency of keyword by docname
    query_distrib <- table(query1$docname, query1$keyword)
    
    # Show first few rows
    head(query_distrib)
                        
                         eat Eat
      ICE_GB/S1A-006.txt   1   0
      ICE_GB/S1A-009.txt   1   0
      ICE_GB/S1A-010.txt   1   0
      ICE_GB/S1A-018.txt   2   0
      ICE_GB/S1A-023.txt   1   0
      ICE_GB/S1A-025.txt   1   0
    # Create a simple dot plot
    dotplot(query_distrib, auto.key = list(columns = 2, title = "Tokens", cex.title = 1))

# Create a fancy plot (requires ggplot2, part of the tidyverse)
library(ggplot2)

ggplot(query1, aes(x = keyword)) + 
  geom_bar() +
  facet_wrap(~docname)

It seems that eat occurs at least once in most text categories (both spoken and written), but it is much more common in face-to-face conversations (S1A). This is not surprising: It is certainly more common to discuss food in a casual chat with friends than in an academic essay (unless, of course, its main subject matter is food). Dispersion measures can thus be viewed as indicators of the contextual preferences associated with lexemes or grammatical patterns.

The empirical study of dispersion has attracted a lot of attention in recent years (Gries 2020). One reason for this is the need for a dispersion measure that is minimally correlated with token frequency. One such measure is the Kullback-Leibler divergence \(KLD\), which comes from the field of information theory and is closely related to entropy.

    Mathematically, \(KLD\) measures the difference between two probability distributions \(p\) and \(q\).

    \[ KLD(p \parallel q) = \sum\limits_{x \in X} p(x) \log \frac{p(x)}{q(x)} \tag{11.1}\]

Let \(f\) denote the overall frequency of a keyword in the corpus, \(v_i\) its frequency in corpus part \(i\), \(s_i\) the size of corpus part \(i\) (as a fraction of the whole corpus), and \(n\) the total number of corpus parts. We thus compare the observed ("posterior") distribution of the keyword across parts, \(\frac{v_i}{f}\) for \(i = 1, ..., n\), with its prior distribution, which assumes the keyword is spread across parts in proportion to their sizes (hence the division by \(s_i\)).

    \[ KLD = \sum\limits_{i=1}^n \frac{v_i}{f} \times \log_2\left({\frac{v_i}{f} \times \frac{1}{s_i}}\right) \tag{11.2}\]
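
Before applying Equation 11.2 to the ICE-GB data, here is a minimal worked example with made-up figures (not from any corpus): a keyword occurring f = 10 times in a corpus with three equally sized parts.

# Toy illustration of the KLD formula (made-up figures)
v <- c(part1 = 8, part2 = 1, part3 = 1)  # keyword frequency in each corpus part
f <- sum(v)                              # overall frequency (10)
s <- c(1/3, 1/3, 1/3)                    # relative sizes of the corpus parts

# KLD of the observed distribution v/f relative to the part sizes s
kld_toy <- sum(v/f * log2(v/f * 1/s))
kld_toy # approx. 0.66, reflecting the uneven spread

# A perfectly even spread yields a KLD of 0
v_even <- c(part1 = 5, part2 = 5, part3 = 5)
sum(v_even/sum(v_even) * log2(v_even/sum(v_even) * 1/s)) # 0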

    In R, let’s calculate the dispersion of the verbs eat, drink, and sleep from query3.

    # Let's filter out the upper-case variants:
    query3_reduced <- query3[query3$keyword %in% c("eat", "drink", "sleep"),]
    table(query3_reduced$keyword)
    
    drink   eat sleep 
       48    52    41 
# Extract text categories
## separate_wider_delim() and filter() below require the tidyverse (tidyr and dplyr)
library(tidyverse)

query_registers <- separate_wider_delim(query3_reduced, cols = docname, delim = "-", names = c("Text_category", "File_number"))
    
    # Get separate data frames for each verb
    eat <- filter(query_registers, keyword == "eat")
    drink <- filter(query_registers, keyword == "drink")
    sleep <- filter(query_registers, keyword == "sleep")
    
    ## Get frequency distribution across files
    v_eat <- table(eat$Text_category)
    v_drink <- table(drink$Text_category)
    v_sleep <- table(sleep$Text_category)
    
    ## Get total frequencies
    f_eat <- nrow(eat)
    f_drink <- nrow(drink)
    f_sleep <- nrow(sleep)
    
    # The next step is a little trickier. First we need to find out how many distinct corpus parts there are in the ICE corpus.
    
    ## Check ICE-corpus structure and convert to data frame
    ICE_GB_str <- as.data.frame(summary(ICE_GB))
    
## Separate files from text categories
    ICE_GB_texts <- separate_wider_delim(ICE_GB_str, cols = Var1, delim = "-", names = c("Text_category", "File"))
    
    ## Get number of distinct text categories
    n <- length(unique(ICE_GB_texts$Text_category))
    
    ## Get proportions of distinct text categories (s)
    s <- table(ICE_GB_texts$Text_category)/sum(table(ICE_GB_texts$Text_category))
    
    ## Unfortunately not all of these corpus parts are represented in our queries. We need to correct the proportions in s for the missing ones!
    
    ## Store unique ICE text categories 
    ICE_unique_texts <- unique(ICE_GB_texts$Text_category)
    
    ## Make sure only those text proportions are included where the keywords actually occur
    s_eat <- s[match(names(v_eat), ICE_unique_texts)]
    s_drink <- s[match(names(v_drink), ICE_unique_texts)]
    s_sleep <- s[match(names(v_sleep), ICE_unique_texts)]
    
    # Compute KLD for each verb
    kld_eat <- sum(v_eat/f_eat * log2(v_eat/f_eat * 1/s_eat)); kld_eat
    [1] 0.6747268
    kld_drink <- sum(v_drink/f_drink * log2(v_drink/f_drink * 1/s_drink)); kld_drink
    [1] 0.8463608
    kld_sleep <- sum(v_sleep/f_sleep * log2(v_sleep/f_sleep * 1/s_sleep)); kld_sleep
    [1] 0.7047421
    # Plot
    kld_df <- data.frame(kld_eat, kld_drink, kld_sleep)
    
    barplot(as.numeric(kld_df), names.arg = names(kld_df), col = "steelblue",
            xlab = "Variable", ylab = "KLD Value (= deviance from even distribution)", main = "Dispersion of 'eat', 'drink', and 'sleep'")

The plot indicates that drink is the most unevenly distributed of the three verbs considered (high KLD \(\sim\) low dispersion), whereas eat appears to be somewhat more evenly distributed across corpus parts. The verb sleep occupies an intermediate position.

    11.5 “I need a proper user interface”: Some alternatives

There is a wide variety of concordancing software available, both free and paid. Among the most popular options are AntConc (Anthony 2020) and Sketch Engine (Kilgarriff et al. 2004). However, as Schweinberger (2024) notes, the exact processes these tools use to generate output are not always fully transparent, making them something of a “black box.” In contrast, programming languages like R or Python allow researchers to document each step of their analysis clearly, providing full transparency from start to finish.

    The following apps attempt to reconcile the need for an intuitive user interface with transparent data handling. The full source code is documented in the respective GitHub repositories.