1 | Mean Length of Sentence | MLS | Syntactic Complexity | Length of Production Unit | The Mean Length of Sentence (MLS) is a metric that calculates the average number of words per sentence. |
2 | Mean Length of Clause | MLC | Syntactic Complexity | Length of Production Unit | The Mean Length of Clause (MLC) is a metric that calculates the average number of words per clause. |
3 | Mean Length of T-unit | MLT | Syntactic Complexity | Length of Production Unit | The Mean Length of T-unit (MLT) is a metric that calculates the average number of words per "minimal terminable unit" (T-unit). |
4 | Sentence Complexity Ratio | CS | Syntactic Complexity | Sentence Complexity | The Sentence Complexity Ratio (CS) is a metric that quantifies the average number of clauses per sentence. |
5 | T-Unit Complexity Ratio | CT | Syntactic Complexity | Subordination | The T-Unit Complexity Ratio (CT), also known as "Clauses per T-unit," is a metric that quantifies the average number of clauses per T-unit. |
6 | Complex T-Unit Ratio | cTT | Syntactic Complexity | Subordination | The Complex T-Unit Ratio (cTT) is a metric that calculates the average number of complex T-units per T-Unit. |
7 | Dependent Clause Ratio | dCC | Syntactic Complexity | Subordination | The Dependent Clause Ratio (dCC) is a metric that calculates the average number of dependent/subordinate clauses per clause. |
8 | Dependent Clauses per T-Unit | dCT | Syntactic Complexity | Subordination | The Dependent Clauses per T-Unit (dCT) is a metric that quantifies the average number of dependent/subordinate clauses per T-Unit. |
9 | Coordinate Phrases per Clause | cPC | Syntactic Complexity | Coordination | The Coordinate Phrases per Clause (cPC) is a metric that quantifies the average number of coordinated noun, verb, adjective, and adverb phrases per clause. |
10 | Coordinate Phrases per T-Unit | cPT | Syntactic Complexity | Coordination | The Coordinate Phrases per T-Unit (cPT) is a metric that quantifies the average number of coordinated noun, verb, adjective, and adverb phrases per T-unit. |
11 | Sentence Coordination Ratio | TS | Syntactic Complexity | Coordination | The Sentence Coordination Ratio (TS) is a metric used to quantify the average number of T-units per sentence. |
12 | Complex Nominals per Clause | cNC | Syntactic Complexity | Particular Structures | The Complex Nominals per Clause (cNC) is a metric that quantifies the average number of complex nominals (nouns plus adjectives, possessives, prepositional phrases, relative clauses, participles, or appositives) per clause. |
13 | Complex Nominals per T-Unit | cNT | Syntactic Complexity | Particular Structures | The Complex Nominals per T-unit (cNT) is a metric that quantifies the average number of complex nominals (nouns plus adjectives, possessives, prepositional phrases, relative clauses, participles, or appositives) per T-unit. |
14 | Complex Nominals per Sentence | cNS | Syntactic Complexity | Particular Structures | The Complex Nominals per Sentence (cNS) is a metric that quantifies the average number of complex nominals (nouns plus adjectives, possessives, prepositional phrases, relative clauses, participles, or appositives) per sentence. |
15 | Verb Phrases per T-Unit | VPT | Syntactic Complexity | Particular Structures | The Verb Phrases per T-Unit (VPT) is a metric that calculates the average number of verb phrases per T-unit. |
16 | Noun Phrase Pre-Modification | NPpre | Syntactic Complexity | Particular Structures | The Noun Phrase Pre-Modification (NPpre) is a metric that calculates the average number of modifying words appearing before the main noun. |
17 | Noun Phrase Post-Modification | NPpost | Syntactic Complexity | Particular Structures | The Noun Phrase Post-Modification (NPpost) is a metric that calculates the average number of modifying words appearing after the main noun. |
18 | Kolmogorov Deflate | KDbase | Syntactic Complexity | Information-Theoretic | The Kolmogorov Deflate (KDbase) is a metric that calculates the ratio of the number of bytes needed to store a text (written or transcribed speech) after DEFLATE compression to the number of bytes needed to store the plain text. |
19 | Number of Different Words | NDW | Lexical Complexity | Lexical Diversity | The Number of Different Words (NDW) is a metric that quantifies the average number of unique words per sentence. |
20 | Type-Token Ratio | TTR | Lexical Complexity | Lexical Diversity | The Type-Token Ratio (TTR) is a metric that calculates the ratio of the number of unique word types (distinct words) to the total number of word tokens (all words). |
21 | Corrected Type-Token Ratio | cTTR | Lexical Complexity | Lexical Diversity | The Corrected Type-Token Ratio (cTTR) is a variant of TTR that factors in the length of the speech/text sample. It calculates the ratio between the number of unique word types (distinct words) and the square root of two times the total number of word tokens (all words). |
22 | Root Type-Token Ratio | rTTR | Lexical Complexity | Lexical Diversity | The Root Type-Token Ratio (rTTR) is a variant of TTR that factors in the length of the speech/text sample. It calculates the ratio between the number of unique word types (distinct words) and the square root of the total number of word tokens (all words). |
23 | Bilogarithmic Type-Token Ratio | bTTR | Lexical Complexity | Lexical Diversity | The Bilogarithmic Type-Token Ratio (bTTR) is a variant of TTR that factors in the length of the speech/text sample. It calculates the ratio between the logarithm of the number of unique word types (distinct words) and the logarithm of the total number of word tokens (all words). |
24 | Uber Index | Uber | Lexical Complexity | Lexical Diversity | The Uber Index (Uber) is a variant of TTR that factors in the length of the document. It is equal to the squared logarithm of the total number of word tokens (N) divided by the logarithm of the ratio of N to the number of unique word types. |
25 | Lexical Word Variation | lwVAR | Lexical Complexity | Lexical Diversity | The Lexical Word Variation (lwVAR) is a metric that relates the total count of unique lexical words to the overall count of lexical words. |
26 | Verb Variation-I | vVAR1 | Lexical Complexity | Lexical Diversity | The Verb Variation-I (vVAR1) is a metric that relates the total count of unique lexical verbs to the overall count of lexical verbs. |
27 | Squared Verb Variation | svVAR1 | Lexical Complexity | Lexical Diversity | The Squared Verb Variation (svVAR1) is a metric that is calculated by taking the square of the total number of unique verbs and then dividing it by the overall number of verb occurrences. |
28 | Corrected Verb Variation-1 | cvVAR1 | Lexical Complexity | Lexical Diversity | The Corrected Verb Variation-1 (cvVAR1) is a metric that is calculated by dividing the total number of unique verbs by the square root of twice the number of verb occurrences. |
29 | Verb Variation-2 | vVAR2 | Lexical Complexity | Lexical Diversity | The Verb Variation-2 (vVAR2) is a metric that relates the total count of unique lexical verbs to the overall count of lexical words. |
30 | Noun Variation | nVAR | Lexical Complexity | Lexical Diversity | The Noun Variation (nVAR) is a metric that is calculated by dividing the total number of unique nouns by the overall number of noun occurrences. |
31 | Adjective Variation | adjVAR | Lexical Complexity | Lexical Diversity | The Adjective Variation (adjVAR) is a metric that is calculated by dividing the total number of unique adjectives by the overall number of adjective occurrences. |
32 | Adverb Variation | advVAR | Lexical Complexity | Lexical Diversity | The Adverb Variation (advVAR) is a metric that is calculated by dividing the total number of unique adverbs by the overall number of adverb occurrences. |
33 | Modifier Variation | modVAR | Lexical Complexity | Lexical Diversity | The Modifier Variation (modVAR) is a metric that is calculated by summing the total number of unique adjectives and the total number of unique adverbs and dividing this sum by the total number of lexical words. |
34 | Lexical Density | LD | Lexical Complexity | Lexical Density | The Lexical Density (LD) is a metric that calculates the proportion of content/lexical words. |
35 | Mean Length of Word in Characters | MLWc | Lexical Complexity | Lexical Sophistication | The Mean Length of Word in Characters (MLWc) is a metric that calculates the average number of characters or letters per word. |
36 | Mean Length of Word in Syllables | MLWs | Lexical Complexity | Lexical Sophistication | The Mean Length of Word in Syllables (MLWs) is a metric that calculates the average number of syllables per word. |
37 | Beyond 2000 Words ANC | B2KBANC | Lexical Complexity | Lexical Sophistication | The Beyond 2000 Words ANC (B2KBANC) is a metric that calculates the proportion of words that are not among the 2,000 most frequent words in a reference corpus, the American National Corpus (ANC). |
38 | Beyond 2000 Words BNC | B2KBBNC | Lexical Complexity | Lexical Sophistication | The Beyond 2000 Words BNC (B2KBBNC) is a metric that calculates the proportion of words that are not among the 2,000 most frequent words in a reference corpus, the British National Corpus (BNC). |
39 | Non-NGSL Words | NNGSL | Lexical Complexity | Lexical Sophistication | The Non-NGSL Words (NNGSL) is a metric that calculates the proportion of words that do not appear on a reference list of generally known words, the New General Service List (NGSL). |
40 | Non-Stop Word Ratio | NSW | Lexical Complexity | Lexical Sophistication | The Non-Stop Word Ratio (NSW) is a metric that calculates the proportion of words that are not considered "stop words". |
41 | Unigram Academic Normalized Log Frequency | 1GNLFa | Stylistics | Academic Language | Unigram Academic Normalized Log Frequency (1GNLFa) is a metric that quantifies the prominence of academic language in a document by normalizing the weighted logarithmic frequency of unigrams (single words) in a reference corpus against the document's total word count. |
42 | Bigram Academic Normalized Log Frequency | 2GNLFa | Stylistics | Academic Language | Bigram Academic Normalized Log Frequency (2GNLFa) is a metric that quantifies the prominence of academic language in a document by normalizing the weighted logarithmic frequency of bigrams (two-word combinations) in a reference corpus against the document's total word count. |
43 | Trigram Academic Normalized Log Frequency | 3GNLFa | Stylistics | Academic Language | Trigram Academic Normalized Log Frequency (3GNLFa) is a metric that quantifies the prominence of academic language in a document by normalizing the weighted logarithmic frequency of trigrams (three-word combinations) in a reference corpus against the document's total word count. |
44 | Fourgram Academic Normalized Log Frequency | 4GNLFa | Stylistics | Academic Language | Fourgram Academic Normalized Log Frequency (4GNLFa) is a metric that quantifies the prominence of academic language in a document by normalizing the weighted logarithmic frequency of fourgrams (four-word combinations) in a reference corpus against the document's total word count. |
45 | Fivegram Academic Normalized Log Frequency | 5GNLFa | Stylistics | Academic Language | Fivegram Academic Normalized Log Frequency (5GNLFa) is a metric that quantifies the prominence of academic language in a document by normalizing the weighted logarithmic frequency of fivegrams (five-word combinations) in a reference corpus against the document's total word count. |
46 | Unigram Weblog Normalized Log Frequency | 1GNLFb | Stylistics | Weblog Language | Unigram Weblog Normalized Log Frequency (1GNLFb) is a metric that quantifies the prominence of weblog language in a document by normalizing the weighted logarithmic frequency of unigrams (single words) in a reference corpus against the document's total word count. |
47 | Bigram Weblog Normalized Log Frequency | 2GNLFb | Stylistics | Weblog Language | Bigram Weblog Normalized Log Frequency (2GNLFb) is a metric that quantifies the prominence of weblog language in a document by normalizing the weighted logarithmic frequency of bigrams (two-word combinations) in a reference corpus against the document's total word count. |
48 | Trigram Weblog Normalized Log Frequency | 3GNLFb | Stylistics | Weblog Language | Trigram Weblog Normalized Log Frequency (3GNLFb) is a metric that quantifies the prominence of weblog language in a document by normalizing the weighted logarithmic frequency of trigrams (three-word combinations) in a reference corpus against the document's total word count. |
49 | Fourgram Weblog Normalized Log Frequency | 4GNLFb | Stylistics | Weblog Language | Fourgram Weblog Normalized Log Frequency (4GNLFb) is a metric that quantifies the prominence of weblog language in a document by normalizing the weighted logarithmic frequency of fourgrams (four-word combinations) in a reference corpus against the document's total word count. |
50 | Fivegram Weblog Normalized Log Frequency | 5GNLFb | Stylistics | Weblog Language | Fivegram Weblog Normalized Log Frequency (5GNLFb) is a metric that quantifies the prominence of weblog language in a document by normalizing the weighted logarithmic frequency of fivegrams (five-word combinations) in a reference corpus against the document's total word count. |
51 | Unigram Fiction Normalized Log Frequency | 1GNLFf | Stylistics | Fiction Language | Unigram Fiction Normalized Log Frequency (1GNLFf) is a metric that quantifies the prominence of the language of fiction in a document by normalizing the weighted logarithmic frequency of unigrams (single words) in a reference corpus against the document's total word count. |
52 | Bigram Fiction Normalized Log Frequency | 2GNLFf | Stylistics | Fiction Language | Bigram Fiction Normalized Log Frequency (2GNLFf) is a metric that quantifies the prominence of the language of fiction in a document by normalizing the weighted logarithmic frequency of bigrams (two-word combinations) in a reference corpus against the document's total word count. |
53 | Trigram Fiction Normalized Log Frequency | 3GNLFf | Stylistics | Fiction Language | Trigram Fiction Normalized Log Frequency (3GNLFf) is a metric that quantifies the prominence of the language of fiction in a document by normalizing the weighted logarithmic frequency of trigrams (three-word combinations) in a reference corpus against the document's total word count. |
54 | Fourgram Fiction Normalized Log Frequency | 4GNLFf | Stylistics | Fiction Language | Fourgram Fiction Normalized Log Frequency (4GNLFf) is a metric that quantifies the prominence of the language of fiction in a document by normalizing the weighted logarithmic frequency of fourgrams (four-word combinations) in a reference corpus against the document's total word count. |
55 | Fivegram Fiction Normalized Log Frequency | 5GNLFf | Stylistics | Fiction Language | Fivegram Fiction Normalized Log Frequency (5GNLFf) is a metric that quantifies the prominence of the language of fiction in a document by normalizing the weighted logarithmic frequency of fivegrams (five-word combinations) in a reference corpus against the document's total word count. |
56 | Unigram Magazine Normalized Log Frequency | 1GNLFm | Stylistics | Magazine Language | Unigram Magazine Normalized Log Frequency (1GNLFm) is a metric that quantifies the prominence of magazine language in a document by normalizing the weighted logarithmic frequency of unigrams (single words) in a reference corpus against the document's total word count. |
57 | Bigram Magazine Normalized Log Frequency | 2GNLFm | Stylistics | Magazine Language | Bigram Magazine Normalized Log Frequency (2GNLFm) is a metric that quantifies the prominence of magazine language in a document by normalizing the weighted logarithmic frequency of bigrams (two-word combinations) in a reference corpus against the document's total word count. |
58 | Trigram Magazine Normalized Log Frequency | 3GNLFm | Stylistics | Magazine Language | Trigram Magazine Normalized Log Frequency (3GNLFm) is a metric that quantifies the prominence of magazine language in a document by normalizing the weighted logarithmic frequency of trigrams (three-word combinations) in a reference corpus against the document's total word count. |
59 | Fourgram Magazine Normalized Log Frequency | 4GNLFm | Stylistics | Magazine Language | Fourgram Magazine Normalized Log Frequency (4GNLFm) is a metric that quantifies the prominence of magazine language in a document by normalizing the weighted logarithmic frequency of fourgrams (four-word combinations) in a reference corpus against the document's total word count. |
60 | Fivegram Magazine Normalized Log Frequency | 5GNLFm | Stylistics | Magazine Language | Fivegram Magazine Normalized Log Frequency (5GNLFm) is a metric that quantifies the prominence of magazine language in a document by normalizing the weighted logarithmic frequency of fivegrams (five-word combinations) in a reference corpus against the document's total word count. |
61 | Unigram News Normalized Log Frequency | 1GNLFn | Stylistics | News Language | Unigram News Normalized Log Frequency (1GNLFn) is a metric that quantifies the prominence of news language in a document by normalizing the weighted logarithmic frequency of unigrams (single words) in a reference corpus against the document's total word count. |
62 | Bigram News Normalized Log Frequency | 2GNLFn | Stylistics | News Language | Bigram News Normalized Log Frequency (2GNLFn) is a metric that quantifies the prominence of news language in a document by normalizing the weighted logarithmic frequency of bigrams (two-word combinations) in a reference corpus against the document's total word count. |
63 | Trigram News Normalized Log Frequency | 3GNLFn | Stylistics | News Language | Trigram News Normalized Log Frequency (3GNLFn) is a metric that quantifies the prominence of news language in a document by normalizing the weighted logarithmic frequency of trigrams (three-word combinations) in a reference corpus against the document's total word count. |
64 | Fourgram News Normalized Log Frequency | 4GNLFn | Stylistics | News Language | Fourgram News Normalized Log Frequency (4GNLFn) is a metric that quantifies the prominence of news language in a document by normalizing the weighted logarithmic frequency of fourgrams (four-word combinations) in a reference corpus against the document's total word count. |
65 | Fivegram News Normalized Log Frequency | 5GNLFn | Stylistics | News Language | Fivegram News Normalized Log Frequency (5GNLFn) is a metric that quantifies the prominence of news language in a document by normalizing the weighted logarithmic frequency of fivegrams (five-word combinations) in a reference corpus against the document's total word count. |
66 | Unigram Spoken Normalized Log Frequency | 1GNLFs | Stylistics | Spoken Language | Unigram Spoken Normalized Log Frequency (1GNLFs) is a metric that quantifies the prominence of spoken language in a document by normalizing the weighted logarithmic frequency of unigrams (single words) in a reference corpus against the document's total word count. |
67 | Bigram Spoken Normalized Log Frequency | 2GNLFs | Stylistics | Spoken Language | Bigram Spoken Normalized Log Frequency (2GNLFs) is a metric that quantifies the prominence of spoken language in a document by normalizing the weighted logarithmic frequency of bigrams (two-word combinations) in a reference corpus against the document's total word count. |
68 | Trigram Spoken Normalized Log Frequency | 3GNLFs | Stylistics | Spoken Language | Trigram Spoken Normalized Log Frequency (3GNLFs) is a metric that quantifies the prominence of spoken language in a document by normalizing the weighted logarithmic frequency of trigrams (three-word combinations) in a reference corpus against the document's total word count. |
69 | Fourgram Spoken Normalized Log Frequency | 4GNLFs | Stylistics | Spoken Language | Fourgram Spoken Normalized Log Frequency (4GNLFs) is a metric that quantifies the prominence of spoken language in a document by normalizing the weighted logarithmic frequency of fourgrams (four-word combinations) in a reference corpus against the document's total word count. |
70 | Fivegram Spoken Normalized Log Frequency | 5GNLFs | Stylistics | Spoken Language | Fivegram Spoken Normalized Log Frequency (5GNLFs) is a metric that quantifies the prominence of spoken language in a document by normalizing the weighted logarithmic frequency of fivegrams (five-word combinations) in a reference corpus against the document's total word count. |
71 | Unigram TV/Media Normalized Log Frequency | 1GNLFtv | Stylistics | TV/Media Language | Unigram TV/Media Normalized Log Frequency (1GNLFtv) is a metric that quantifies the prominence of TV/media language in a document by normalizing the weighted logarithmic frequency of unigrams (single words) in a reference corpus against the document's total word count. |
72 | Bigram TV/Media Normalized Log Frequency | 2GNLFtv | Stylistics | TV/Media Language | Bigram TV/Media Normalized Log Frequency (2GNLFtv) is a metric that quantifies the prominence of TV/media language in a document by normalizing the weighted logarithmic frequency of bigrams (two-word combinations) in a reference corpus against the document's total word count. |
73 | Trigram TV/Media Normalized Log Frequency | 3GNLFtv | Stylistics | TV/Media Language | Trigram TV/Media Normalized Log Frequency (3GNLFtv) is a metric that quantifies the prominence of TV/media language in a document by normalizing the weighted logarithmic frequency of trigrams (three-word combinations) in a reference corpus against the document's total word count. |
74 | Fourgram TV/Media Normalized Log Frequency | 4GNLFtv | Stylistics | TV/Media Language | Fourgram TV/Media Normalized Log Frequency (4GNLFtv) is a metric that quantifies the prominence of TV/media language in a document by normalizing the weighted logarithmic frequency of fourgrams (four-word combinations) in a reference corpus against the document's total word count. |
75 | Fivegram TV/Media Normalized Log Frequency | 5GNLFtv | Stylistics | TV/Media Language | Fivegram TV/Media Normalized Log Frequency (5GNLFtv) is a metric that quantifies the prominence of TV/media language in a document by normalizing the weighted logarithmic frequency of fivegrams (five-word combinations) in a reference corpus against the document's total word count. |
76 | Unigram Web Normalized Log Frequency | 1GNLFw | Stylistics | Web Language | Unigram Web Normalized Log Frequency (1GNLFw) is a metric that quantifies the prominence of web language in a document by normalizing the weighted logarithmic frequency of unigrams (single words) in a reference corpus against the document's total word count. |
77 | Bigram Web Normalized Log Frequency | 2GNLFw | Stylistics | Web Language | Bigram Web Normalized Log Frequency (2GNLFw) is a metric that quantifies the prominence of web language in a document by normalizing the weighted logarithmic frequency of bigrams (two-word combinations) in a reference corpus against the document's total word count. |
78 | Trigram Web Normalized Log Frequency | 3GNLFw | Stylistics | Web Language | Trigram Web Normalized Log Frequency (3GNLFw) is a metric that quantifies the prominence of web language in a document by normalizing the weighted logarithmic frequency of trigrams (three-word combinations) in a reference corpus against the document's total word count. |
79 | Fourgram Web Normalized Log Frequency | 4GNLFw | Stylistics | Web Language | Fourgram Web Normalized Log Frequency (4GNLFw) is a metric that quantifies the prominence of web language in a document by normalizing the weighted logarithmic frequency of fourgrams (four-word combinations) in a reference corpus against the document's total word count. |
80 | Fivegram Web Normalized Log Frequency | 5GNLFw | Stylistics | Web Language | Fivegram Web Normalized Log Frequency (5GNLFw) is a metric that quantifies the prominence of web language in a document by normalizing the weighted logarithmic frequency of fivegrams (five-word combinations) in a reference corpus against the document's total word count. |
81 | Next-Sentence Lemma Overlap | NSLO | Cohesion | Lexical Overlap | Next-Sentence Lemma Overlap (NSLO) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the number of unique lemmas (the base form of a word) in the next sentence by the total number of unique lemmas in both sentences. |
82 | Next-Sentence Lemma Overlap (sentence normalized) | NSLOsn | Cohesion | Lexical Overlap | Next-Sentence Lemma Overlap (sentence normalized) (NSLOsn) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the count of unique lemmas (the base form of a word) in the subsequent sentence by the total number of sentences in a document. |
83 | Next-Sentence Lemma Overlap Binary | NSLOb | Cohesion | Lexical Overlap | Next-Sentence Lemma Overlap Binary (NSLOb) is a metric that measures the degree of overlap between two consecutive sentences in a document. It is calculated by summing the indicator function over all pairs of adjacent sentences, where the indicator function returns 1 if there is a non-empty intersection between the lemma sets of two consecutive sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
84 | Next-Two Sentences Lemma Overlap | N2SLO | Cohesion | Lexical Overlap | Next-Two Sentences Lemma Overlap (N2SLO) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique lemmas (the base form of a word) in those next sentences by the total number of unique lemmas in all sentences. |
85 | Next-Two Sentences Lemma Overlap (sentence normalized) | N2SLOsn | Cohesion | Lexical Overlap | Next-Two Sentences Lemma Overlap (sentence normalized) (N2SLOsn) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique lemmas (the base form of a word) in those next sentences by the total number of sentences in a document. |
86 | Next-Two Sentences Lemma Overlap Binary | N2SLOb | Cohesion | Lexical Overlap | Next-Two Sentences Lemma Overlap Binary (N2SLOb) is a metric that measures the degree of overlap between a sentence and the next two sentences in a document. It is calculated by summing an indicator function over the sentences of the document, where the indicator function returns 1 if there is a non-empty intersection between the lemma set of the current sentence and the combined lemma set of the next two sentences, and 0 otherwise. |
87 | Next-Sentence Content Word Overlap | NSCWO | Cohesion | Lexical Overlap | Next-Sentence Content Word Overlap (NSCWO) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the number of unique content words in the next sentence by the total number of content words in both sentences. |
88 | Next-Sentence Content Word Overlap (sentence normalized) | NSCWOsn | Cohesion | Lexical Overlap | Next-Sentence Content Word Overlap (sentence normalized) (NSCWOsn) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the count of content words in the subsequent sentence by the total number of sentences in a document. |
89 | Next-Sentence Content Word Overlap Binary | NSCWOb | Cohesion | Lexical Overlap | Next-Sentence Content Word Overlap Binary (NSCWOb) is a metric that measures the degree of overlap between two consecutive sentences in a document. It is calculated by summing the indicator function over all pairs of adjacent sentences, where the indicator function returns 1 if there is a non-empty intersection between the content word sets of two consecutive sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
90 | Next-Two Sentences Content Word Overlap | N2SCWO | Cohesion | Lexical Overlap | Next-Two Sentences Content Word Overlap (N2SCWO) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique content words in those next sentences by the total number of unique content words in all sentences. |
91 | Next-Two Sentences Content Word Overlap (sentence normalized) | N2SCWOsn | Cohesion | Lexical Overlap | Next-Two Sentences Content Word Overlap (sentence normalized) (N2SCWOsn) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique content words in those next sentences by the total number of sentences in a document. |
92 | Next-Two Sentences Content Word Overlap Binary | N2SCWOb | Cohesion | Lexical Overlap | Next-Two Sentences Content Word Overlap Binary (N2SCWOb) is a metric that measures the degree of overlap between a sentence and the next two sentences in a document. It is calculated by summing an indicator function over the sentences of the document, where the indicator function returns 1 if there is a non-empty intersection between the content word set of the current sentence and the combined content word set of the next two sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
93 | Next-Sentence Function Word Overlap | NSFWO | Cohesion | Lexical Overlap | Next-Sentence Function Word Overlap (NSFWO) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the number of unique function words in the next sentence by the total number of function words in both sentences. |
94 | Next-Sentence Function Word Overlap (sentence normalized) | NSFWOsn | Cohesion | Lexical Overlap | Next-Sentence Function Word Overlap (sentence normalized) (NSFWOsn) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the count of function words in the subsequent sentence by the total number of sentences in a document. |
95 | Next-Sentence Function Word Overlap Binary | NSFWOb | Cohesion | Lexical Overlap | Next-Sentence Function Word Overlap Binary (NSFWOb) is a metric that measures the degree of overlap between two consecutive sentences in a document. It is calculated by summing the indicator function over all pairs of adjacent sentences, where the indicator function returns 1 if there is a non-empty intersection between the function word sets of two consecutive sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
96 | Next-Two Sentences Function Word Overlap | N2SFWO | Cohesion | Lexical Overlap | Next-Two Sentences Function Word Overlap (N2SFWO) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique function words in those next sentences by the total number of unique function words in all sentences. |
97 | Next-Two Sentences Function Word Overlap (sentence normalized) | N2SFWOsn | Cohesion | Lexical Overlap | Next-Two Sentences Function Word Overlap (sentence normalized) (N2SFWOsn) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique function words in those next sentences by the total number of sentences in a document. |
98 | Next-Two Sentences Function Word Overlap Binary | N2SFWOb | Cohesion | Lexical Overlap | Next-Two Sentences Function Word Overlap Binary (N2SFWOb) is a metric that measures the degree of overlap between a sentence and the next two sentences in a document. It is calculated by summing an indicator function over the sentences of the document, where the indicator function returns 1 if there is a non-empty intersection between the function word set of the current sentence and the combined function word set of the next two sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
99 | Next-Sentence Noun Overlap | NSNO | Cohesion | Lexical Overlap | Next-Sentence Noun Overlap (NSNO) is a cohesion measure that quantifies the degree of overlap between two consecutive sentences by dividing the number of unique nouns in the next sentence by the total number of nouns in both sentences. |
100 | Next-Sentence Noun Overlap (sentence normalized) | NSNOsn | Cohesion | Lexical Overlap | Next-Sentence Noun Overlap (sentence normalized) (NSNOsn) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the count of nouns in the subsequent sentence by the total number of sentences in a document. |
101 | Next-Sentence Noun Overlap Binary | NSNOb | Cohesion | Lexical Overlap | Next-Sentence Noun Overlap Binary (NSNOb) is a metric that measures the degree of overlap between two consecutive sentences in a document. It is calculated by summing the indicator function over all pairs of adjacent sentences, where the indicator function returns 1 if there is a non-empty intersection between the noun sets of two consecutive sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
102 | Next-Two Sentences Noun Overlap | N2SNO | Cohesion | Lexical Overlap | Next-Two Sentences Noun Overlap (N2SNO) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique nouns in those next sentences by the total number of unique nouns in all sentences. |
103 | Next-Two Sentences Noun Overlap (sentence normalized) | N2SNOsn | Cohesion | Lexical Overlap | Next-Two Sentences Noun Overlap (sentence normalized) (N2SNOsn) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique nouns in those next sentences by the total number of sentences in a document. |
104 | Next-Two Sentences Noun Overlap Binary | N2SNOb | Cohesion | Lexical Overlap | Next-Two Sentences Noun Overlap Binary (N2SNOb) is a metric that measures the degree of overlap between a sentence and the next two sentences in a document. It is calculated by summing an indicator function over the sentences of the document, where the indicator function returns 1 if there is a non-empty intersection between the noun set of the current sentence and the combined noun set of the next two sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
105 | Next-Sentence Verb Overlap | NSVO | Cohesion | Lexical Overlap | Next-Sentence Verb Overlap (NSVO) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the number of unique verbs in the next sentence by the total number of verbs in both sentences. |
106 | Next-Sentence Verb Overlap (sentence normalized) | NSVOsn | Cohesion | Lexical Overlap | Next-Sentence Verb Overlap (sentence normalized) (NSVOsn) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the count of verbs in the subsequent sentence by the total number of sentences in a document. |
107 | Next-Sentence Verb Overlap Binary | NSVOb | Cohesion | Lexical Overlap | Next-Sentence Verb Overlap Binary (NSVOb) is a metric that measures the degree of overlap between two consecutive sentences in a document. It is calculated by summing the indicator function over all pairs of adjacent sentences, where the indicator function returns 1 if there is a non-empty intersection between the verb sets of two consecutive sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
108 | Next-Two Sentences Verb Overlap | N2SVO | Cohesion | Lexical Overlap | Next-Two Sentences Verb Overlap (N2SVO) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique verbs in those next sentences by the total number of unique verbs in all sentences. |
109 | Next-Two Sentences Verb Overlap (sentence normalized) | N2SVOsn | Cohesion | Lexical Overlap | Next-Two Sentences Verb Overlap (sentence normalized) (N2SVOsn) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique verbs in those next sentences by the total number of sentences in a document. |
110 | Next-Two Sentences Verb Overlap Binary | N2SVOb | Cohesion | Lexical Overlap | Next-Two Sentences Verb Overlap Binary (N2SVOb) is a metric that measures the degree of overlap between a sentence and the next two sentences in a document. It is calculated by summing an indicator function over the sentences of the document, where the indicator function returns 1 if there is a non-empty intersection between the verb set of the current sentence and the combined verb set of the next two sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
111 | Next-Sentence Adjective Overlap | NSAO | Cohesion | Lexical Overlap | Next-Sentence Adjective Overlap (NSAO) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the number of unique adjectives in the next sentence by the total number of adjectives in both sentences. |
112 | Next-Sentence Adjective Overlap (sentence normalized) | NSAOsn | Cohesion | Lexical Overlap | Next-Sentence Adjective Overlap (sentence normalized) (NSAOsn) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the count of adjectives in the subsequent sentence by the total number of sentences in a document. |
113 | Next-Sentence Adjective Overlap Binary | NSAOb | Cohesion | Lexical Overlap | Next-Sentence Adjective Overlap Binary (NSAOb) is a metric that measures the degree of overlap between two consecutive sentences in a document. It is calculated by summing the indicator function over all pairs of adjacent sentences, where the indicator function returns 1 if there is a non-empty intersection between the adjective sets of two consecutive sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
114 | Next-Two Sentences Adjective Overlap | N2SAO | Cohesion | Lexical Overlap | Next-Two Sentences Adjective Overlap (N2SAO) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique adjectives in those next sentences by the total number of unique adjectives in all sentences. |
115 | Next-Two Sentences Adjective Overlap (sentence normalized) | N2SAOsn | Cohesion | Lexical Overlap | Next-Two Sentences Adjective Overlap (sentence normalized) (N2SAOsn) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique adjectives in those next sentences by the total number of sentences in a document. |
116 | Next-Two Sentences Adjective Overlap Binary | N2SAOb | Cohesion | Lexical Overlap | Next-Two Sentences Adjective Overlap Binary (N2SAOb) is a metric that measures the degree of overlap between a sentence and the next two sentences in a document. It is calculated by summing an indicator function over the sentences of the document, where the indicator function returns 1 if there is a non-empty intersection between the adjective set of the current sentence and the combined adjective set of the next two sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
117 | Next-Sentence Adverb Overlap | NSAdvO | Cohesion | Lexical Overlap | Next-Sentence Adverb Overlap (NSAdvO) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the number of unique adverbs in the next sentence by the total number of adverbs in both sentences. |
118 | Next-Sentence Adverb Overlap (sentence normalized) | NSAdvOsn | Cohesion | Lexical Overlap | Next-Sentence Adverb Overlap (sentence normalized) (NSAdvOsn) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the count of adverbs in the subsequent sentence by the total number of sentences in a document. |
119 | Next-Sentence Adverb Overlap Binary | NSAdvOb | Cohesion | Lexical Overlap | Next-Sentence Adverb Overlap Binary (NSAdvOb) is a metric that measures the degree of overlap between two consecutive sentences in a document. It is calculated by summing the indicator function over all pairs of adjacent sentences, where the indicator function returns 1 if there is a non-empty intersection between the adverb sets of two consecutive sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
120 | Next-Two Sentences Adverb Overlap | N2SAdvO | Cohesion | Lexical Overlap | Next-Two Sentences Adverb Overlap (N2SAdvO) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique adverbs in those next sentences by the total number of unique adverbs in all sentences. |
121 | Next-Two Sentences Adverb Overlap (sentence normalized) | N2SAdvOsn | Cohesion | Lexical Overlap | Next-Two Sentences Adverb Overlap (sentence normalized) (N2SAdvOsn) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique adverbs in those next sentences by the total number of sentences in a document. |
122 | Next-Two Sentences Adverb Overlap Binary | N2SAdvOb | Cohesion | Lexical Overlap | Next-Two Sentences Adverb Overlap Binary (N2SAdvOb) is a metric that measures the degree of overlap between a sentence and the next two sentences in a document. It is calculated by summing an indicator function over the sentences of the document, where the indicator function returns 1 if there is a non-empty intersection between the adverb set of the current sentence and the combined adverb set of the next two sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
123 | Next-Sentence Pronoun Overlap | NSPO | Cohesion | Lexical Overlap | Next-Sentence Pronoun Overlap (NSPO) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the number of unique pronouns in the next sentence by the total number of pronouns in both sentences. |
124 | Next-Sentence Pronoun Overlap (sentence normalized) | NSPOsn | Cohesion | Lexical Overlap | Next-Sentence Pronoun Overlap (sentence normalized) (NSPOsn) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the count of pronouns in the subsequent sentence by the total number of sentences in a document. |
125 | Next-Sentence Pronoun Overlap Binary | NSPOb | Cohesion | Lexical Overlap | Next-Sentence Pronoun Overlap Binary (NSPOb) is a metric that measures the degree of overlap between two consecutive sentences in a document. It is calculated by summing the indicator function over all pairs of adjacent sentences, where the indicator function returns 1 if there is a non-empty intersection between the pronoun sets of two consecutive sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
126 | Next-Two Sentences Pronoun Overlap | N2SPO | Cohesion | Lexical Overlap | Next-Two Sentences Pronoun Overlap (N2SPO) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique pronouns in those next sentences by the total number of unique pronouns in all sentences. |
127 | Next-Two Sentences Pronoun Overlap (sentence normalized) | N2SPOsn | Cohesion | Lexical Overlap | Next-Two Sentences Pronoun Overlap (sentence normalized) (N2SPOsn) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique pronouns in those next sentences by the total number of sentences in a document. |
128 | Next-Two Sentences Pronoun Overlap Binary | N2SPOb | Cohesion | Lexical Overlap | Next-Two Sentences Pronoun Overlap Binary (N2SPOb) is a metric that measures the degree of overlap between a sentence and the next two sentences in a document. It is calculated by summing an indicator function over the sentences of the document, where the indicator function returns 1 if there is a non-empty intersection between the pronoun set of the current sentence and the combined pronoun set of the next two sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
129 | Next-Sentence Argument Overlap | NSARGO | Cohesion | Lexical Overlap | Next-Sentence Argument Overlap (NSARGO) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the number of unique arguments (noun and pronoun lemmas) in the next sentence by the total number of arguments in both sentences. |
130 | Next-Sentence Argument Overlap (sentence normalized) | NSARGOsn | Cohesion | Lexical Overlap | Next-Sentence Argument Overlap (sentence normalized) (NSARGOsn) is a metric that quantifies the degree of overlap between two consecutive sentences by dividing the count of arguments (noun and pronoun lemmas) in the subsequent sentence by the total number of sentences in a document. |
131 | Next-Sentence Argument Overlap Binary | NSARGOb | Cohesion | Lexical Overlap | Next-Sentence Argument Overlap Binary (NSARGOb) is a metric that measures the degree of overlap between two consecutive sentences in a document. It is calculated by summing the indicator function over all pairs of adjacent sentences, where the indicator function returns 1 if there is a non-empty intersection between the argument (noun and pronoun lemmas) sets of two consecutive sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
132 | Next-Two Sentences Argument Overlap | N2SARGO | Cohesion | Lexical Overlap | Next-Two Sentences Argument Overlap (N2SARGO) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique arguments (noun and pronoun lemmas) in those next sentences by the total number of unique arguments in all sentences. |
133 | Next-Two Sentences Argument Overlap (sentence normalized) | N2SARGOsn | Cohesion | Lexical Overlap | Next-Two Sentences Argument Overlap (sentence normalized) (N2SARGOsn) is a metric that quantifies the degree of overlap between a sentence and the next two sentences by dividing the number of unique arguments (noun and pronoun lemmas) in those next sentences by the total number of sentences in a document. |
134 | Next-Two Sentences Argument Overlap Binary | N2SARGOb | Cohesion | Lexical Overlap | Next-Two Sentences Argument Overlap Binary (N2SARGOb) is a metric that measures the degree of overlap between a sentence and the next two sentences in a document. It is calculated by summing an indicator function over the sentences of the document, where the indicator function returns 1 if there is a non-empty intersection between the argument (noun and pronoun lemma) set of the current sentence and the combined argument set of the next two sentences, and 0 otherwise. The total number of sentences in the document is represented by n. |
135 | Addition Density | AD | Cohesion | Connectives | Addition Density (AD) is a metric that calculates the ratio of the number of addition words used in a document to the total number of words in that document. |
136 | Additive Density | AVD | Cohesion | Connectives | Additive Density (AVD) is a metric that calculates the ratio of the number of additive words used in a document to the total number of words in that document. |
137 | All Causal Density | CAUSD | Cohesion | Connectives | All Causal Density (CAUSD) is a metric that calculates the ratio of the number of causal words used in a document to the total number of words in that document. |
138 | Connectives Density | COND | Cohesion | Connectives | Connectives Density (COND) is a metric that calculates the ratio of the number of connectives used in a document to the total number of words in that document. |
139 | Demonstratives Density | DEMD | Cohesion | Connectives | Demonstratives Density (DEMD) is a metric that calculates the ratio of the number of demonstratives used in a document to the total number of words in that document. |
140 | Logical Density | LOGD | Cohesion | Connectives | Logical Density (LOGD) is a metric that calculates the ratio of the number of logical connectives used in a document to the total number of words in that document. |
141 | Negative Density | NEGD | Cohesion | Connectives | Negative Density (NEGD) is a metric that calculates the ratio of the number of negative connectives used in a document to the total number of words in that document. |
142 | Positive Density | POSD | Cohesion | Connectives | Positive Density (POSD) is a metric that calculates the ratio of the number of positive connectives used in a document to the total number of words in that document. |
143 | Basic Connectives Density | BCD | Cohesion | Connectives | Basic Connectives Density (BCD) is a metric that calculates the ratio of the number of basic connectives used in a document to the total number of words in that document. |
144 | Conjunctions Density | CD | Cohesion | Connectives | Conjunctions Density (CD) is a metric that calculates the ratio of the number of conjunctions used in a document to the total number of words in that document. |
145 | Coordinating Conjuncts Density | CCD | Cohesion | Connectives | Coordinating Conjuncts Density (CCD) is a metric that calculates the ratio of the number of coordinating conjunctions used in a document to the total number of words in that document. |
146 | Determiners Density | DETD | Cohesion | Connectives | Determiners Density (DETD) is a metric that calculates the ratio of the number of determiners used in a document to the total number of words in that document. |
147 | Disjunctions Density | DD | Cohesion | Connectives | Disjunctions Density (DD) is a metric that calculates the ratio of the number of disjunctions used in a document to the total number of words in that document. |
148 | Lexical Subordinators Density | LSD | Cohesion | Connectives | Lexical Subordinators Density (LSD) is a metric that calculates the ratio of the number of lexical subordinators used in a document to the total number of words in that document. |
149 | Negative Logical Density | negLOGD | Cohesion | Connectives | Negative Logical Density (negLOGD) is a metric that calculates the ratio of the number of negative logical connectives used in a document to the total number of words in that document. |
150 | Opposition Density | OPPD | Cohesion | Connectives | Opposition Density (OPPD) is a metric that calculates the ratio of the number of opposition words used in a document to the total number of words in that document. |
151 | Order Density | ORDD | Cohesion | Connectives | Order Density (ORDD) is a metric that calculates the ratio of the number of order words used in a document to the total number of words in that document. |
152 | Positive Causal Density | posCAUSD | Cohesion | Connectives | Positive Causal Density (posCAUSD) is a metric that calculates the ratio of the number of positive causal words used in a document to the total number of words in that document. |
153 | Positive Intentional Density | posID | Cohesion | Connectives | Positive Intentional Density (posID) is a metric that calculates the ratio of the number of positive intentional connectives used in a document to the total number of words in that document. |
154 | Positive Logical Density | posLOGD | Cohesion | Connectives | Positive Logical Density (posLOGD) is a metric that calculates the ratio of the number of positive logical connectives used in a document to the total number of words in that document. |
155 | Reason and Purpose Density | RPD | Cohesion | Connectives | Reason and Purpose Density (RPD) is a metric that calculates the ratio of the number of reason and purpose words used in a document to the total number of words in that document. |
156 | Sentence Linking Density | SLD | Cohesion | Connectives | Sentence Linking Density (SLD) is a metric that calculates the ratio of the number of linking words used in a document to the total number of words in that document. |
157 | Temporal Density | TD | Cohesion | Connectives | Temporal Density (TD) is a metric that calculates the ratio of the number of temporal connectives used in a document to the total number of words in that document. |
158 | Automated Readability Index | ARI | Readability | | The Automated Readability Index (ARI) is a metric that gauges the understandability of a document based on its characters, words, and sentences. |
159 | Dale-Chall Index | DCI | Readability | | The Dale-Chall Index, also known as the Dale-Chall Readability Formula, is a metric that quantifies the difficulty of understanding a document by taking into account the percentage of difficult words in the document and the average sentence length. |
160 | Powers-Sumner-Kearl Variation of the Dale–Chall Index | DCIpsk | Readability | | The Powers-Sumner-Kearl Variation of the Dale-Chall Index (DCIpsk) is a metric of readability, an adaptation of the original Dale-Chall Index (DCI), that quantifies the difficulty of understanding a document based on the percentage of difficult words in the document and the average sentence length. |
161 | Coleman–Liau Index | CLI | Readability | | The Coleman-Liau Index (CLI) is a metric that quantifies the difficulty of understanding a document based on the average number of letters per 100 words and the average number of sentences per 100 words. |
162 | Flesch Kincaid Reading Ease | FKRE | Readability | | The Flesch Kincaid Reading Ease (FKRE), also known as the Flesch Reading Ease formula, is a metric that quantifies the difficulty of understanding a document by taking into account the average sentence length and the average number of syllables per word. |
163 | Flesch Kincaid Grade Level | FKGL | Readability | | The Flesch Kincaid Grade Level (FKGL) is a metric of readability that estimates the approximate grade level required to comprehend a text. It considers the total number of words, the total number of sentences, and the average number of syllables per word to estimate reading difficulty. |
164 | FORCAST Index | FORCAST | Readability | | The FORCAST Index (FORCAST) is a metric that estimates the readability of a text based on the proportion of single-syllable words it contains. |
165 | Fry's Readability Graph (x-axis) | FRYx | Readability | | The x-axis of Fry's Readability Graph (FRYx) is a metric that is computed from the average number of syllables per word (MLWs). |
166 | Fry's Readability Graph (y-axis) | FRYy | Readability | | The y-axis of Fry's Readability Graph (FRYy) is a metric that is based on the average number of words per sentence (MLS). |
167 | Gunning Fog Index | GFI | Readability | | The Gunning Fog Index (GFI) is a metric that estimates the difficulty of understanding a text by considering sentence and word complexity. |
168 | Lix Index | LIX | Readability | | The Lix Index (LIX) is a metric that measures the complexity of a text based on its sentence structure and the prevalence of longer words. |
169 | Rix Index | RIX | Readability | | The Rix Index (RIX) is a metric that measures the complexity of a text by calculating the ratio of long words to the number of sentences. |
170 | SMOG Index | SMOG | Readability | | The SMOG Index (SMOG) is a metric that calculates the readability of a text based on the number of polysyllabic words. |
171 | Spache Index | SPACHE | Readability | | The Spache Index (SPACHE) is a metric that evaluates the readability of a text by considering the average sentence length and the percentage of unfamiliar words. |
172 | Word Frequency COCA Written | WFCOCAw | Lexical Complexity | Lexical Sophistication | The Word Frequency COCA Written (WFCOCAw) is a metric that gauges the aggregate frequency of specific words based on their occurrences in a reference corpus. |
173 | Top10K COCA Written Coverage | T10KCOCAw | Lexical Complexity | Lexical Sophistication | The Top10K COCA Written Coverage (T10KCOCAw) is a metric that calculates the proportion of words that are among the top 10,000 most frequent words in a reference corpus. |
174 | Top20K COCA Written Coverage | T20KCOCAw | Lexical Complexity | Lexical Sophistication | The Top20K COCA Written Coverage (T20KCOCAw) is a metric that calculates the proportion of words that are among the top 20,000 most frequent words in a reference corpus. |
175 | Top30K COCA Written Coverage | T30KCOCAw | Lexical Complexity | Lexical Sophistication | The Top30K COCA Written Coverage (T30KCOCAw) is a metric that calculates the proportion of words that are among the top 30,000 most frequent words in a reference corpus. |
176 | Top40K COCA Written Coverage | T40KCOCAw | Lexical Complexity | Lexical Sophistication | The Top40K COCA Written Coverage (T40KCOCAw) is a metric that calculates the proportion of words that are among the top 40,000 most frequent words in a reference corpus. |
177 | Top50K COCA Written Coverage | T50KCOCAw | Lexical Complexity | Lexical Sophistication | The Top50K COCA Written Coverage (T50KCOCAw) is a metric that calculates the proportion of words that are among the top 50,000 most frequent words in a reference corpus. |
178 | Top60K COCA Written Coverage | T60KCOCAw | Lexical Complexity | Lexical Sophistication | The Top60K COCA Written Coverage (T60KCOCAw) is a metric that calculates the proportion of words that are among the top 60,000 most frequent words in a reference corpus. |
179 | Top70K COCA Written Coverage | T70KCOCAw | Lexical Complexity | Lexical Sophistication | The Top70K COCA Written Coverage (T70KCOCAw) is a metric that calculates the proportion of words that are among the top 70,000 most frequent words in a reference corpus. |
180 | Top80K COCA Written Coverage | T80KCOCAw | Lexical Complexity | Lexical Sophistication | The Top80K COCA Written Coverage (T80KCOCAw) is a metric that calculates the proportion of words that are among the top 80,000 most frequent words in a reference corpus. |
181 | Top90K COCA Written Coverage | T90KCOCAw | Lexical Complexity | Lexical Sophistication | The Top90K COCA Written Coverage (T90KCOCAw) is a metric that calculates the proportion of words that are among the top 90,000 most frequent words in a reference corpus. |
182 | Top100K COCA Written Coverage | T100KCOCAw | Lexical Complexity | Lexical Sophistication | The Top100K COCA Written Coverage (T100KCOCAw) is a metric that calculates the proportion of words that are among the top 100,000 most frequent words in a reference corpus. |
183 | Word Frequency COCA Spoken | WFCOCAs | Lexical Complexity | Lexical Sophistication | The Word Frequency COCA Spoken (WFCOCAs) is a metric that gauges the aggregate frequency of specific words based on their occurrences in a reference corpus. |
184 | Top10K COCA Spoken Coverage | T10KCOCAs | Lexical Complexity | Lexical Sophistication | The Top10K COCA Spoken Coverage (T10KCOCAs) is a metric that calculates the proportion of words that are among the top 10,000 most frequent words in a reference corpus. |
185 | Top20K COCA Spoken Coverage | T20KCOCAs | Lexical Complexity | Lexical Sophistication | The Top20K COCA Spoken Coverage (T20KCOCAs) is a metric that calculates the proportion of words that are among the top 20,000 most frequent words in a reference corpus. |
186 | Top30K COCA Spoken Coverage | T30KCOCAs | Lexical Complexity | Lexical Sophistication | The Top30K COCA Spoken Coverage (T30KCOCAs) is a metric that calculates the proportion of words that are among the top 30,000 most frequent words in a reference corpus. |
187 | Top40K COCA Spoken Coverage | T40KCOCAs | Lexical Complexity | Lexical Sophistication | The Top40K COCA Spoken Coverage (T40KCOCAs) is a metric that calculates the proportion of words that are among the top 40,000 most frequent words in a reference corpus. |
188 | Top50K COCA Spoken Coverage | T50KCOCAs | Lexical Complexity | Lexical Sophistication | The Top50K COCA Spoken Coverage (T50KCOCAs) is a metric that calculates the proportion of words that are among the top 50,000 most frequent words in a reference corpus. |
189 | Top60K COCA Spoken Coverage | T60KCOCAs | Lexical Complexity | Lexical Sophistication | The Top60K COCA Spoken Coverage (T60KCOCAs) is a metric that calculates the proportion of words that are among the top 60,000 most frequent words in a reference corpus. |
190 | Top70K COCA Spoken Coverage | T70KCOCAs | Lexical Complexity | Lexical Sophistication | The Top70K COCA Spoken Coverage (T70KCOCAs) is a metric that calculates the proportion of words that are among the top 70,000 most frequent words in a reference corpus. |
191 | Top80K COCA Spoken Coverage | T80KCOCAs | Lexical Complexity | Lexical Sophistication | The Top80K COCA Spoken Coverage (T80KCOCAs) is a metric that calculates the proportion of words that are among the top 80,000 most frequent words in a reference corpus. |
192 | Top90K COCA Spoken Coverage | T90KCOCAs | Lexical Complexity | Lexical Sophistication | The Top90K COCA Spoken Coverage (T90KCOCAs) is a metric that calculates the proportion of words that are among the top 90,000 most frequent words in a reference corpus. |
193 | Top100K COCA Spoken Coverage | T100KCOCAs | Lexical Complexity | Lexical Sophistication | The Top100K COCA Spoken Coverage (T100KCOCAs) is a metric that calculates the proportion of words that are among the top 100,000 most frequent words in a reference corpus. |
194 | Word Range COCA Written | WRCOCAw | Lexical Complexity | Lexical Sophistication | Word Range COCA Written (WRCOCAw) is a metric that quantifies how frequently specific words appear across a range of texts within a reference corpus. |
195 | Standard Deviation COCA Written | SDCOCAw | Lexical Complexity | Lexical Sophistication | Standard Deviation COCA Written (SDCOCAw) is a metric that calculates the standard deviation of word frequencies within texts from a reference corpus. |
196 | Variation Coefficient COCA Written | VCCOCAw | Lexical Complexity | Lexical Sophistication | Variation Coefficient COCA Written (VCCOCAw) is a metric that calculates the normalized standard deviation of the word frequencies in the texts of a reference corpus of written English. |
197 | Juilland’s D COCA Written | JDCOCAw | Lexical Complexity | Lexical Sophistication | Juilland’s D COCA Written (JDCOCAw) is a metric that quantifies the dispersion of specific words based on the coefficient of variation across a set of texts in a reference corpus. |
198 | Carroll’s D COCA Written | CDCOCAw | Lexical Complexity | Lexical Sophistication | Carroll's D COCA Written (CDCOCAw) is a metric that quantifies the dispersion of specific words based on the normalized entropy of words in a reference corpus. |
199 | Rosengren’s S COCA Written | RSCOCAw | Lexical Complexity | Lexical Sophistication | Rosengren’s S COCA Written (RSCOCAw) is a metric that quantifies word dispersion by weighting and squaring word frequencies in each text and normalizing by the total word frequency in a reference corpus. |
200 | Deviation of Proportions COCA Written | DPCOCAw | Lexical Complexity | Lexical Sophistication | Deviation of Proportions COCA Written (DPCOCAw) is a metric that gauges the absolute differences between observed and expected word frequencies in texts from a reference corpus. |
201 | Deviation of Proportions COCA Written (normalized) | DPCOCAwnorm | Lexical Complexity | Lexical Sophistication | Deviation of Proportions COCA Written (normalized) (DPCOCAwnorm) is a metric that gauges the normalized absolute differences between observed and expected word frequencies in texts from a reference corpus. |
202 | Kullback-Leibler divergence COCA Written | KLDCOCAw | Lexical Complexity | Lexical Sophistication | Kullback-Leibler divergence COCA Written (KLDCOCAw) is a metric that quantifies the difference between two probability distributions, indicating how one distribution deviates from a reference distribution. |
203 | Word Range COCA Spoken | WRCOCAs | Lexical Complexity | Lexical Sophistication | Word Range COCA Spoken (WRCOCAs) is a metric that quantifies how frequently specific words appear across a range of texts in a reference corpus. |
204 | Standard Deviation COCA Spoken | SDCOCAs | Lexical Complexity | Lexical Sophistication | Standard Deviation COCA Spoken (SDCOCAs) is a metric that calculates the standard deviation of the word frequencies within texts from a reference corpus. |
205 | Variation Coefficient COCA Spoken | VCCOCAs | Lexical Complexity | Lexical Sophistication | Variation Coefficient COCA Spoken (VCCOCAs) is a metric that calculates the normalized standard deviation of word frequencies from a reference corpus. |
206 | Juilland’s D COCA Spoken | JDCOCAs | Lexical Complexity | Lexical Sophistication | Juilland’s D COCA Spoken (JDCOCAs) is a metric that quantifies the dispersion of words based on the coefficient of variation across a set of texts within a reference corpus. |
207 | Carroll’s D COCA Spoken | CDCOCAs | Lexical Complexity | Lexical Sophistication | Carroll’s D COCA Spoken (CDCOCAs) is a metric that quantifies the dispersion of words based on the normalized entropy of words within a reference corpus. |
208 | Rosengren’s S COCA Spoken | RSCOCAs | Lexical Complexity | Lexical Sophistication | Rosengren’s S COCA Spoken (RSCOCAs) is a metric that quantifies the dispersion of words by weighting word frequencies in each text, summing and squaring these values, and then dividing by the overall word frequency in a reference corpus. |
209 | Deviation of Proportions COCA Spoken | DPCOCAs | Lexical Complexity | Lexical Sophistication | Deviation of Proportions COCA Spoken (DPCOCAs) is a metric that gauges the absolute differences between observed and expected word frequencies in texts from a reference corpus. |
210 | Deviation of Proportions COCA Spoken (normalized) | DPCOCAsnorm | Lexical Complexity | Lexical Sophistication | Deviation of Proportions COCA Spoken (normalized) (DPCOCAsnorm) is a metric that gauges the normalized absolute differences between observed and expected word frequencies in texts from a reference corpus. |
211 | Kullback-Leibler divergence COCA Spoken | KLDCOCAs | Lexical Complexity | Lexical Sophistication | Kullback-Leibler divergence COCA Spoken (KLDCOCAs) is a metric that quantifies the difference between two probability distributions, indicating how one distribution deviates from a reference distribution. |
212 | Word Frequency COCA Academic Language | WFCOCAa | Lexical Complexity | Lexical Sophistication | Word Frequency COCA Academic Language (WFCOCAa) is a metric that gauges the aggregate frequency of words based on their occurrences in a reference corpus. |
213 | Top10K COCA Academic Language Coverage | T10KCOCAa | Lexical Complexity | Lexical Sophistication | Top10K COCA Academic Language Coverage (T10KCOCAa) is a metric that calculates the proportion of words in a document that are among the top 10,000 most frequent words in the academic language component of a reference corpus. |
214 | Top20K COCA Academic Coverage | T20KCOCAa | Lexical Complexity | Lexical Sophistication | Top20K COCA Academic Coverage (T20KCOCAa) is a metric that calculates the proportion of words in a document that are among the top 20,000 most frequent words in the academic language component of a reference corpus. |
215 | Top30K COCA Academic Language Coverage | T30KCOCAa | Lexical Complexity | Lexical Sophistication | Top30K COCA Academic Language Coverage (T30KCOCAa) is a metric that calculates the proportion of words in a document that are among the top 30,000 most frequent words in the academic language component of a reference corpus. |
216 | Top40K COCA Academic Language Coverage | T40KCOCAa | Lexical Complexity | Lexical Sophistication | Top40K COCA Academic Language Coverage (T40KCOCAa) is a metric that calculates the proportion of words in a document that are among the top 40,000 most frequent words in the academic language component of a reference corpus. |
217 | Top50K COCA Academic Language Coverage | T50KCOCAa | Lexical Complexity | Lexical Sophistication | Top50K COCA Academic Language Coverage (T50KCOCAa) is a metric that calculates the proportion of words in a document that are among the top 50,000 most frequent words in the academic language component of a reference corpus. |
218 | Top60K COCA Academic Language Coverage | T60KCOCAa | Lexical Complexity | Lexical Sophistication | Top60K COCA Academic Language Coverage (T60KCOCAa) is a metric that calculates the proportion of words in a document that are among the top 60,000 most frequent words in the academic language component of a reference corpus. |
219 | Top70K COCA Academic Language Coverage | T70KCOCAa | Lexical Complexity | Lexical Sophistication | Top70K COCA Academic Language Coverage (T70KCOCAa) is a metric that calculates the proportion of words in a document that are among the top 70,000 most frequent words in the academic language component of a reference corpus. |
220 | Top80K COCA Academic Language Coverage | T80KCOCAa | Lexical Complexity | Lexical Sophistication | Top80K COCA Academic Language Coverage (T80KCOCAa) is a metric that calculates the proportion of words in a document that are among the top 80,000 most frequent words in the academic language component of a reference corpus. |
221 | Top90K COCA Academic Language Coverage | T90KCOCAa | Lexical Complexity | Lexical Sophistication | Top90K COCA Academic Language Coverage (T90KCOCAa) is a metric that calculates the proportion of words in a document that are among the top 90,000 most frequent words in the academic language component of a reference corpus. |
222 | Top100K COCA Academic Language Coverage | T100KCOCAa | Lexical Complexity | Lexical Sophistication | Top100K COCA Academic Language Coverage (T100KCOCAa) is a metric that calculates the proportion of words in a document that are among the top 100,000 most frequent words in the academic language component of a reference corpus. |
223 | Word Range COCA Academic Language | WRCOCAa | Lexical Complexity | Lexical Sophistication | Word Range COCA Academic Language (WRCOCAa) is a metric used to gauge the lexical sophistication of words within a document by quantifying how frequently they appear across a range of documents within the academic language component of a reference corpus. |
224 | Standard Deviation COCA Academic | SDCOCAa | Lexical Complexity | Lexical Sophistication | Standard Deviation COCA Academic (SDCOCAa) is a metric that calculates the standard deviation of the word frequencies in the texts of a reference corpus of academic English. |
225 | Variation Coefficient COCA Academic | VCCOCAa | Lexical Complexity | Lexical Sophistication | Variation Coefficient COCA Academic (VCCOCAa) is a metric that calculates the normalized standard deviation of the word frequencies in the texts of a reference corpus of academic English. |
226 | Juilland’s D COCA Academic | JDCOCAa | Lexical Complexity | Lexical Sophistication | Juilland’s D COCA Academic (JDCOCAa) is a metric for assessing the lexical sophistication of words in a document by quantifying their dispersion based on the coefficient of variation across a set of documents within the academic component of a reference corpus. |
227 | Carroll’s D COCA Academic | CDCOCAa | Lexical Complexity | Lexical Sophistication | Carroll's D COCA Academic (CDCOCAa) is a metric for assessing the lexical sophistication of words in a document by quantifying their dispersion based on the normalized entropy of words within the academic component of a reference corpus. |
228 | Rosengren’s S COCA Academic | RSCOCAa | Lexical Complexity | Lexical Sophistication | Rosengren’s S COCA Academic (RSCOCAa), a metric for assessing lexical sophistication in a document, accounts for variable document sizes by weighting word frequencies in each document, summing and squaring these values, and then dividing by the overall word frequency in the academic component of a reference corpus. |
229 | Deviation of Proportions COCA Academic | DPCOCAa | Lexical Complexity | Lexical Sophistication | Deviation of Proportions COCA Academic (DPCOCAa) is a metric that assesses the absolute differences between observed and expected word percentages across the texts of an academic reference corpus. |
230 | Deviation of Proportions COCA Academic (normalized) | DPCOCAanorm | Lexical Complexity | Lexical Sophistication | Deviation of Proportions COCA Academic (normalized) (DPCOCAanorm) is a metric that assesses the normalized absolute differences between observed and expected word percentages across the texts of an academic reference corpus. |
231 | Kullback-Leibler divergence COCA Academic | KLDCOCAa | Lexical Complexity | Lexical Sophistication | Kullback-Leibler divergence COCA Academic (KLDCOCAa) is a metric that quantifies the difference between two probability distributions, indicating how one distribution deviates from a reference distribution. |
232 | Word Frequency in TV Shows | WFtv | Lexical Complexity | Lexical Sophistication | Word Frequency in TV Shows (WFtv) is a metric that gauges the aggregate frequency of specific words based on their occurrences within a reference corpus of TV show transcripts. |
233 | Word Frequency in Social Media | WFsm | Lexical Complexity | Lexical Sophistication | Word Frequency in Social Media (WFsm) is a metric that gauges the aggregate frequency of specific words based on their occurrences within a reference corpus of social media language. |
234 | Word Frequency in Podcasts | WFpod | Lexical Complexity | Lexical Sophistication | Word Frequency in Podcasts (WFpod) is a metric that gauges the aggregate frequency of specific words based on their occurrences within a reference corpus of podcast language. |
235 | Word Frequency in Media Language | WFml | Lexical Complexity | Lexical Sophistication | Word Frequency in Media Language (WFml) is a metric that gauges the aggregate frequency of specific words based on their occurrences within a reference corpus of TV, podcast and social media language. |
236 | Contextual Diversity in Podcasts | CDpod | Lexical Complexity | Lexical Sophistication | Contextual Diversity in Podcasts (CDpod) is a metric that quantifies the average familiarity of the words based on their occurrence across 500-word segments from a podcast language reference corpus. |
237 | Contextual Diversity in TV Shows | CDtv | Lexical Complexity | Lexical Sophistication | Contextual Diversity in TV Shows (CDtv) is a metric that quantifies the average familiarity of the words based on their occurrence across 500-word segments from a TV language reference corpus. |
238 | Contextual Diversity in Social Media | CDsm | Lexical Complexity | Lexical Sophistication | Contextual Diversity in Social Media (CDsm) is a metric that quantifies the average familiarity of the words based on their occurrence across 500-word segments from a social media language reference corpus. |
239 | Word Prevalence in TV Shows | WPtv | Lexical Complexity | Lexical Sophistication | Word Prevalence in TV Shows (WPtv) is a metric that quantifies the average familiarity of the words based on their distributions in a TV language reference corpus. |
240 | Word Prevalence in Podcasts | WPpod | Lexical Complexity | Lexical Sophistication | Word Prevalence in Podcasts (WPpod) is a metric that quantifies the average familiarity of the words based on their distributions in a podcast language reference corpus. |
241 | Word Prevalence in Social Media | WPsm | Lexical Complexity | Lexical Sophistication | Word Prevalence in Social Media (WPsm) is a metric that quantifies the average familiarity of the words based on their distributions in a social media language reference corpus. |
242 | Auxiliary verb | AUX | LexGram | Auxiliary Verbs | Auxiliary verb (AUX) is a metric that quantifies the number of auxiliary verbs. |
243 | Quantifier (cardinal number) | QUANTcn | LexGram | Quantifiers | Quantifier (cardinal number) (QUANTcn) is a metric that measures the number of cardinal number quantifiers. |
244 | Subordinating conjunction (cause effect) | CONJscaus | LexGram | Conjunctions | Subordinating conjunction (cause effect) (CONJscaus) is a metric that measures the number of causal conjunctions. |
245 | Preposition (complex) | PREPc | LexGram | Prepositions | Preposition (complex) (PREPc) is a metric that measures the number of complex prepositions. |
246 | Subordinating conjunction (comparison) | CONJscom | LexGram | Conjunctions | Subordinating conjunction (comparison) (CONJscom) is a metric that measures the number of comparative conjunctions. |
247 | Subordinating conjunction (condition) | CONJscond | LexGram | Conjunctions | Subordinating conjunction (condition) (CONJscond) is a metric that measures the number of conditional conjunctions. |
248 | Conjunction | CONJ | LexGram | Conjunctions | Conjunction (CONJ) is a metric that measures the number of conjunctions. |
249 | Coordinating conjunction | CONJc | LexGram | Conjunctions | Coordinating conjunction (CONJc) is a metric that measures the number of coordinating conjunctions. |
250 | Determiner (definite article) | DETda | LexGram | Determiners | Determiner (definite article) (DETda) is a metric that measures the number of definite articles. |
251 | Determiner (demonstrative) | DETdem | LexGram | Determiners | Determiner (demonstrative) (DETdem) is a metric that measures the number of demonstrative determiners. |
252 | Determiner (demonstrative, pl) | DETdemp | LexGram | Determiners | Determiner (demonstrative, pl) (DETdemp) is a metric that measures the number of plural demonstrative determiners. |
253 | Determiner (demonstrative, sg) | DETdems | LexGram | Determiners | Determiner (demonstrative, sg) (DETdems) is a metric that measures the number of singular demonstrative determiners. |
254 | Determiner | DET | LexGram | Determiners | Determiner (DET) is a metric that measures the number of determiners. |
255 | Quantifier (fractional number) | QUANTfn | LexGram | Quantifiers | Quantifier (fractional number) (QUANTfn) is a metric that measures the number of fractional number quantifiers. |
256 | Pronoun (indefinite) | PRNi | LexGram | Pronouns | Pronoun (indefinite) (PRNi) is a metric that quantifies the number of indefinite pronouns. |
257 | Pronoun (indefinite, pl) | PRNip | LexGram | Pronouns | Pronoun (indefinite, pl) (PRNip) is a metric that quantifies the number of plural indefinite pronouns. |
258 | Pronoun (indefinite, sg) | PRNis | LexGram | Pronouns | Pronoun (indefinite, sg) (PRNis) is a metric that quantifies the number of singular indefinite pronouns. |
259 | Quantifier (indefinite) | QUANTi | LexGram | Quantifiers | Quantifier (indefinite) (QUANTi) is a metric that quantifies the number of indefinite quantifiers. |
260 | Determiner (indefinite article) | DETia | LexGram | Determiners | Determiner (indefinite article) (DETia) is a metric that quantifies the number of indefinite articles. |
261 | Subordinating conjunction (manner) | CONJsm | LexGram | Conjunctions | Subordinating conjunction (manner) (CONJsm) is a metric that quantifies the number of manner conjunctions. |
262 | Quantifier (multiplicative number) | QUANTmn | LexGram | Quantifiers | Quantifier (multiplicative number) (QUANTmn) is a metric that measures the number of multiplicative number quantifiers. |
263 | Negation | NEG | LexGram | Negation | Negation (NEG) is a metric that quantifies the number of negation words. |
264 | Quantifier (numerical) | QUANTn | LexGram | Quantifiers | Quantifier (numerical) (QUANTn) is a metric that quantifies the number of numerical quantifiers. |
265 | Quantifier (ordinal number) | QUANTon | LexGram | Quantifiers | Quantifier (ordinal number) (QUANTon) is a metric that measures the number of ordinal quantifiers. |
266 | Pronoun (personal) | PRNp | LexGram | Pronouns | Pronoun (personal) (PRNp) is a metric that measures the number of personal pronouns. |
267 | Pronoun (personal, 1st, pl) | PRNp1p | LexGram | Pronouns | Pronoun (personal, 1st, pl) (PRNp1p) is a metric that measures the number of first person plural personal pronouns. |
268 | Pronoun (personal, 1st, sg) | PRNp1s | LexGram | Pronouns | Pronoun (personal, 1st, sg) (PRNp1s) is a metric that measures the number of first person singular personal pronouns. |
269 | Pronoun (personal, 2nd) | PRNp2 | LexGram | Pronouns | Pronoun (personal, 2nd) (PRNp2) is a metric that measures the number of second person personal pronouns. |
270 | Pronoun (personal, 3rd, pl) | PRNp3p | LexGram | Pronouns | Pronoun (personal, 3rd, pl) (PRNp3p) is a metric that measures the number of third person plural personal pronouns. |
271 | Pronoun (personal, 3rd, sg) | PRNp3s | LexGram | Pronouns | Pronoun (personal, 3rd, sg) (PRNp3s) is a metric that measures the number of third person singular personal pronouns. |
272 | Subordinating conjunction (place) | CONJsp | LexGram | Conjunctions | Subordinating conjunction (place) (CONJsp) is a metric that measures the number of spatial conjunctions. |
273 | Pronoun (possessive) | PRNposs | LexGram | Pronouns | Pronoun (possessive) (PRNposs) is a metric that measures the number of possessive pronouns. |
274 | Pronoun (possessive, 1st, pl) | PRNposs1p | LexGram | Pronouns | Pronoun (possessive, 1st, pl) (PRNposs1p) is a metric that measures the number of first person plural possessive pronouns. |
275 | Pronoun (possessive, 1st, sg) | PRNposs1s | LexGram | Pronouns | Pronoun (possessive, 1st, sg) (PRNposs1s) is a metric that measures the number of first person singular possessive pronouns. |
276 | Pronoun (possessive, 2nd) | PRNposs2 | LexGram | Pronouns | Pronoun (possessive, 2nd) (PRNposs2) is a metric that measures the number of second person possessive pronouns. |
277 | Pronoun (possessive, 3rd, pl) | PRNposs3p | LexGram | Pronouns | Pronoun (possessive, 3rd, pl) (PRNposs3p) is a metric that measures the number of third person plural possessive pronouns. |
278 | Pronoun (possessive, 3rd, sg) | PRNposs3s | LexGram | Pronouns | Pronoun (possessive, 3rd, sg) (PRNposs3s) is a metric that measures the number of third person singular possessive pronouns. |
279 | Determiner (possessive) | DETposs | LexGram | Determiners | Determiner (possessive) (DETposs) is a metric that measures the number of possessive determiners. |
280 | Determiner (possessive, 1st, pl) | DETposs1p | LexGram | Determiners | Determiner (possessive, 1st, pl) (DETposs1p) is a metric that measures the number of first person plural possessive determiners. |
281 | Determiner (possessive, 1st, sg) | DETposs1s | LexGram | Determiners | Determiner (possessive, 1st, sg) (DETposs1s) is a metric that measures the number of first person singular possessive determiners. |
282 | Determiner (possessive, 2nd) | DETposs2 | LexGram | Determiners | Determiner (possessive, 2nd) (DETposs2) is a metric that measures the number of second person possessive determiners. |
283 | Determiner (possessive, 3rd, pl) | DETposs3p | LexGram | Determiners | Determiner (possessive, 3rd, pl) (DETposs3p) is a metric that measures the number of third person plural possessive determiners. |
284 | Determiner (possessive, 3rd, sg) | DETposs3s | LexGram | Determiners | Determiner (possessive, 3rd, sg) (DETposs3s) is a metric that measures the number of third person singular possessive determiners. |
285 | Preposition | PREP | LexGram | Prepositions | Preposition (PREP) is a metric that measures the number of prepositions. |
286 | Auxiliary verb (primary) | AUXp | LexGram | Auxiliary Verbs | Auxiliary verb (primary) (AUXp) is a metric that measures the number of primary auxiliary verbs. |
287 | Auxiliary verb (modal) | AUXm | LexGram | Auxiliary Verbs | Auxiliary verb (modal) (AUXm) is a metric that measures the number of modal auxiliary verbs. |
288 | Pronoun | PRN | LexGram | Pronouns | Pronoun (PRN) is a metric that measures the number of pronouns. |
289 | Subordinating conjunction (purpose) | CONJspur | LexGram | Conjunctions | Subordinating conjunction (purpose) (CONJspur) is a metric that measures the number of purposive conjunctions. |
290 | Quantifier | QUANT | LexGram | Quantifiers | Quantifier (QUANT) is a metric that measures the number of quantifiers. |
291 | Pronoun (reciprocal) | PRNrec | LexGram | Pronouns | Pronoun (reciprocal) (PRNrec) is a metric that measures the number of reciprocal pronouns. |
292 | Pronoun (reflexive) | PRNref | LexGram | Pronouns | Pronoun (reflexive) (PRNref) is a metric that measures the number of reflexive pronouns. |
293 | Pronoun (reflexive, 1st, pl) | PRNref1p | LexGram | Pronouns | Pronoun (reflexive, 1st, pl) (PRNref1p) is a metric that measures the number of first person plural reflexive pronouns. |
294 | Pronoun (reflexive, 1st, sg) | PRNref1s | LexGram | Pronouns | Pronoun (reflexive, 1st, sg) (PRNref1s) is a metric that measures the number of first person singular reflexive pronouns. |
295 | Pronoun (reflexive, 2nd, pl) | PRNref2p | LexGram | Pronouns | Pronoun (reflexive, 2nd, pl) (PRNref2p) is a metric that measures the number of second person plural reflexive pronouns. |
296 | Pronoun (reflexive, 2nd, sg) | PRNref2s | LexGram | Pronouns | Pronoun (reflexive, 2nd, sg) (PRNref2s) is a metric that measures the number of second person singular reflexive pronouns. |
297 | Pronoun (reflexive, 3rd, pl) | PRNref3p | LexGram | Pronouns | Pronoun (reflexive, 3rd, pl) (PRNref3p) is a metric that measures the number of third person plural reflexive pronouns. |
298 | Pronoun (reflexive, 3rd, sg) | PRNref3s | LexGram | Pronouns | Pronoun (reflexive, 3rd, sg) (PRNref3s) is a metric that measures the number of third person singular reflexive pronouns. |
299 | Preposition (simple) | PREPs | LexGram | Prepositions | Preposition (simple) (PREPs) is a metric that measures the number of simple prepositions. |
300 | Subordinating conjunction | CONJs | LexGram | Conjunctions | Subordinating conjunction (CONJs) is a metric that measures the number of subordinating conjunctions. |
301 | Subordinating conjunction (time) | CONJst | LexGram | Conjunctions | Subordinating conjunction (time) (CONJst) is a metric that measures the number of temporal conjunctions. |
302 | Art | TOPart | LexTopic | Art | Art (TOPart) is a metric that counts the number of words related to the topic of art. |
303 | Business | TOPbus | LexTopic | Business | Business (TOPbus) is a metric that counts the number of words related to the topic of business. |
304 | Education | TOPedu | LexTopic | Education | Education (TOPedu) is a metric that counts the number of words related to the topic of education. |
305 | Entertainment | TOPent | LexTopic | Entertainment | Entertainment (TOPent) is a metric that counts the number of words related to the topic of entertainment. |
306 | Environment | TOPenv | LexTopic | Environment | Environment (TOPenv) is a metric that counts the number of words related to the topic of environment. |
307 | Fashion | TOPfas | LexTopic | Fashion | Fashion (TOPfas) is a metric that counts the number of words related to the topic of fashion. |
308 | Food | TOPfood | LexTopic | Food | Food (TOPfood) is a metric that counts the number of words related to the topic of food. |
309 | Health | TOPhea | LexTopic | Health | Health (TOPhea) is a metric that counts the number of words related to the topic of health. |
310 | Music | TOPmus | LexTopic | Music | Music (TOPmus) is a metric that counts the number of words related to the topic of music. |
311 | Politics | TOPpol | LexTopic | Politics | Politics (TOPpol) is a metric that counts the number of words related to the topic of politics. |
312 | Relationships | TOPrel | LexTopic | Relationships | Relationships (TOPrel) is a metric that counts the number of words related to the topic of relationships. |
313 | Science | TOPsci | LexTopic | Science | Science (TOPsci) is a metric that counts the number of words related to the topic of science. |
314 | Sports | TOPspo | LexTopic | Sports | Sports (TOPspo) is a metric that counts the number of words related to the topic of sports. |
315 | Technology | TOPtec | LexTopic | Technology | Technology (TOPtec) is a metric that counts the number of words related to the topic of technology. |
316 | Travel | TOPtra | LexTopic | Travel | Travel (TOPtra) is a metric that counts the number of words related to the topic of travel. |
317 | Acceptance | EMOacc | LexEmo | Positive Emotion | Acceptance (EMOacc) is a metric that quantifies the number of words associated with the emotion of acceptance. |
318 | Anger | EMOang | LexEmo | Negative Emotion | Anger (EMOang) is a metric that quantifies the number of words associated with the emotion of anger. |
319 | Annoyance | EMOann | LexEmo | Negative Emotion | Annoyance (EMOann) is a metric that quantifies the number of words associated with the emotion of annoyance. |
320 | Anticipation | EMOant | LexEmo | Positive Emotion | Anticipation (EMOant) is a metric that quantifies the number of words associated with the emotion of anticipation. |
321 | Anxiety | EMOanx | LexEmo | Negative Emotion | Anxiety (EMOanx) is a metric that quantifies the number of words associated with the emotion of anxiety. |
322 | Bliss | EMObli | LexEmo | Positive Emotion | Bliss (EMObli) is a metric that quantifies the number of words associated with the emotion of bliss. |
323 | Calmness | EMOcal | LexEmo | Positive Emotion | Calmness (EMOcal) is a metric that quantifies the number of words associated with the emotion of calmness. |
324 | Contentment | EMOcon | LexEmo | Positive Emotion | Contentment (EMOcon) is a metric that quantifies the number of words associated with the emotion of contentment. |
325 | Delight | EMOdel | LexEmo | Positive Emotion | Delight (EMOdel) is a metric that quantifies the number of words associated with the emotion of delight. |
326 | Disgust | EMOdsg | LexEmo | Negative Emotion | Disgust (EMOdsg) is a metric that quantifies the number of words associated with the emotion of disgust. |
327 | Dislike | EMOdsl | LexEmo | Negative Emotion | Dislike (EMOdsl) is a metric that quantifies the number of words associated with the emotion of dislike. |
328 | Eagerness | EMOeag | LexEmo | Positive Emotion | Eagerness (EMOeag) is a metric that quantifies the number of words associated with the emotion of eagerness. |
329 | Ecstasy | EMOecs | LexEmo | Positive Emotion | Ecstasy (EMOecs) is a metric that quantifies the number of words associated with the emotion of ecstasy. |
330 | Enthusiasm | EMOent | LexEmo | Positive Emotion | Enthusiasm (EMOent) is a metric that quantifies the number of words associated with the emotion of enthusiasm. |
331 | Fear | EMOfea | LexEmo | Negative Emotion | Fear (EMOfea) is a metric that quantifies the number of words associated with the emotion of fear. |
332 | Grief | EMOgri | LexEmo | Negative Emotion | Grief (EMOgri) is a metric that quantifies the number of words associated with the emotion of grief. |
333 | Joy | EMOjoy | LexEmo | Positive Emotion | Joy (EMOjoy) is a metric that quantifies the number of words associated with the emotion of joy. |
334 | Loathing | EMOloa | LexEmo | Negative Emotion | Loathing (EMOloa) is a metric that quantifies the number of words associated with the emotion of loathing. |
335 | Love | EMOlov | LexEmo | Positive Emotion | Love (EMOlov) is a metric that quantifies the number of words associated with the emotion of love. |
336 | Melancholy | EMOmel | LexEmo | Negative Emotion | Melancholy (EMOmel) is a metric that quantifies the number of words associated with the emotion of melancholy. |
337 | Pleasantness | EMOple | LexEmo | Positive Emotion | Pleasantness (EMOple) is a metric that quantifies the number of words associated with the emotion of pleasantness. |
338 | Rage | EMOrag | LexEmo | Negative Emotion | Rage (EMOrag) is a metric that quantifies the number of words associated with the emotion of rage. |
339 | Responsiveness | EMOres | LexEmo | Positive Emotion | Responsiveness (EMOres) is a metric that quantifies the number of words associated with the emotion of responsiveness. |
340 | Sadness | EMOsad | LexEmo | Negative Emotion | Sadness (EMOsad) is a metric that quantifies the number of words associated with the emotion of sadness. |
341 | Serenity | EMOser | LexEmo | Positive Emotion | Serenity (EMOser) is a metric that quantifies the number of words associated with the emotion of serenity. |
342 | Surprise | EMOsur | LexEmo | Neutral Emotion | Surprise (EMOsur) is a metric that quantifies the number of words associated with the emotion of surprise. |
343 | Terror | EMOter | LexEmo | Negative Emotion | Terror (EMOter) is a metric that quantifies the number of words associated with the emotion of terror. |
344 | Trust | EMOtru | LexEmo | Positive Emotion | Trust (EMOtru) is a metric that quantifies the number of words associated with the emotion of trust. |
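The readability rows above (FKRE, FKGL, LIX, RIX, SMOG) are all closed-form combinations of sentence, word, syllable, and long-word counts. The Python sketch below shows one common way such formulas are computed, using their standard published coefficients; the syllable counter is a rough vowel-group heuristic assumed here purely for illustration, not the counting rule of any particular implementation, and the tokenization is deliberately simplistic.

import re
import math

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels. Real tools
    # typically use a pronunciation dictionary; this is an assumption.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    long_words = sum(1 for w in words if len(w) > 6)              # LIX/RIX "long" words
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    n_s, n_w = len(sentences), len(words)
    return {
        "FKRE": 206.835 - 1.015 * (n_w / n_s) - 84.6 * (syllables / n_w),
        "FKGL": 0.39 * (n_w / n_s) + 11.8 * (syllables / n_w) - 15.59,
        "LIX": (n_w / n_s) + 100 * (long_words / n_w),
        "RIX": long_words / n_s,
        "SMOG": 1.0430 * math.sqrt(polysyllables * (30 / n_s)) + 3.1291,
    }

if __name__ == "__main__":
    sample = ("Readability formulas estimate how difficult a text is to read. "
              "They combine sentence length with word length or syllable counts.")
    for name, score in readability_scores(sample).items():
        print(f"{name}: {score:.2f}")

For example, a text averaging 20 words per sentence and 1.5 syllables per word scores roughly 206.835 - 1.015*20 - 84.6*1.5 ≈ 60 on FKRE, which the formula's usual interpretation places at about a high-school reading level.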