This message only pertains to those with chatbox privileges. If you do not have those privileges yet, keep posting and being a part of the community, and soon you'll have access.
I'm working on a paper right now on the speech formality of educators (this actually has a great deal to do with the formality that Greta has been discussing). One element of the language teachers use is the contrast between formal and contextual language. Here's a fascinating (IMHO) paper on this:
http://pespmc1.vub.ac.be/Papers/Formality.pdf
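The gist of the measure, as I read that paper, is an F-score built from part-of-speech percentages: nouns, adjectives, prepositions, and articles push the score up, while pronouns, verbs, adverbs, and interjections pull it down. Here's a rough sketch of that formula in R (just my reading of the paper, not qdap's actual implementation; the toy counts at the end are made up):

Code:
# F-score sketch based on the linked paper (not qdap's implementation):
# F = (noun% + adjective% + preposition% + article%
#      - pronoun% - verb% - adverb% - interjection% + 100) / 2
# where each term is that part of speech as a percentage of all words
f_score <- function(noun, adj, prep, article, pron, verb, adverb, interj) {
    counts <- c(noun, adj, prep, article, pron, verb, adverb, interj)
    pct <- 100 * counts / sum(counts)
    (sum(pct[1:4]) - sum(pct[5:8]) + 100) / 2
}

# made-up counts: noun/preposition heavy text scores as more formal
f_score(noun = 30, adj = 10, prep = 15, article = 10,
        pron = 5, verb = 20, adverb = 5, interj = 1)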
I have written a function to measure formality in speech using R. I ran it on the chat box and thought I'd share. You can run the code yourself; just install the qdap and talkstats packages from my GitHub repos (if you've already installed qdap, I'd update it).
Fair warning: the function takes a while to run the first time, since it tags a part of speech for every word. On my i7 quad-core Windows 7 machine it took about 10 minutes.
Here are the results (formality ranges from 0 to 100, though neither extreme is actually attainable). The second visual drops people with fewer than 300 words, as the measure isn't recommended below that word count.
Results:
   person        word.count formality
1  bugman                15     86.67
2  SmoothJohn            15     73.33
3  ledzep                74     71.62
4  TheEcologist         195     67.95
5  spunky               278     63.31
6  bukharin              25     62.00
7  quark                995     61.46
8  bryangoodrich      10957     60.76
9  duskstar              92     60.33
10 vinux               2664     58.20
11 Lazar               2162     57.28
12 Dragan                84     57.14
13 Jake                8249     57.10
14 trinker             8194     57.04
15 victorxstc          8765     56.81
16 SiBorg77             985     55.69
17 noetsi              1588     55.57
18 GretaGarbo          5872     55.29
19 Dason              13415     54.16


Visuals: (two plots: overall formality by poster, and the same restricted to posters with at least 300 words)

Code:
# install.packages("devtools")
library(devtools)
install_github("qdap", "trinker")
install_github("talkstats", "trinker")
library(qdap)
library(talkstats)

# grab the chat box dialogue
x <- ts_chatbox()

# the first call takes a long time as it's tagging parts of speech;
# later calls reuse the tagged object (res) and are fast
(res <- formality(x$dialogue, x$person, plot = TRUE))

# restrict to posters with at least 300 words
formality(res, x$person, plot = TRUE, min.wrd = 300)

# formality by date, and by person within date
formality(res, x$date, plot = TRUE)
with(x, formality(res, list(person, date)))