tag:blogger.com,1999:blog-80092068154467857522024-03-14T15:02:21.151+01:00Buffalo linguistChristianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.comBlogger43125tag:blogger.com,1999:blog-8009206815446785752.post-89547552269105908822023-12-10T19:44:00.005+01:002023-12-10T19:44:54.897+01:00On the generalization of linguistic discovery<p><span style="font-size: medium;">Discovery is a crucial part of the evolution of most academic disciplines that take a scientific approach towards understanding the world. New empirical evidence of a phenomenon leads researchers to re-examine their old perceptions. Or rather, as Kuhn (1962) would argue, those with the old perceptions of the world eventually die or fade away while those who only have these newer perceptions mature.</span></p><p><span style="font-size: medium;">But how do we generalize discovery? There are certainly many disciplines where discovery is generalizable. Findings in many of the physical sciences and mathematics are truths that will continue to be true forever. Discover a solution to a long-standing mathematical problem and it will remain true from now on. </span></p><p><span style="font-size: medium;">In the social and cognitive sciences though, discoveries seem somewhat murkier. Where they relate to biological, neurobiological, or biophysical principles, the discoveries seem more generalizable. In my main sub-discipline, phonetics, there are clear physical relationships between what a person does with their speech articulators and what this produces in an acoustic signal, for instance. This is true across languages because all humans have similar oral and laryngeal anatomy. 
Yet, since speakers can vary massively in how they produce similar speech sounds, generalization is challenging here too.</span></p><p><span style="font-size: medium;">Where they do not relate to biological or physical principles, behavioral and linguistic discoveries are usually observational findings restricted to a certain type of population. Generalization here necessarily proceeds through multiple experiments or studies with different types of populations. From a linguist's perspective (and I can only speak as a linguist here), that necessarily means that discoveries need to be tested on more languages. <br /></span></p><p><span style="font-size: medium;">There's a danger here that comes out of a kind of science envy within the behavioral and linguistic sciences. Though some of the methods in the social/behavioral sciences have become more scientifically rigorous (mostly in relation to statistical testing and modeling), the findings are not magically more generalizable to new populations than they were in the past. Discovering that college-aged speakers of English prefer certain syntactic structures over others does not mean anything about any other language unless subsequent research is undertaken. It might make predictions about patterns in other languages, but predictions are not generalizations.</span></p><p><b><span style="font-size: medium;">Can we ever generalize about "Language"? What if we can't?</span></b></p><p><span style="font-size: medium;">There are a lot of half-truths that linguists hold about "Language" that arise from a casual extension of findings in a few languages. 
Demonstrate that some linguistic phenomenon occurs in American English, Spanish, and German and linguists will believe it is a universal or "strong tendency" without a very clear criterion for what "universal" or "strong tendency" would mean.</span></p><p><span style="font-size: medium;">Why be so careful with formal and statistical methods but so careless regarding the scientific bread-and-butter of hypothesis testing? The answer seems to lie in a kind of all-or-nothing perspective about where linguistic discoveries have value to a discipline. Linguists believe either that linguistic patterns demonstrate unique characteristics of individual languages or populations, or that they are universal patterns reflecting something deep about human evolution or murkier things like universal grammar. The field tends to reward only the latter type of work since it smells like a generalization.</span></p><p><span style="font-size: medium;">This all-or-nothing approach means that we often come up empty-handed when we wish to talk about the relevance of our findings to the discipline - we're delving deeply into specific languages with an empirical or historical goal or we're looking broadly (and more superficially) at patterns in a larger number of languages. What might exist in the middle? We're a small discipline examining a huge topic with a gigantic amount of variation. We can't do it all.</span></p><p><span style="font-size: medium;">I think one future path for the discipline is to take a cue from the quantitative revolution that has occurred in it over the past 20-25 years. The more we examine phenomena that we once believed to be discrete (<i>x </i>occurs in context A, but <i>y </i>occurs in context B), the more we discover that these are strong statistical tendencies instead. And the reason for this is that linguistic phenomena are behavioral. They are not the formal mathematical proofs that remain true forever after being solved. 
Yet we keep committing this error of generalization because of science envy.</span></p><p><span style="font-size: medium;">Might there not be any true linguistic universals? Maybe there are but we can never be typologically balanced enough to prove anything more than fairly superficial patterns. Maybe there aren't any at all and this is ok. Languages are endlessly fascinating and we can still demonstrate how many languages work along statistical lines. The idea that there is massive inter-language variation and that this is structured to occur in certain <i>types</i> of languages necessarily means that we can look at <i>types</i> of languages to construct complex cross-linguistic hypotheses. To provide a concrete example, do speakers of fusional languages or those with non-concatenative morphology store words differently than speakers of isolating languages? This is an interesting question but it does not require a model of what must be universal. It just requires experiments and cross-linguistic research.</span></p><p><span style="font-size: medium;">This is a blog post, so take my musings with a grain of salt. I don't have the answers to my own subdiscipline, let alone all of linguistics. I think though that we need to be more careful in distinguishing between the things that we merely believe to be proven and the things that are actually demonstrated typological patterns or universals. </span></p>Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-59973350842961534902023-11-01T22:51:00.002+01:002023-11-01T22:51:17.401+01:00Issues in choosing a statistical model in phonetics<p>What's the bar for deciding to use a new statistical model in research? It often seems that, within linguistics or speech science, one chooses a model based on what is <i>à la mode</i>. That frequently translates into increasing complexity.</p><p>Is it always good to have a more complex model? No. 
It might reveal more intricate interactions in the data. It might also model interactions between terms better than competing models, usually by improving fit with non-linear terms (cf. GCA, GAMMs). Yet several crucially important evaluative criteria for choosing a model are usually missing from this calculus. </p><p><b>1. Is the model easily implementable and understandable?</b> </p><p>If a model is easy to implement and understand, then it is easy enough for new users to emerge and for a set of standards to come about. Yet, if neither of these things is true, there is a severe <i>social cost</i>. </p><p>If there are a handful of researchers proposing a new model, is there an existing infrastructure that can help with training and implementation? Usually there is not and, as a consequence, many researchers get frustrated if the field pushes a model where no infrastructure exists. The same people proposing the model will end up fielding hundreds or thousands of questions about how to use it. And nobody has time for that.</p><p>Now, why might the field (or paper reviewers, most likely) decide that everyone has to use one particular new and popular model for one's data? Sometimes important new factors are discovered that need to be modeled. But sometimes it's just the impostor syndrome, i.e. we are only a serious field if we have ever more mathematically opaque models for our data. And it's easy to give a post-hoc reason to include all possible factors when our predictions are so weak.</p><p><b>2. Does the model enable us to generalize?</b></p><p>Do we actually need to model as many of the details as we can? Even models that take a fairly generic approach to avoiding overfitting can end up overfitting things like dynamics. As a result, researchers lose time needing to discuss details that end up being unimportant and we end up losing the ability to generalize.</p><p>I'll provide one personal example of this. 
In <a href="https://www.sciencedirect.com/science/article/pii/S009544701730219X">my co-authored paper</a> on the phonetics of focus in Yoloxóchitl Mixtec, we provided statistical models for f0 dynamics alongside statistical models for midpoint f0 values. There is certainly good reason to model changes in f0, but in a language with a number of level tones (and tone levels), this type of modeling might not say much. Indeed, we found mostly the same results when we looked at f0 midpoint for many of the level tones as when we looked at dynamic trajectories for them. Including two sets of models resulted in twice as many statistical tests and twice as much reporting.</p><p>Why did we choose to do this? We favored being comprehensive over possibly missing some unknown pattern (maybe the lower level tones had some different dynamic behavior?). Given the subtlety of the resulting patterns, it's hard to say what might be important.</p><p>Nowadays, I think we would be asked to use GAMs instead of mixed-effects modeling. Yet, that also results in statistical bloat (e.g. you have to model <i>each</i> tone separately). The results of our research should lead us to make scientific conclusions about speech, not get lost in 101 statistical tests where we spend time analyzing our three-way interactions. </p><p>I don't know the right answer to how the field might address this issue, but I do <i>not </i>believe that it has to do with reducing the purview of one's study. GAMs are great if you are looking at one pattern in one language, but they are terrible for generalizing over a language's inventory (of vowel formants, of tones, of prosodic contexts, etc.). One finds either studies using GAMs for limited topics (one vowel or one context) or studies where 101 models are included to provide a comprehensive account of a language's patterns. 
The former are more likely in studies examining well-studied languages while the latter are more likely in exploratory analyses of less-studied languages.</p><p>The negative consequence here might be that the "clear case" for GAMs is made within the less complex pattern in a well-studied language, while no one can make heads or tails of all the analyses in the less well-studied language. I see this as just an extension of <a href="https://languagelog.ldc.upenn.edu/nll/?p=41758">linguistic common ground as privilege</a>. Yet, now it's done with statistics.</p>Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-71558468743434738732023-01-08T17:26:00.003+01:002023-01-08T19:33:28.485+01:00The "Bender rule" in some linguistics journals in 2022<p><span style="font-size: medium;">The <i><a href="https://thegradient.pub/the-benderrule-on-naming-the-languages-we-study-and-why-it-matters/">Bender rule</a></i> is the informal idea that one ought to explicitly mention the name of a language in a publication on language and linguistics. It is named after <a href="https://faculty.washington.edu/ebender/" target="_blank">Emily Bender</a>, a computational linguist at the University of Washington (Seattle) who has written and discussed the need to be explicit about languages that one studies. The impetus behind it is the observation that studies on English (or other commonly-studied languages) are typically understood as a default norm, while less commonly studied languages are more likely to be overtly mentioned. This contributes to a biased perspective in linguistics that only the conclusions from studies on English contribute to a general picture of <i>Language, </i>while similar conclusions from studies on other languages reflect language-specific phenomena and are less generalizable. 
A similar issue arises in work on indigenous languages that <a href="https://languagelog.ldc.upenn.edu/nll/?p=41758">I've written about before</a>.<br /><br />People have talked about the Bender rule since 2019. I'd like to think that linguists have paid attention to what this means in academic publications since then. After all, it would be fairly simple for journal editors or editorial boards to implement a policy where languages are mentioned in titles or in abstracts. Besides, people often read/skim the titles and abstracts of most publications without investing more time to read all the details. Applying the Bender rule to titles and/or abstracts (and yes, I am suggesting it) would have the additional benefit of helping librarians better organize publications by language of study.<br /><br />So, how have some popular journals fared in 2022? Are many publications mentioning the languages of study? I thought I would look at two popular journals that I am familiar with: the <i><a href="https://www.sciencedirect.com/journal/journal-of-memory-and-language" target="_blank">Journal of Memory and Language</a></i> (JML), and the <i><a href="https://www.sciencedirect.com/journal/journal-of-phonetics" target="_blank">Journal of Phonetics</a> </i>(JPhon)<i>. </i>Both journals heavily focus on experimental research. I decided to include two separate measures here: does the journal article mention the language of study in the title? and does it mention it in the abstract? I have excluded publications that reflect surveys of methodological reports, as these lack experiments and they tend not to focus on individual languages anyway.<br /><br />For JML, between January 2022 - present, 43 relevant articles have been published. Of these, just 2/43 mention the language of study in the title. Within the abstracts, 8/43 articles mention the language of study. 
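A tally like this is easy to script once you have the article metadata. A minimal sketch of the check (the language list and the sample records below are hypothetical placeholders, not the actual JML/JPhon metadata):

```python
# Flag articles whose title or abstract explicitly names a language of
# study. LANGUAGES and the sample records are illustrative placeholders.
LANGUAGES = {"English", "Mandarin", "Spanish", "German", "Dutch", "ASL"}

def mentions_language(text):
    """True if the text names at least one language from the list."""
    return any(lang in text for lang in LANGUAGES)

articles = [
    {"title": "Tone perception by Mandarin listeners",
     "abstract": "We examine Mandarin Chinese tone perception in noise."},
    {"title": "Prediction during sentence reading",
     "abstract": "Participants read sentences while eye movements were tracked."},
]

in_title = sum(mentions_language(a["title"]) for a in articles)
in_abstract = sum(mentions_language(a["abstract"]) for a in articles)
print(f"{in_title}/{len(articles)} titles, {in_abstract}/{len(articles)} abstracts")
# prints "1/2 titles, 1/2 abstracts"
```

A real tally would of course need a fuller language list (and care with substrings like "Englishes"), but the counting itself is this simple.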
Studies that explicitly mentioned languages were those on Mandarin Chinese, ASL, and those involving bilingual populations.</span></p><p><span style="font-size: medium;">For JPhon, between January 2022 - present, 40 relevant articles have been published. Of these, 18/40 mentioned the language of study in the title. Within the abstracts, 35/40 articles mention the language of study. <br /><br />Why might these numbers (and practices) be so different across journals? Are the psycholinguistic patterns found in brains and minds in the articles in JML fundamentally different in terms of their language-specificity from studies on phonetic memory/perception, speech planning, speech coordination, and speech articulation found in JPhon? In other words, is it that only the phoneticians need worry about the Bender rule?</span></p><p><span style="font-size: medium;">I think most phoneticians would probably state that a study on the articulatory and acoustic phonetics of one language is bound to be fundamentally different from a similar study on another language. Thus, there is less of an expectation that one's findings will immediately generalize to all of <i>Language</i>. Rather, one draws conclusions and amasses evidence for common patterns by looking across a large enough sample of languages. Existing theories are examined, tested with new data, and revised.<br /><br />I don't know what psycholinguists believe here though. Perhaps it is the case that many still believe that English-focused studies in psycholinguistics are always uncovering something fundamental about <i>Language</i> in a way that studies in phonetics are not, despite <a href="https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(22)00236-4?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1364661322002364%3Fshowall%3Dtrue" target="_blank">apparent evidence to the contrary</a>. I have to doubt that though. 
I know many psycholinguists and they seem to be a pretty open-minded group. For the time being, it would seem like JML is failing the Bender rule.<br /></span></p><p><br /></p>Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com1tag:blogger.com,1999:blog-8009206815446785752.post-54366560465100695102022-08-05T16:00:00.006+02:002022-08-05T16:00:53.127+02:00Open projects for collaboration<p><b>Open projects for possible collaboration</b></p><p>In the Whova event page for the Laboratory Phonology meeting, I started a thread with the title "How are better collaborations created?" The goal of this was to really ask the question of "what works?" with labphon-related projects that involve multiple people and institutions. I suppose there is another kind of guilty reason - I have several things I've worked on, but many are at various earlier stages of development. It would be great to see people tackle some of these types of projects and also be involved with other things along the way.</p><p>I received feedback from eight people: Jen Nycz, Anne Pycha, Valerie Freeman, Paul De Decker, Miao Zhang, Bihua Chen, Ivy Hauser, and Timo Roettger. </p><p>Paul started by asking whether it is really clear if people want to collaborate. Is there a mechanism that we can think of to make this known to others? Valerie suggested a "collaborator's corner" at conferences with skills/preferences for a particular project. She also mentioned a <a href="https://ombudsman.nih.gov/sites/default/files/Sample%20Partnering%20Agreement%20Template.pdf" target="_blank">resource</a> for ensuring that collaborators are on the same page with regard to goals. Jen mentioned how each person has their strengths/preferences in research projects and that we might try to match along these preferences. This way we would be truly aiming to find not just collaborators, but ideal collaborators where all parties benefit. 
Ivy mentioned that more intentional networking at conferences might serve some of these goals. Timo's idea involved a special session proposal for a conference (maybe the next LabPhon?).</p><p>I like all these ideas. I think there are some separate threads:</p><p><b>(a) Identification.</b> We could identify what we're doing and discuss our project goals with others. Maybe this is the collaborator's corner that becomes part of the networking process at conferences?</p><p><b>(b) Needs/Wants.</b> We could focus on really identifying what we would like with each of the projects we are working on. Is it in the idea stage? Is the data already collected? Is the data already annotated? Is the data ready for analysis? Where are you stuck and what would you like to collaborate on?</p><p><b>(c) Goals and agreements.</b> As per Ivy's point, each project could have a timeline and set of goals that collaborators agree upon. Is the project part of a larger project? Do you want to submit a paper this year? Next year? What about author order in submission? Will the collaboration continue or end at a certain point? Who is responsible for managing goals?</p><p>With these in mind, I'm going to try to identify some of my own projects that are seeking collaboration.</p><p><b>1. Speech rate and lenition in Spanish</b></p><p>Back in 2010, I collected a set of recordings from 9 young Oaxacan Spanish speakers (ages 19 - 26). They produced a short read passage (Sleeping Beauty), a retelling of a narrative after a short video (the pear story), and a free narrative. The initial goals of this project were to examine speech rate variation across speech styles across different dialects of Spanish. 
The cross-dialectal goal did not work out, but the data remains.</p><p><i>Team:</i> Myself (UB Department of Linguistics), Colleen Balukas (UB Romance Languages and Literature), Jamieson Wezelis (UB Romance Languages and Literature)</p><p>The current goals of this project are rather open, but we have considered three topics:</p><p>a. An exploratory study on vowel sequences and vowel hiatus patterns across word boundaries. There is a literature on this topic in Spanish phonetics, but not with spontaneous speech data (and certainly not across speech styles).</p><p>b. An exploratory study on aspects of vowel reduction in Oaxacan Spanish.</p><p>c. An exploratory study on patterns of vowel devoicing in Mexican Spanish.</p><p>The eventual goal would be one (or more) papers on the acoustic phonetics of spontaneous speech in Spanish.</p><p><i>The current state:</i> All the recordings have been transcribed in ELAN and force-aligned. The read passages have also now been hand-corrected. All recordings have also been syllabified using a custom Praat script. However, Jamieson can no longer be actively involved in the process of hand correction of the data.</p><p>An <i>ideal collaborator </i>is (1) interested in helping with the remaining hand-correction of the acoustic recordings (roughly 1.5-2 hours' worth), (2) is either interested in one of our goals or has their own which we could all pursue once the alignments are corrected, (3) has some knowledge of statistics as it applies to analyzing acoustic phonetic data, (4) is interested in delving into some of the literature in Spanish phonetics (lots of dissertations), and (5) is literate in Spanish.</p><p><i>Timeline:</i> We're kind of stuck right now (no progress for about a year), but we can devote some time to this starting in the next semester. It would be great to see results in 2023 (a talk, a paper, etc.).</p><p><i>Bonus:</i> I'm open to data sharing after collaboration.</p><p><b>2. 
Glottal reduction in Itunyoso Triqui</b></p><p>Throughout the course of my language documentation and phonetic data analysis grant, we collected about 35 hours of spontaneous speech in Itunyoso Triqui, an Otomanguean language spoken in Oaxaca, Mexico. Triqui languages are rather tonally complex and have orthogonal contrasts involving glottal consonants (/ʔ, ɦ/). While there is some description of glottalization in the language (DiCanio 2012), there is an open question as to how much lenition of glottal stops occurs. The goal would be to analyze the acoustic data to examine variation in the production of the glottalization. We are particularly interested in variation in glottalization as a function of word position (VCV vs. VC#) and contrast type (pre-glottalized sonorant vs. glottal stop). This project would tie in nicely with recent work on Hawaiian glottal stops (Davidson 2021).</p><p><i>Team:</i> Myself (UB), Lisa Davidson (NYU), Richard Hatcher (postdoc, Hanyang University - former UB grad student)</p><p><i>The current state:</i> All of the recordings are force aligned with a custom-built aligner for Triqui. The recordings of interest have also been hand-corrected. We have begun some analysis of variation in production of the glottalization using a script I wrote for Praat which allows users to identify glottal reduction types. We presented preliminary results from this work at Haskins Laboratories in Fall 2021. 
We would like a collaborator to help us analyze more of the existing data.</p><p>An <i>ideal collaborator</i> is (1) interested in non-modal phonation type in complex tone languages, (2) has some knowledge of the phonation literature and acoustic phonetics, (3) is familiar with running voice quality scripts in Praat (or at least scripts), (4) has some knowledge of statistics as it applies to analyzing acoustic phonetic data, (5) is interested in judging patterns of glottal reduction in field recordings, and (6) would like to get involved with work on phonetic variation in Itunyoso Triqui.</p><p><i>Timeline:</i> We have not made new progress for about a year, but some of us can devote some time to this starting in the next semester. It would be great to see results in 2023 (a talk, a paper, etc.).</p><p><i>Bonus:</i> I'm open to data sharing after collaboration.</p><p><b>3. Triqui clitic phonetics study</b></p><p>Certain Triqui person clitics (speech act participant clitics) condition tonal changes on the right edge of the root they attach to. This is described in the literature on the language (DiCanio 2008, 2016, 2020, 2022). Consider that the 2S clitic /=ɾeʔ¹/ conditions (1) tonal raising on certain roots, e.g. /ɾa³ʔa³/ 'hand' > /ɾa³ʔa⁴=ɾeʔ¹/ 'your hand', (2) leftward, low-tone spreading on others, e.g. /ka⁴ne⁴³/ 'bathed' > /ka⁴ne¹=ɾeʔ¹/ 'you bathed', and (3) no tonal change on others, e.g. /ki³ɾi¹/ 'took out' > /ki³ɾi¹=ɾeʔ¹/ 'you took out'. There are two research questions here. First, there is an empirical question as to what these tonal changes look like for roots containing the 9 lexical tones. Of particular interest is the observation that, in those roots where no tonal changes occur, pre-clitic lengthening may occur instead. 
Second, utterance-final prosodic lengthening takes place for lexical roots (DiCanio & Hatcher 2018, submitted), but the conditions on this are quite limited (almost no lengthening takes place for roots ending with coda /ʔ, ɦ/). Moreover, is prosodic lengthening limited to roots or may it also affect clitics? The study here seeks to answer these empirical questions for Itunyoso Triqui.</p><p><i>Team:</i> Myself (so far)</p><p><i>The current state:</i> This has been on hold for 4 years now. The recordings that were collected alongside this data have been analyzed (DiCanio & Hatcher 2018, submitted). The relevant stimuli were recorded in 2018, consisting of 224 trials with target words in clitic and non-clitic conditions, in both utterance-final and non-final position, repeated 5 times per speaker, with 10 speakers (11,200 sentences). This data has not yet been transcribed or segmented in Praat, though all the stimuli and their (random) order of presentation are saved in an Excel file, so transcription should be relatively straightforward.</p><p>An <i>ideal collaborator</i> is (1) interested in tone production and the phonetics of tone sandhi, (2) has some knowledge of acoustic phonetics and Praat, (3) has some knowledge of statistics as it applies to analyzing acoustic phonetic data, and (4) is interested in doing speech segmentation work with this data.</p><p><i>Timeline:</i> No work has taken place on this since the recordings were made. It's a big project given the amount of data and speakers. So, it's completely open. 
I imagine an analysis of the data alongside segmentation would take at least several months with a few researchers.</p><div><i>Bonus:</i> I'm open to data sharing after collaboration.</div>Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-63912030413920022472021-11-28T20:45:00.003+01:002021-11-28T20:46:11.498+01:00On the lexicalization of Triqui compounds<p><span style="font-family: times; font-size: medium;">In the process of doing historical reconstruction, one is often led to believe that the conditioning factors leading to sound change are specific to a phonotactic context, i.e. one finds /k/ > [tʃ]/_i and perhaps only in onsets. Yet, there are several variable patterns in Itunyoso Triqui compounds that suggest that stress-induced simplification might also cause unique types of sound changes.</span></p><p><span style="font-family: times; font-size: medium;">As a bit of background, it is important to know that Itunyoso Triqui words are mostly polysyllabic. About 70% of the lexicon consists of disyllabic or trisyllabic roots. However, monosyllabic roots have higher token frequency in running speech (as per Zipf's law). The final syllable of these morphemes has special status. It is phonetically longer than non-final syllables and most of the contrasts occur on the final syllable (cf. DiCanio 2010).</span></p><p><span style="font-family: times; font-size: medium;">What occurs in the final syllable of a polysyllabic word? <br />a. Every possible tone: /1, 2, 3, 4, (4)5, 13, 32, 43, 31/.<br />b. All consonants: /p, t, k, kʷ, tʃ, ʈʂ, ʔ, m, n, ⁿd, ᵑɡ, ᵑɡʷ, ɾ, β, s, l, j, ˀm, ˀn, ˀⁿd, ˀᵑɡ, ˀɾ, ˀβ, ˀl, ˀj/.<br />c. All vowels: /i, e, a, o, u, ĩ, ã, ũ/.<br />d. Coda consonants /ʔ, ɦ/ (though all syllables are otherwise open).</span></p><p><span style="font-family: times; font-size: medium;">What occurs in the non-final syllable of a polysyllabic word?<br />a. 
Only level tones /1, 2, 3, 4/, but the caveat is that tones /1/ and /4/ are not truly contrastive here - they only occur due to leftward tonal spreading onto the non-final syllable (cf. DiCanio, Martínez Cruz, and Martínez Cruz 2020). So, really it's just tone /2/ and tone /3/ that contrast here.<br />b. Only simple consonants (no prenasalized stops, no glottalized sonorants, no glottal stop): /p, t, k, kʷ, tʃ, ʈʂ, m, n, ɾ, β, s, l, j/.<br />c. Only oral vowels /i, e, a, o, u/ and mid vowels <i>only</i> occur if they also occur in the final syllable. So, really just /i, a, u/ are contrastive here.<br />d. All syllables are open.</span></p><p><span style="font-family: times; font-size: medium;">So, we have many asymmetries in which sounds occur by syllable. We can call this stress or prominence or whatever term you wish, but the patterns above occur mostly without exception.</span></p><p><span style="font-family: times; font-size: medium;">There is an additional observation too - a contrast between singletons and geminates only occurs in monosyllabic words, e.g. ta³ 'this' vs. tta³ 'field', nũ³² 'be inside' vs. nnũ³² 'epazote.' This contrast does not occur in polysyllabic words (cf. DiCanio 2010, 2012).</span></p><p><span style="font-family: times; font-size: medium;">Now that we know about the stress-based consonant patterns, what does this mean for sound change? Consider that one very common type of word formation process in Triqui (and in Otomanguean languages more generally) is compounding. When each morpheme of a compound retains some of its phonological identity as a distinct root, there may be no sound changes. Yet, if the compound begins to lexicalize, the restrictions on phonological distributions above start to cause rather robust changes. Let's look at some examples.</span></p><p><span style="font-family: times; font-size: medium;">1. The Triqui word 'de veras/truly' is a reduplicated form yya¹³ yya¹³, literally meaning 'true true.' 
Most adverbs in the language appear post-verbally before personal clitics (V+ADV+SUBJ order), so clitic morphophonology applies to them. The 1P clitic involves a > o, glottal stop insertion, and tone 4. Yet, with this word you get yyo¹³ yyoʔ⁴, with vowel harmony. Then with lexicalization, you can't get a contour tone on a non-final syllable and no geminates are permitted in polysyllabic words, so it's yo³yoʔ⁴.</span></p><p><span style="font-family: times; font-size: medium;"><span>2. The Triqui word 'each' is a reduplicated compound </span>ᵑɡo² ˀᵑɡo² 'one-one.' Yet, it is often pronounced as [ko²ˀᵑɡo²] in running speech. You lose the prenasalized stop in the penultimate syllable as per the patterns above.<br /><br />3. The Triqui word 'soda/soft drink' is a compound nne³² tsiʔ¹ 'water + sweet.' Yet, it is often pronounced as [ne³siʔ¹]. You lose the contour tone and the gemination on the penultimate syllable because neither are permitted there.<br /><br />4. The Triqui word for 'bread' is a historical compound /ʈʂːa³ ʈʂũɦ⁵/, lit. tortilla + horno 'oven' (tortilla del horno, 'tortilla from the oven'). It is pronounced as [ʈʂa³ʈʂũɦ⁵] by older speakers but as [tʃa³tʃũɦ⁵] by younger speakers (who have mostly merged the retroflex and post-alveolar affricates). The historical gemination of 'tortilla' has been lost here.<br /><br />5. The Triqui word for 'rifle' is [ʈʂu³ʈʂi³aʔ³], but the roots are ʈʂːũ³ 'wood' + ʈʂi³aʔ³ 'to shoot.' In the compound, we observe degemination (because it's in a disyllabic word now) and loss of the vowel nasalization too. And as mentioned above, many speakers now produce the retroflex series as post-alveolar.</span></p><p><span style="font-family: times; font-size: medium;">I am mentioning these examples here because, as per Rensch (1976), it is extremely difficult to reconstruct non-final syllables in many Otomanguean languages. 
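The neutralizations in the examples above are regular enough to state procedurally. A minimal toy sketch (my own illustrative representation, with mapping tables drawn only from examples 1-5 above; it is not a published rule set for Triqui):

```python
# Toy model of the neutralizations that apply when a Triqui compound
# lexicalizes into a single polysyllabic word. Syllables are (onset,
# rime, tone) triples in a rough orthographic notation.
PRENASALIZED = {"ᵑɡ": "k"}                 # ex. 2: prenasalized stop lost non-finally
CONTOUR_TO_LEVEL = {"13": "3", "32": "3"}  # ex. 1 and 3: contours level out non-finally

def lexicalize(syllables):
    out = []
    last = len(syllables) - 1
    for i, (onset, rime, tone) in enumerate(syllables):
        # No geminates anywhere in a polysyllabic word (ex. 1, 3, 4, 5).
        if len(onset) > 1 and onset[0] == onset[1]:
            onset = onset[1:]
        if i < last:
            # No prenasalized stops in non-final syllables (ex. 2).
            onset = PRENASALIZED.get(onset, onset)
            # No contour tones in non-final syllables (ex. 1, 3).
            tone = CONTOUR_TO_LEVEL.get(tone, tone)
        out.append((onset, rime, tone))
    return out

# Ex. 3: nne32 'water' + tsiʔ1 'sweet' -> ne3 siʔ1 'soda'
# (the ts > s lenition is not modeled here)
print(lexicalize([("nn", "e", "32"), ("ts", "iʔ", "1")]))
```

Even a toy rule system like this makes the point of the post concrete: the surface forms of lexicalized compounds are largely predictable from the positional restrictions, which is exactly what might make non-final syllables recoverable in reconstruction.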
It may be that (a) processes of reduction in unstressed syllables and (b) a general pattern of distributional asymmetries in the phonological inventories will help us reconstruct them. A [k] that comes from a reduced [ᵑɡ] (as in #2 above) might occur in only a handful of words, because reduplicated compounds are relatively uncommon in Otomanguean languages.</span></p><p><span style="font-family: times; font-size: medium;">In sum, neutralization due to stress-based distributional asymmetries can lead to superficial similarities between words, e.g. the /n/ onset in #3 'soda' is from */nn/, while a different word like /ne³tã³/ 'ejote/green bean' is probably related to Mixtec words like /ñityì/ (SJC Mixtec), where onset /n/ has a */ny/ reflex. </span></p><p><b>Linguistic tidbit</b> (Christian, 2021-04-17)</p>Some linguists obsessed with a theory of all<br />
forget there are others who need to think small,<br />
of how to inflect a verb that's perfective<br />
or reasons why 'so' isn't just a connective.<br />
<br />
And others might glean an elaborate fact<br />
from language in use as a societal act<br />
with agents whose motives are far from mundane<br />
but an essence of self quite hard to contain.<br />
<br />
There's meaning and purpose in digging quite deep<br />
at cognates in history whose meaning we keep,<br />
And time to get lost in the tangle of weeds,<br />a morphological context and the pattern it feeds.<div><br /></div><div>And many a language, pattern, and word</div><div>hold secrets and histories that we've never heard</div><div>Of just how a people connect with the past</div><div>or just how a pattern changes so fast.</div><div><br /></div><div>So before you admonish the detail-obsessed</div><div>those whose minutiae is seldomly blessed</div><div>with an appearance in Nature or Science and so</div><div>appears to be findings you don't need to know.</div><div><br /></div><div><div>An ego obese with a theory so tangled</div><div>Can deflate in an instant when new data is wrangled.</div></div><div>Consider that details, however so small</div><div>are the basis of asking the biggest questions of all.</div><div><br /></div><p><b>What does not work for sentence elicitation with Triqui speakers</b> (Christian, 2021-01-02)</p><p>One part of doing fieldwork is discovering just what does not work while you're in the field. Several summers ago, after receiving some critical methodological remarks from a reviewer on a submission of mine, I started to seriously question just what works in my fieldwork.<br /></p><p>We're all addicted to our past methods and sometimes we need a jolt to reconsider what we're doing in the field. I tend to rely a lot on repetition among speakers because most Triqui speakers are not literate in Triqui. There are three options for elicitation here, as it happens. 
One possibility is to ask speakers to provide a translation of a Spanish sentence, another is to have them look at an image and describe it, and a third is to have them repeat after another speaker who can read Triqui (my main consultants).<br /><br />I rely a lot on the third method, but it's possible that Triqui speakers will overly mimic what the other speaker is doing when they do this. (There is a serious question as to what they would mimic - there is no non-tonal prosody in the language, but perhaps speech rate and optional pauses?) So, this logically leads reviewers and other linguists to suggest the first two options above. We can toss out the second option for anything that involves more carefully-controlled speech. If you want to use identical nouns but change the verbs, for instance, this simply leaves way too much open to interpretation. Speakers will never provide the target sentence.<br /><br />But what about the first option? This also often fails for various reasons. I'm in the process of looking at a large data set examining tonal changes with person morphology in Triqui across 11 speakers. We tried the translation method, but it regularly fails with speakers. Here's a transcription of one exchange:<br /><br />Consultant: Cantaste una canción (You sang a song.)<br />Speaker: Ka³ra⁴³ ngo² chah³ (I sang a song.)<br />Consultant: Ka³raj⁵ ngo² chah³ (You sang a song.)<br />Speaker repeats consultant<br /><br />Many fieldworkers might laugh at the exchange above - asking people to get personal pronouns correct in translation is a common issue. But if you're looking at how words change tone with personal pronouns, then it's important to get right.<br /><br />There is an added issue though - we often assume that we can examine speech in translation because we assume strong bilingualism or a clear 1:1 mapping between words in a lingua franca and words in a language we're investigating in a field context. 
Sometimes neither can be found. In the same recording, we observe the speaker becoming confused when he has to distinguish between <i>lavas</i> 'you are washing' and <i>lavaste </i>'you washed' in Triqui. </p><p>Consultant: Lavas la ropa. (You wash the clothing.)<br />Speaker: nan...[s]... (long pause)<br />Me: Nanj⁵ reh¹...<br />Speaker: Nan⁴³ (I wash)....(pause)<br />Consultant: Nanj⁵ reh¹ a⁴sij⁴ (You wash the clothing.)<br />Speaker repeats consultant</p><p>In this exchange, the speaker is caught off guard because he is either uncertain about the aspect marking of 'wash' (as the previous exchanges involved him producing it with the perfective prefix - <i>ki³nanj⁵) </i>or he is confused about the pronominal referents again. The result is the same though - the speaker ends up relying on repetition from another speaker/consultant.<br /><br />If you have to rely on repetition, perhaps a way around it is to have speakers count between hearing a sentence and repeating it. If the concern over repetition in elicited speech contexts in the field is that speakers are likely to mimic, then counting before repeating might resolve this. The idea here is that counting takes time and auditory memory decays quickly. So, if speakers have to say "one, two, three" (or <i>ngoj¹³ bbi¹³ ba¹hnin³</i> in Triqui), then their reproductions of the target sentences might more closely resemble long-term memory representations for the words in the short sentences. I owe this idea to Lisa Davidson (via one of our interesting Facebook/Twitter discussions).</p><p>But in practice this only <i>kinda</i> ends up working. Speakers can do this, but they end up sometimes forgetting the target sentence. So, you get exchanges like the following:<br /><br />Consultant: Ka³ne³ ni²hrua⁴¹ reh¹ chu⁴ba⁴³ beh³ (Te sentaste mucho en la casa.)<br />Speaker: ngoj¹³ bbi¹³ ba¹hnin³... ka³ne³..... 
ka³ne³...<br />Consultant: Ka³ne³ ni²hrua⁴¹ reh¹ chu⁴ba⁴³ beh³ <br />Speaker repeats consultant</p><p>In effect, it is hard to pay attention to reproducing specific sentences when you have to produce numbers first. So, the end result is to just repeat what the consultant has said. When you add the additional stress of being recorded to this (many speakers become nervous knowing they are recorded), this can produce pauses/errors in the elicitation.</p><p>So, what is the way around all of this? One thing we might address head on is the assumption of mimicry. We seem to believe that all speakers/participants, when asked to repeat words, will focus on the specific phonetic characteristics of the signal they heard instead of the content. The jury is still out on this, though. I have found two papers that have addressed the question - Cole and Shattuck-Hufnagel (2011) and D'Imperio, Cavone, and Petrone (2014). In both cases, speakers were told to explicitly imitate the form of the speech signal and they mostly imitated pitch accents, but not F0 level. In a language where only F0 level is adjustable (lexical tone is fixed), what predictions does this previous work make for Itunyoso Triqui? I'm testing this with a study I ran in 2019. There is no work on what tone language speakers do in such tasks (and we have no idea about what happens when the concern is just getting the words right - not trying to imitate fine phonetic detail).<br /><br />I wish I could find an ideal way to do careful elicitation that was immune to these concerns. In the meantime, though, prosody folks might consider a warning mentioned in DiCanio, Benn, and Castillo García (2020) - no method for the elicitation of prosody is immune to stylistic effects. Read speech is just as much a speech style as repeated speech, and most languages have no writing system or literacy (Harrison 2007). 
That means that this methodological concern must be addressed as we look at prosodic systems across more of the world's languages.</p><p>References:<br />Cole, J. and Shattuck-Hufnagel, S. (2011). The phonology and phonetics of perceived prosody: What do listeners imitate? In <i>Proceedings from Interspeech 2011</i>, pages 969–972. ISCA.</p><p>D’Imperio, M., Cavone, R., and Petrone, C. (2014). Phonetic and phonological imitation of intonation in two varieties of Italian. <i>Frontiers in Psychology</i>, 5(1226):1–10.</p><div>DiCanio, C., Benn, J., and Castillo García, R. (2020). Disentangling the effects of position and utterance-level declination on the production of complex tones in Yoloxóchitl Mixtec. <i>Language and Speech</i>, Onlinefirst (https://journals.sagepub.com/doi/10.1177/0023830920939132):1–43.</div><p>Harrison, K. D. (2007). <i>When languages die</i>. Oxford University Press.</p><p><b>Some connections between Triqui and Amuzgo roots</b> (originally "Algunas conexiones entre raices triquis y amuzgos"; Christian, 2020-12-29)</p><span style="font-family: arial;">The week between the Christmas holidays (or this year's face-mask Corona-Christmas) is always a time for me to reflect and relax at home. After a semester packed with talks, conferences, teaching two courses, reviewing, surviving a pandemic, etc., I need time to not think about work. In times like these my work passions sometimes resurface - the historical phonology of the Mixtecan languages. I know I should keep reading my fantasy novels, play the piano, and watch long movies, but you know what? They say that interesting ideas arise in precisely those moments when you're not so intent on working. Well. 
I don't need to explain myself - such is the love of Mixtecan languages.</span><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I have always thought that the relationships between Triqui and the Mixtecan family were interesting. There are good cognates for a large number of words (see <a href="http://www.acsu.buffalo.edu/~cdicanio/pdfs/Albany_talk.pdf" target="_blank">here</a>), but from the work Michael Swanton and I have done, it seems to me that around 70% of Triqui roots have no clear cognates in Mixt<b>ec</b> languages (as opposed to Mixtecan ones). And judging from Anderson and Roque's (1983) Cuicatec dictionary, there seem to be even fewer cognates with Cuicatec. So where do the other roots we observe in the Triqui languages come from? <br /></span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">These past few days I started studying some Amuzgo, via Cortés Vásquez's (2016) thesis, to see whether there might be cognates between Triqui and Amuzgo. Perhaps there are more roots shared between the Amuzgo and Triqui languages, and this comparison could tell me where that 70% of Triqui roots that still seem mysterious to me came from. I went through Cortés Vásquez's entire thesis looking for the cognates most obvious to me and compiled a list of 68 words that look like cognates with Triqui forms from <a href="http://www.acsu.buffalo.edu/~cdicanio/Diccionario_Triqui_01-08-20.xhtml">my Itunyoso Triqui lexicon</a>. Some possibly interesting observations follow below.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">1. More evidence for the formation of roots with initial gemination /kk/:</span></div><div><span style="font-family: arial;"><br /></span></div><div><table style="width: 80%;">
<tbody><tr>
<th><span style="font-family: arial;">Itunyoso Triqui</span></th>
<th><span style="font-family: arial;">San Pedro Amuzgos</span></th>
<th><span style="font-family: arial;">Proto-Mixtec <br />(Josserand 1983)</span></th>
<th><span style="font-family: arial;">Gloss</span></th>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">kkə̃:³²</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">ntkẽĩ³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">---</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">seed</span></td>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">kkə̃h³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tkõ³⁵</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">---</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">sandal (huarache)<br /></span></td>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">kkə̃:³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tskĩ³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">*jɨkɨ̃ʔ</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">squash<br /></span></td>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">kkoh³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">ntsko³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">*juku</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">leaf<br /></span></td>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">kkaʔ³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">ska³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">---</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">candle<br /></span></td>
</tr>
</tbody></table>
</div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The Proto-Mixtec form for 'huarache' is */ndiʃẽʔ/ (unrelated?), and the forms for 'seed' and 'candle' do not appear in Josserand (1983). Based on my work (DiCanio 2014), most geminate ("fortis") consonants originate in the loss of a pre-tonic syllable. Normally this syllable begins with an optional glide /j, w/ and a high vowel, as observed in the cognates with Proto-Mixtec. In Amuzgo, there appears to be a stop or fricative in these syllables.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I know the vowels look odd here. Why? I'm just going to guess, because this is my blog, not an article. Amuzgo has more vowels, and vowel sequences too: 7 oral vowels and 7 nasal vowels. But Itunyoso Triqui has only 5 oral vowels and 3 nasal ones (/ĩ, ũ, ə̃/). Chicahuaxtla Triqui retains the vowel /ɨ/, but it is produced as /i/ in Itunyoso Triqui and /u/ in Copala Triqui. Across the Triqui varieties, several nasal vowels merged, e.g. */õ, ũ/ > /ũ/, */ɨ̃, ã/ > /ə̃/. I believe the vowel changes above arose from this type of historical process.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">2. A relationship between /t͡s/ in Amuzgo and /j/ (or /β/) in Triqui. This relationship is similar to the one between forms beginning with /j/ in Triqui and forms with /t/ in Mixtec (van Doesburg et al., submitted), where the /t/ reflects a mutation /j/ > /t/ marking possession. 
This process still exists in the Triqui languages, but in Mixtecan languages we observe fossilized roots.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><table style="font-family: Times; width: 535px;">
<tbody><tr>
<th><span style="font-family: arial;">Itunyoso Triqui</span></th>
<th><span style="font-family: arial;">San Pedro Amuzgos</span></th>
<th><span style="font-family: arial;">Proto-Mixtec <br />(Josserand 1983)</span></th>
<th><span style="font-family: arial;">Gloss</span></th></tr>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">ja:³²</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tsa¹</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">---</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tongue</span></td></tr>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">j:ah³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tsʰaʔ³⁵</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">/ja:³²/ in Yoloxóchitl Mixtec</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">ash</span></td></tr>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">ja³ʔah³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tsᵃʔa¹</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">/jaʔa/ (in many varieties)</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">chili<br /></span></td></tr>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">jãh³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tsõ³</span></td><td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">---</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">paper</span></td></tr>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">ja³tã:³²</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tsã¹</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">---</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">hail<br /></span></td>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">j:eh³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tsʰɔʔ³</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">*juuʔ</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">stone<br /></span></td>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">jã:³²</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tsãʔ¹</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">*jɨ̃ɨ̃ʔ</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">salt<br /></span></td>
<tr>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">β:eh³²</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">tsueʔ</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">*juwiʔ</span></td>
<td style="text-align: center; vertical-align: middle;"><span style="font-family: arial;">cave<br /></span></td>
</tbody></table></span></div><div><span style="font-family: arial;"><br />
</span></div><div><span style="font-family: arial;">With this list I have changed Cortés Vásquez's transcription a bit - she writes laryngealized vowels with a diacritic - /a̰/ - but, judging from her acoustic figures, the glottalization reflects a sequence, as observed in the majority of Mixtecan languages (Gerfen & Baker 2005, DiCanio 2012) and Mazatecan languages (Garellek & Keating 2011, Silverman et al. 1995). Transcribing it differently sometimes makes the cognates harder to see.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We know that this possession-related change in Triqui, e.g. ja³ʔah³ 'chili' > ta³ʔah³ 'chili of...', has fossilized cognates in the Mixtecan languages, but it often occurs not with /t/ but with /ⁿd~n/, with /ð/, or with /t͡s/, as we see in Amuzgo here. I copy a data table of these doublets from van Doesburg et al. (submitted) below to show this.<br /><br /></span><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXu19nkqLtF8pCRGOSZnRRAQ3rKrDDUbQpNXQwq62igHIY1xqQXIocIJYfv1o8dT8u1aRoZDKLaQBlhR2Ty1RMb2CicO8vmVvdV8CJ6KZYXghCZm3_OMqpbliTiJqLgsPqrr1oGaubg0hM/s1690/dobletes.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Dobletes en cuicateco, mixteco y triqui" border="0" data-original-height="1281" data-original-width="1690" height="486" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXu19nkqLtF8pCRGOSZnRRAQ3rKrDDUbQpNXQwq62igHIY1xqQXIocIJYfv1o8dT8u1aRoZDKLaQBlhR2Ty1RMb2CicO8vmVvdV8CJ6KZYXghCZm3_OMqpbliTiJqLgsPqrr1oGaubg0hM/w640-h486/dobletes.jpg" title="Dobletes en cuicateco, mixteco y triqui" width="640" /></a></div><span style="font-family: arial;"><div><br /></div>In the majority of cases, the fossilized form of the word is the form used for an entity, e.g. 
'thread' and 'spiderweb.' Why would we observe so many different consonant forms here? Consider that in several Mixtecan languages, such as Itunyoso Triqui and Yoloxóchitl Mixtec, the coronal stops are not alveolar but dental (DiCanio 2010, DiCanio et al. 2019). That includes the affricate /ts/ in Triqui, for example. There is a clear relationship among [t̪] - [t͡θ] - [t̪s] - [ð] across these languages, but we sometimes fail to see it because these consonants are written with very different letters (t - ts/tz/dz - d).</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">There are more interesting pairs between Triqui and Amuzgo, but for now I am just collecting my observations here.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">
<u>References:</u></span></div><div><span style="font-family: arial;">Anderson, E. R. and Concepción Roque, H. (1983). <i>Diccionario Cuicateco</i>. Number 26 in Serie de Vocabularios y Diccionarios Indígenas “Mariano Silva y Aceves”. Instituto Lingüístico de Verano: Mexico, D.F.<br /><br />Cortés Vásquez, Mariela (2016). <i>Fonología del amuzgo de San Pedro Amuzgos, Oaxaca</i>. Tesis de licenciatura, ENAH.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">DiCanio, C. T. (2010). Illustrations of the IPA: San Martín Itunyoso Trique. <i>Journal of the International Phonetic Association</i>, 40(2):227–238.<br /><br />DiCanio, C. T. (2012). Coarticulation between Tone and Glottal Consonants in Itunyoso Trique. <i>Journal of Phonetics</i>, 40(1):162–176.<br /><br />DiCanio, C. (2014). The Sounds of Triqui: quantitative approaches to language description and its ramifications for historical change. Colloquium talk - University of Albany. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">DiCanio, C., Zhang, C., Whalen, D. H., and Castillo García, R. (2020). Phonetic structure in Yoloxóchitl Mixtec consonants. <i>Journal of the International Phonetic Association</i>, 50(3):333–365.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Garellek, M. and Keating, P. (2011). The acoustic consequences of phonation and tone interactions in Jalapa Mazatec. <i>Journal of the International Phonetic Association</i>, 41(2):185–205.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Gerfen, C. and Baker, K. (2005). The production and perception of laryngealized vowels in Coatzospan Mixtec. 
<i>Journal of Phonetics</i>, 33(3):311–334.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Josserand, J. K. (1983). <i>Mixtec Dialect History</i>. PhD thesis, Tulane University.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Silverman, D., Blankenship, B., Kirk, P., and Ladefoged, P. (1995). Phonetic structures in Jalapa Mazatec. <i>Anthropological Linguistics</i>, 37(1):70–88.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">van Doesburg, S., Swanton, M., de Ávila Blomberg, A., and DiCanio, C. (submitted). Flores blancas, campos quemados y quetzales: la morfología histórica mixtecana y la etimología de Chiyoyuhu (Suchixtlán, Oaxaca).<br /><br /></span></div><p><b>Linguistics journals that publish in Spanish</b> (originally "Revistas de lingüística publicadas en castellano"; Christian, 2020-10-17)</p><span style="font-family: times; font-size: medium;">As in many academic disciplines, there is a great asymmetry between linguistics journals published in Spanish and those published in English. English functions as a <i><a href="https://en.wikipedia.org/wiki/Lingua_franca">lingua franca</a></i> - a language that many linguists use to share their ideas, observations, and research. The near-total dominance of English in linguistics publishing has several consequences. For example, there is widespread ignorance of current journals that accept articles in Spanish - I am as guilty of this sin as any other linguist. 
When I think of a journal, as a phonetician or as a researcher of Indigenous languages of the Americas, the first ones that come to mind are the most popular - the <i>Journal of Phonetics, </i>the <i>International Journal of American Linguistics (IJAL), </i>the <i>Journal of the International Phonetic Association</i>, <i>Language and Speech</i>, <i>Laboratory Phonology </i>and <i>Phonetica. </i>And if you review the citations in such journals, you see that most of them come from the journals just mentioned.</span><div><span style="font-family: times; font-size: medium;"><br /></span></div><div><span style="font-family: times; font-size: medium;">This pattern of self-citation among English-language journals raises their status in international citation metrics. More citations means more status for a given journal according to international evaluation metrics. <i><a href="https://clarivate.com/webofsciencegroup/solutions/journal-citation-reports/" target="_blank">Journal Citation Reports</a> (JCR) </i>rankings depend on the number of citations attributed to a journal and on the number of high-index publications citing work published in it. At present, few Spanish-language linguistics journals appear in that report. That is a big problem. In several countries (such as the US), tenure decisions require publications accepted in journals that appear in the JCR. This dynamic undervalues linguistics publications in Spanish. And if you study an Indigenous language of the Americas in a region where the lingua franca is not English but Spanish, Portuguese, or French, a publication written in English will be read neither by members of the (often multilingual) population that speaks the language nor by the scientific community in the region where it is spoken. 
The linguist who speaks English then has to choose between publishing in a journal with scientific status and publishing in an accessible one. The linguist who speaks Spanish (or Portuguese) natively has to choose between publishing in an accessible journal and producing an article in English that may require several rounds of language editing (see <a href="https://news.berkeley.edu/2020/10/14/is-english-the-lingua-franca-of-science-not-for-everyone/" target="_blank">this recent article</a> for a discussion).</span></div><div><span style="font-family: times; font-size: medium;"><br /></span></div><div><span style="font-size: medium;"><span style="font-family: times;">How can we change this situation? If more linguists publish in and cite the journals that already exist in parts of Latin America, we can begin to change their citation counts and raise their status in the metrics. I have compiled a list of journals that accept publications in Spanish or Portuguese. I do not specifically mention journals that accept publications only on Romance languages - my point here is to show that there are more venues for the dissemination of linguistics that we should consider.<br /><br /><b>Spanish-language journals that publish linguistics articles</b></span></span></div><div><span style="font-size: medium;"><span style="font-family: times;"><br />1. <a href="http://editora.museu-goeldi.br/humanas_sp/index_sp.html" target="_blank">Boletim do museu paraense Emílio Goeldi - Ciências Humanas</a> (in Portuguese) - The journal's mission is to publish original work in the areas of anthropology, archaeology, Indigenous linguistics, and related disciplines. It accepts contributions in Portuguese, Spanish, English, and French for the following sections: scientific articles, review articles, research notes, memoirs, book reviews, and master's and doctoral theses.<br /><br />2. 
<a href="http://www.etnolinguistica.org/cadernos:home" target="_blank">Cadernos de etnolingüística</a> (in Portuguese) - an electronic publication devoted to disseminating original contributions on South American Indigenous languages, including articles, reviews, squibs, short notes, and unpublished documents (or ones whose circulation has so far been limited).<br /><br />3. <a href="https://cuadernoslinguistica.colmex.mx/index.php/cl/about" target="_blank">Cuadernos de lingüística en el colegio de México</a> - </span><span style="background-color: white; color: #1d2129; font-family: times; white-space: pre-wrap;">a continuously published electronic journal whose aim is to disseminate and promote linguistic research on diverse languages, with no preference for any particular theoretical framework. The goal is for the published work to contribute to our understanding of natural languages, whether from a theoretical or a purely descriptive point of view.</span><span style="font-family: times;"><br /><br /><span style="background-color: white; color: #1d2129; white-space: pre-wrap;">4. <a href="https://scielo.conicyt.cl/scielo.php?script=sci_serial&pid=0071-1713&lng=en&nrm=iso" target="_blank">Estudios filológicos</a> (Chile) - </span><span style="background-color: white; color: #1d2129; white-space: pre-wrap;">a biannual publication of the Universidad Austral de Chile, Facultad de Filosofía y Humanidades, Instituto de Lingüística y Literatura. It hosts specialized studies in linguistics and literature, and related areas, especially issues relating to the Spanish language and Spanish and Latin American literatures.</span></span></div><div><span style="font-family: times;"><span style="color: #1d2129;"><span style="font-size: medium; white-space: pre-wrap;"><br /></span></span></span></div><div><span style="font-size: medium;"><span style="font-family: times;"><span style="color: #1d2129;"><span style="white-space: pre-wrap;">5. 
<a href="https://revistas.unal.edu.co/index.php/formayfuncion" target="_blank">Forma y Función</a> (Colombia) - </span></span></span><span style="color: #1d2129; font-family: times;"><span style="white-space: pre-wrap;">The journal Forma y Función is affiliated with the Department of Linguistics of the Universidad Nacional de Colombia, Bogotá campus. Its goal is to disseminate studies of language from a variety of theoretical and methodological perspectives corresponding to the different fields of linguistics.</span></span></span></div><div><span style="font-size: medium;"><span style="font-family: times;"><span style="color: #1d2129;"><span style="white-space: pre-wrap;"><br /></span></span>6. <a href="https://www.ub.edu/journalofexperimentalphonetics/es/index.html" target="_blank">Estudios de fonética experimental </a>(Catalonia) - publishes original research articles on any branch of experimental phonetics (articulatory, acoustic, perceptual, applied) and on laboratory phonology. It also publishes contributions on theoretical aspects of phonetics, descriptions of phonetic inventories, and reviews of books on phonetics and laboratory phonology. Articles are published in English, French, and Italian, as well as in Spanish, Catalan, Portuguese, and Galician.<br /><br />7. <a href="http://revistas.pucp.edu.pe/index.php/lexis" target="_blank">Lexis </a>(Peru) - Lexis is one of the leading linguistics and literature journals published in Spanish America. The journal welcomes original work in the various fields of linguistics, literary theory and criticism, Hispanic studies, and Amerindian studies. </span><span style="font-family: times;">Lexis is open to work by Peruvian and foreign researchers.</span><span style="font-family: times;"><br /><br />8.
<a href="https://periodicos.sbu.unicamp.br/ojs/index.php/liames" target="_blank">LIAMES: Línguas Indígenas Americanas</a> (in Portuguese) - a biannual publication edited by the Anthropological Linguistics (Indigenous Languages) area / Center for the Study of Amerindian Languages and Cultures (CELCAM) of the Department of Linguistics, Instituto de Estudos da Linguagem / UNICAMP. Its main goal is to offer researchers in the field an outlet for research articles and academic reflection, analytical studies, and reviews whose subject matter concerns the investigation and documentation of American indigenous languages, written from a range of theoretical approaches. <br /><br />9. <a href="http://linguisticamexicana-amla.colmex.mx/index.php/Linguistica_mexicana" target="_blank">Lingüística Mexicana - Nueva Época</a>: a scholarly journal whose goal is to publish previously unpublished research articles on the topics, areas, and disciplines that make up the various fields of the linguistics of the languages spoken in Mexico, as well as the linguistics of any language or dialect in contact with a Mexican variety; approaches may be theoretical, descriptive, or applied.<br /><br />10. <a href="http://onomazein.letras.uc.cl/01_Presentacion/Presentacion.html" target="_blank">Onomázein - Revista de lingüística, filología y traducción</a> (Chile) - </span><span style="color: #1d2129; font-family: times; white-space: pre-wrap;">welcomes previously unpublished articles deriving from scientific research in the various disciplines of theoretical and applied linguistics; in classical, Indo-European, Romance, and Hispanic philology; in translation theory and terminology; as well as outstanding studies of indigenous languages.</span></div><div><span style="font-size: medium;"><span style="font-family: times;"><br />11.
<span style="background-color: white;"><a href="https://scielo.conicyt.cl/scielo.php/script_sci_serial/pid_0718-4883/lng_es/nrm_iso" target="_blank">Revista de lingüística teórica y aplicada - RLA</a><span style="color: #1d2129;"><span style="white-space: pre-wrap;"> (Chile) - aims to disseminate theoretical and applied linguistic research within the national and international university community. It publishes previously unpublished work from the various areas of theoretical or applied linguistic research, preferably written in Spanish but also in other languages such as English, Italian, French, or Portuguese.
12. </span><a href="https://revistas.ucr.ac.cr/index.php/filyling/index" style="white-space: pre-wrap;" target="_blank">Revista de Filología y Lingüística de la Universidad de Costa Rica</a><span style="white-space: pre-wrap;"> - a publication devoted to the dissemination of academic articles on relevant topics in the areas of philology, linguistics, and literature.</span>
<br /></span>
</span><br />13. <a href="https://signoslinguisticos.izt.uam.mx/index.php/SL" target="_blank">Signos lingüísticos </a>(Mexico) - a specialized journal whose purpose is to present the results of original, rigorous, and methodologically sound research on topics in linguistics, sociolinguistics, phonology, language acquisition, and syntax, from both a systematic and a historical point of view. With an open orientation, Signos Lingüísticos does not adhere to any one conception of linguistics, placing the emphasis on the quality and originality of the work it publishes. Signos Lingüísticos has appeared without interruption since 2005 and, following peer review, accepts only previously unpublished articles, notes, and reviews of recently published books.<br /><br />14. <a href="https://revistas-filologicas.unam.mx/tlalocan/index.php/tl" target="_blank">Tlalocan</a> (Mexico) - </span><span style="font-family: times;">a journal specializing in the documentation of sources and texts from the oral tradition in the indigenous languages of Mexico, as well as linguistically related languages of Guatemala and the southwestern United States. </span><span style="font-family: times;">It publishes sources related to the indigenous cultures of Mexico and Mesoamerica, both documentary and compiled from oral texts. Texts in indigenous languages related to Mexican languages, whether of documentary or oral origin, are also accepted for consideration. Texts with ethnographic or historical interest, in addition to linguistic interest, are sought. Book reviews and notes are also included. </span><span style="font-family: times;">Tlalocan accepts only previously unpublished work. Contributions may be published in Spanish or English.<br /><br /><b>Additional journals that publish articles written in Spanish:</b><br /><br /></span></span></div><div><span style="font-size: medium;"><span style="font-family: times;">15.
<a href="https://www.jbe-platform.com/content/journals/15699714" target="_blank">Diachronica</a> (John Benjamins - Netherlands) - provides a forum for the presentation and discussion of information concerning all aspects of language change in any and all languages of the globe. Contributions which combine theoretical interest and philological acumen are especially welcome.<br /><br />16. <a href="https://www.journals.uchicago.edu/journals/ijal/about" target="_blank">International Journal of American Linguistics </a> (USA/Chicago) - The International Journal of American Linguistics (IJAL) is dedicated to the documentation and analysis of the indigenous languages of the Americas. Founded by Franz Boas and Pliny Earle Goddard in 1917, the journal focuses on the linguistics of American Indigenous languages. IJAL is an important repository for research based on field work and archival materials on the languages of North and South America.<br /><br />17. <a href="http://www.elpublishing.org/journal" target="_blank">Language Documentation and Description</a> - publishes general research articles on the theory and practice of language documentation, language description, sociolinguistics, language policy, and language revitalisation, with a focus on minority and endangered languages. Also publishes <i>Language Contexts</i> articles with detailed information on the contexts in which languages or varieties are spoken, providing social and cultural information, such as about speaker populations, social organisation, cultural aspects, linguistic ecology, multilingualism, language vitality, and language use and transmission in the community, diaspora and cyberspace. Also publishes <i>Language Snapshot</i> articles providing compact overviews of one or more languages or varieties, with up-to-date key data on language facts and speakers, and current research activity. <br /><br />18.
<a href="https://langsci-press.org/catalog/series/tpd" target="_blank">Topics in Phonological Diversity</a> - This series provides a platform for researchers in synchronic and diachronic phonology. By bringing together detailed descriptive work on individual languages with a comparative, cross-linguistic focus, it aims to advance our understanding of the evolution and patterning of phonological systems and the role of phonology in the language system more broadly. We welcome submissions in the following areas: Phonological descriptions of individual languages; Cross-linguistic studies of synchronic and diachronic phonological phenomena; Historical phonology of languages, their groupings, or particular phenomena; Interfaces of phonology with morphology, syntax, and phonetics; Phonological variation induced by dialectal, areal, and other factors.<br /></span></span></div>Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com2tag:blogger.com,1999:blog-8009206815446785752.post-23389170024369728922020-10-10T21:21:00.001+02:002020-10-10T21:21:31.755+02:00The boundaries of phonetics and owning language diversity<p><span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;">One topic that came out of a departmental forum on institutionalized white supremacy yesterday was the extent to which who we decide to cite can perpetuate racist boundaries within fields. So, I started to think about just who we cite in phonetics and what research we decide is part of the field.
One major divide within phonetics is between papers which are mainly concerned with theory-building and those which investigate empirical observations from experiments or from corpus data. Many fields place the former on pedestals (at least for a time) while the latter comprise the bulk of the work that allows us to amass evidence in favor of certain perspectives. Moreover, since so much of the phonetics of different patterns in different languages has never been studied, there is no shortage of empirically-motivated topics in phonetics.
If I complete a study on the phonetics of tone in <a href="https://en.wikipedia.org/wiki/Trique_languages">Triqui</a> or another language that has been under-studied, my work is categorized as both a contribution to phonetics and a contribution to endangered language (or areal) research. Yet, the same allowance is often not afforded to research on minority groups in the US. A study on speech production or perception among speakers of <a href="https://en.wikipedia.org/wiki/African-American_Vernacular_English">Black English</a> or among speakers of <a href="https://en.wikipedia.org/wiki/Puerto_Rican_Spanish">Puerto Rican Spanish</a> is often not placed within the phonetics canon, but within the sociolinguistic or sociophonetic canon. As far as phonetics is concerned, there is nothing inherently different between doing speech production research on Black English, Triqui, or Finnish.
Yet, historically, dialectology has fallen within sociolinguistics rather than having been treated as what we might more broadly call "Language diversity." And note that once I say "language diversity", linguists kind of like to think of this as a course taught by a sociolinguist. Diversity is not under the purview of sociolinguistics though. Both phonetics and sociolinguistics can be equally focused on individual languages or interested in a diversity of languages. Research on the syntax of Black English is no more inherently research on sociolinguistics than research on the phonetics of <a href="https://en.wikipedia.org/wiki/Kera_language">Kera</a> is. What binds linguistic research into sub-disciplines is the domain of study and the approach to the phenomenon, not the language.
What this might mean in practice (at least in phonetics - I can't speak about other disciplines as much) is that the boundaries of the field are logically broader than currently defined. The growth of sociophonetics as a discipline has pushed quantitative phonetic research forward by forcing us to normalize discussions of language varieties in well-studied languages. However, it remains the job of sociophoneticians to tell other phoneticians that variation matters - linguists do not yet <i>own</i> language diversity as an issue for the entire field. </span><span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;">Yet, a dismissal of sociophonetics has also probably kept it from being incorporated into what phoneticians would call "research on speech production and perception." </span><span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;">
</span><span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;">
I'll own that there was a time when I did not always see sociophonetics as being as rigorous as phonetics, but I no longer feel this way. It probably is also the case that by being sidelined, research on different language varieties has not undergone the same type of review that papers in "speech production and perception" might get. If I were to submit my own research to journals evaluating variation though, I shudder to think how my work might fare. In other words, it's easy to elevate the importance of traditional metrics for scholarship when one is examining</span><span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;"> idealized language varieties and to under-value metrics that might be applied from a variationist standpoint.
So, one way that phonetics might move forward here is to start to accept that many of our theories of production and perception that we tend to elevate are mostly not informed by any work on language diversity and, in fact, we know very little. The implications of this are as huge as the number of different languages and varieties and dialects and communities that have not been studied. We all own language diversity.
</span></p>Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-50833310486036116212020-08-12T20:47:00.002+02:002020-08-12T20:47:16.566+02:00A problem like morphophonology<b>A problem like morphophonology</b><br />(sung to <i>How do you solve a problem like Maria? </i>from <i>The Sound of Music</i>)<br /><br />It might look like any morpheme but then change the root, you see<br />It can lenite, subtract, or mutate and the morpheme is not free<br />It can even copy pieces from the stem too, if need be<br />It isn't quite a part of the morphology!<br /><br />It has alternations looking like some well-regarded rules<br />Were it general we'd analyze with well-respected tools<br />But once we see it's limited it means we're all just fools<br />It isn't just a part of the phonology!<br /><br />But you'd be mistaken if you believed we're outdone.<br />It's actually... quite some fun.<br /><br />How do you solve a problem like morphonology?<br />How do you catch a morph and pin it down?<br />How do you solve a problem like morphonology?<br />Can vowel deletion derive a noun?<br /><br />Many a time you think it's in the lexicon.<br />Many a morph you might misunderstand<br />But how do you pin it down, and account for all the sounds<br />How do get the pointing little hand?<br /><br />Oh, how do you solve a problem like morphophonology?<br />How do you hold a morpheme in your hand?Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-78860143375486155992020-06-26T21:17:00.004+02:002020-07-15T04:47:57.343+02:00What's universal in phonetics?As a fieldworker, I'm often struck by how many linguistic patterns I've observed that just "shouldn't" occur. Linguistics often propels itself as a field by asserting theories that are both too strong and too myopic. 
The thinking goes that one should assume universality first and then adjust accordingly afterwards (or unfortunately, ignore exceptions and continue on).<br />
<div>
<br /></div>
<div>
In phonetics, there has been a long history around the notion of universalism. Jakobson, Fant, and Halle (1961) assumed that one needed only distinctive features to characterize cross-linguistic differences. Once you got features down, you could just assume that all speakers had the same sort of mapping from features to articulation. This idea persisted into the 1970's (at least among phonologists), but began to break down in the 1980's - 1990's with Pat Keating's work on voicing (1984), Doug Whalen's discussion of coarticulation (1990), and Kingston & Diehl's discussion about "automatic" and "controlled" phonetics (1994). The emerging consensus from this earlier work and the resulting evolution of laboratory phonology was that phonetic patterns are closely controlled by speakers and many patterns are language-specific.1</div>
<div>
<br /></div>
<div>
Ladd's (2014) book provides a nice overview of many of these ideas - in particular the view that <i>"Phonologists want their descriptions to account for the phonetic detail of utterances. Yet most are reluctant to consider the use of formalisms involving continuous mathematics and quantitative variables, and without such formalisms, it is doubtful that any theory can deal adequately with all aspects of the linguistic use of sound." </i>(p.51)</div>
<div>
<br /></div>
<div>
If we fast-forward to the present day, the landscape of phonetics and phonology is quite different from what it used to be. I think most laboratory phonologists (and most phonologists nowadays are laboratory phonologists) would agree that representations reflect distributions of productions in some way and that the statistical and articulatory details can vary in a gradient way across languages. <br />
<br />
With this in mind, what is left of phonetic universals? There are certainly several universals regarding phonological inventories that could be discussed (see Gordon's recent 2016 book on the topic). But what of phonetic patterns that are best captured quantitatively? What are the universals and near universals? I thought I would start to collect a list of these here as a way to organize my thoughts and to challenge/question my assumptions. I invite anyone to propose additional things here too.<br />
<br />
<b>1. Dorsal stops (almost always) have longer VOT (voice onset time) than coronal or labial stops</b><br />
On the basis of looking at 18 different languages, Cho and Ladefoged (1999) first noted that, after one adjusts for the same laryngeal category (voiced, voiceless, voiceless aspirated), dorsal stops will tend to have a longer VOT than coronal or labial stops. A more recent analysis of this question is found in Chodroff et al. (2019) where the authors looked at over 100 different languages. Of the languages that they sampled, 95% displayed the dorsal > coronal pattern. This finding probably relates to a mechanical constraint on movement of the tongue dorsum. Since the dorsum has greater mass, the release portion tends to take longer (Stevens 2000). All else being equal, larger articulators usually move more slowly than smaller ones - a general principle of physiology and movement. This longer release portion delays venting of the supralaryngeal cavity which ultimately facilitates aerodynamic conditions for voicing.</div>
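Just to make the claim concrete, here is a toy sketch of what checking this ordering looks like over a dataset. The numbers below are invented for illustration - they are not values from Cho & Ladefoged (1999) or Chodroff et al. (2019):

```python
# Hypothetical mean VOT values (ms) for voiceless stops in three
# invented "languages" -- illustrative numbers only, not real data.
vot_ms = {
    "lang_a": {"p": 12, "t": 18, "k": 28},
    "lang_b": {"p": 55, "t": 65, "k": 80},
    "lang_c": {"p": 20, "t": 25, "k": 24},  # a "violator" of the tendency
}

def dorsal_longer(stops):
    """True if the dorsal stop's VOT exceeds both coronal and labial."""
    return stops["k"] > stops["t"] and stops["k"] > stops["p"]

conforming = [name for name, stops in vot_ms.items() if dorsal_longer(stops)]
print(conforming)  # lang_a and lang_b follow the tendency; lang_c does not
```

In a real sample, of course, one would tabulate this over many tokens per stop and per speaker, but the comparison itself is this simple.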
<div>
<br /></div>
<div>
Chodroff et al.'s sampling revealed another near universal - VOT is strongly correlated within a particular language. That is, if a language tends to have very short lag VOT values for one stop consonant, it has very short lag VOT values for all the others too. This finding is interesting since it suggests that speakers and languages produce identical laryngeal gestures regardless of the supralaryngeal constriction. There is some physiological evidence for this universal (Munhall & Löfqvist 1992).<br />
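A minimal sketch of what this within-language uniformity looks like statistically, using invented per-language VOT means rather than Chodroff et al.'s actual measurements: if you tabulate /t/ VOT and /k/ VOT across languages, the two should be very strongly correlated.

```python
import statistics

# Hypothetical per-language mean VOT (ms) for /t/ and /k/ -- invented
# numbers meant to mimic the cross-linguistic uniformity pattern,
# not measurements from any published study.
vot_t = [15, 25, 60, 70, 85]
vot_k = [22, 33, 72, 84, 95]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(vot_t, vot_k)
print(round(r, 3))  # near 1: short-lag /t/ predicts short-lag /k/
```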
<br />
<b>2. All languages have utterance-final lengthening.</b><br />
<br />
Though languages vary in the extent to which words are lengthened in phrase-final or utterance-final position, final lengthening seems to have been found in every language in which it has been investigated (Fletcher 2010, White et al. 2020). Even languages which lack the phonological units used in intonation systems (boundary tones, pitch accents) seem to have utterance-final lengthening (DiCanio and Hatcher 2018, DiCanio et al. 2018, in press).</div>
<div>
<br /></div>
<div>
There is probably a biomechanical explanation for utterance-final lengthening based on articulatory slowing at the end of utterances. As speakers are finishing utterances, their articulators gradually move more slowly (Byrd & Saltzman 2003). The scope of this effect varies across languages and it is not yet clear whether certain syllable types are more affected than others, i.e. closed syllables or syllables with short vowels might undergo less final lengthening.<br />
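One simple way to quantify final lengthening in a dataset is a ratio of mean final to mean non-final durations. The durations below are invented for illustration, not measurements from any of the studies cited here:

```python
import statistics

# Hypothetical vowel durations (ms) from one speaker -- invented values
# illustrating utterance-final lengthening, not real corpus data.
medial_vowels = [80, 92, 85, 78, 88]
final_vowels = [120, 135, 118, 142, 126]

# A ratio > 1 indicates lengthening in utterance-final position.
lengthening_ratio = statistics.mean(final_vowels) / statistics.mean(medial_vowels)
print(round(lengthening_ratio, 2))
```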
<br />
<b>3. Languages optimize the distance between vowels in articulation/acoustics.</b></div>
<div>
<br />
I'll leave it open for now whether this refers just to articulatory dispersion or acoustic dispersion (there is debate around this, of course), but it seems like most languages try to optimize the height and backness of vowels. In languages with asymmetric vowel systems, e.g. /i, e, a, o/, or /i, e, ɛ, a, o, u/, the back vowels will have F1 values that often sit in-between the values for the corresponding front vowels (Becker-Kristal 2010). Becker-Kristal looked at the acoustics of over 100 different languages and found this to be a general pattern. The opposite pattern presumably holds as well, though most languages have more front vowel contrasts than back vowel contrasts.<br />
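As a toy check of the in-between pattern for an asymmetric /i, e, a, o/ system, with invented F1 values (these are not numbers from Becker-Kristal's corpus):

```python
# Hypothetical F1 midpoints (Hz) for an asymmetric /i, e, a, o/ system.
# Invented values for illustration only.
f1 = {"i": 300, "e": 450, "a": 750, "o": 380}

# The lone back vowel /o/ should have an F1 that sits between the F1
# values of the neighboring front vowels /i/ and /e/.
back_vowel_in_between = f1["i"] < f1["o"] < f1["e"]
print(back_vowel_in_between)
```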
<br />
***Edited to include new things - thanks to Eleanor Chodroff, David Kamholz, Joseph Casillas, Rory Turnbull, Claire Bowern, Carlos Wagner and various others on Twitter whose identities/names are not clear.***<br />
<br />
<b>4. Intrinsic F0 of high vowels</b><br />
<b><br /></b>
There is some discussion of this effect, but it seems to be the case that, all else being equal, high vowels will have higher F0 than low vowels (Whalen & Levitt 1995). In all languages where it has been investigated, researchers have found positive evidence for this. Whalen & Levitt note that the explanation here has to do with enhanced subglottal pressure and greater cricothyroid (CT) activity in the production of high vowels relative to low vowels. Ostensibly, as the tongue is raised, it exerts a pull on the larynx via the geniohyoid and thyrohyoid muscles. This raises the thyroid cartilage and thus exerts pull on the cricothyroid itself (raising F0). Greater subglottal pressure would then be needed to surpass the impedance due to greater vocal fold tension.<br />
<br />
There is a tendency, however, to not observe the effect in low F0 contexts, in particular for low tones in tone languages. I've personally wondered about this in Mixtec and Triqui languages, though it is usually quite difficult to control for glottalization, tone, and vowel quality all at once in these languages in order to investigate this question. Why might the effect not be found for low tones? One possibility is that F0 control is essentially different in a low F0 context. According to Titze's body-cover model of vocal fold vibration (1994), the thyroarytenoid (TA) muscles are more responsible for vocal fold vibration when F0 is low. Perhaps tongue raising exerts less force on the TA than it does on the CT.<br />
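Stated as a simple check over invented per-vowel F0 means (not data from Whalen & Levitt or from any Triqui corpus), the basic effect looks like this:

```python
import statistics

# Hypothetical mean F0 (Hz) per vowel for one speaker -- invented values
# illustrating intrinsic F0, not published measurements.
f0_by_vowel = {"i": 215, "u": 212, "e": 205, "o": 204, "a": 196}

high_f0 = statistics.mean(f0_by_vowel[v] for v in ("i", "u"))
low_f0 = f0_by_vowel["a"]
print(high_f0 > low_f0)  # intrinsic F0: high vowels pattern with higher F0
```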
<br />
<b>5. Voiced stops are shorter in duration than voiceless stops</b><br />
<b><br /></b>
Voicing is hard to maintain when there is any constriction in the supraglottal cavity. Assuming no velopharyngeal port venting, supralaryngeal oral stop closure will cause a build-up of pressure above the glottis which will inhibit the necessary pressure differential across the glottis required for continued voicing - the <i>aerodynamic voicing constraint </i>(Ohala, 1983). Thus, voicing ceases relatively quickly during stop closure. Similarly, for voiced fricatives, the necessity to maintain narrow constriction for frication and greater intra-oral air pressure relative to atmospheric air pressure is at odds with a simultaneous necessity to maintain greater subglottal pressure relative to intra-oral (supraglottal) air pressure for continued voicing. Thus, voiced fricatives will often devoice or de-fricativize (and be produced as continuants).<br />
<br />
A consequence of the aerodynamic voicing constraint in stops is that the duration of stop voicing is limited and so, it turns out, voiced stops are shorter than voiceless ones. This has been observed since the early work of Lisker (1957) (cf. Lisker 1986 as well). It seems to be a phonetic universal. What about fricatives though? Are voiced fricatives typically shorter than voiceless ones? I think that the jury is still out on this one. While it is difficult to maintain simultaneous voicing and frication for voiced fricatives, the temporal constraints are not as clear as with stops. Yet, voiced fricatives are almost always shorter than voiceless fricatives as well. </div>
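With invented closure durations (not Lisker's data), the durational asymmetry can be sketched like this:

```python
import statistics

# Hypothetical stop closure durations (ms) -- invented values.
voiced_closures = [55, 62, 58, 60, 57]     # e.g. /b, d, g/ tokens
voiceless_closures = [85, 92, 88, 95, 90]  # e.g. /p, t, k/ tokens

# The aerodynamic voicing constraint limits how long voiced closures
# can sustain voicing, and voiced closures end up shorter overall.
difference = statistics.mean(voiceless_closures) - statistics.mean(voiced_closures)
print(difference > 0)  # voiceless closures are longer in this toy sample
```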
<div>
<br /></div>
<div>
<b>What's not a universal?</b><br />
<br />
In thinking about ostensible phonetic universals, I am struck by many patterns that do not seem to be as universal as once believed. I am most familiar with those in the research that I have done.<br />
<br />
<b>6. Not a universal - word-initial strengthening</b></div>
<div>
<b><br /></b></div>
<div>
A common cross-linguistic pattern is that word-initial consonants will be produced with greater duration and/or with stronger articulations (more contact, faster velocity). Fougeron & Keating (1997) is a seminal paper observing this pattern with English speakers. It has been studied in various languages - most recently in work by Katz & Fricke (2018) and White et al. (2020). While Fougeron & Keating (1997) and subsequent work by Keating et al. (2003) do not assert that this pattern is universal, White et al. (2020) state the following (in their conclusions):</div>
<div>
<br /></div>
<div>
<i>"We propose, however, that initial consonant lengthening may be likely to maintain a universal structural function because of the critical importance of word onsets for the entwined processes of speech segmentation and word recognition."</i></div>
<div>
<i><br /></i></div>
<div>
I should admit, I'm working on a paper which addresses this claim with some of my research on Yoloxóchitl Mixtec, an Otomanguean language in Mexico. The language is prefixal and has final stress. Word-initial consonants are always shorter than word medial ones and (in the paper I'm working on now at least), undergo more lenition. You don't have to take my word about this based on something not-yet-published though. The durational finding is replicated in both DiCanio et al. (2018) and DiCanio et al. (to appear). So, three different publications all with different speakers have found the effect. (I'll just mention here, because this is a blog and not a publication, that the same pattern seems to hold in Itunyoso Triqui - another Mixtecan language with final stress and prefixation. That's another paper for this summer.)</div>
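To make the comparison concrete, here is a sketch with invented durations mimicking the direction of the Mixtec pattern - these are illustrative numbers, not the measurements reported in DiCanio et al. (2018):

```python
import statistics

# Hypothetical consonant durations (ms) in a prefixal language with
# final stress -- invented values mimicking the direction of the
# Yoloxóchitl Mixtec pattern, not actual published measurements.
word_initial = [68, 72, 65, 70, 66]
word_medial = [95, 102, 98, 105, 100]

# The reverse of classic "initial strengthening": word-initial
# consonants come out shorter than word-medial ones.
initial_shorter = statistics.mean(word_initial) < statistics.mean(word_medial)
print(initial_shorter)
```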
<div>
<br /></div>
<div>
There's an interesting thing here though - most of the languages which have been studied in relation to initial strengthening are not prefixing languages. In prefixal languages, like Scottish Gaelic, parsing word-initial consonants does not help too much in word identification (Ussishkin et al. 2017). The authors state the following:<br />
<br />
<i>"Our results show that during the process of spoken word recognition, listeners stick closer to the surface form until other sounds lead to an interpretation that the surface form results from the morphophonological alternation of mutation." </i>(Ussishkin et al. 2017, p.30)</div>
<div>
<i><br /></i></div>
<div>
While this research does not address word-initial strengthening, it suggests that there is just something different about prefixal languages in terms of word recognition. If the goal of word-initial strengthening is to enhance cues to word segmentation, then it stands to reason that word-initial strengthening might not occur in heavily prefixing languages. At the very least, the Mixtec data show that word-initial consonant lengthening is indeed not a universal.<br />
<br />
<b>7. Not a universal - native listeners of a tone language are better at pitch perception than native listeners of non-tonal languages</b><br />
<b><br /></b>
I know, I know, you want to believe that it's true. All tone language listeners must have superpowers when it comes to perceiving pitch, right? It turns out that the evidence is quite mixed here and that musical experience ends up playing a big role. There are papers that have found evidence that speaking a tone language confers some benefit in pitch discrimination when listeners have to discriminate both between tonal categories and within them (Burnham et al. 1996, Hallé et al. 2004, Peng et al. 2010). However, there are other papers showing no advantage (Stagray & Downs 1993, DiCanio 2012, So & Best 2010). At issue is usually the musical background of the listeners under question. In Stagray & Downs (1993), the authors chose only speakers of Mandarin who did not have musical experience and in DiCanio (2012), none of the Triqui listeners had any music experience. In So & Best (2010), the authors screened 300 Hong Kong Cantonese listeners and chose only those with (a) no knowledge of Mandarin and (b) no formal music training. Only 30/300 qualified! Many other studies finding an advantage to tone language listeners have not controlled for musical background.<br />
<br />
So, how much of a role does musical ability play in tonal discrimination? I can provide an example from some data from my 2012 paper (though this was not discussed in the paper itself). Triqui is heavily tonal, with nine lexical tones (/1, 2, 3, 4, 45, 13, 43, 32, 31/) and extensive tonal morphophonology (DiCanio 2016). One would imagine that, when presented with stimuli along a continuum between two tonal categories, e.g. the falling tones 32 and 31, Triqui listeners might be especially attuned to slight differences. It turns out that they are better at perceiving between-category differences (steps 2-4, 3-5, 4-6, 5-7) than within-category differences (steps 1-3, 6-8).<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheNkRyToopyYIiRYjbI66m5IiAA8VCzjEXjRVQospVhVdoQNuvcEKk_VJj-uQrdJ6nfCTkWuEf4sCZ6e-kFNsh7Nyl9q0MTq9Xh5EqhTqdwVs6o22xSsLEc-OFECoALaZZA4Xjt9AJdMHZ/s1600/Music_influence.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1531" data-original-width="1600" height="381" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheNkRyToopyYIiRYjbI66m5IiAA8VCzjEXjRVQospVhVdoQNuvcEKk_VJj-uQrdJ6nfCTkWuEf4sCZ6e-kFNsh7Nyl9q0MTq9Xh5EqhTqdwVs6o22xSsLEc-OFECoALaZZA4Xjt9AJdMHZ/s400/Music_influence.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Discrimination accuracy of tonal continua for Triqui and French listeners. Data from DiCanio (2012). No Triqui listener had musical training, but a subset (13/20) of the French listeners did. Discrimination is better than predicted at the end of the continuum because listeners were comparing resynthesized speech to non-resynthesized natural speech.</td></tr>
</tbody></table>
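As a toy illustration of this kind of between- versus within-category comparison, here is a minimal Python sketch. The accuracy values are invented for illustration (they are <i>not</i> the figures from DiCanio 2012); only the step-pair groupings follow the post.

```python
from statistics import mean

# Hypothetical discrimination accuracies (proportion correct) for an AX task
# along an 8-step /32/-/31/ tonal continuum. These numbers are made up for
# illustration; see DiCanio (2012) for the real data.
accuracy = {
    (1, 3): 0.58, (2, 4): 0.79, (3, 5): 0.86,
    (4, 6): 0.83, (5, 7): 0.77, (6, 8): 0.60,
}

# Step pairs grouped as in the post: pairs straddling the tonal category
# boundary versus pairs falling within a single category.
BETWEEN = [(2, 4), (3, 5), (4, 6), (5, 7)]
WITHIN = [(1, 3), (6, 8)]

mean_between = mean(accuracy[p] for p in BETWEEN)
mean_within = mean(accuracy[p] for p in WITHIN)

print(f"between-category accuracy: {mean_between:.2f}")  # higher
print(f"within-category accuracy:  {mean_within:.2f}")   # lower
```

A real analysis would of course aggregate over listeners and model the data statistically, but the basic comparison is just this: average accuracy for boundary-straddling pairs against average accuracy for same-category pairs.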
On the whole, French listeners were <i>better</i> at discriminating Triqui tonal pairs along the continuum than Triqui listeners were. This is quite surprising, but once we separate the French listeners by musical background, we find that the non-musicians among them were worse at between-category tonal discrimination than the Triqui listeners (though better at within-category discrimination). Having some musical background (at least 2-3 years) provides a remarkable benefit to your pitch discrimination abilities. Speaking a tone language makes you good at telling apart two particular tones in your language at the categorical boundary between them, but it does not, apparently, make you magically better at pitch discrimination in general.</div>
<div>
<br /></div>
<div>
-----</div>
<div>
There are undoubtedly many other things that could be put here, both for universals and <i>no-longer</i> universals. I'm of course very biased here as someone who works on prosody. (I tend to be more interested in the prosodic patterns.) This is intended to be a continually developing list, both for my own memory and for others to contribute to (or argue with). So, any suggestions of things to add are most welcome.</div>
<div>
_____________<br />
1. There are many other sources here that I'm probably missing. I'd be happy to add any that people suggest.</div>
<div>
<br /></div>
<div>
<b>References:</b></div>
<div>
Becker-Kristal, R. (2010). <i>Acoustic typology of vowel inventories and Dispersion Theory: Insights from a large cross-linguistic corpus.</i> PhD thesis, UCLA.<br />
<br />Burnham, D., Francis, E., Webster, D., Luksaneeyanawin, S., Attapaiboon, C., Lacerda, F., and Keller, P. (1996). Perception of lexical tone across languages: evidence for a linguistic mode of processing. In <i>Proceedings of the 4th International Conference on Spoken Language Processing</i>, volume 4, pages 2514–2517.<br /><br />
Byrd, D. and Saltzman, E. (2003). The elastic phrase: modeling the dynamics of boundary-adjacent lengthening.<i> Journal of Phonetics,</i> 31:149–180.<br />
<br />
Cho, T. and Ladefoged, P. (1999). Variation and universals in VOT: evidence from 18 languages. <i>Journal of Phonetics</i>, 27:207–229.<br />
<br />
Chodroff, E., Golden, A., and Wilson, C. (2019). Covariation of stop voice onset time across languages: Evidence for a universal constraint on phonetic realization. <i>Journal of the Acoustical Society of America, Express Letters</i>, 145(1):EL109–EL115.<br />
<br />
DiCanio, C. T. (2012). Cross-linguistic perception of Itunyoso Trique tone. <i>Journal of Phonetics,</i> 40:672–688.<br />
<br />
DiCanio, C. T. (2016). Abstract and concrete tonal classes in Itunyoso Trique person morphology. In Palancar, E. and Léonard, J.-L., editors, <i>Tone and Inflection: New Facts and New Perspectives</i>, volume 296 of <i>Trends in Linguistics Studies and Monographs,</i> chapter 10, pages 225–266. Mouton de Gruyter.<br />
<br />
DiCanio, C., Benn, J., and Castillo García, R. (2018). The phonetics of information structure in Yoloxóchitl Mixtec. <i>Journal of Phonetics,</i> 68:50–68.<br />
<br />
DiCanio, C., Benn, J., and Castillo García, R. (in press). Disentangling the effects of position and utterance-level declination on tone production. <i>Language and Speech</i>. Preprint available <a href="https://www.researchgate.net/publication/342110607_Disentangling_the_effects_of_position_and_utterance-level_declination_on_the_production_of_complex_tones_in_Yoloxochitl_Mixtec" target="_blank">here.</a><br />
<br />
DiCanio, C. and Hatcher, R. (2018). On the non-universality of intonation: evidence from Triqui. <i>Journal of the Acoustical Society of America</i>, 144:1941.</div>
<div>
<br /></div>
<div>
DiCanio, C., Zhang, C., Whalen, D. H., and Castillo García, R. (2019). Phonetic structure in Yoloxóchitl Mixtec consonants. <i>Journal of the International Phonetic Association</i>, https://doi.org/10.1017/S0025100318000294.<br />
<br />
Fletcher, J. (2010). The prosody of speech: Timing and rhythm. In <i>The Handbook of Phonetic Sciences,</i> pages 521–602. Wiley-Blackwell, 2nd edition.<br />
<br />
Fougeron, C. and Keating, P. A. (1997). Articulatory strengthening at edges of prosodic domains. <i>Journal of the Acoustical Society of America,</i> 101(6):3728–3740.<br />
<br />
Gordon, M. K. (2016). <i>Phonological Typology</i>. Oxford University Press.<br />
<br />
Hallé, P. A., Chang, Y. C., and Best, C. T. (2004). Identification and discrimination of Mandarin Chinese tones by Mandarin Chinese vs. French listeners. <i>Journal of Phonetics</i>, 32(3):395–421.<br />
<br />
Jakobson, R., Fant, C. G. M., and Halle, M. (1961). <i>Preliminaries to Speech Analysis: The Distinctive Features and their Correlates</i>. MIT Press.<br />
<br />
Katz, J. and Fricke, M. (2018). Auditory disruption improves word segmentation: A functional basis for lenition phenomena. <i>Glossa,</i> 3(1):1–25.<br />
<br />
Keating, P. (1984). Phonetic and phonological representation of stop consonant voicing. <i>Language</i>, 60:286–319.</div>
<div>
<br /></div>
<div>
Keating, P., Cho, T., Fougeron, C., and Hsu, C.-S. (2003). Domain-initial articulatory strengthening in four languages. In Local, J., Ogden, R., and Temple, R., editors, <i>Phonetic interpretation: Papers in Laboratory Phonology VI</i>, pages 145–163. Cambridge University Press, Cambridge, UK.<br />
<br />
Kingston, J. and Diehl, R. L. (1994). Phonetic knowledge. <i>Language,</i> 70(3):419–454.<br />
<br />
Ladd, D. R. (2014). <i>Simultaneous Structure in Phonology.</i> Oxford University Press.<br />
<br />
Lisker, L. (1957). Closure duration and the intervocalic voiced-voiceless distinction in English. <i>Language</i>, 33:42–49.<br />
<br />
Lisker, L. (1986). Voicing in English: a catalogue of acoustic features signaling /b/ versus /p/ in trochees. <i>Language and Speech,</i> 29(3):3–11.<br />
<br />
Munhall, K. G. and Löfqvist, A. (1992). Gestural aggregation in speech: laryngeal gestures. <i>Journal of Phonetics</i>, 20:93–110.<br />
<br />
Ohala, J. (1983). The origin of sound patterns in vocal tract constraints. In MacNeilage, P. F., editor, <i>The production of speech</i>, pages 189–216. Springer, New York.<br />
<br />
Peng, G., Zheng, H.-Y., Gong, T., Yang, R.-X., Kong, J.-P., and Wang, W. S.-Y. (2010). The influence of language experience on categorical perception of pitch contours. <i>Journal of Phonetics</i>, 38:616–624.<br />
<br />
So, C. K. and Best, C. T. (2010). Cross-language perception of non-native tonal contrasts: effects of native phonological and phonetic influences. <i>Language and Speech</i>, 53(2):273–293.<br />
<br />
Stagray, J. and Downs, D. (1993). Differential sensitivity for frequency among speakers of a tone and nontone language. <i>Journal of Chinese Linguistics</i>, 21:143–163.<br />
<br />
Stevens, K. N. (2000). <i>Acoustic Phonetics</i>. MIT Press, first edition.<br />
<br />
Titze, I. R. (1994). <i>Principles of Voice Production.</i> Prentice-Hall, Englewood Cliffs, NJ.</div>
<div>
<br /></div>
<div>
Ussishkin, A., Warner, N., Clayton, I., Brenner, D., Carnie, A., Hammond, M., and Fisher, M. (2017). Lexical representation and processing of word-initial morphological alternations: Scottish Gaelic mutation. <i>Journal of Laboratory Phonology,</i> 8(1):1–34.<br />
<br />
Whalen, D. H. (1990). Coarticulation is largely planned. <i>Journal of Phonetics,</i> 18(1):3–35.<br />
<br />
Whalen, D. H. and Levitt, A. G. (1995). The universality of intrinsic f0 of vowels. <i>Journal of Phonetics</i>, 23:349–366.<br />
<br />
White, L., Benavides-Varela, S., and Mády, K. (2020). Are initial-consonant lengthening and final-vowel lengthening both universal word segmentation cues? <i>Journal of Phonetics</i>, 81:1–14.</div>
Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com4tag:blogger.com,1999:blog-8009206815446785752.post-12555723347330947992019-12-02T23:10:00.003+01:002019-12-02T23:10:55.142+01:00Tutorial: Creating pretty spectrograms<!--[if gte mso 9]><xml>
<w:LsdException Locked="false" Priority="52" Name="Grid Table 7 Colorful"/>
<w:LsdException Locked="false" Priority="46"
Name="Grid Table 1 Light Accent 1"/>
<w:LsdException Locked="false" Priority="47" Name="Grid Table 2 Accent 1"/>
<w:LsdException Locked="false" Priority="48" Name="Grid Table 3 Accent 1"/>
<w:LsdException Locked="false" Priority="49" Name="Grid Table 4 Accent 1"/>
<w:LsdException Locked="false" Priority="50" Name="Grid Table 5 Dark Accent 1"/>
<w:LsdException Locked="false" Priority="51"
Name="Grid Table 6 Colorful Accent 1"/>
<w:LsdException Locked="false" Priority="52"
Name="Grid Table 7 Colorful Accent 1"/>
<w:LsdException Locked="false" Priority="46"
Name="Grid Table 1 Light Accent 2"/>
<w:LsdException Locked="false" Priority="47" Name="Grid Table 2 Accent 2"/>
<w:LsdException Locked="false" Priority="48" Name="Grid Table 3 Accent 2"/>
<w:LsdException Locked="false" Priority="49" Name="Grid Table 4 Accent 2"/>
<w:LsdException Locked="false" Priority="50" Name="Grid Table 5 Dark Accent 2"/>
<w:LsdException Locked="false" Priority="51"
Name="Grid Table 6 Colorful Accent 2"/>
<w:LsdException Locked="false" Priority="52"
Name="Grid Table 7 Colorful Accent 2"/>
<w:LsdException Locked="false" Priority="46"
Name="Grid Table 1 Light Accent 3"/>
<w:LsdException Locked="false" Priority="47" Name="Grid Table 2 Accent 3"/>
<w:LsdException Locked="false" Priority="48" Name="Grid Table 3 Accent 3"/>
<w:LsdException Locked="false" Priority="49" Name="Grid Table 4 Accent 3"/>
<w:LsdException Locked="false" Priority="50" Name="Grid Table 5 Dark Accent 3"/>
<w:LsdException Locked="false" Priority="51"
Name="Grid Table 6 Colorful Accent 3"/>
<w:LsdException Locked="false" Priority="52"
Name="Grid Table 7 Colorful Accent 3"/>
<w:LsdException Locked="false" Priority="46"
Name="Grid Table 1 Light Accent 4"/>
<w:LsdException Locked="false" Priority="47" Name="Grid Table 2 Accent 4"/>
<w:LsdException Locked="false" Priority="48" Name="Grid Table 3 Accent 4"/>
<w:LsdException Locked="false" Priority="49" Name="Grid Table 4 Accent 4"/>
<w:LsdException Locked="false" Priority="50" Name="Grid Table 5 Dark Accent 4"/>
<w:LsdException Locked="false" Priority="51"
Name="Grid Table 6 Colorful Accent 4"/>
<w:LsdException Locked="false" Priority="52"
Name="Grid Table 7 Colorful Accent 4"/>
<w:LsdException Locked="false" Priority="46"
Name="Grid Table 1 Light Accent 5"/>
<w:LsdException Locked="false" Priority="47" Name="Grid Table 2 Accent 5"/>
<w:LsdException Locked="false" Priority="48" Name="Grid Table 3 Accent 5"/>
<w:LsdException Locked="false" Priority="49" Name="Grid Table 4 Accent 5"/>
<w:LsdException Locked="false" Priority="50" Name="Grid Table 5 Dark Accent 5"/>
<w:LsdException Locked="false" Priority="51"
Name="Grid Table 6 Colorful Accent 5"/>
<w:LsdException Locked="false" Priority="52"
Name="Grid Table 7 Colorful Accent 5"/>
<w:LsdException Locked="false" Priority="46"
Name="Grid Table 1 Light Accent 6"/>
<w:LsdException Locked="false" Priority="47" Name="Grid Table 2 Accent 6"/>
<w:LsdException Locked="false" Priority="48" Name="Grid Table 3 Accent 6"/>
<w:LsdException Locked="false" Priority="49" Name="Grid Table 4 Accent 6"/>
<w:LsdException Locked="false" Priority="50" Name="Grid Table 5 Dark Accent 6"/>
<w:LsdException Locked="false" Priority="51"
Name="Grid Table 6 Colorful Accent 6"/>
<w:LsdException Locked="false" Priority="52"
Name="Grid Table 7 Colorful Accent 6"/>
<w:LsdException Locked="false" Priority="46" Name="List Table 1 Light"/>
<w:LsdException Locked="false" Priority="47" Name="List Table 2"/>
<w:LsdException Locked="false" Priority="48" Name="List Table 3"/>
<w:LsdException Locked="false" Priority="49" Name="List Table 4"/>
<w:LsdException Locked="false" Priority="50" Name="List Table 5 Dark"/>
<w:LsdException Locked="false" Priority="51" Name="List Table 6 Colorful"/>
<w:LsdException Locked="false" Priority="52" Name="List Table 7 Colorful"/>
<w:LsdException Locked="false" Priority="46"
Name="List Table 1 Light Accent 1"/>
<w:LsdException Locked="false" Priority="47" Name="List Table 2 Accent 1"/>
<w:LsdException Locked="false" Priority="48" Name="List Table 3 Accent 1"/>
<w:LsdException Locked="false" Priority="49" Name="List Table 4 Accent 1"/>
<w:LsdException Locked="false" Priority="50" Name="List Table 5 Dark Accent 1"/>
<w:LsdException Locked="false" Priority="51"
Name="List Table 6 Colorful Accent 1"/>
<w:LsdException Locked="false" Priority="52"
Name="List Table 7 Colorful Accent 1"/>
<w:LsdException Locked="false" Priority="46"
Name="List Table 1 Light Accent 2"/>
<w:LsdException Locked="false" Priority="47" Name="List Table 2 Accent 2"/>
<w:LsdException Locked="false" Priority="48" Name="List Table 3 Accent 2"/>
<w:LsdException Locked="false" Priority="49" Name="List Table 4 Accent 2"/>
<w:LsdException Locked="false" Priority="50" Name="List Table 5 Dark Accent 2"/>
<w:LsdException Locked="false" Priority="51"
Name="List Table 6 Colorful Accent 2"/>
<w:LsdException Locked="false" Priority="52"
Name="List Table 7 Colorful Accent 2"/>
<w:LsdException Locked="false" Priority="46"
Name="List Table 1 Light Accent 3"/>
<w:LsdException Locked="false" Priority="47" Name="List Table 2 Accent 3"/>
<w:LsdException Locked="false" Priority="48" Name="List Table 3 Accent 3"/>
<w:LsdException Locked="false" Priority="49" Name="List Table 4 Accent 3"/>
<w:LsdException Locked="false" Priority="50" Name="List Table 5 Dark Accent 3"/>
<w:LsdException Locked="false" Priority="51"
Name="List Table 6 Colorful Accent 3"/>
<w:LsdException Locked="false" Priority="52"
Name="List Table 7 Colorful Accent 3"/>
<w:LsdException Locked="false" Priority="46"
Name="List Table 1 Light Accent 4"/>
<w:LsdException Locked="false" Priority="47" Name="List Table 2 Accent 4"/>
<w:LsdException Locked="false" Priority="48" Name="List Table 3 Accent 4"/>
<w:LsdException Locked="false" Priority="49" Name="List Table 4 Accent 4"/>
<w:LsdException Locked="false" Priority="50" Name="List Table 5 Dark Accent 4"/>
<w:LsdException Locked="false" Priority="51"
Name="List Table 6 Colorful Accent 4"/>
<w:LsdException Locked="false" Priority="52"
Name="List Table 7 Colorful Accent 4"/>
<w:LsdException Locked="false" Priority="46"
Name="List Table 1 Light Accent 5"/>
<w:LsdException Locked="false" Priority="47" Name="List Table 2 Accent 5"/>
<w:LsdException Locked="false" Priority="48" Name="List Table 3 Accent 5"/>
<w:LsdException Locked="false" Priority="49" Name="List Table 4 Accent 5"/>
<w:LsdException Locked="false" Priority="50" Name="List Table 5 Dark Accent 5"/>
<w:LsdException Locked="false" Priority="51"
Name="List Table 6 Colorful Accent 5"/>
<w:LsdException Locked="false" Priority="52"
Name="List Table 7 Colorful Accent 5"/>
<w:LsdException Locked="false" Priority="46"
Name="List Table 1 Light Accent 6"/>
<w:LsdException Locked="false" Priority="47" Name="List Table 2 Accent 6"/>
<w:LsdException Locked="false" Priority="48" Name="List Table 3 Accent 6"/>
<w:LsdException Locked="false" Priority="49" Name="List Table 4 Accent 6"/>
<w:LsdException Locked="false" Priority="50" Name="List Table 5 Dark Accent 6"/>
<w:LsdException Locked="false" Priority="51"
Name="List Table 6 Colorful Accent 6"/>
<w:LsdException Locked="false" Priority="52"
Name="List Table 7 Colorful Accent 6"/>
<w:LsdException Locked="false" SemiHidden="true" UnhideWhenUsed="true"
Name="Mention"/>
<w:LsdException Locked="false" SemiHidden="true" UnhideWhenUsed="true"
Name="Smart Hyperlink"/>
<w:LsdException Locked="false" SemiHidden="true" UnhideWhenUsed="true"
Name="Hashtag"/>
<w:LsdException Locked="false" SemiHidden="true" UnhideWhenUsed="true"
Name="Unresolved Mention"/>
</w:LatentStyles>
</xml><![endif]-->
<br />
<div align="center" class="MsoNormal" style="text-align: center;">
<div style="text-align: left;">
Phonetic data is no longer just for papers on phonetics. Research using quantitative methods, corpus data, and experimental approaches may involve phonetic data for analytical or visualization purposes. There may also simply be a need to visually demonstrate a phonetic pattern in a linguistics paper unrelated to phonetics. For instance, descriptive grammars are stronger and clearer when phonological argumentation is accompanied by phonetic data showing the relevant patterns (Maddieson 2001, Maddieson et al. 2009). The movement to examine more phonetic data within linguistics is motivated by several factors:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
a. It is easier than ever before to provide evidence for one's observations.</div>
<div style="text-align: left;">
b. A greater focus on spoken language corpora means that one must use tools which analyze the speech signal (not just texts or transcriptions). </div>
<div style="text-align: left;">
c. Laboratory phonology has been incorporated into all areas of phonology.</div>
<div style="text-align: left;">
d. Gradient processes within the phonetic signal are relevant to our understanding of social variation and representations in the mental lexicon.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Yet, despite these changes to the field, linguists (and especially students starting off in linguistics) often have trouble visualizing phonetic data within their research. When a figure is unclear, the message does not come across to the audience, and this casts doubt on the observations themselves. Common pitfalls include: (1) the scaling parameters for displaying the acoustics are wrong, so the relevant detail (e.g. dynamic range, F0 range) cannot be seen, (2) the text is not correctly aligned with the acoustics, (3) too much information is displayed (another scaling problem), and (4) no scale is given.<br />
<br />
Drawing well-labelled spectrograms is not difficult, and Praat (Boersma & Weenink 2019) provides several tools for producing polished images (far better than taking a screenshot). This tutorial is designed as the first (of perhaps many) aiming to improve how acoustic phonetic data are visualized.<br />
<br />
<b>I. Initial steps: include a textgrid</b><br />
<br />
(1) Open the sound file that you wish to visualize. In most cases, a reader cannot usefully inspect more than about 6-10 segments in a single image, so make sure the portion you wish to display is no longer than this; otherwise the image will not show the reader much.<br />
<br />
(2) Create a textgrid along with the sound file and segment the portions that you wish to visualize. If you are not sure how to create a textgrid, please see the Praat manual. I have created a simple example here of myself saying the word 'ken' [kʰɛ̃n] (below).<br /><br />(3) Once you have created a textgrid, select the portion of the sound file corresponding to the textgrid and then choose from the File menu "Extract selected TextGrid (preserve times)." This will create a textgrid object spanning exactly the portion you wish to display alongside the spectrogram.<br /><br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSeGEnHIKvTWJv8pj8zInZbVjj_LnUmNgHRFV1p6W06Rqr0lAARk_L1HPVUDZ2BZXUEw2DqP09gTeME7jzwZ42nvlTm4g6qlRAqrjnH-oN-QBKuz503-nUBtjtkgIE64PBtLyHZJH3jmya/s1600/Ken_snapshot.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1044" data-original-width="1600" height="417" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSeGEnHIKvTWJv8pj8zInZbVjj_LnUmNgHRFV1p6W06Rqr0lAARk_L1HPVUDZ2BZXUEw2DqP09gTeME7jzwZ42nvlTm4g6qlRAqrjnH-oN-QBKuz503-nUBtjtkgIE64PBtLyHZJH3jmya/s640/Ken_snapshot.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A spectrogram of the word 'ken.'</td></tr>
</tbody></table>
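For readers who prefer scripting, the annotation steps above can also be sketched in Praat's scripting language. The file name and times below are placeholders for your own recording, and the segmentation itself is still done by hand in the editor:

```praat
# Sketch of steps (1)-(3); "ken.wav" and the times are placeholders.
sound = Read from file: "ken.wav"

# Step (2): create a TextGrid with a single interval tier to segment.
selectObject: sound
textgrid = To TextGrid: "segments", ""

# (Annotate the tier by hand in the editor, then:)
# Step (3): extract just the annotated portion, preserving times.
selectObject: textgrid
part = Extract part: 0.10, 0.45, "yes"
```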
<b>II. Exporting a visible spectrogram</b><br /><br />(4) Praat does not currently allow users to export a spectrogram directly from a sound file; instead, you must extract the visible spectrogram from the editor window.<br /><br />(5) To do this, first select the portion of the sound file that you wish to visualize and click 'sel' (select). Then, from the Spectrum menu, select "Extract visible spectrogram."<br /><br />(6) You should now see a spectrogram object in the Praat objects window.<br /><br /><b>III. Adding layers to create an image</b><br /><br />(7) The key to creating a nice image is to add objects and details in layers. Praat lets you build up an image layer by layer, and you may undo multiple layers at a time in the picture window.<br /><br />(8) Two things to understand about the picture window: (a) it will draw only in the region that you have selected, and (b) it will use whatever presets you have chosen under Pen/Font; it does not revert to a default. Select a fairly large region for your spectrogram, perhaps a 4x6 image.<br /><br />(9) Now, select the spectrogram in the objects window and choose "Draw: Paint..." In the dialog window, the option "Garnish" is often pre-selected. With Garnish selected, Praat prints axis information around the image. We do not want that here, since we will be adding the axes ourselves, so deselect this option (see below).<br /><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7FQ4l6dzw0NdV-oaMrxH5AtSeHEruWT4YoEQFtHTpX-juGggytMhONxaYLR_5niwAjKt64tMlarAgAcvYqIpiT_sZs6890UDXSvy6stE6iMNEebTX-U3ZoRf4m__9vuW4Tls4gNEbJ7H8/s1600/Screen+Shot+2019-12-02+at+4.05.38+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7FQ4l6dzw0NdV-oaMrxH5AtSeHEruWT4YoEQFtHTpX-juGggytMhONxaYLR_5niwAjKt64tMlarAgAcvYqIpiT_sZs6890UDXSvy6stE6iMNEebTX-U3ZoRf4m__9vuW4Tls4gNEbJ7H8/s640/Screen+Shot+2019-12-02+at+4.05.38+PM.png" width="640" /></a></div>
<br />
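In a script, the equivalent of extracting and painting a visible spectrogram is to create a Spectrogram object directly from the Sound and paint it with garnish switched off. The analysis settings below are Praat's standard broadband values, and "ken.wav" is again a placeholder; treat this as a sketch rather than a recipe:

```praat
sound = Read from file: "ken.wav"

# Window length, max frequency, time step, frequency step, window shape.
selectObject: sound
spectrogram = To Spectrogram: 0.005, 5000, 0.002, 20, "Gaussian"

# Reserve a fairly large region of the Picture window (step 8)...
Select outer viewport: 0, 6, 0, 4

# ...and paint without garnish (step 9): time range, frequency range,
# maximum (dB/Hz), autoscaling, dynamic range, pre-emphasis,
# dynamic compression, garnish.
selectObject: spectrogram
Paint: 0, 0, 0, 0, 100, "yes", 50, 6, 0, "no"
```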
(10) This should now produce a spectrogram with no margins in the picture window. That's the first step.<br /><br />(11) Now, from the "Margins" menu in the picture window, select "Draw inner box." This will draw a box around the margins. Note that the thickness of the line can be adjusted under Pen: Line width in the picture window; however, Praat does not allow you to adjust elements after they are drawn, so you must set this before drawing. For now, the preset line width of 1.0 is sufficient. You should have created something like this below:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZKEnIfKHLnFbMoHfftmQHyBGxqV0mYfszU6O5hacL01VcqAooPLHbuIDbRg4OuMdFiHSjO7xbXi4QZszF35hdjJrXQQRh1SOlnHSoJBdFuuetv8Jvym5SNHnhO4tzfvVrTpUl-jRnAkZb/s1600/Screen+Shot+2019-12-02+at+4.16.09+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1070" data-original-width="1531" height="278" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZKEnIfKHLnFbMoHfftmQHyBGxqV0mYfszU6O5hacL01VcqAooPLHbuIDbRg4OuMdFiHSjO7xbXi4QZszF35hdjJrXQQRh1SOlnHSoJBdFuuetv8Jvym5SNHnhO4tzfvVrTpUl-jRnAkZb/s400/Screen+Shot+2019-12-02+at+4.16.09+PM.png" width="400" /></a></div>
<br />(12) Now comes the fun part: we will add the axes in stages. First, let's add a y-axis. From the Margins menu, select Marks: Marks left at the bottom of the menu. We can choose to exclude dotted lines for the moment. Praat knows the scale of the image, so it will label the y-axis as frequency in Hz.<br /><br />(13) Once you have done this, select "Text left" from the Margins menu and enter "Frequency (Hz)." The resulting image should look like the one below:<br /><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGVFAHULjt-sZRkKxKD5ay3Ep0bB7mD1YRizDS3KQM9qKFuJyMLeIYJUAAbPvLXFOMukT7AIHp2MINBYvJsMcOcmO7uSHuWIv8N1hSic9Pc-CYV1gZEeJNeWUlKa_ERREvQltYmT2V0OLK/s1600/Screen+Shot+2019-12-02+at+4.21.46+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="920" data-original-width="1562" height="235" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGVFAHULjt-sZRkKxKD5ay3Ep0bB7mD1YRizDS3KQM9qKFuJyMLeIYJUAAbPvLXFOMukT7AIHp2MINBYvJsMcOcmO7uSHuWIv8N1hSic9Pc-CYV1gZEeJNeWUlKa_ERREvQltYmT2V0OLK/s400/Screen+Shot+2019-12-02+at+4.21.46+PM.png" width="400" /></a></div>
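Scripted, the inner box and the frequency axis amount to three Picture-window commands; remember that pen settings must be set before drawing:

```praat
# Step (11): the inner box around the painted spectrogram.
Line width: 1.0
Draw inner box

# Step (12): left-edge frequency marks (number of marks, write numbers,
# draw ticks, draw dotted lines).
Marks left: 6, "yes", "yes", "no"

# Step (13): the axis label.
Text left: "yes", "Frequency (Hz)"
```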
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
(14) We can continue to add layers this way (including duration on the x-axis), and if we so wished, we could then export this to a pdf document. However, we can also add in the text from the textgrid.<br /><br />(15) To add the text, select a portion of the image larger than the box with the spectrogram itself (see below) and then choose the textgrid file from the objects window. Deselect the "garnish" option again and click OK.<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhx3w77_dPehPPYjyAOsAGVEOzJkVD_eJJTiD5KMHGwlTu6lwQs4h9bzNhRrcWXEXZwDLk8dxdRlfdpHsDvXovWxEzMjigCNLjK60duxSDiq5c6-MN_EiTUzOOKR_-hMv3nZKTMahoyRchW/s1600/Screen+Shot+2019-12-02+at+4.35.30+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhx3w77_dPehPPYjyAOsAGVEOzJkVD_eJJTiD5KMHGwlTu6lwQs4h9bzNhRrcWXEXZwDLk8dxdRlfdpHsDvXovWxEzMjigCNLjK60duxSDiq5c6-MN_EiTUzOOKR_-hMv3nZKTMahoyRchW/s640/Screen+Shot+2019-12-02+at+4.35.30+PM.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
(16) The "show boundaries" option draws the segmental boundaries from your textgrid onto the spectrogram, but the default line width (1.0) is a bit thin for visualization. If you want to adjust this, choose Line width from the Pen menu and set it to something larger (like 1.5 or 1.8). Then draw the textgrid. <br /><br />If you need to change this after the fact, just undo the drawing, change the settings, and then draw the textgrid again.<br /><br />(17) The result should look something like below.<br /><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEja_EgjalGOEG-oGYHxrhhpT1ro1Kb48Xt91LSRnTCHC0YaG8SItp8EwEVEflSU7mo8jlTk88XvUcgPfjhqELmRQ_oOISm8fQye6GwhMhN4y69_IJ1evs3ihwxFo-Wg6v9OY7tOzo5WDg6X/s1600/Screen+Shot+2019-12-02+at+4.31.58+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEja_EgjalGOEG-oGYHxrhhpT1ro1Kb48Xt91LSRnTCHC0YaG8SItp8EwEVEflSU7mo8jlTk88XvUcgPfjhqELmRQ_oOISm8fQye6GwhMhN4y69_IJ1evs3ihwxFo-Wg6v9OY7tOzo5WDg6X/s640/Screen+Shot+2019-12-02+at+4.31.58+PM.png" width="640" /></a></div>
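In script form, these last two steps amount to thickening the pen and drawing the TextGrid without garnish over a slightly larger viewport ("ken.TextGrid" is a placeholder for your extracted textgrid):

```praat
textgrid = Read from file: "ken.TextGrid"

# Enlarge the viewport so the label text fits under the spectrogram.
Select outer viewport: 0, 6, 0, 4.5

# Step (16): thicker boundary lines, set before drawing.
Line width: 1.5

# Time range, show boundaries, use text styles, garnish.
selectObject: textgrid
Draw: 0, 0, "yes", "yes", "no"
Line width: 1.0
```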
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
(18) The last step we might take is to include some acoustic information. Let's suppose we want to add formants to our figure. Select the sound file from the objects window and choose "Analyze spectrum: To Formant (burg)..." This will create a formant object in your objects window.<br /><br />(19) Select the original box portion in the picture window again (not the entire portion with text). Now, select the formant object from your objects window, click "Draw: Speckle...", and make sure you deselect the "garnish" option. This will draw speckles corresponding to your formants. Be sure to set the time range of the drawing dialog to match the range of the spectrogram; if your sound file is <i>longer</i> than the portion you are visualizing, you will otherwise end up with formant values that do not line up with the image.<br /><br />Note that if you lower the dynamic range, Praat will only draw formants within that range, i.e. 20 dB = the loudest 20 dB of the speech signal. The output should look as below:<br /><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0NDUm-ztGPZbiDieS62WjUKF6_XEdr2vDuotrwr-6hGSBUQVLCcXv7VqbFcZgszxRJYHj7zeSx_5Y9DPzfiwd9hMpo0cAHjEc4-Med23eUDVJ5MjWyqpOA1S57y2NWuL-TgSGstdXOE1b/s1600/Screen+Shot+2019-12-02+at+4.59.49+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0NDUm-ztGPZbiDieS62WjUKF6_XEdr2vDuotrwr-6hGSBUQVLCcXv7VqbFcZgszxRJYHj7zeSx_5Y9DPzfiwd9hMpo0cAHjEc4-Med23eUDVJ5MjWyqpOA1S57y2NWuL-TgSGstdXOE1b/s640/Screen+Shot+2019-12-02+at+4.59.49+PM.png" width="640" /></a></div>
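The formant overlay can be scripted the same way. The tracker settings below are Praat's standard values ("ken.wav" is a placeholder), and the time range and frequency ceiling of the speckle should match the painted spectrogram:

```praat
sound = Read from file: "ken.wav"

# Step (18): time step (0 = auto), max number of formants, formant
# ceiling (Hz), window length (s), pre-emphasis from (Hz).
selectObject: sound
formant = To Formant (burg): 0, 5, 5500, 0.025, 50

# Step (19): speckle over the spectrogram's viewport without garnish:
# time range, maximum frequency, dynamic range (dB), garnish.
selectObject: formant
Speckle: 0, 0, 5000, 30, "no"
```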
<br />
(20) We could add further layers, e.g. duration on an x-axis under the text, or F0 on an axis to the right of the spectrogram, but we'll stop here because you probably get the gist by now. The final exported PDF always looks nicer than what appears in the Praat picture window (see below). You can then add labels (arrows, text) using other software.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgigb_8a8puXG6JXqasXot31OS0RIQPCb8iNfXCXpFjn7S5o6SsFcOUvfjziiSaIsZnrbLbFzFvWH83fXqEgOTCN_C7y1HTx5pTTeGHXpQbryaMhpKZqBwvbeDrcJj69pf1ccidVfqU_uhc/s1600/ken.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1467" data-original-width="1600" height="585" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgigb_8a8puXG6JXqasXot31OS0RIQPCb8iNfXCXpFjn7S5o6SsFcOUvfjziiSaIsZnrbLbFzFvWH83fXqEgOTCN_C7y1HTx5pTTeGHXpQbryaMhpKZqBwvbeDrcJj69pf1ccidVfqU_uhc/s640/ken.jpg" width="640" /></a></div>
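Exporting can also be done from a script. "ken.pdf" is a placeholder name; note that on Windows, Praat offers EPS/PNG export rather than PDF:

```praat
# Save the current contents of the Picture window.
Save as PDF file: "ken.pdf"
```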
<b><u>References:</u></b><br />
Boersma, P. and Weenink, D. (2019). Praat: doing phonetics by computer (version 6.1). Computer program. Retrieved from http://www.praat.org/.<br />
<br />
Maddieson, I. (2001). Phonetic fieldwork. In Newman, P. and Ratliff, M., editors,<i> Linguistic Fieldwork</i>, pages 211–229. Cambridge University Press.<br />
<br />
Maddieson, I., Avelino, H., and O’Connor, L. (2009). The Phonetic Structures of Oaxaca Chontal. <i>International Journal of American Linguistics</i>, 75(1):69–103.</div>
</div>
Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com3tag:blogger.com,1999:blog-8009206815446785752.post-16100731073928737802019-08-17T20:50:00.002+02:002019-08-17T21:09:12.260+02:00Readability in reporting statisticsWithin the past 20 years there has been a bit of a (r)evolution in the quantitative methods used in the speech sciences and linguistics. A renewed focus on experimental research in linguistics and the development of <a href="https://en.wikipedia.org/wiki/Laboratory_phonology" target="_blank">laboratory phonology</a> as a field have contributed to this development. Though phonetics has always been an experimental field, it too has benefitted from a renewed interest in quantitative methods. The availability of free and powerful statistical analysis software, such as <a href="https://www.r-project.org/" target="_blank">R</a>, has improved access to tools. Finally, several books focusing on quantitative methods in linguistic sciences have been published, all of which improve the statistical learning curve.<br />
<br />
Yet with any changes to a field come challenges. Since several types of linguistic data violate the assumptions of ANOVA, what method should you use instead? With several newer methods (multi-level modeling, generalized additive models, growth curve analysis, smoothing spline ANOVA, functional data analysis, Bayesian methods, etc.), it is often also unclear what statistic to report. If we are concerned about <a href="https://en.wikipedia.org/wiki/Replication_crisis" target="_blank">replicability</a> in our field, how do we ensure that our methods are clear enough to be replicated? And, importantly, how do we communicate these concerns to both novice and experienced researchers who might not be familiar with them? Since so many methods are new (or new to some of us), we are often tempted to include a fancier model without understanding it fully. How do we ensure we understand it enough to use it?<br />
<br />
These issues are all very important, but we must also not lose sight of our duty as scientists to properly communicate our research. It would be great if our research could "speak for itself." It would be great if we could rely on our readers being so engaged in our results that they never got bored or frustrated reading pages and pages of statistical modeling and tests. It would be great if we could assume that all readers understood the mechanics of each model too. Yet, our research seldom speaks for itself, and readers can be both bored and uninformed. Unless your research findings are truly groundbreaking, you probably have to pay attention to your writing style.<br />
<br />
I'm not an expert in writing or an expert in statistical methods. I teach a somewhat intense graduate course in quantitative methods in linguistics and have been a phonetician for about 15 years (if I include some time in grad school). My graduate education is in linguistics, not mathematical psychology or statistics. But as a researcher/phonetician I am a practitioner of statistical tests, as a reviewer I read many submitted manuscripts in phonetics, and as a professor I frequently evaluate how students talk about statistics in writing. I think that the best way to open up a discourse about how we report statistics in linguistics and whether it is <i>readable</i> or not is to present various strategies and to discuss their pros/cons.<br />
<br />
I should mention that I'll be pulling examples from my own research in phonetics here as well as a few that I've seen in the literature. I am not intending to offend any particular researcher's practice. On the contrary, I feel that it's necessary to bring up some real examples in this discussion (and I've picked some good ones).<br />
<br />
<b>I. The laundry list</b><br />
<b><br /></b>
One practice in reporting statistics is to essentially report all the effects as a list in the text itself. We've all seen this practice, but after digging for an example of it, I was happy to discover that it is not nearly as frequent as I had assumed (or perhaps we've become better writers). So, here's a made-up example:<br />
<br />
<i>There were significant main effects of vowel quality (F[3, 38] = 6.3, p < .001), age (F[6, 12] = 2.9, p < .01), speech style (F[2, 9] = 5.7, p < .001), and gender (F[3, 8] = 3.2, p < .01) and significant interactions of vowel quality x age (F[18, 40] = 2.7, p < .01) and vowel quality x gender (F[12, 20] = 2.4, p < .05), but no significant interaction between age and gender nor between vowel quality and speech style. There was a significant three-way interaction of vowel quality x gender x speech style (F[12, 120] = 2.4, p < .05) but no three-way interaction between either... These effects are seen in the plot of the data shown in Figure 3.</i><br />
<br />
Effect, stat, effect, stat, effect, stat, repeat. It almost sounds like an exercise routine. On the one hand, this method of reporting statistics is comprehensive: all our effects are reported to the reader. We also avoid the issue of <i>tabling your statistics </i>(more on this below). Yet, it reads like a laundry list, and a reader can quickly forget (a) which effect to pay attention to and (b) what each effect means in the context of the hypothesis being explored.<br />
<br />
If the research involves just one or two multivariate models for an entire experiment, the researcher might be forgiven for writing this way, but now let's pretend that there are eight models and you are reading the sample paragraph above eight times within the results section of a paper. Then you go on to experiments 2 and 3 and read the same type of results section two more times. By the end of reading the paper, you may have seen results indicating an effect or non-effect of <i>gender x vowel quality </i>twenty-four times. It truly becomes a slog to recall which effects are important in the complexity of the model and you might be forgiven for losing interest in the process.<br />
<br />
There is an additional problem with the laundry list method - our effects have been comprehensively listed but the linkage between individual effects and an illustrative figure has not been established. It might be clear to the researcher, but it's the reader who needs to interpret just what a <i>gender x vowel quality </i>interaction looks like from the researcher's figure. Without connecting the <i>specific</i> statistic and the specific result, we risk both over-estimating the relevance of our particular effect in relation to our hypothesis (a Type I error) and failing to guide our readers toward interpreting the statistic the right way (producing either Type S or Type M errors). Our practice of reporting statistics can influence our statistical practice.<br />
<br />
<b><u>Tip #1</u>: </b>Connect the model's results to concrete distinctions in the data in the prose itself.<br />
<br />
Now, just what does it look like to connect statistics to the data, and how might we easily accomplish this? To learn this, we need to examine additional methods.<br />
<br />
<b>II. The interspersed method with summary statistics</b><br />
<br />
If it's not already clear, I'm averse to the laundry list method. It's clear that we need to provide many statistical results to the reader, but how do we do this in a way that will engage them with the data/results? I think that one approach is to include summary statistics in the text of the results section immediately after or before the reported statistic. This has three advantages, in fact. First, the reader is immediately oriented to the effect to look for in a figure. Second, we avoid both Type S and Type M errors simultaneously. The sign and the magnitude of the effect are clear if we provide sample means alongside our statistic. Third, it breaks up the monotony found in a laundry list of statistical effects. Readers are less likely to forget about what the statistic means when it's tied to differences in the data.<br />
<br />
I have been trying to practice this approach when I write. I include an excerpt from a co-authored paper here below (DiCanio et al. 2018). As a bit of background, we were investigating the effect of <a href="http://www.glottopedia.org/index.php/Focus_(information_structure)" target="_blank">focus type</a> on the production of words in Yoloxóchitl Mixtec, a <a href="https://en.wikipedia.org/wiki/Mixtec_language" target="_blank">Mixtec language</a> spoken in Guerrero, Mexico. Here, we were discussing the combined effect of focus and stress on consonant duration.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5sFAFwlRksmss-WuHDPMv6eUC-HvZ79QfqD8FL3eAY0zD0VQdMUU39mLMJRgD5ZHmG4d-OWsQQ-f1jVrqc2Dqzy-cdvXPgnBh2zuAdS8jcZW20tY3aGBVNY8biLuibjDjDgbeeou8ui14/s1600/snippet_stat.jpg" imageanchor="1"><img border="0" data-original-height="920" data-original-width="1104" height="333" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5sFAFwlRksmss-WuHDPMv6eUC-HvZ79QfqD8FL3eAY0zD0VQdMUU39mLMJRgD5ZHmG4d-OWsQQ-f1jVrqc2Dqzy-cdvXPgnBh2zuAdS8jcZW20tY3aGBVNY8biLuibjDjDgbeeou8ui14/s400/snippet_stat.jpg" width="400" /></a></div>
<br />
The statistics reported here are t values from a linear mixed effects model using <a href="https://www.jstatsoft.org/article/view/v082i13" target="_blank">lmerTest</a> (Kuznetsova et al. 2017). The first statistic mentioned is the effect of focus type on onset duration. This effect is then immediately grounded in the quantitative differences in the data - a difference between 114 ms and 104 ms. Then, additional statistics are reported. This approach avoids Type S and Type M errors and it makes referring to Figure 2 rather easy. The reader knows that this is a small difference and they might not make much of it even though it is statistically significant. The second statistical effect is related to stress. Here, we see that the differences are more robust - 126 vs. 80 ms. Figure 2, which we referred the reader to above, is shown below.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPnHbaezbB9YoXi9WA5bG-lzdwk4DQ1tpDvTtpqFGOdea00-jsaDYDSWWeXBwDmi35Q4mVHrI8LbgFl6_0dOxVe-byygAl5OHJgeO1iECufCwGHSKtSZsltcsmkGf-50zSCn5hKaqqONdK/s1600/snippet_fig2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="901" data-original-width="1600" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPnHbaezbB9YoXi9WA5bG-lzdwk4DQ1tpDvTtpqFGOdea00-jsaDYDSWWeXBwDmi35Q4mVHrI8LbgFl6_0dOxVe-byygAl5OHJgeO1iECufCwGHSKtSZsltcsmkGf-50zSCn5hKaqqONdK/s640/snippet_fig2.jpg" width="640" /></a></div>
<b><br /></b>
While it is rather easy to get some summary statistics for one's data, what do you do when you need more complex tables of summary statistics? I generally use the ddply() function in the plyr package for R. This function allows you to quickly summarize your data by the fixed effects that you are reporting in your research. Here's an example:<br />
<br />
library(plyr)<br />
ddply(data.sample, .(Focus, Stress), summarize, Duration = mean(Duration, na.rm=TRUE))<br />
<br />
For a given data sample, this will provide mean duration values for the fixed effects of focus and stress. One can specify different summary statistics (mean, sd, median, etc.) and include additional fixed effects. While this may seem rather trivial here (it's just a 2x2 design after all), it ends up being crucially useful for larger multivariate models where there are 2-way and 3-way interactions. If each factor includes more than four levels, a two-way or three-way interaction can become harder to interpret. Leaving this interpretation open to the reader is problematic.<br />
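To make the multi-factor case concrete, here is a minimal, self-contained sketch of that workflow. The data frame and the factor names (Focus, Stress, and a hypothetical third factor, Tone) are made up purely for illustration; the point is just that ddply() returns one row per cell of the design, which you can then quote directly in your prose.

```r
# A made-up data frame for illustration: a 2x2x2 design with factors
# Focus, Stress, and Tone, and a continuous Duration measure (in ms).
library(plyr)

set.seed(42)
data.sample <- data.frame(
  Focus    = rep(c("broad", "narrow"), each = 40),
  Stress   = rep(rep(c("stressed", "unstressed"), each = 20), times = 2),
  Tone     = rep(c("high", "low"), times = 40),
  Duration = rnorm(80, mean = 100, sd = 15)
)

# One row per cell of the three-way design, with a mean, an SD, and a
# cell count -- ready to quote alongside the reported statistics.
cell.means <- ddply(data.sample, .(Focus, Stress, Tone), summarize,
                    mean.dur = mean(Duration, na.rm = TRUE),
                    sd.dur   = sd(Duration, na.rm = TRUE),
                    n        = length(Duration))
print(cell.means)
```

The output is an ordinary data frame (eight rows here), so the cell means for a three-way interaction can be read off directly rather than reconstructed by the reader from a figure.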
<br />
Now, for the person in the room asking "don't post-hoc tests address this?", I would point out that many of the statistical tests that linguists have been using more recently are less amenable to traditional post-hoc tests. (Is there an equivalent to <a href="https://en.wikipedia.org/wiki/Tukey%27s_range_test" target="_blank">Tukey's HSD</a> for different multi-level models?) Also, if there are a number of multivariate models that one needs to report, the inclusion of post-hoc tests within a manuscript will weigh it down. So, even if certain types of post-hoc tests were to address this concern, they would still end up in an appendix or in supplementary materials and essentially hide a potential Type M or Type S error.<br />
<br />
We've now connected our statistics with our data in a clearer way for the reader and resolved the potential for Type S and M errors in the process. I think this is a pretty good approach. It does assume that the audience needs help reading the figure, since the text reiterates what the figure shows. Is this "holding the reader's hand" too much? Keep in mind that you are intimately familiar with your results in a way that the reader is not <i>and </i>the reader has many other things on their mind, so it is always better to hold their attention by guiding them. Also, the point is to communicate your research findings, not to engage in a competition of "whose model is more opaque?". Such one-upmanship is not an indicator of intelligence, but of insecurity.<br />
<br />
What are the downsides though? One potential issue is that the prose can become much longer. You are writing more, so in a context where more words cost more to publish or where there is a strict word limit, this method is less attractive. This issue can be ameliorated by reporting summary statistics just for those effects which are relevant to the hypothesis under investigation. There is another approach here as well - why not just eliminate statistics from the results section prose altogether? If it is the statistics that get in the way of interpreting the relationship between the hypothesis and results, we could just put the statistics elsewhere.<br />
<br />
<b>III. Tabling your stats</b><br />
<br />
Another approach to enhancing the readability of your research is to place the results from statistical tests and models in a table. I'll admit - when I first studied statistics I was told to avoid this. Yet, I can also see the appeal of this approach. Consider that as models have gotten more complex, there are more <i>things</i> to report. If one is avoiding <a href="https://en.wikipedia.org/wiki/Statistical_hypothesis_testing" target="_blank">null hypothesis significance testing</a> or if one is <a href="https://en.wikipedia.org/wiki/Misuse_of_p-values" target="_blank">avoiding p values</a>, a set of different values might need to be reported which would otherwise be clunky within the text itself. At the same time, reviewers have been demanding more replicability and transparency within statistical models themselves. This means that they may wish to see more details - many of which need to be included in a table.<br />
<br />
A very good example of this is found in a recent paper by Schwarz et al. (2019) where the authors investigated the phonetics of the laryngeal properties of Nepali stops. I have included a snippet of this practice from this paper below (reprinted with authors' permission).<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihNBSsS5iFElAexxowPi9LTVZ1tauckX4nNeZKqkAjVmPdCLost4Qo8Q6MHoWo1fVRvKvdZgGmmzRfARc8ZiQ8ZqgjZDtUBhMyiDyHXl0NslQQo47qD43oMoDxoFUaKAVWk2Sg-Re0NCQx/s1600/schwarz_etal.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1062" data-original-width="1083" height="391" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihNBSsS5iFElAexxowPi9LTVZ1tauckX4nNeZKqkAjVmPdCLost4Qo8Q6MHoWo1fVRvKvdZgGmmzRfARc8ZiQ8ZqgjZDtUBhMyiDyHXl0NslQQo47qD43oMoDxoFUaKAVWk2Sg-Re0NCQx/s400/schwarz_etal.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Snippet from p. 123 of Schwarz, Sonderegger, and Goad (2019), reprinted with permission of the authors.</td></tr>
</tbody></table>
The dependent variable in the linear mixed effects model here is VD (voicing duration). The authors refer the readers to a table of the fixed effects. They include <i>p </i>values and discuss the directionality and patterns found within the data by referring the readers to a figure. The paragraph here is very readable because the statistics certainly do not interfere with the prose. The authors have also avoided Type M and Type S interpretation errors by stating the effects' directionality and using adverbial qualifiers, e.g. slightly.<br />
<br />
One general advantage of tabling statistics is that one's results section becomes more readable. When done in a manner similar to what Schwarz et al. do above, readers also do not forget about the statistics completely. This is accomplished by commenting on specific effects in the model even though all the statistics are in the table.<br />
<br />
If this is not done, however, the potential problem is that the reader might forget about the statistics completely. In such a case, the risk for a Type M or Type S error is inflated. Moreover, sometimes the effect you find theoretically interesting is not what is driving improvement to statistical model fit. This is obscured if individual results are not examined in the text at all.<br />
<br />
<b><u>Tip #2</u>: </b>Whether tabling your stats or not, always include prose discussing individual statistical effects. Include magnitude and sign (positive or negative effect) in some way in the prose.<br />
<br />
There is, of course, another alternative here - you can always combine an interspersed method <i>with </i>the tabling of statistical results. This would seem to address a frequent concern among reviewers, namely that they be able to see specific aspects of the statistical model, while also not relegating the model to an afterthought during reading. I could talk about this method in more detail, but it seems as if most of the main points have been covered.<br />
<br />
<b>IV. Final points</b><br />
There are probably other choices that one could make in writing up statistical results and I welcome suggestions and ideas here. As phonetics (and linguistics) have grown as fields, there has been a strong focus on statistical methods but perhaps less of an overt conversation about how to discuss such methods effectively in research. One of the motivations for writing about these approaches is that, when I started studying phonetics in graduate school, much of what I saw in the speech production literature seemed to follow the laundry list approach. Yet, if you have other comments, please let me know.<br />
<br />
<u>R<span style="font-family: "times" , "times new roman" , serif;">eferences:</span></u><br />
<span style="font-family: "times" , "times new roman" , serif;">
DiCanio, C., Benn, J., and Castillo García, R. (2018). The phonetics of information structure in Yoloxóchitl Mixtec. <i>Journal of Phonetics</i>, 68:50–68.</span><br />
<span style="font-family: "times" , "times new roman" , serif;"><br /></span>
<span style="font-family: "times" , "times new roman" , serif;"><span style="font-family: "times" , "times new roman" , serif; font-size: small;">Schwarz, M., Sonderegger, M., and Goad, H. (2019). Realization and representation of Nepali laryngeal contrasts: Voiced aspirates and laryngeal realism. <i>Journal of Phonetics</i>, 73:113–127.</span></span><br />
<span style="font-family: "times" , "times new roman" , serif;"><span style="font-family: "times" , "times new roman" , serif; font-size: small;"><br /></span></span>
<style type="text/css">
p.p1 {margin: 0.0px 0.0px 0.0px 0.0px; font: 10.0px Helvetica; color: #000000}
</style>
<br />
<div class="p1">
<span style="font-family: "times" , "times new roman" , serif; font-size: small;"><span style="font-family: "times" , "times new roman" , serif;">Kuznetsova, A., Brockhoff, P. B., and Christensen, R. H. B. (2017). lmerTest Package: Tests in Linear Mixed Effects Models. <i>Journal of Statistical Software</i>, 82(13):1–26.</span></span></div>
<u></u>Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com1tag:blogger.com,1999:blog-8009206815446785752.post-69176401377853491652019-08-05T21:38:00.000+02:002019-08-05T21:38:26.818+02:00Is it Trique or Triqui?<span style="font-size: large;">Though I am a linguist who has worked on several languages over the years, one of the languages (or language groups) that I have spent the most time studying is <a href="https://en.wikipedia.org/wiki/Trique_language" target="_blank">Triqui</a>. There are three major Triqui languages (Copala, Itunyoso, and Chicahuaxtla) and though the latter two have some degree of mutual intelligibility, the Copala dialect/language is mostly unintelligible to speakers of the other two dialects/languages.<br /><br />There are all sorts of interesting things about these languages and about indigenous languages in Mexico, more generally. However, one of the persistent questions I get asked is about the name of the language itself - "is it Trique [</span><span style="font-size: large;">ˈ</span><span style="font-size: large;">tʰɹike] or Triqui [ˈt</span><span style="font-size: large;">ʰɹiki]?" The answer to this is rather simple - in Spanish used by both Triqui speakers and non-Triqui speakers in Mexico, it's [</span><span style="font-size: large;">ˈ</span><span style="font-size: large;">tɾiki]. So, the closest equivalent in English is </span><span style="font-size: large;">[ˈt</span><span style="font-size: large;">ʰɹiki], with a final [i] sound.<br /><br />But the follow-up question is usually "Why is it spelled with an "e" then?" To understand this, it's necessary to understand a little bit about dialectal differences in the languages and linguistic practice into the 20th century. 
To begin, the name of the language ostensibly comes from a spanification (or <i>castellanización) </i>of the Triqui phrase /tʂeh³ (k)kɨh³/, 'father/padre + mountainside/monte', meaning something like 'father of the mountain' in the Chicahuaxtla dialect, though this is a bit debatable. There is another word /</span><span style="font-size: large;">tʂːeh³²/ (Itunyoso) or </span><span style="font-size: large;">/</span><span style="font-size: large;">tʂeh³²/</span><span style="font-size: large;"> (Chicahuaxtla and Copala) meaning 'camino' or 'road/path.' So, the name itself may have come from a phrase meaning 'the path of the mountainside.'<br /><br />One thing to notice is that the Chicahuaxtla dialect retains the central vowel /ɨ/ where it has merged with /i/ in the Itunyoso dialect and, in some contexts, with /u/ in the Copala dialect. </span><span style="font-size: large;">So, the word for 'mountainside/monte' retains this vowel in Chicahuaxtla where the word is /k</span><span style="font-size: large;">ːih³/ in Itunyoso Triqui and /kih³/ in Copala Triqui. </span><span style="font-size: large;">This vowel also exists in many Mixtec languages (Triqui is Mixtecan) and is reconstructed for Proto-Mixtec (Josserand, 1983).<br /><br />The first Triqui language to be described was the Chicahuaxtla dialect (Belmar 1897) and he wrote the name of the language as <i>Trique. </i>Now, Belmar was not particularly adept at transcribing many of the nuanced phonetic details of many languages. His tonal transcription is non-existent and he misses many important suprasegmental contrasts. However, he chose "e" here because he heard a difference between /i/ and /</span><span style="font-size: large;">ɨ/ and it was customary at the time to transcribe this latter vowel with "e." This practice goes back to very early Mixtecan/Otomanguean philology - the Dominican friar Antonio de los Reyes (1593) used "e" to transcribe this vowel in Teposcolula Mixtec. 
So, the six historical Mixtec vowels are, at least in old historical sources, transcribed as /i/ "i", /e/ "ai", /a/ "a", /o/ "o", /u/ "u", /</span><span style="font-size: large;">ɨ/ "e." The IPA certainly did not exist during Belmar's time and this practice is simply an extension of a Mexican philological tradition.<br /><br />Incidentally, the use of 'e' for transcribing non-front unrounded vowels is not limited to languages in Mexico. The romanization of Chinese, called <a href="https://en.wikipedia.org/wiki/Pinyin" target="_blank">pinyin</a>, uses "e" for the vowel /ɤ/, found in many Chinese languages. This practice seems to go back to earlier romanizations of Chinese; in fact, the earliest grammar of Chinese was <i>Arte de la lengua Mandarina, </i>written by another Dominican friar, <a href="https://en.wikipedia.org/wiki/Francisco_Varo" target="_blank">Francisco Varo</a>. Though, as far as I can tell, he did not use "e" in his romanization of Chinese - that came later.</span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">The earliest work on Triqui written in English is Longacre (1952), and he must have simply taken the practice of writing the language with an "e" from Belmar and other Spanish sources. Nowadays, it is written with an "i" in Spanish. Yet, because older sources used an "e" (including all of Hollenbach's work on the Copala dialect, from 1973 to 1992), the spelling with an "e" has stuck around.</span><br />
<br /><br />References:<br />Belmar, F. (1897). <i>Lenguas del Estado de Oaxaca: Ensayo sobre lengua Trique</i>. Imprenta de Lorenzo San-Germán.<br /><br />Hollenbach, B. E. (1973). La aculturación lingüística entre los triques de Copala, Oaxaca. <i>América Indígena</i>, 33:65–95.<br /><br />Hollenbach, B. E. (1992). A syntactic sketch of Copala Trique. In Bradley, C. H. and Hollenbach, B. E., editors, <i>Studies in the syntax of Mixtecan Languages</i>, volume 4. Dallas: Summer Institute of Linguistics and University of Texas at Arlington.<br /><br />Josserand, J. K. (1983). <i>Mixtec Dialect History</i>. PhD thesis, Tulane University.<br /><br />
de Los Reyes, F. A. (1593). <i>Arte en Lengua Mixteca</i>. Casa de Pedro Balli, Mexico, Comte H. de Charencey edition.<br /><br />
Longacre, R. E. (1952). Five phonemic pitch levels in Trique. <i>Acta Linguistica</i>, 7:62–81.<br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;"><br /></span>
<span style="font-size: large;"><br /></span>Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com1tag:blogger.com,1999:blog-8009206815446785752.post-89906960993075315642019-05-03T20:04:00.004+02:002019-05-03T20:04:52.432+02:00Compassion in the academyOne of the difficulties I find in being an academic is the standards that you place on yourself. Many of us have gone from doing excessively well in primary school, high school, and college to excelling in graduate school and beyond. At each stage it can feel like a competition. The academic job market is also a competition - you compete for limited positions at limited universities in limited places you would like to live. Yet, if you love research and teaching, then you find yourself committing to being in the race and you try to consistently hold yourself to a high standard.<div>
<br /></div>
<div>
The sense of competition does not end with getting an academic job either - you compete for grants, for tenure, for papers to be accepted, and for recognition. All of this can wear your spirit down and burn you out. And, frankly, many of us do not want to be "warriors in the fight." We want to be curious and explore our interests and help students become researchers in the process.</div>
<div>
<br /></div>
<div>
One particularly exasperating area of academia is the article review process. Whether I wish to or not, as an author, I often take comments to heart. A criticism of a particular method or point can feel like a criticism of me as a researcher. Replying to reviewers can feel like standing up before a tribunal which is judging all of your perceived defects. It is the place where your harsh, internalized judgment appears to be validated by other people in your field.</div>
<div>
<br /></div>
<div>
In reality though, I have grudgingly learned that my internalized judgments are rarely accurate. Put another way, if a colleague of mine came to see me and verbalized the same self-criticisms, I would be likely to say that they were mistaken. Other people almost always see us better than we see ourselves. You might believe the internalized criticisms, though, if you struggle to find self-validation in other ways, for instance if you are a minority and feel left out. This can make submitting papers and responding to reviews rather scary. </div>
<div>
<br /></div>
<div>
Submitting your work for publication does not have to feel this way though. So, I began to think about how some of the dynamics of the review process might be changed to be more encouraging. I've compiled these notes below as a way to encourage mindfulness and compassion in academia, as both an author reading reviews and as a reviewer.</div>
<div>
<br /></div>
<div>
<b>1. Praise is just as valid as critique</b></div>
<div>
<br /></div>
<div>
Either as a reviewer or as an author receiving a review, we think of <i>any</i> praise as faint praise. If someone tells us "The topic is really interesting and I like the way in which you analyzed X and Y..." we almost universally are looking for a 'but....' to follow. The positive commentary is instantaneously invalidated. We believe it's inserted just to lessen the blow of the criticism to follow.</div>
<div>
<br /></div>
<div>
Yet, it is equally valid to point out positive aspects of the work as it is to point out areas in need of improvement. Doing so also does not need to involve lowering one's standards for scholarship. As a parallel, consider the comments you might provide on students' homework assignments. If you only ever pointed out problems on the homework and never offered praise for work done well, you would probably get labeled a harsh and demanding academic. </div>
<div>
<br /></div>
<div>
We have come to expect that the review process will be all criticism, so we brace ourselves when we receive a review. We open it, put it down, walk away from it for weeks, and then pick it up again when our emotions have subsided. This is a sign that more compassion needs to be part of the process. Incidentally, being mindful in providing and interpreting praise in one's work is a significant way to create gratitude for the process. Wouldn't it be a paradigm shift if we saw peer review this way?</div>
<div>
<br /></div>
<div>
(As a side note, if you find yourself reading this and mentally dismissing the advice, consider that other people might not be as able to brush off criticism as well as you. Then, consider giving empathy a try.)</div>
<div>
<br /></div>
<div>
<b>2. Your pet peeve might not be crucial</b></div>
<div>
<br /></div>
<div>
There is no perfect research in academia. Each paper that is submitted to a journal has its flaws. A large part of what makes scientific discovery move forward is addressing flaws in future studies. Regardless of how methodologically good a paper is, it is easy to find some flaw that you, as a reviewer, might interpret as a critical error. This issue can get easily blown out of proportion if there is little else to criticize in the paper. Framing personal pet peeves in relation to the aspects of the research that are sound provides a more useful perspective for the authors of the paper.</div>
<div>
<br /></div>
<div>
<b>3. The when of the review and asking for help</b></div>
<div>
<b><br /></b></div>
<div>
Sometimes academics review papers when they are exhausted or unable to concentrate. A hallmark of this type of review is an excessive number of questions related to clarity. The paper may otherwise be clear, but the mental state of the reviewer has deteriorated. The tone of the review can be set badly if a reviewer gets a general feeling of unease because they read the paper while tired - even if they return to it the next day refreshed. It is the job of editors to help guide authors through such reviews, especially if questions of clarity come from just one of the reviewers.</div>
<div>
<br /></div>
<div>
What this means in practice is that authors should ask for help. Unless your research is simply rejected out of hand with no review (rare in my field), editors want to see the publication eventually succeed. I have always had a better experience in revising a paper if I have discussed the review with the editors explicitly than when I have skipped doing so. So, ask if the issues of clarity are serious. Ask if a small point has been blown up by a reviewer into something much bigger than it needs to be. Even if the editors agree with all the points raised by the reviewers, they will have helpful advice about how to tackle the points constructively.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-15195047756089295432019-04-08T19:31:00.002+02:002019-04-08T19:33:09.730+02:00The forgotten public universities<div class="" style="caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; text-size-adjust: auto;">
In her recent article "<a href="https://www.newyorker.com/news/our-columnists/how-i-would-cover-the-college-admissions-scandal-as-a-foreign-correspondent" target="_blank">How I Would Cover the College-Admissions Scandal as a Foreign Correspondent</a>”, Masha Gessen states:</div>
<div class="" style="caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; text-size-adjust: auto;">
<br class="" /></div>
<div class="" style="caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; text-size-adjust: auto;">
"Yes, in most of the world, young people go to university in the city where they grew up, but in the United States, I would explain, most young people aspire to “go away” to college, and that means that even a pre-application tour is a costly and time-consuming proposition.”</div>
<div class="" style="caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; text-size-adjust: auto;">
<br class="" /></div>
<div class="" style="caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; text-size-adjust: auto;">
I would like to point out that this is most likely incorrect. According to a report by the National Center for Educational Statistics (<a class="" href="https://nces.ed.gov/programs/coe/indicator_cha.asp">https://nces.ed.gov/programs/coe/indicator_cha.asp</a>), undergraduate enrollment at public universities in 2016 was 13.7 million students, while undergraduate enrollment at private universities was 2.7 million students. Public university students outnumber private ones by a factor of five. As a faculty member at a large public university, I can tell you that the majority of the undergraduate body is local. That is, they did not go away to university (or go away very far). So, in fact, most students in the US do indeed go to the university in the state where they grew up. Though a percentage of these students may have strived to attend private universities, most have believed public institutions to be a good deal in financial terms (they cost about one-third as much as private universities) and sufficiently good academically. It is the large public universities which teach most students in the US. It is also, incidentally, the large public universities that do much of the federally funded research in the US. </div>
<div class="" style="caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; text-size-adjust: auto;">
<br class="" /></div>
<div class="" style="caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; text-size-adjust: auto;">
The recent scandal regarding college admissions touches upon our hope in meritocratic institutions in the US. It leads us to important conversations. Yet, this criticism is itself elitist. It reflects the idea that the only educational systems worth discussing are those which are private, and, whether intentional or not, it excludes at least 80% of the students attending universities and colleges in the US. </div>
Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-50790215621548620752019-01-09T05:14:00.001+01:002019-02-10T05:10:28.822+01:00Linguistic common ground as privilege2019 was named <a href="https://en.wikipedia.org/wiki/International_Year_of_Indigenous_Languages" target="_blank">the International Year of Indigenous Languages</a> by UNESCO. My friends and colleagues at the recent <a href="https://www.linguisticsociety.org/event/lsa-2019-annual-meeting" target="_blank">Annual meeting of the Linguistic Society of America</a> (LSA) have been on Facebook, Twitter, and other social media discussing what this means for Linguistics as a field. With respect to publishing, several journals have pushed to emphasize linguistic research on indigenous languages. The LSA's own flagship journal, <a href="https://www.linguisticsociety.org/lsa-publications/language" target="_blank">Language</a>, has put out a <a href="https://journals.linguisticsociety.org/language/index.php/language/indigenous_languages" target="_blank">call for submissions</a> on different indigenous languages of the world. The Journal of the Acoustical Society of America has even put out a <a href="https://asa.scitation.org/jas/info/specialissues/cfp_050319?Track=JASASTNOV2018&utm_source=AIP%20Publishing&utm_medium=email&utm_campaign=9982876_JASA%20Special%20Issues%20Nov%202018&dm_i=1XPS,5XYU4,GSR903,N9TU5,1" target="_blank">call for submissions on under-represented languages</a>.<br />
<br />
There may be other journals too (which I am currently unaware of) attempting to emphasize how work on indigenous languages enhances our knowledge of language more generally, improves scholarship, and, in many cases, can promote the inclusion of ethnic minorities speaking or revitalizing these languages. This is all very positive and, as a linguist and scholar who studies indigenous languages of Mexico, I applaud the effort. <br />
<br />
Will it be enough though? If linguists are serious about promoting the equality of indigenous languages and cultures in publishing, a greater type of paradigm shift needs to take place in what we believe is worthy of scholarship.<br />
<br />
<b>1. Not just a numbers game</b><br />
<b><br /></b>
When you read academic articles in linguistics, chances are that the topic is examined in a language that you know about. This is partly due to speaker population. There is extensive scholarship in English, Mandarin Chinese, Hindi/Urdu, Spanish, Arabic, French, Russian, and Portuguese because <a href="https://en.wikipedia.org/wiki/List_of_languages_by_total_number_of_speakers" target="_blank">4.54 billion people</a> speak these as their first or second languages.<br />
<br />
Where linguistic scholarship has developed has also played a strong role. There are 263 million first language speakers of Bengali and 23 million first language speakers of Dutch in the world. Bengali outnumbers Dutch by more than 11:1. Yet, a quick search on Google Scholar for "Bengali phonetics" reveals 4,980 hits, while a parallel "Dutch phonetics" search reveals 52,600 hits. A search for "Bengali syntax" reveals 11,800 hits while "Dutch syntax" reveals 180,000 hits. When it comes to academic articles, the numbers are reversed: Dutch outnumbers Bengali by roughly 10:1 in phonetics and 15:1 in syntax.<br />
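For the curious, the ratios just quoted can be checked with a few lines of arithmetic. The hit counts below are the ones reported in the text (a snapshot; the live Google Scholar numbers will certainly have drifted since the search was run):

```python
# Google Scholar hit counts as quoted in the text (a snapshot from
# the time of writing; live numbers will differ).
hits = {
    ("Bengali", "phonetics"): 4_980,
    ("Dutch", "phonetics"): 52_600,
    ("Bengali", "syntax"): 11_800,
    ("Dutch", "syntax"): 180_000,
}

for topic in ("phonetics", "syntax"):
    ratio = hits[("Dutch", topic)] / hits[("Bengali", topic)]
    print(f"{topic}: Dutch outnumbers Bengali by about {ratio:.1f}:1")
# → phonetics: about 10.6:1, syntax: about 15.3:1
```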
<br />
Dutch phonetics and syntax are not inherently more interesting than Bengali phonetics and syntax. Bengali has a far more interesting consonant system (if you ask me as a phonetician). Even Bengali morphology, which is far more complex than Dutch morphology, is under-studied relative to Dutch. Dutch speakers just happen to reside in economically advantaged countries where there has been active English-based scholarship on their language for many years. Bengali speakers do not.<br />
<br />
<b>2. </b><b>Small phenomena in big languages, big phenomena in small languages</b><br />
<br />
A consequence of studying a language that has a history of academic scholarship is that many questions have already been examined. There is a literature on very specific aspects of the sound system of English (look up "English VOT", for instance) and Dutch morphology (look up "Dutch determiners", for instance). If linguists wish to study these languages and make a contribution, they must take out their magnifying glass and zoom in on specific details of what is already a restricted area.<br />
<br />
To a great degree, the field of linguistics respects this approach. Scholarship is enhanced by digging deeply into particular topics even in well-studied languages. Moreover, since many members of the field are familiar (at least passively) with the basic analyses of phenomena in many well-studied languages, linguists zooming in on the particular details benefit from shared common ground. As a result, linguists are able to give talks on very specific topics within the morphology, syntax, phonology, or pragmatics of well-studied languages. One can find dissertations focusing on specific types of constructions in English (small clause complements) or specific morphemes in Spanish (such as the reflexive clitic 'se'). This is the state of the field. Linguists all agree that such topics are worthy of scholarship.<br />
<br />
But imagine if you were asked to review an abstract or a paper where the author chose to zoom in on the specific details of a particular syntactic construction in Seenku (a Mande language spoken by 17,000 people in Burkina Faso, see work by <a href="http://www.dartmouth.edu/~mcpherson/" target="_blank">Laura McPherson</a>) or how tone influences vowel lengthening in a specific <a href="https://en.wikipedia.org/wiki/Mixtec_language" target="_blank">Mixtec language</a> (spoken in Mexico). These are minority and indigenous languages. Many linguists would agree that these topics are worthy of scholarship if they contribute something to our knowledge of these languages and/or to different sub-disciplines of linguistics, but where do we place the bar by which we judge?<br />
<br />
In practice, linguists often think these topics are limited in scope - even though they are no more limited than topics focusing on the reflexive clitic 'se' in Spanish. A consequence of this is that those working on indigenous languages must seek to situate their work in a broader perspective. This might mean that the research becomes comparative within a language family or that the research is a case study within a broader survey on similar phenomena. Rather than magnifying more deeply, if they want their work to be considered by the field at large, linguists working on indigenous languages often take the "go wide" approach instead.<br />
<br />
Note that this is not inherently negative. After all, we should all seek to situate our work in broader typologies and compare our findings to past research. It's just that the person working on the Spanish reflexive clitic is seldom asked to do the same. Their contribution to scholarship is not questioned.<br />
<br />
<b>3. </b><b>Privilege and a way to move forward</b><br />
<br />
For the most part, academic linguists believe that all languages have equal expressive power. It is possible to express any human idea in any language. Linguists also believe (or know) that language is arbitrary. De Saussure famously argued that the relation between the signified and the signifier is arbitrary. In other words, it is equally valid to express plurality on nouns with an /-s/ suffix (in English) or a vowel change (in Italian and Polish). No specific relation is better than another in a different language. If we take these ideas seriously, research on certain languages should not be more subject to scrutiny than research on other ones.<br />
<br />
Whether intended or not, both people and languages can be granted <a href="https://en.wikipedia.org/wiki/Social_privilege" target="_blank">privilege</a>. Scholars working on well-studied languages benefit from a shared linguistic common ground with other scholars which allows them to delve into deep and specific questions within these languages. This is a type of academic privilege. Without this common ground, scholars working on indigenous languages can sometimes face an uphill battle in publishing. And needing to prove one's validity is a hallmark of institutional bias.<br />
<br />
So, how do we check our linguistic privilege in the international year of indigenous languages? As a way of moving positively forward into 2019, I'd like to suggest that linguists think of the following questions when they read papers, review abstracts/papers, and attend talks which focus on indigenous languages. This list is not complete, but if it has made you pause and question your perspective, then it has been useful.<br />
<br />
<i>Question #1: What languages get to contribute to the development of linguistic theory? Which languages are considered synonymous with "Language"?</i><br />
<br />
If you have overlooked an extensive literature on languages you are unfamiliar with and include only those you <i>are</i> familiar with, you might be perpetuating a bias against indigenous languages in research. "Language" is not synonymous with "the languages I have heard of." Findings in indigenous languages are often considered "interesting footnotes" that are not incorporated into our more general notions of how we believe language works.<br />
<i><br />Question #2: Which phenomena are considered "language-specific"?</i><br />
<br />
There is value to exploring language-specific details, but more often than not, phenomena occurring in indigenous languages are considered exotic or strange relative to what is believed to be typical. Frequently, judgments of typicality reflect a bias towards well-studied languages.<br />
<br />
<i>Question #3: Do you judge linguists working on indigenous languages or articles on indigenous languages by their citation index? (h/t to Laura McPherson)</i><br />
<br />
Citations of work on indigenous languages are often lower than citations of work on well-studied languages. In an academic climate where one's citation index is often considered as a marker of the value of one's work, one might reach the faulty conclusion that an article on an indigenous language with fewer citations is poor scholarship.<br />
<br /><i>Question #4: Do you quantify the number of languages or the number of speakers that a linguist works with?</i><br />If a linguist studies one or two indigenous/minority languages, do you judge their knowledge of linguistics/language to be lesser than that of someone who does research on one or two well-studied languages? If so, you are privileging well-studied languages.<br /><br />
I'd like to specifically note that I am not a sociologist of language or a sociolinguist. Others have undoubtedly worked on these questions in greater depth.<br />
<br />
<b>What is phonetics? A 20 minute guide for academics</b> (December 30, 2018)<br />
<br />
<span style="font-size: large;">As a phonetician, I often get so absorbed within my own area of study that I fail to notice other perspectives. My field is devoted to the study of speech sounds. It is important to humanity, to science, and to knowledge, but so are many other fields which I may not even recognize as distinct research areas in their own right. To get beyond this, it is important to try to educate the public and, in particular, other academics outside one's field.</span><br />
<span style="font-size: large;"><br /></span>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://upload.wikimedia.org/wikipedia/en/5/50/Siri_on_iOS.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><span style="font-size: large;"><img border="0" data-original-height="420" data-original-width="237" height="200" src="https://upload.wikimedia.org/wikipedia/en/5/50/Siri_on_iOS.png" width="112" /></span></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: large;">Figure 1: Siri cost Apple roughly $300 million to create and involved speech </span><span style="font-size: large;">recording (phonetics), speech processing (phonetics), and speech annotation (phonetics).</span></td></tr>
</tbody></table>
<span style="font-size: large;">Telling the public that phonetics is an important field is easy. People accept that speech sounds are important things to study. Many people have opinions about the sounds of language. Ask almost anyone their opinions about different dialects and they will immediately voice them (their opinions, that is). Tell them about technology like Siri or Alexa and it is not much of a stretch to get them to realize that people had to think about speech acoustics and analyzing speech signals in order to create these things.</span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">Trying to educate other academics about phonetics is a rather more difficult task, however. Academics are a proud group composed of people who make a living being authorities on arcane topics. Tell them that you study a topic that they believe they know about (like language, ahem) and they will be highly motivated to voice their opinion, even though they may know as much about it as the average non-academic. Frankly, academics are terrible at admitting ignorance. I'll admit that I struggle with this too when it comes to areas that I think I know about. In response to this, I have created a short guide to phonetics as a way to tell other academics two things: (1) phonetics is an active area of research and (2) there is a lot we do not know about speech.</span><br />
<span style="font-size: large;"><br /></span>
<b><span style="font-size: large;">I. Starting from Tabula Rasa</span></b><br />
<span style="font-size: large;">Let's start with what phonetics is and is not. Phonetics is the study of how humans produce speech sounds (articulatory phonetics), what the acoustic properties of speech are (acoustic phonetics), and even how air and breathing are controlled in producing speech (speech aerodynamics). It has nothing to do with phonics, which is the connection between speech sounds and letters in an alphabet. In fact, it has little to do with reading whatsoever. After all, there are no letters in spoken language - just sounds (and in the case of sign languages, just gestures).</span><br />
<span style="font-size: large;"><br /></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<span style="font-size: large;">So imagine a world where you have to think about language but are unable to refer to the letters of your alphabet. This is, in fact, one of the motivations for the <a href="https://en.wikipedia.org/wiki/International_Phonetic_Alphabet" target="_blank">International Phonetic Alphabet</a>, or the IPA. Consonant sounds are represented using the IPA and are principally defined in three ways:</span><br />
<ol>
<li><span style="font-size: large;"><b>Voicing</b> - whether your vocal folds (colloquially called your "vocal cords") are vibrating when you make the speech sound.</span></li>
<li><span style="font-size: large;"><b>Place of articulation</b> - where you place your tongue or lips to make the speech sound.</span></li>
<li><span style="font-size: large;"><b>Manner of articulation</b> - either how tight of a seal you make between your articulators in producing the speech sound or the cavity that the air flows through (your mouth or your nose being the two possibilities).</span></li>
</ol>
<span style="font-size: large;">Vowel sounds are a bit harder to define, but phoneticians distinguish them in terms of (a) how open your jaw is, (b) where your tongue is, and (c) what your lips are doing.</span><br />
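To make the three-way classification concrete, here is how a phonetician's description of consonants might be encoded as a small lookup table. This is my own illustrative sketch covering a handful of English consonants, not a complete inventory:

```python
# Each IPA consonant symbol mapped to its (voicing, place, manner).
# A small illustrative subset of English consonants.
CONSONANTS = {
    "p": ("voiceless", "bilabial", "plosive"),
    "b": ("voiced", "bilabial", "plosive"),
    "m": ("voiced", "bilabial", "nasal"),
    "t": ("voiceless", "alveolar", "plosive"),
    "d": ("voiced", "alveolar", "plosive"),
    "s": ("voiceless", "alveolar", "fricative"),
    "z": ("voiced", "alveolar", "fricative"),
    "k": ("voiceless", "velar", "plosive"),
}

def describe(symbol):
    """Return a description like 'voiced bilabial nasal' for an IPA symbol."""
    voicing, place, manner = CONSONANTS[symbol]
    return f"{voicing} {place} {manner}"

print(describe("m"))  # → voiced bilabial nasal
print(describe("s"))  # → voiceless alveolar fricative
```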
<div>
<span style="font-size: large;"><br /></span></div>
<div>
<span style="font-size: large;">Why define speech this way? First, it is scientifically accurate and testable. After all, the same sound should be produced in a similar way by different speakers. We can measure exactly how sounds are produced by imaging the tongue as it moves, or by recording a person and looking at an image of the acoustics of specific sounds. The figure below shows just one method phoneticians can use to examine how speech is produced.</span><br />
<span style="font-size: large;"><br /></span>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaJpQh9Sj1jhNOF2R_E9E8_uBCJvmDCEkS71_21hFvce8odHa6xVuFzAZNS88j02pz8ytBszs1j2YWXr_8YeIT61SCUdq1Ijyf-U7AJks1rHZz5Rmk1wY4wmj0Z0-U-upS2u2bEx1Ry2oY/s1600/Mielke_etal_image.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><span style="font-size: large;"><img border="0" data-original-height="809" data-original-width="1139" height="283" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaJpQh9Sj1jhNOF2R_E9E8_uBCJvmDCEkS71_21hFvce8odHa6xVuFzAZNS88j02pz8ytBszs1j2YWXr_8YeIT61SCUdq1Ijyf-U7AJks1rHZz5Rmk1wY4wmj0Z0-U-upS2u2bEx1Ry2oY/s400/Mielke_etal_image.jpg" width="400" /></span></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: large;">Figure 2: An ultrasound image of the surface of the tongue, from Mielke, Olson, Baker, and </span><span style="font-size: large;">Archangeli (2011). Phoneticians can use ultrasound technology to view tongue motion over time.</span></td></tr>
</tbody></table>
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">Second, this way of looking at speech is also useful for understanding grammatical patterns. When we learn a language, we rely on regularities (grammar) to form coherent words and sentences. For a linguist (and a phonetician), grammar is not something learned in a book and explicitly taught to speakers. Rather, it is tacit knowledge that we, as humans, acquire by listening to other humans producing language in our environment.</span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">To illustrate this, I'll give you a quick example. In English, you are probably familiar with the plural suffix "-s." You may not have thought about it this way, but this plural can be pronounced three ways. Consider the following words:</span><br />
<span style="font-size: large;"><br /></span>
<br />
<table align="center">
<tbody>
<tr>
<td><span style="font-size: large;"><b>[z] plural</b> </span></td>
<td></td>
<td><span style="font-size: large;"><b>[s] plural</b> </span></td>
<td></td>
<td><span style="font-size: large;"><b>[ɨz] plural</b> </span></td>
</tr>
<tr>
</tr>
<tr>
<td><span style="font-size: large;">drum - drum[z] </span></td>
<td></td>
<td><span style="font-size: large;">mop - mop[s] </span></td>
<td></td>
<td><span style="font-size: large;">bus - bus[ɨz] </span></td>
</tr>
<tr>
<td><span style="font-size: large;">rib - rib[z]</span></td>
<td></td>
<td><span style="font-size: large;">pot - pot[s]</span></td>
<td></td>
<td><span style="font-size: large;">fuzz - fuzz[ɨz]</span></td>
</tr>
<tr>
<td><span style="font-size: large;">hand - hand[z]</span></td>
<td></td>
<td><span style="font-size: large;">bath - bath[s]</span></td>
<td></td>
<td><span style="font-size: large;">wish - wish[ɨz]</span></td>
</tr>
<tr>
<td><span style="font-size: large;">lie - lie[z]</span></td>
<td></td>
<td><span style="font-size: large;">tack - tack[s]</span></td>
<td></td>
<td><span style="font-size: large;">church - church[ɨz]</span></td>
</tr>
</tbody></table>
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">In the first column, the plural is pronounced like the "z" sound in English. In the IPA this is transcribed as [z]. In the second column, the plural is pronounced like the "s" sound in English - [s] in the IPA. In the third column, the plural is pronounced with a short vowel sound and the "z" sound again, transcribed as [ɨz] in the IPA. </span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">Why does the plural change its pronunciation? The words in the first column all end with a speech sound that is <i>voiced</i>, meaning that the vocal folds are vibrating. The words in the second column all end with a speech sound that is <i>voiceless</i>, meaning that the vocal folds are <i>not</i> vibrating. If you don't believe me, touch your neck while pronouncing the "m" sound (voiced) and you will feel your vocal folds vibrating. Now, try this while pronouncing the "th" sound in the word "bath." You will not feel anything because your vocal folds are not vibrating. In the third column, all the words end with sounds that are similar to the [s] and [z] sounds in place and manner of articulation. So, we normally add a vowel to break up these sounds. (Otherwise, we would have to pronounce things like <i>wishs</i> and <i>churchs, </i>without a vowel to break up the consonants<i>.</i>) What this means is that these changes are predictable; it is a pattern that must be learned. English-speaking children start to learn it between ages 3-4 (Berko, 1958).</span><br />
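The pattern just described is regular enough to state as a tiny decision rule, keyed on a noun's final sound rather than its spelling. This is my own simplified sketch: it covers only a few illustrative sounds and ignores irregular plurals like "children."

```python
# Simplified English plural allomorphy, keyed on a noun's final SOUND
# (IPA symbols), not its spelling. Irregular plurals are ignored.
SIBILANT_LIKE = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}   # similar to [s]/[z]
VOICELESS = {"p", "t", "k", "f", "θ"}              # vocal folds not vibrating

def plural_suffix(final_sound):
    if final_sound in SIBILANT_LIKE:
        return "ɨz"   # bus -> bus[ɨz], church -> church[ɨz]
    if final_sound in VOICELESS:
        return "s"    # mop -> mop[s], bath -> bath[s]
    return "z"        # voiced sounds: drum -> drum[z], lie -> lie[z]

print(plural_suffix("tʃ"))  # church ends in [tʃ] → ɨz
print(plural_suffix("θ"))   # bath ends in [θ]   → s
print(plural_suffix("m"))   # drum ends in [m]   → z
```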
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">Why does this rule happen though? To answer this question, we would need to delve further into how speech articulations are produced and coordinated with each other. Importantly though, the choice of letters is not relevant to knowing how to pronounce the plural in English. It's the characteristics of the sounds themselves that matter. Rules like these (<i>phonological</i> rules) exist throughout the world's languages, whether the language has an alphabet or not - and only about 10% of the world's languages even have a writing system (Harrison, 2007). Unless you are learning a second language in a classroom, speakers and listeners of a language learn such rules without much explicit instruction. The field of <i>phonology</i> focuses on how rules like these work across the different languages of the world. The basis for these grammatical rules is the phonetics of the language.</span><br />
<span style="font-size: large;"><br /></span>
<b><span style="font-size: large;">II. Open areas of research in phonetics</span></b><br />
<span style="font-size: large;">The examples above illustrate the utility of phonetics for well-studied problems. Yet, there are several broad areas of research that phoneticians occupy themselves with. I will focus on just a few here to give you an idea of how this field is both scientifically interesting and practically useful.</span><br />
<span style="font-size: large;"><br /></span>
<i><span style="font-size: large;">a. Acoustic phonetics and perception</span></i><br />
<span style="font-size: large;">When we are listening to speech, a lot is going on. Our ears and our brain (and even our eyes) have to decode a lot of information quickly and accurately. How do we know what to pay attention to in the speech signal? How can we tell whether a speaker has said the word 'bed' or 'bet' to us? Speech perception concerns itself both with <i>what</i> characteristics of the sounds a listener must pay special attention to and <i>how </i>they pay attention to these sounds. </span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">This topic is hard enough when you think about all the different types of sounds that one could examine. It is <i>even</i> harder when you consider how multilingual speakers do it (switching between languages) or the fact that we perceive speech pretty well even in noisy environments. Right now, we know a bit about how humans perceive speech sounds in laboratory settings, but much less so in more natural environments. Moreover, most of the world is multilingual, but most of our research on speech perception has focused on people who speak just one language (often English).</span><br />
<span style="font-size: large;"><br /></span>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJLUTC1QKIYZmPqNkZagLGJ9LLhrIk_jSFNU1ygVtcNjUYsTVohV14uvR8iDrT71CmLJ2adX3gGsgO3YDcgKgF_eSmtXyZjD0ZI-tPGPlAymSkeJZO-KvfXAPQBzCU1aakKOOXk_tqO1k7/s1600/spectrogram.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><span style="font-size: large;"><img border="0" data-original-height="699" data-original-width="1600" height="276" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJLUTC1QKIYZmPqNkZagLGJ9LLhrIk_jSFNU1ygVtcNjUYsTVohV14uvR8iDrT71CmLJ2adX3gGsgO3YDcgKgF_eSmtXyZjD0ZI-tPGPlAymSkeJZO-KvfXAPQBzCU1aakKOOXk_tqO1k7/s640/spectrogram.jpg" width="640" /></span></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: large;">Figure 3: A speech waveform and spectrogram. Here we see the phrase <i>"to go without water for"</i> spoken by a native English</span><br />
<span style="font-size: large;">speaker reading from a text. The words are labelled below the spectrogram along with the sounds using the IPA. There </span><span style="font-size: large;">are no pauses in the speech signal but humans are able to pull out individual words when listening to speech.</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<span style="font-size: large;"><br /></span></div>
<span style="font-size: large;">There is also a fun fact relevant to acoustics and perception - there are no pauses around most words in speech! Yet, we are able to pull out and identify individual words without much difficulty. To do this, we must rely on <i>phonetic</i> cues to tell us when words begin and end. An example of this is given in Figure 3. Between these five words there are no pauses but we are aware of when one word ends and another begins.</span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">How are humans able to do all of this so seamlessly though? And how do they learn it? Acoustic phonetics examines questions in each of these areas and is itself a broad sub-field. Phoneticians must be able to examine and manipulate the acoustic signal to do this research.</span><br />
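As a taste of the tooling involved: a spectrogram like the one in Figure 3 is the result of short-time Fourier analysis, which any scientific computing stack can perform. Here is a minimal sketch using NumPy and SciPy on a synthetic tone standing in for a real recording (a recording works the same way):

```python
import numpy as np
from scipy import signal

sr = 16_000                          # sampling rate: 16 kHz
t = np.arange(sr) / sr               # one second of time points
tone = np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone as a stand-in signal

# Short-time Fourier analysis with 25 ms windows (400 samples):
# the kind of analysis that produces a spectrogram.
freqs, times, power = signal.spectrogram(tone, fs=sr, nperseg=400)

# The frequency bin with the most average energy sits at 440 Hz.
peak_hz = freqs[power.mean(axis=1).argmax()]
print(peak_hz)  # → 440.0
```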
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">Is this research useful though? Consider that when humans lose hearing or suffer from conditions which impact their language abilities, they sometimes lose the ability to perceive certain speech sounds. Phoneticians can investigate the specific acoustic properties of speech that are most affected. Moreover, as I mentioned above, the speech signal has no pauses. Knowing what acoustic characteristics humans use to pick apart words (<i>parse words)</i> can help to create software that recognizes speech. These are a few of the many practical uses of research in acoustic phonetics and speech perception.</span><br />
<span style="font-size: large;"><br /></span>
<i><span style="font-size: large;">b. Speech articulation and production</span></i><br />
<span style="font-size: large;">When we articulate different speech sounds, there is a lot that is going on inside of our mouths (and in the case of sign languages, many different manual and facial gestures to coordinate). When we speak slowly we are producing 6-10 different sounds per second. When we speak quickly, we can easily produce twice this number. Each consonant involves adjusting your manner of articulation, place of articulation, and voicing. Each vowel involves adjusting jaw height, tongue height, tongue retraction, and other features as well. The fact that we can do this means that we must be able to carefully and quickly coordinate different articulators with each other. </span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">To conceptualize this, imagine playing a piano sonata that requires a long sequence of different notes to be played over a short time window. The fastest piano player can produce something like 20 notes per second (see <a href="https://www.youtube.com/watch?v=As-7MXiCok0" target="_blank">this video</a> if you want to see what this sounds like). Yet, producing 20 sounds per second, while fast, is not that exceptional for the human vocal tract. How do speakers coordinate their speech articulators with each other?</span><br />
<span style="font-size: large;"><br /></span>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj431-6Uu3z_kOGaqMTLkKvEitOzJG1HKLzB94UC18zPApD2eicfJzS9fiXxDcnhqlVei4qiI5KUvgLsPoMIqOblU5Dd40JrtNFgVJXhlb-afoJLm8rOMPhplsgnIQ1IvkMyRI7LM5_JwCH/s1600/EMA_data_Korean_sample.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><span style="font-size: large;"><img border="0" data-original-height="1401" data-original-width="1413" height="633" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj431-6Uu3z_kOGaqMTLkKvEitOzJG1HKLzB94UC18zPApD2eicfJzS9fiXxDcnhqlVei4qiI5KUvgLsPoMIqOblU5Dd40JrtNFgVJXhlb-afoJLm8rOMPhplsgnIQ1IvkMyRI7LM5_JwCH/s640/EMA_data_Korean_sample.jpg" width="640" /></span></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: large;">Figure 4: Articulatory movement from electromagnetic articulography, which involves gluing sensors on the articulators and tracking their motion in real time. Waveforms of the acoustic signal are shown above, followed by an acoustic spectrogram. The three lower panels reflect vertical movement of the back of a speaker's tongue (TB - top), the front region of a speaker's tongue (TL - middle), and the lower lip (LL - bottom).</span></td></tr>
</tbody></table>
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">Phoneticians that look at speech articulation and production investigate both how articulators move and what this looks like in the acoustic signal. Your articulators are the various parts of your tongue, your lips, your jaw, and the various valves in the posterior cavity. The way in which these articulators move and how they are coordinated with one another is important both for understanding how speech works from a scientific perspective and extremely useful for clinical purposes. One of the reasons that this is important is that movements overlap quite a bit.</span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">Since we are familiar with writing, we like to think that sounds are produced in a clear sequence, one after the other, like beads on a string. After all, our writing and even phonetic transcription reflects this. Yet, it's not the truth. Your articulators overlap all the time and moving your lips for the "m" sound in a word like "Amy" overlaps with moving your lips in a different way for the vowels.</span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">To provide an example, in Figure 4, a Korean speaker is producing the (made-up) word /tɕapa/. The lower panels show just when the tongue and lips are moving in pronouncing this word. If you look at the spectrogram (the large white, black, and grey speckled figure in the middle), you can observe what looks like a gap right in the middle of the image. This is the "p" sound. Now, if you look at the lowest panel, we observe the lower lip moving upward for making this sound. This movement for the "p" happens much earlier than what we hear as the [p] sound, during the vowel itself. Where is the "p" then? Isn't it after the /a/ vowel (sounds like "ah")? Not exactly. Parts of it overlap with the preceding and following vowels, but parts of those vowels also overlap with the "p." In the panel labelled TLy, we are observing how high the tongue is raised. It stays lowered throughout this word because it needs to stay lowered for the vowel /a/. So, the "a" is also overlapping with the "p" here.</span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">Overlap in speech is the norm and sometimes speakers move their articulators in ways that are unexpected. You might struggle to coordinate your articulators in a particular way when you are learning new sounds in a first language (as a child) or new sounds in a second language (as a child or adult). You also might have difficulty producing sequences of sounds due to a range of physical or cognitive disorders. By looking at speech articulation, phoneticians are able to examine what is typical in speech and also what is atypical.</span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">One fun way to examine what speakers can do is to have them speak really quickly or give them tongue twisters. As mentioned earlier, speech can be really fast. The Korean speaker above produced 5 speech sounds in just 400 milliseconds (about 12 sounds per second) and she was speaking carefully. When speakers speed up, phoneticians can both determine where difficulties arise and how different movements must be adjusted relative to one another.</span><br />
<span style="font-size: large;"><br /></span>
<span style="font-size: large;">References:</span><br />
<span style="font-size: large;">Berko, J. (1958). The child's learning of English morphology. <i>Word</i>, 14(2-3), 150-177.</span><br />
<span style="font-size: large;">Harrison, K. D. (2007). <i>When languages die</i>. Oxford University Press.</span><br />
<span style="font-size: large;">Mielke, J, Olson, K, Baker, A, and Archangeli D (2011) Articulation of the Kagayanen interdental approximant: An ultrasound study. <i>Journal of Phonetics </i>39:403-412.</span><br />
<span style="font-size: large;"><br /></span>
</div>
Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-24817697048078037412018-12-30T04:10:00.001+01:002018-12-30T06:51:49.136+01:00Is Grover swearing? No, it's in your ears.<span style="background-color: white; font-family: "times" , "times new roman" , serif; font-size: large;">Twitter and Reddit users are up in arms lately over the latest case of phonetic misperception (remember "Laurel" and "Yanny"?). This time it concerns the love-able Grover from Sesame Street who, if you watch the clip below, is either saying "that sounds like an excellent idea" or "that's a f*ckin' excellent idea." Did Grover drop the F-bomb on Sesame Street?<br /><br /><a href="https://twitter.com/EvanEdinger/status/1078358697921966081" target="_blank">https://twitter.com/EvanEdinger/status/1078358697921966081</a><br /><br />As a phonetician, these types of misperceptions are sometimes fun because they force you to carefully listen to what people (in this case, Grover's voice) are doing as they produce speech very quickly. Phoneticians focus on the transcription and, more often, careful analysis of speech. Speech is fast, speech is messy, and when the conditions are right, one can <i>misperceive </i>one sound for another.</span><br />
<span style="font-family: "times" , "times new roman" , serif; font-size: large;"><span style="background-color: white;"><br />What is even more difficult in a case like this is that Grover is always speaking quickly. He's the puppet constantly on his quadruple espresso. So this means that many of the sounds you expect to hear in certain words are actually quite different. Vowels can be cut short and sound very different. Consonants can be deleted entirely. Both of these cases are what linguists call <i>phonetic reduction.</i> To understand why you hear the F-word instead of "like an", we must understand a little bit about how sounds reduce.<br /><br /><b>Reduction</b><br />If you were speaking very carefully, you pronounce "That sounds like an..." as [ðæt saʊndz laɪk ə</span><span style="background-color: white;">n], where each vowel is carefully produced and each of the consonants at the end of "sounds" are pronounced distinctly. Yet, humans are rarely this clear. Moreover, if we were always this clear, our speech would be quite slow. Life is short and so becomes our speech.</span></span><br />
<span style="font-family: "times" , "times new roman" , serif; font-size: large;"><span style="background-color: white;"><br />In reality, we do not pronounce this phrase this way. One thing that English speakers will do is to <i>reduce</i> the final consonants in 'sounds.' Instead of pronouncing each of the /n/, /d/, and /z/ sounds (yes, it's more like a "Z" here - spelling is deceptive), people will pronounce just the /n/ and the /z/. </span><span style="background-color: white;">We do this all the time.</span><span style="background-color: white;"> </span><span style="background-color: white;">A word like "friends" has no "d" sound. This pattern leaves us with [</span><span style="background-color: white;">ðæt saʊnz laɪk ən], with one sound missing.</span></span><br />
<span style="font-family: "times" , "times new roman" , serif; font-size: large;"><br /></span>
<span style="font-family: "times" , "times new roman" , serif; font-size: large;"><span style="background-color: white;">Grover takes reduction a few steps further than this, but his manner of pronouncing words is not very different from what other English speakers do when speaking quickly. Instead of pronouncing the vowel /aʊ/ (the vowel in "ouch"), he reduces this vowel down to something like the vowel in 'sun' /sʌn/. This might seem weird to you, but try saying "that sun's nice" and "that sounds nice" quickly after each other. They might in fact be hard to distinguish. The same thing happens with the vowel in 'like' - it's pronounced more like the vowel in 'luck.' So, now we have gone to a phonetic sequence of </span><span style="background-color: white;">[</span><span style="background-color: white;">ðæt sʌnz l</span><span style="background-color: white;">ʌ</span><span style="background-color: white;">k ən].</span><span style="background-color: white;"><br /><br />That alone is not enough to make you hear the F-bomb, but Grover's voice does two additional things that many English speakers have been doing for some time. First, he does not pronounce the "n" in the word "sounds." The "n" sound is a nasal consonant and many English speakers just nasalize their vowels in a context like the word "sounds." Essentially the "n" is no longer a consonant, but its character is now on the vowel. So, going further, we've now gone to </span><span style="background-color: white;">[</span><span style="background-color: white;">ðæt sʌ̃z l</span><span style="background-color: white;">ʌ</span><span style="background-color: white;">k ən] (the squiggly line over the vowel is the phonetic transcription for nasalization). </span><span style="background-color: white;"><br /><br />The second thing that Grover does is to pronounce what is normally a "z" sound as an "s" sound. American English speakers do this all the time. Try saying the words 'fuzz' and 'fuss.' 
The words sound different (hint - the vowel is longer in one case), but the final "z" and "s" are often both pronounced like [s]. So, moving along, now we've gone to </span><span style="background-color: white;">[</span><span style="background-color: white;">ðæt sʌ̃s l</span><span style="background-color: white;">ʌ</span><span style="background-color: white;">k ən]. But how do you get an "f" here?<br /><br /><b>From [sl] to [f] - the big jump</b><br /><br />In running speech, there are no pauses. Words blend right into each other. This is why it's possible to mishear "kiss the sky" as "kiss this guy" (as in the famous Jimi Hendrix song). So, in reality, Grover is pronouncing </span><span style="background-color: white;">[</span><span style="background-color: white;">ðætsʌ̃sl</span><span style="background-color: white;">ʌ</span><span style="background-color: white;">kən], with no pauses. However, something funny happens in the sequence between the "s" sound and the "l" sound. The "s" sound is a voiceless consonant, meaning that your vocal cords are not vibrating when you pronounce it. Try saying the "s" sound while touching your neck and then the "z" sound while doing the same. You can feel your vocal cords vibrate in the "z" sound but not in the "s" sound.<br /><br />When a voiceless sound like [s] precedes a voiced consonant like "L" [l], it can cause the voiced consonant to become voiceless. Phoneticians and phonologists call this <i>voicing assimilation.</i> English speakers make the "L" sound voiceless in words like "play" [pl̥eɪ] (the dot under the consonant indicates that it is voiceless). Try saying "play" and holding the "L" sound. It should not sound like a typical "L" sound to you (and if you say "puh-lay", you're cheating). The "L" is voiceless here because the "p" sound is voiceless. 
Grover's voice did this in the clip - he says </span><span style="background-color: white;">[</span><span style="background-color: white;">ðætsʌ̃sl̥</span><span style="background-color: white;">ʌ</span><span style="background-color: white;">kən...].</span></span><br />
<span style="font-family: "times" , "times new roman" , serif; font-size: large;"><br /></span><span style="font-family: "times" , "times new roman" , serif; font-size: large;"><span style="background-color: white;">But why does this sound like "f"? A voiceless "L" sound actually sounds an awful lot like 'f' - it shares a lot more of the acoustic characteristics with "f" than it does with other sounds that you are used to. It is possible to hear [s</span><span style="background-color: white;">l̥] as [f] as a result. However, this misperception is in your ears. If you are not used to listening for these sorts of phonetic sequences, especially when people (or muppets) are speaking quickly, then you might mis-hear these sequences.</span></span><br />
<span style="font-family: "times" , "times new roman" , serif; font-size: large;"><span style="background-color: white;"><br />That brings us to the big leap. Take a look at the phonetic differences between Grover's utterance and a sequence with the F-bomb in it:<br /><br />[</span><span style="background-color: white;">ðætsʌ̃sl̥</span><span style="background-color: white;">ʌ</span><span style="background-color: white;">kən...] - 'that sounds like an' - Grover's speech</span></span><br />
<span style="font-family: "times" , "times new roman" , serif; font-size: large;"><span style="background-color: white;">[ðætsʌ̃f</span><span style="background-color: white;">ʌ</span><span style="background-color: white;">kən...] - 'that's a f*ckin' - speech with the F-bomb</span></span><span style="background-color: white;"><span style="font-family: "times" , "times new roman" , serif; font-size: large;"><br /><br />The only differences here between the two phrases is in the initial consonants and, for reasons described above, listeners are likely to mishear such sequences. Grover, in my estimation, is a perfectly well-behaved muppet. Though, he should maybe cut down on the coffee consumption.</span></span><br />
<span style="background-color: white;"><span style="font-family: "times" , "times new roman" , serif; font-size: large;"><br /></span></span>Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com5tag:blogger.com,1999:blog-8009206815446785752.post-47702418630442358992018-12-21T19:42:00.002+01:002018-12-21T19:42:13.095+01:00Pitfalls in phonetic descriptions in phonetics courses<span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;">In teaching phonetics, I have always required students to submit a final project. This was my experience as a student studying phonetics (as an undergraduate and as a graduate student) after all. The project is a phonetic description of a language that the student is unfamiliar with. Students work with a speaker, practice their transcription skills, analyze their data, and examine some of the acoustic properties of the language.</span><br />
<span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;"><br /></span>
<span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;">I do phonetic description as part of my research, so I like the project idea. Yet I realize that this type of project isn't for everyone. Students often struggle with it and every semester that I teach phonetics, I get both good projects and ones which miss the mark. Among the problems that I encounter are the following:
a. Students do not understand that one must establish contrasts before you analyze the phonetic properties of the language. </span><br />
<span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;"><br /></span>
<span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;">Establishing contrasts requires that students have a little background in phonology, but typical phonetics courses do not require much in the way of phonology. One solution here might be to require more background before taking phonetics, but at a major public university where enrollment is a concern in higher-level courses, being more selective is sometimes not an option.</span><br />
<span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;"><br /></span>
<span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;">b. Students do not understand the point of spectrograms.
Students will include pages of spectrograms in a final paper with no explanation of what the images are supposed to reflect at all. I think this is a specific case of a more general issue that I will call "the instagramification of prose." The image does not speak for itself. You must guide the reader through it. Otherwise, it just occupies space. One solution to this might be to devote more time in the semester to reading the literature and writing.
c. With vowels, anything goes.
Students will produce a cursory description of the vowel system because consonants are easier for them. They might even plot an acoustic vowel space that looks extremely odd but will forge ahead and ignore the fact that it does not match their transcriptions. I don't know immediately how to solve this.
d. Bad ears. I hate to say it.
I want to encourage students to pursue projects where they analyze the phonetics of Xhosa or Danish or Zapotec. However, some students just struggle to hear phonetic contrasts. They can hear an aspirated/unaspirated contrast among stops but might not distinguish between different back vowels, e.g. [o] vs. [ɔ] or [ʊ] vs. [ɯ]. Then they choose a tough language for their project. Do you lead such students away from more phonetically difficult languages because you feel they will struggle too much or does doing so discourage such students? If you include more listening exercises in the semester and the students still do poorly on them, does this help them or hurt them?</span><br />
<span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;"><br /></span>
<span style="background-color: white; color: #1d2129; font-family: system-ui, -apple-system, system-ui, ".SFNSText-Regular", sans-serif; font-size: 14px; white-space: pre-wrap;"><br /></span>Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-35449480984774972862016-01-13T00:46:00.002+01:002016-01-13T05:27:08.411+01:00Segmenting running Mixtec speechMy research falls within two fields: fieldwork and phonetics. I am enamored with the languages that I study but also enamored with investigating the fine details found in these languages. One major area where there is overlap between fieldwork, or more specifically, documentation, and phonetics is in corpus phonetic research.<br />
<br />
Corpus phonetics is usually considered an area of phonetics moreso than an area of corpus linguistics; the methods are phonetic methods (mostly), while corpus linguists frequently concern themselves with textual materials and not with the raw speech signal. When phoneticians want to investigate aspects of the speech signal, either from experiments or from a corpus, it is often useful to (a) have a transcription of the speech signal and (b) segment individual sounds or syllables. The former is obviously useful for the purpose of knowing what you're looking at (and being able to go back to it) and the latter is useful for any tool which automatically extracts acoustic measures from the speech signal. It is possible (and common) nowadays to write short programs that will measure aspects of these individual segments very quickly.<br />
<br />
Segmentation is usually done in Praat, a program for viewing, analyzing, and processing acoustic recordings. A text file is saved along with the sound file with which, when both are opened together, one can view a time-aligned segmentation of words/segments in the speech signal. As part of research on my <a href="http://www.nsf.gov/awardsearch/showAward?AWD_ID=1603323&HistoricalAwards=false" target="_blank">NSF grant</a>, we are doing corpus phonetic research on both Itunyoso Triqui and Yoloxóchitl Mixtec (YM), two endangered languages spoken in Southern Mexico. Right now, we are (a) segmenting speech from YM and (b) evaluating a program we are developing which will automatically segment speech from this language. After we have improved this program, we will be able to extract phonetic data from a large corpus of over 100 hours of YM speech and answer scientific questions about both the language's phonetics and speech production more generally. This is corpus phonetics.<br />
<br />
Yet, the process of segmentation is not without problems and it is these problems that I wish to write about here. When segmentation is done with careful speech, it is usually a fairly straighforward to segment the consonants and vowels that are produced in the speech signal. Observe Figure 1, below.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirY5C_e6wD3xUt-XWZkhBujI_4wlwPwe9I_5TQy2WcTyDTz2iMYv61UDlCuJSqekilAujXuXmf-oX1Y3LWPYSMgTsOCX8RHW6oFikxeOPrWJVxsnJGum_vii549n7dq-noV5EmSeLeXCRJ/s1600/triqui_segmentation.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="288" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirY5C_e6wD3xUt-XWZkhBujI_4wlwPwe9I_5TQy2WcTyDTz2iMYv61UDlCuJSqekilAujXuXmf-oX1Y3LWPYSMgTsOCX8RHW6oFikxeOPrWJVxsnJGum_vii549n7dq-noV5EmSeLeXCRJ/s640/triqui_segmentation.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: 12.8px;">Carefully produced Triqui sentence /a3chinj5 sinj5 cha3kaj5/ [a³tʃĩh⁵ sĩh⁵ tʃa³kah⁵], 'The man asked for a pig.'</span></td></tr>
</tbody></table>
<br />
For those of you unfamiliar with segmenting spoken language, the first thing you might notice is that there are actually no pauses between the words, shown below the acoustic signal. This is as true of careful speech as connected speech. Yet, here, the boundaries between vowels and consonants here are fairly easy to spot. There is silence in the initial portions of the two affricates [tʃ], "ch", that distinguish them from adjacent vowels, silence in the initial portion of the stop [k], and noise in the production of the fricative [s]. The only thing here that <i>might</i> be difficult to parse is the aspiration that appears at the end of certain vowels (transcribed with "j" here, following a Spanish convention). This is left unparsed.<br />
<br />
As it turns out, parsing Mixtec speech is much harder than this. The language doesn't have aspirated vowels like Triqui does and the consonant inventory, as a whole, is much smaller. However, Mixtec is inordinately fast (approximately 7-9 syllables/second in running speech) and most of the consonants that would otherwise be easy to segment, e.g. /s, ʃ, t, tʃ, k, kw/, undergo lenition. This means that they can be realized as [z, ɦ, ð, j, ɣ, ɣw], respectively. All of these realizations are voiced and make parsing substantially more difficult. An example is given below.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrxk4_YqdUnlUnZ18NpgjzKw1qdv5j65wQ7ojcd5iZKMHz91znh9yn7oItpotBo4Ol1Bk7VYTRyfC1OHfw1hIa-kjHfp-kXv4mJ1WhbU0QLyoT8no_b12_YdLLJKs4z05dcSZkMzW6b0KH/s1600/segmentation_YM.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="278" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrxk4_YqdUnlUnZ18NpgjzKw1qdv5j65wQ7ojcd5iZKMHz91znh9yn7oItpotBo4Ol1Bk7VYTRyfC1OHfw1hIa-kjHfp-kXv4mJ1WhbU0QLyoT8no_b12_YdLLJKs4z05dcSZkMzW6b0KH/s640/segmentation_YM.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Running Mixtec speech; sentence /tan3 ka4chi2 sa3ba3=na2 ndi4.../ [tã³ ka⁴tʃi² sa³βa³=na² ⁿdi⁴] 'Then they said half of them, and...'</td></tr>
</tbody></table>
The initial [t] here is easy to spot - it involves silence and it is released into the vowel. However, the following /k/ in the word /ka4chi2/ is difficult to discern in spectrogram (this is actually a fairly clear example), because it is produced as a frictionless continuant rather than a stop. The same is true of /tʃ/ (labelled "JH"), which is produced as frictionless continuant ([ʒ]) rather than as an affricate. The /s/ above is produced as [z] and the "b" as [w], a bilabial glide. In this latter case, it is extremely difficult to locate a clear set of boundaries between the adjacent vowels [a] and the bilabial glide. However, one hears the glide in the acoustic signal and it appears that some weakening of F3 amplitude corresponds to this percept.<br />
<br />
The net result of this is a speech signal that rarely includes a loss of voicing and that is frequently difficult to examine. Is the "w" above deleted? If it is deleted, is this now a long vowel? These are difficult questions to answer just from the acoustic signal. This fusion of speech events is not specific to Mixtec either; we know that the speech involves overlapping gestures produced for different consonant and vowel sounds. Thus, things always overlap to a certain degree.<br />
<br />
Yet, the patterns of lenition above are still rather notable. Perhaps the voicing of the consonants here is helpful to listeners; as there is no contrast in voicing in the language, voicing the consonants allows tone to be carried on consonants as well as adjacent vowels. Since tone is so important in Mixtec as a marker of aspect and person, such a possibility is a plausible hypothesis, but one that remains to be tested. For the time being, parsing Mixtec is hard.Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-8160743988339206772015-07-28T05:22:00.002+02:002015-07-28T05:22:13.347+02:00The hard business of trying to specify allomorphs in FLEx<span style="font-family: Georgia, Times New Roman, serif;">While a substantial part of my research is on the phonetics and phonology of different Otomanguean languages, I have been working on the morphophonology of the Itunyoso Triqui language for many years. Ever since I first started my work on the language, I was fascinated by the many ways in which a single verb root, for instance, could have a multitude of forms when one includes aspectual prefixes and personal enclitics.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<div>
<span style="font-family: Georgia, Times New Roman, serif;">One of the most notable things about Triqui morphology is just how much tone plays a role in marking different distinctions. Take the verb /a³chi³/ 'to peel', for example. There are four possible tonal shapes of stems, shown below (note "j" is /h/, "h" is /ʔ/, and a post-vocalic "n" in the final syllable marks contrastive vowel nasality):</span></div>
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQVyRPFo8GyuX2XtMNatg0umyHK9kFegvB1bjVqprFtzu5OKVi-VtWu26B49yC_Uz6vRxuipde82F6_dDlNJGCSZcQXPxphI50miaI_vKzcz1AOBC1EQuf-KGSNWmaO9Wr3xQfraR0bL79/s1600/Example_morphology_blog.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><span style="font-family: Georgia, Times New Roman, serif;"><img border="0" height="162" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQVyRPFo8GyuX2XtMNatg0umyHK9kFegvB1bjVqprFtzu5OKVi-VtWu26B49yC_Uz6vRxuipde82F6_dDlNJGCSZcQXPxphI50miaI_vKzcz1AOBC1EQuf-KGSNWmaO9Wr3xQfraR0bL79/s320/Example_morphology_blog.jpg" width="320" /></span></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-family: Georgia, Times New Roman, serif; font-size: small;">Table 1: Stem shapes of verb /a³chi³/ 'to peel.'</span></td></tr>
</tbody></table>
<span style="font-family: Georgia, Times New Roman, serif;">This particular paradigm displays some common patterns in Triqui morphology. First, the 1st person singular is marked by a change in tone (to /5/) and involves the insertion of a coda "j" /h/. Second, the 2nd person singular is marked by tone raising to /4/ before the clitic. Third, the perfective prefix on vowel-initial stems is just /k-/. Fourth, the potential prefix involves prefixation of /k-/ and a change of tone on the initial syllable of the root. </span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">The result of these processes is five possible stem shapes: /a³chi³, a³chij⁵, a³chi⁴, a²chij⁵, a²chi³/, marked in bold above. </span><span style="font-family: Georgia, 'Times New Roman', serif;">Each of these morphological processes can be described well enough. However, things start to get rather messy when we wish to include additional verbs. Note the verb /a³chinj⁵/ 'to request' below.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJYpNWemoO4yJQWuSmHURJMtUBLpqa3HdJULT4Tao2oGYBYXGGhhMXovGlzpgj0ttocz0_Rt7U527IXQN6HpO2lCxbt4GzfhThcAuxTwS2XBqzNau_LMmXizzXXt4Hz0f4RbUbJ-dBkDE1/s1600/Example_pedir.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><span style="font-family: Georgia, Times New Roman, serif;"><img border="0" height="166" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJYpNWemoO4yJQWuSmHURJMtUBLpqa3HdJULT4Tao2oGYBYXGGhhMXovGlzpgj0ttocz0_Rt7U527IXQN6HpO2lCxbt4GzfhThcAuxTwS2XBqzNau_LMmXizzXXt4Hz0f4RbUbJ-dBkDE1/s400/Example_pedir.jpg" width="400" /></span></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-family: Georgia, Times New Roman, serif; font-size: small;">Table 2: Stem shapes of verb /a³chinj⁵/ 'to request.'</span></td></tr>
</tbody></table>
<span style="font-family: Georgia, Times New Roman, serif;">We notice different patterns here. Instead of inserting a coda "j" /h/ to mark first person, we delete it from the root and change tone /5/ to /43/. Since the verb stem already has a high final stem tone, we do not observe any tone raising before the 2S clitic /=reh¹/. However, the form of the potential is rather different. Like in the habitual or unmarked form of the verb, we find that the coda "j" /h/ is deleted, but the entire stem changes its tone to /2/. This change is not particular to the 1S either - it occurs with all other persons in the potential, as the example with the 3SM clitic demonstrates. As a result of these processes, </span><span style="font-family: Georgia, 'Times New Roman', serif;">we have four possible stem shapes for the </span><span style="font-family: Georgia, Times New Roman, serif;">verb in Table 2: /a³chinj⁵, a³chin⁴³, a²chin², a²chinj²/.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">I won't begin to provide a full analysis of the tonal morphology in Trique here (but see DiCanio, forthcoming). Rather, I wish to focus on two particular patterns and to discuss how they might be analyzed from a practical point of view. The first pattern is the marking of the 1S. This involves either the insertion of a coda "j" if it is not present on the stem or its deletion if it is present. Such a process is called a <i>morphological reversal</i> or <i>exchange rule </i>(see Inkelas, 2014). Tonal changes co-occur with this process for verbs with upper register tones (DiCanio, forthcoming), but we will not focus on these here.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">The second pattern involves the way in which the potential aspect is marked. For certain verbs, it is marked by a change to tone /2/ on the syllable to which the prefix is attached, as in Table 1. On other verbs, it is marked by a change to tone /2/ on every syllable of the stem, as in Table 2. In such cases, the 1st person clitic no longer involves a tone change since the tone on the stem is now /2/, which belongs to the lower register. (Incidentally, one might describe this as a case of morphological opacity, where stage 1 prefixal/aspectual morphology bleeds the conditions for the application of clitic tone raising.)</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">At least segmentally, the 1S clitic is easy enough to characterize, though how might one go about marking such forms in a digital lexicon/dictionary like <a href="http://fieldworks.sil.org/flex/">FLEx</a>? One procedure might be to mark each and every 1S form, e.g. include /a³chij⁵/ 'peel.1S' as a variant of /a³chi³/ 'peel.' </span><span style="font-family: Georgia, 'Times New Roman', serif;">While certain of the morphological patterns are motivated by phonological well-formedness constraints (DiCanio, forthcoming), listing the variants in a table or paradigm as above provides a useful framework for describing the morphological patterns within the Triqui lexicon. </span><br />
<span style="font-family: Georgia, 'Times New Roman', serif;"><br /></span>
<span style="font-family: Georgia, 'Times New Roman', serif;">This "listing" approach is the one that I currently use. However, doing this </span><span style="font-family: Georgia, 'Times New Roman', serif;">is rather time-consuming, as all words in the Triqui lexicon undergo this very regular alternation (though the tonal processes are rather complex). Doing this also </span><span style="font-family: Georgia, 'Times New Roman', serif;">loses the broader generalization of the rule. Moreover, there is currently no neat way of including paradigms within FLEx; one must specify additional forms as variants or allomorphs derived via a rule.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br />Another approach might be to create a phonological rule within FLEx's phonological grammar. However, the only available way to encode such rules is via a classical rewrite rule. This would produce rules of the form: Vh > V /_# ; and V > Vh /_#. Yet, there is no way to connect this particular rule with the set of morphological processes that it affects. It is an alternation that is primarily used for marking the 1st person singular (though similar alternations also mark previously-mentioned 3rd person discourse referents and derive nominal forms from quantifiers).</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">The same possibilities seem to be relevant for the potential aspect marking. It is either specified in a paradigm or it can be derived via a rule. However, a new problem presents itself when one considers the latter possibility. For those verbs, as in Table 2, which undergo an entire stem change to tone /2/ with the potential aspect, what is the phonological environment for a rewrite rule? It is the entire word's tonal melody. </span><span style="font-family: Georgia, 'Times New Roman', serif;">FLEx currently provides no way of separating the stem's tonal shape from the stem itself as one might do with an autosegmental representation. Thus, FLEx is unable to make sense of a string like /ka²chin²/ 'request.POT.1S.' when it comes to morphological parsing.</span><br />
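For comparison, an autosegmental-style representation makes the potential-aspect change trivial to state: keep the segments and the tonal melody on separate tiers, and replace the whole melody with tone /2/. The sketch below is my own illustration of that idea, not anything FLEx can currently encode; the base melody shown is a hypothetical placeholder, not attested Triqui data.

```python
# Separate tiers for segments and tones, as in an autosegmental analysis.
# The potential aspect is then a single operation over the tonal tier:
# replace the stem's entire melody with tone /2/. The base melody below
# is a hypothetical placeholder.

def potential_aspect(segments, tones):
    """Overwrite the entire tonal melody with tone /2/."""
    return segments, [2] * len(tones)

# A hypothetical base form surfacing as /ka2chin2/ in the potential:
print(potential_aspect("kachin", [3, 5]))  # ('kachin', [2, 2])
```

Once the tonal shape is factored out of the stem like this, "the environment is the whole word's tonal melody" stops being a problem: the rule never has to mention the segments at all.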
<span style="font-family: Georgia, 'Times New Roman', serif;"><br /></span>
<span style="font-family: Georgia, 'Times New Roman', serif;">This problem is compounded by the nature of Triqui morphology when one considers the interaction between the potential aspect and 1S marking mentioned above. If there is a specific set of rewrite rules for the 1S clitic, one must specify that the tonal part of the alternation does not apply if the stem has already undergone the change to the potential aspect. I currently know of no way to resolve these issues within a FLEx lexicon.</span><br />
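The interaction itself can at least be stated procedurally: the tonal half of 1S marking is blocked when the stem already bears the potential-aspect melody. The sketch below is hypothetical throughout; the exponents (/j/ plus final tone /5/, restricted to vowel-final stems) are reverse-engineered from the two forms cited above, and the blocking condition, not the phonological detail, is the point.

```python
import re

# Conditioned exponence, sketched over separate segment/tone tiers:
# the tonal part of 1S marking is blocked if the stem has already
# shifted to the potential-aspect melody. Exponents and tone values
# are hypothetical reconstructions from the forms cited in the post.

def mark_1s(segments, tones, potential=False):
    # Segmental exponent: applies only to vowel-final stems (cf. V > Vh / _#).
    segments = re.sub(r"([aeiou])$", r"\1j", segments)
    if not potential:                  # tonal exponent blocked under POT
        tones = tones[:-1] + [5]
    return segments, tones

print(mark_1s("achi", [3, 3]))                    # ('achij', [3, 5])
print(mark_1s("kachin", [2, 2], potential=True))  # ('kachin', [2, 2])
```

A procedural statement like this is exactly what a FLEx lexicon cannot currently express: its variants and rewrite rules have no way to consult whether another morphological process has already applied.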
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span><span style="font-family: Georgia, Times New Roman, serif; font-size: x-small;">References:</span><br />
<div>
</div>
<br />
<span style="font-family: Georgia, Times New Roman, serif; font-size: x-small;">DiCanio, C. (forthcoming) Tonal classes in Itunyoso Triqui person morphology, in <i>Tone and Inflection</i>, Empirical Approaches to Language Typology series, Palancar, Enrique and Léonard, Jean-Léo (eds). Mouton de Gruyter.</span><br />
<br />
<span style="font-family: Georgia, Times New Roman, serif; font-size: x-small;">Inkelas, Sharon (2014) <i>The Interplay of Morphology and Phonology</i>. Oxford Surveys in Syntax and Morphology. Oxford University Press, Oxford, UK.</span><br />
Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com1tag:blogger.com,1999:blog-8009206815446785752.post-45738871597114548522015-07-24T20:05:00.001+02:002015-07-24T20:11:14.556+02:00The healthy and unhealthy vocal fries<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif;">There has been much discussion in the news media lately about the phenomenon known as "vocal fry" and its use among English-speaking women in the United States. Vocal fry refers to the irregular vibration of one's vocal folds and it is normally produced with low pitch. <span style="line-height: 18.7199993133545px;">In </span><a href="http://www.npr.org/2015/07/07/420627143/filmmaker-and-speech-pathologist-weigh-in-on-what-it-means-to-sound-gay" style="line-height: 18.7199993133545px;">an interview with Terry Gross</a><span style="line-height: 18.7199993133545px;">, Susan Sankin, a speech-language pathologist, stated that vocal fry is harmful to one's vocal folds. In a </span><a href="http://www.npr.org/2015/07/23/425608745/from-upspeak-to-vocal-fry-are-we-policing-young-womens-voices?utm_source=facebook.com&utm_medium=social&utm_campaign=npr&utm_term=nprnews&utm_content=20150723" style="line-height: 18.7199993133545px;">follow-up piece</a><span style="line-height: 18.7199993133545px;"> on 7/23/15 on NPR, she maintains this view, stating:</span></span></div>
<blockquote class="tr_bq" style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="background-color: white; font-family: Verdana, sans-serif; line-height: 28.9999580383301px;">...I have heard ENTs say that it can cause damage. And for a lot of the languages where it's a habitual pattern - as you develop from a young age, that's how you're training and using your vocal cords. And I think when you start to fall into that pattern later on, I think that it can cause some damage. Again, I'm not a doctor, so I can't say that I've looked at people's vocal cords and I've seen it, but I have heard ENTs say that they do notice that it can cause damage. And sometimes the jury is out on that as well.</span></blockquote>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif;">Just what is behind this notion that vocal fry may be damaging for one's vocal folds? After all, what we're calling "vocal fry" is used <a href="http://www.sciencedirect.com/science/article/pii/S0095447001901470">in many languages</a> to contrast meaning among words, just like one might contrast the words 'heed' and 'hid' by their vowel sounds. It <span style="line-height: 18.7199993133545px;">is also ubiquitous throughout the languages of the world to mark boundaries between phrases. How can something that is so common be considered a vocal pathology?</span></span></div>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif; line-height: 18.7199993133545px;">To answer this question, it's necessary to first make a distinction between speech articulation and speech acoustics. Speech articulation involves what you do in your oral cavity to produce speech sounds. Speech acoustics involves what sounds you hear that convey a linguistic message. Phonetics involves the study of both these things and phoneticians are interested in understanding how certain articulations produce certain acoustic characteristics. One can more easily investigate this relationship for sounds with un-hidden articulations. For instance, the 'p', as in 'pan', is made with the lips. One can see them close when this sound is produced and observe silence in the acoustic signal while one's lips remain closed. </span></div>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif;"><span style="line-height: 18.7199993133545px;">The same thing is <i>not</i> true for the vocal folds though. </span><span style="line-height: 18.7199993133545px;">When it comes to the vocal folds, it's often a rather messy business to investigate what they are actually doing. They're quite small (just about 1 - 2.5 cm in length, depending on one's sex) and taking a video recording of them moving during speech involves inserting a small camera attached to a wire through one's nostrils to hang near the upper portion of one's pharynx (throat) and peer downward. As you might imagine, many people object to having foreign objects inserted into their noses.</span></span></div>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif;">One way around this is to just look at the acoustic signal and interpret what the configuration of the vocal folds must be. People don't object nearly as much to being recorded as to having wires inserted into their noses. <span style="line-height: 18.7199993133545px;">Moreover, plenty of other articulations have consistent acoustic consequences. For instance, lowering one's tongue and jaw during speech changes the acoustic resonances of the oral cavity in a rather consistent manner. So, the theory goes, one can rely on the acoustics of the speech signal to tell us what the speech articulators are doing. So far, so good.</span></span></div>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif;">While this method is fairly robust, there's something problematic about it when it comes to the vocal folds. What is called "vocal fry" involves irregular vibration of the vocal folds (see below, taken from <a href="http://christiandicanio.blogspot.com/2014/06/vocal-fry-doesnt-harm-your-career.html">a previous post</a>). In the figure here, one notices the irregular vocal fold vibrations on the right. Each glottal pulse is individually stronger (has higher amplitude), but the timing between pulses is erratic. To quote a well-known linguist, this voice quality sounds like "a stick being dragged along a fence."</span></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWhn-btE9wTfYAYESIEb5E6USbFvflTUhrAJzC3oKZGg8vR5fijhyphenhyphensjYeiKQfKquLGP7WRFm8TpydJozEyuidOdCj_2LmBZlm9OmuWUoSgsZ8qHVyHqEBF5dGsX4eEXHBPA0Y-DdEjtr4-/s1600/creak_example_marked.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: Verdana, sans-serif;"><img border="0" height="425" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWhn-btE9wTfYAYESIEb5E6USbFvflTUhrAJzC3oKZGg8vR5fijhyphenhyphensjYeiKQfKquLGP7WRFm8TpydJozEyuidOdCj_2LmBZlm9OmuWUoSgsZ8qHVyHqEBF5dGsX4eEXHBPA0Y-DdEjtr4-/s640/creak_example_marked.jpg" width="640" /></span></a></div>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif;"><br /></span></div>
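The erratic pulse timing visible on the right of the figure is exactly what standard acoustic measures of irregular voicing quantify. Below is a toy Python sketch of one such measure, period-to-period ("local") jitter; the pulse times are invented for illustration, and the simplified formula is only loosely modeled on the jitter measures reported by tools like Praat.

```python
# Toy computation of "local jitter": the mean absolute difference between
# consecutive glottal periods, divided by the mean period. Regular (modal)
# voicing yields a value near zero; fry-like irregular voicing, a high one.
# The pulse times below are invented purely for illustration.

def local_jitter(pulse_times):
    periods = [t2 - t1 for t1, t2 in zip(pulse_times, pulse_times[1:])]
    mean_period = sum(periods) / len(periods)
    diffs = [abs(p2 - p1) for p1, p2 in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / mean_period

modal = [0.000, 0.005, 0.010, 0.015, 0.020]  # evenly spaced pulses (~200 Hz)
fry = [0.000, 0.012, 0.019, 0.043, 0.051]    # erratically spaced pulses

print(local_jitter(modal) < local_jitter(fry))  # True: fry timing is erratic
```

Note that a measure like this only describes the acoustic signal; as the rest of the post argues, it says nothing by itself about which articulation produced the irregularity.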
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif; line-height: 18.7199993133545px;">But, to return to our main interest, what is the articulation that gives rise to this acoustic pattern? The term "vocal fry" refers not to the articulatory configuration, but to one's perception of the acoustics. As it turns out, there are many things that can produce the type of vocal fold vibration that we observe above. Much like a wheel that is fastened too tightly, if one constricts the larynx (where the vocal folds sit), it is harder for the vocal folds to vibrate regularly. Since the vibration of the vocal folds requires consistent airflow from the lungs, if one runs out of breath at the end of a sentence, the vocal folds also do not vibrate so regularly.</span></div>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif;"><span style="line-height: 18.7199993133545px;">For people who have developed vocal fold lesions, such as nodules, or whose larynx has been affected by cancer or other pathologies, the vocal folds also <a href="http://www.sciencedirect.com/science/article/pii/S0892199711001020">do not vibrate so regularly</a>. </span><span style="line-height: 18.7199993133545px;">Clearly, the same acoustic pattern matches a number of different articulatory configurations. Yet, </span><span style="line-height: 18.7199993133545px;">all of this irregular vibration is described with a single cover term, "vocal fry." </span></span></div>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif;"><span style="line-height: 18.7199993133545px;">So, if one were to observe vocal fry in different speakers, what could one conclude? </span><span style="line-height: 18.7199993133545px;">Unless there is independent evidence about a speaker's laryngeal health from a clinical setting, the notion that vocal fry is pathological is a case of <i>the symptom getting confused with the cause</i>. Since we rely on the acoustic signal to tell us about articulation, we associate the presence of a certain characteristic of the acoustic signal with an articulatory pathology. In other words, vocal fry must be pathological, right? No, in fact </span><span style="line-height: 18.7199993133545px;">this is a classic logical error (affirming the consequent).</span></span></div>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif;"><span style="line-height: 18.7199993133545px;"><a href="http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=552144&fileId=S095267570600087X">Research on the production of voice quality across languages</a> has shown that speakers use a number of different configurations to constrict the larynx and produce what is known as "vocal fry." Acoustically, and only acoustically, these might appear similar to pathologies that produce irregular vibration of the vocal folds. Yet, the cause of the irregular vibration is different. </span><span style="line-height: 18.7199993133545px;">The articulation of the vocal folds is difficult to examine. So, researchers have assumed aspects of their configuration on the basis of what the acoustic signal says. Yet, this only works insofar as there is not a one-to-many association between the acoustic signal and the articulatory mechanism involved. </span></span></div>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif;"><span style="line-height: 18.7199993133545px;">The problem is, we </span><i style="line-height: 18.7199993133545px;">do</i><span style="line-height: 18.7199993133545px;"> have such a many-to-one relationship between articulations and the acoustic signal when it comes to voice quality. Thus, one cannot infer, on the basis of one part of the acoustic signal alone, what articulation is involved. Speech-language pathologists, like Susan Sankin, might heed this before they label "vocal fry" as damaging to one's vocal folds. It's not the voice quality that is damaging, but this misunderstanding of cause and effect.</span></span></div>
<div style="color: #333333; line-height: 18.7199993133545px; margin-bottom: 1.2em; margin-top: 1.2em;">
<span style="font-family: Verdana, sans-serif; line-height: 18.7199993133545px;">What does this mean for the <a href="http://www.businessinsider.com/speech-trend-vocal-fry-is-normal-2014-8">young women</a> whose vocal fry is singled out as being unhealthy and damaging for their careers? It's the attitudes and knowledge about women's voices that need to change, not the voices themselves.</span></div>
Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0tag:blogger.com,1999:blog-8009206815446785752.post-17731599512322315702015-07-06T01:11:00.002+02:002015-07-06T01:11:37.826+02:00Being cooperative is not evidence of confirmation biasA few days ago, the New York Times posted a <a href="http://www.nytimes.com/interactive/2015/07/03/upshot/a-quick-puzzle-to-test-your-problem-solving.html?hp&action=click&pgtype=Homepage&module=second-column-region&region=top-news&WT.nav=top-news&abt=0002&abg=0">piece</a> which argued that confirmation bias is a common failure of human thinking. Confirmation bias is the idea that one tends to interpret new facts in terms of one's existing preconceptions.<br />
<i><br /></i>
The author of the piece, David Leonhardt, discusses confirmation bias by way of a mathematical example where the reader is asked to guess the rule determining the sequence "2, 4, 8" by testing additional examples. Thus, one can type in sequences like "4, 8, 16" or "10, 95, 387" and see if they follow the same rule as the sequence "2, 4, 8." If one enters a sequence like "4, 8, 16" into the boxes in Leonhardt's article, one receives a confirmation that it also follows the same rule as that which produced "2, 4, 8."<br />
<br />
So, just what is this rule? Leonhardt states:<br />
<br />
<i>"...most people start off with the incorrect assumption that if we’re asking them to solve a problem, it must be a somewhat tricky problem. They come up with a theory for what the answer is, like: Each number is double the previous number."</i><br />
<i><br /></i>
The true rule, Leonhardt explains, is not that each number is double the previous number, but rather that each subsequent value is greater than the preceding value. That people assume the former rule is taken as evidence for confirmation bias. As stated, <i>"Not only are people more likely to believe information that fits their pre-existing beliefs, but they’re also more likely to go looking for such information."</i><br />
<br />
However, it strikes me that there are other, rather sensible reasons, which Leonhardt does not consider, that people will assume the former rule. One is found among the well-known <i>maxims of conversation,</i> formulated by the famous philosopher of language, <a href="https://en.wikipedia.org/wiki/Paul_Grice">Paul Grice</a>. These maxims, well-known to any introductory linguistics student, state that conversation is guided by constraints of quantity, quality, relation, and manner. As a default, we assume that speakers will give only enough information, be truthful with it, be relevant to the topic, and be clear, respectively. When speakers deviate from these expectations, we are annoyed with the conversation. In such cases, we might state "<i>He was long-winded</i>" or "<i>He kept going off on tangents</i>." Our ability to follow these maxims demonstrates our cooperation within a conversation. Hence, they fall under what Grice terms the <i>cooperative principle.</i><br />
<br />
Grice's maxim of <i>quantity</i> states that one should not make his/her contribution more informative than required. Thus, when someone asks for directions to a particular room in a building, one does not expect the speaker to provide instructions on how to open a door nor the history of certain rooms that the listener will likely pass. If additional details are provided, our interpretation is that they must somehow be relevant (incidentally, another maxim). So, just what might Grice have to do with Leonhardt's example here?<br />
<br />
Consider the initial example that he provides: 2, 4, 8. The reader's expectation is that this example is as informative as necessary. If the author chose a sequence where each subsequent value is double the previous value, then this must be relevant to the question. After all, the expectation is that the author provided this information because it is important. When we then learn that the rule is not what we assumed, our reaction is one of surprise. Why provide this particular example if any random increasing sequence, like 1, 5/3, 9, would have sufficed?<br />
<br />
Providing too much information in this way would seem to be a case of conversational deception. The listeners/readers are led astray believing that the author was following the maxim of quantity and relevance when, in fact, the example was intended to be overly informative. So, are 78% of those who participate in this particular task guilty of confirmation bias? Perhaps some are, but Leonhardt would be wise to consider that most people are guided not by the expectation that the problem is tricky, but rather by the expectation that the author's example does not provide too much information. An entirely different outcome would be produced if the example were as simple as "1, 2, 3."<br />
<br />
*Incidentally, the rule "<i>Each number is double the previous number</i>" is just a more specific case of the rule "<i>Each number is greater than the previous number</i>," at least for positive numbers: the first entails the second.<br />
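This entailment is easy to verify mechanically for sequences of positive numbers; a quick sketch:

```python
def doubles(seq):
    """Each number is double the previous number."""
    return all(b == 2 * a for a, b in zip(seq, seq[1:]))

def increasing(seq):
    """Each number is greater than the previous number."""
    return all(b > a for a, b in zip(seq, seq[1:]))

# Every positive doubling sequence is increasing, but not vice versa:
print(doubles([2, 4, 8]), increasing([2, 4, 8]))      # True True
print(doubles([1, 5/3, 9]), increasing([1, 5/3, 9]))  # False True
```

The second sequence shows the asymmetry: it satisfies Leonhardt's actual rule while failing the more specific one that readers naturally test first.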
<br />
<br />Christianhttp://www.blogger.com/profile/00108470058202380066noreply@blogger.com0