This web page lays out what we will do to prepare for Monday's class. Please submit a "reading reaction" that remarks on something you experienced trying this out. (For once, it's okay if it's not a deep or critical thought; comments about the process itself or specific results you got are fine.)
1. Download and install Praat
- If you don't already have Praat installed on your computer, or you would like to upgrade to a newer version, go to http://www.praat.org and follow the download instructions for your operating system.
- If you've never used Praat before, see Handout #2 from the Ling 520 Praat resources page for a quick overview.
2. Learn how to use the "OT Learning" software
- Go to Praat's Help menu (click at the right side of the top menu bar in the Objects window).
- Click on "Praat intro". Scroll down and click on "OT learning".
- Read carefully through sections 1-5. (The shortcuts in section 6 and the additional issues in section 7 are also interesting if you are curious.)
- Much of the terminology should be familiar from reading Boersma & Hayes (2001); note that you can set values for the ranking values, plasticity, evaluation noise, etc.
- Note that "disharmony" (in sec. 2.4 of the OT learning manual) is what B&H called the "selection point": the ranking value as adjusted by noise on one invocation of the grammar.
- The grammar type that you will be using is "OTGrammar" -- this covers both classic and stochastic OT.
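To make the terminology concrete, here is a minimal Python sketch (not Praat's actual implementation) of one stochastic-OT evaluation: each constraint's disharmony (selection point) is its ranking value plus Gaussian noise, and the winner is the candidate with the best violation profile under the resulting ranking. The constraint names, ranking values, and candidates below are invented for illustration.

```python
import random

NOISE_SD = 2.0  # the evaluation-noise standard deviation B&H (2001) assume

def disharmonies(ranking_values, rng):
    """One evaluation: each ranking value plus Gaussian noise (the selection point)."""
    return {c: v + rng.gauss(0.0, NOISE_SD) for c, v in ranking_values.items()}

def evaluate(violations, ranking_values, rng):
    """Return the optimal candidate under one noisy ranking.

    violations[cand][constraint] is a violation count."""
    d = disharmonies(ranking_values, rng)
    order = sorted(ranking_values, key=lambda c: d[c], reverse=True)
    # Optimal = lexicographically fewest violations, highest-ranked constraint first.
    return min(violations, key=lambda cand: tuple(violations[cand][c] for c in order))

# Toy grammar (made up): NoCoda slightly outranks Max, so /pat/ usually
# surfaces as "pa", but the close ranking values produce variation.
ranking = {"NoCoda": 100.0, "Max": 99.0}
violations = {"pat": {"NoCoda": 1, "Max": 0},
              "pa":  {"NoCoda": 0, "Max": 1}}

rng = random.Random(1)
counts = {"pat": 0, "pa": 0}
for _ in range(10000):
    counts[evaluate(violations, ranking, rng)] += 1
print(counts)  # both outputs occur; "pa" more often
```

Running this a few times with different ranking distances is a quick way to build intuition for how a 1-point versus a 10-point gap behaves under noise.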
3. Replicate one of the analyses in Boersma & Hayes (2001)
- Pick one of the GLA analyses from B&H (2001) and try to repeat the analysis yourself in Praat. Don't forget to use the information in Appendix A when necessary.
- Do your results look like B&H's? If not, what do you think might have caused the difference?
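If it helps to see the update rule outside Praat while you compare results, here is a hedged Python sketch of the error-driven GLA step from B&H (2001): when the learner's output differs from the learning datum, every constraint that penalizes the learner's (wrong) form more is promoted by the plasticity, and every constraint that penalizes the correct form more is demoted. The toy grammar and all numbers are invented.

```python
import random

NOISE_SD = 2.0

def evaluate(violations, ranking, rng):
    """Optimal candidate under one noisy ranking (stochastic OT evaluation)."""
    d = {c: v + rng.gauss(0.0, NOISE_SD) for c, v in ranking.items()}
    order = sorted(ranking, key=lambda c: d[c], reverse=True)
    return min(violations, key=lambda cand: tuple(violations[cand][c] for c in order))

def gla_update(ranking, violations, learner_out, correct_out, plasticity):
    """One GLA step: promote constraints that penalize the learner's wrong
    output, demote constraints that penalize the correct output."""
    if learner_out == correct_out:
        return
    for c in ranking:
        if violations[learner_out][c] > violations[correct_out][c]:
            ranking[c] += plasticity
        elif violations[correct_out][c] > violations[learner_out][c]:
            ranking[c] -= plasticity

# Toy learning problem (made up): the adult language always says "pa" for
# /pat/, but the learner starts with NoCoda and Max tied.
violations = {"pat": {"NoCoda": 1, "Max": 0},
              "pa":  {"NoCoda": 0, "Max": 1}}
ranking = {"NoCoda": 100.0, "Max": 100.0}

rng = random.Random(1)
for _ in range(2000):
    out = evaluate(violations, ranking, rng)
    gla_update(ranking, violations, out, "pa", plasticity=0.1)

print(ranking)  # NoCoda ends up ranked above Max
```

The ranking values drift apart only on errors, so the learning curve flattens as errors become rare -- the same behavior you should see in Praat's learning reports.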
4. Going beyond Boersma & Hayes
Choose (at least) one of the following two ideas to pursue:
- This semester, we have seen a few analyses involving constraint rankings of the general type depicted in example (7) of Boersma & Hayes (2001) -- a type of situation that the GLA cannot model. One is the floating-constraint analysis in Nagy & Reynolds (1997); other papers (Anttila, Kostakis, ...) contain similar analyses.
- Pick one of these and feed the constraints and violation profiles, etc., as given by the original author into the GLA.
- What happens? Does the GLA learn some kind of grammar? If so, does the grammar generate the right kinds of outputs? Or is it weird?
---OR---
- Think about what a stochastic OT grammar that incorporates StyleSensitivity might look like (not the learning algorithm, just the ranking values, noise, etc. in the adult grammar). Be concrete:
- Make up a scenario with specific variable forms (borrow a scenario from one of the readings if you like) and work out what constraints and rankings are needed to derive those variable forms (again, it's fine to stick close to one of the readings).
- Now, assume that the "non-stylistic variation" is handled by a B&H-type system. How would we have to add StyleSensitivity to this to get the right kinds of variable rankings?
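As one possible starting point for being concrete, here is an invented Python formalization: give some constraints a style sensitivity, and let the selection point on each evaluation be ranking value + style × sensitivity + noise, with style running from 0 (casual) to 1 (formal). This is just one way to do it, and all constraint names and numbers below are made up.

```python
import random

NOISE_SD = 2.0

def evaluate(violations, ranking, sensitivity, style, rng):
    """Stochastic-OT evaluation with a style-shifted selection point:
    disharmony = ranking value + style * sensitivity + noise."""
    d = {c: v + style * sensitivity.get(c, 0.0) + rng.gauss(0.0, NOISE_SD)
         for c, v in ranking.items()}
    order = sorted(ranking, key=lambda c: d[c], reverse=True)
    return min(violations, key=lambda cand: tuple(violations[cand][c] for c in order))

# Toy scenario (made up): casually, NoCoda outranks Max and /pat/ is usually
# reduced to "pa"; Max is style-sensitive, so in formal speech it climbs
# above NoCoda and the faithful form "pat" predominates.
violations = {"pat": {"NoCoda": 1, "Max": 0},
              "pa":  {"NoCoda": 0, "Max": 1}}
ranking = {"NoCoda": 100.0, "Max": 98.0}
sensitivity = {"Max": 8.0}  # only Max reacts to style in this toy grammar

rng = random.Random(1)
def pat_rate(style, n=5000):
    wins = sum(evaluate(violations, ranking, sensitivity, style, rng) == "pat"
               for _ in range(n))
    return wins / n

print("casual:", pat_rate(0.0))  # "pat" in a minority of evaluations
print("formal:", pat_rate(1.0))  # "pat" in the large majority
```

You could also ask where the sensitivity values themselves come from -- that is exactly the learning question the exercise sets aside.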