Saturday, 25 August 2018

Full circle: Summer Course in English Phonetics...again!

I'm halfway through my PhD and things are getting really exciting. This also means that I'm digging deeper into the areas of knowledge that are more closely related to my research topic, and I have had to leave a few other areas aside. Pronunciation teaching, unfortunately, had become one of those things I wasn't doing as often as I would have liked...until now!

The usual disclaimers before we continue: this post may get a bit emotional at first, but feel free to skip the first bit if you really can't be bothered with my trip down memory lane!
(Disclaimer 2: these are my own personal views and do not represent the views of SCEP organisers)

Four years ago, when I started this blog, I wrote a bit of a naive post on my experience at the UCL Summer Course in English Phonetics. I attended SCEP in 2010, and it was a life-changing experience for me, as I got a glimpse of the type of academic life I wanted to be part of, and let's face it, who can forget their first time ever in the UK, especially if you are travelling exclusively to do Phonetics! (What's not to love, right!?)

So in 2018, eight years later, I was invited to be a tutor in SCEP, and for me, this has been a blessing in many ways. Having the opportunity to teach where I was once a student (I did it at my school, and of course, I also lectured for many years at the "Joaquín" I miss so much!) is really a way to give back a little something of all the fabulous input and care I received and to be grateful for all I have been given.

Apart from having a chance to pour some of my pron-passion in, SCEP was also about to give me another great gift: that of revisiting my practice. Even though I've always been a reflective teacher, and I've been far more critical of myself than of anyone else, when you are not doing what you used to do for such a long time (I've been away from pron teaching for almost two years after a decade doing pronunciation and intonation teaching), and when you can take some distance (literally) from it, you start to see things in a different light. Planning my SCEP sessions, knowing I would be teaching students from different walks of life with different L1s and expectations, was quite a challenge, as it meant having to be well prepared, and yet flexible (which, for a control freak like myself, did feel a bit daunting). So each tutorial became an exercise in finding my old pron-teaching self, and reconciling it with my new research perspectives and newly acquired knowledge. It felt good to have the best of both worlds!

However, I feel that the scariest part of the experience was facing my own prejudice: I thought I would not be able to live up to the expectations of my students, as I thought that presumably, attendees would (rightly) expect to be taught not only by expert phoneticians or experienced pronunciation teachers (which I am), but also, perhaps, by L1 speakers of English. I feel really privileged to have been invited to share my expertise and to have been trusted with the job, as I find it still a bit hard to shake off my non-native speaker feelings of inadequacy (which, funnily enough, I don't project when I'm teaching, as I feel my job is to empower my students as I help them work on their accents. Funny how we can help and encourage others to get better while still battling our self-consciousness about our own shortcomings or fears). Everyone at SCEP has been really respectful and appreciative of my work, including those L1 speakers of English I was lucky to have as students, so all in all, I can consider my prejudice now officially dead and buried.

So it's been really great to see that tutorials were spaces where we all had something to give as a community, where people helped each other with tips and tricks, as we worked on dialogues and role plays, sang to intonation tunes, or tried vocal warm-ups. I was really happy to have my two L1 speakers of English share their perception of how certain sounds or tones "gut-sounded" to them, while I offered my own technical knowledge of phonetics and my trained ear. At least from my own perspective as a tutor, tutorials are part of the "magic" of SCEP, and mine were a real joy (...and while I type this a part of me will inevitably fear my students did not see it this way....but in my heart of hearts, I really hope they enjoyed the sessions and learned as much as I have!).

But enough about me! Let's talk a bit about the course, shall we?

***
Sorry! Change of plans. I need to take a detour.
UCL SCEP is turning 100 next year. Anyone who has ever heard of Daniel Jones, A.C. Gimson, J.D. O'Connor, John Wells, Michael Ashby, among many other well-known phoneticians (and linguists in general! I need to mention J.R. Firth!), will understand the relevance of UCL to the world of Phonetics and its great contribution to Phonetics as we know it today, across the different, multiple and varied dimensions that make up present-day phonetics.

Extract from The English-Language Tourist's Guide to Britain, by Crystal and Crystal, where the mythical 21 Gordon Square is mentioned.

At the famous 20-21 Gordon Square, where Daniel Jones and J.R. Firth (among other famous phon-celebrities) did their magic.

So even though perhaps there is not so much left of that Phonetics world we used to know at UCL now, since the interests, concerns, and departments are now different, SCEP retains a lot of what we all remember and love about Practical Phonetics, and I am particularly grateful for all the work Geoff Lindsey, Joanna Przedlacka, and the wonderful administrator Molly Bennett are doing to keep it running and getting better every year.

****
SCEP 2010 was similar to, yet in a number of ways significantly different from, the SCEP I found this year.

The structure of SCEP remains the same, with a pronunciation lecture + tutorial and an intonation lecture + tutorial (4 hours of hard work every morning!), and after lunchtime, an ear-training session and then another lecture (including talks on contrastive analyses between different L1 phonological inventories, the guest lecture, presentations on present-day SSBE and on speech acoustics, and reflections and resources on pronunciation teaching). I personally love the "school timetable", as it keeps the course dynamic, and the fact that you share activities with the whole cohort as well as with your own small "community" and your two different tutors makes for the vibrant and positive atmosphere that I experienced throughout SCEP.

The pronunciation lectures are a bit more varied than in the past, and it was a welcome addition to have a lecture on Global Englishes, as well as lectures on changes to SSBE. As someone who spent years obsessed with teaching something like "present-day" SSBE in Argentina, looking at the vowel charts we were offered at lectures and in the handbook felt like a triumph: it was amazing to finally get to see in black and white, and disseminated in lectures, those lovely now fronted GOOSE and FOOT vowels, a higher THOUGHT, a lower TRAP, monophthongised SQUARE and NEAR. Hearing so much about the glottal stop and L-vocalisation at SCEP made me feel grateful I had embraced them in my classes in Buenos Aires. Some of the ear-training sessions I sneaked into were also truly great exercises on perception skills, reminiscent of the Phonetics Hour I have at York where we do all these impressionistic transcriptions and use the IPA as a tool for representation of what is said (rather than focusing on transcription rules of what we "ought to" hear, which is a point I tried to make during the Q&A panel).

The intonation lectures exploited many of the materials and ways of looking at intonation that UCL academics have traditionally developed, so it came as no surprise that there would be references to Wells, and to O'Connor & Arnold, and to their well-known contributions to intonation teaching. Those who know about my intonation passion/obsession will know I would have loved to make my own intonation syllabus for the course and tweak a few things, but it's impossible to cram so much intonational content into just 10 days, so I obviously understand that we may need to fall prey to simplifications at times, and students did find them very useful! I think that the energy and clarity that Jane, Geoff, Sam, and Kate put into the intonation lectures really helped students to get a good grip of something that many people find so difficult to understand.

In terms of academic celebrities, it was of course very exciting to see some familiar SCEP 2010 faces, like Geoff Lindsey (course director), Joanna Przedlacka (associate director), Jane Setter, Kate Scott, Margaret Miller, Inger Mees, Paul Carley, and Masaki Taniguchi (or should I say, the official photographer?).

Geoff delivered a fascinating lecture on vowels and the vowel space with admirable clarity and lovely metaphors. Jane contributed to the course with a refreshing perspective on accents by presenting on ELF and Global Englishes, as well as guiding the more practical intonation lectures (And of course, she was in charge of the end-of-course singing, with artistic/composing contributions from Inger Mees and Kate Scott, and the lyrics and background guitar by Tim Wharton, who was not tutoring this year). 

Joanna got us all wow-ing with a recording of Jones and Firth in 1933 and a comparison with present-day SSBE. Kate was in charge of more functional accounts of intonation -referring to syntactic concerns, a few remarks on affect, and also social rituals-, and also delivered a lecture on phonotactics which I really appreciated, as it invited reflection on how rhythmic needs may bring about changes to citation forms at syllable level. Margaret presented on Spanish vs English phonemic inventories, and also shared her collection of pronunciation- and intonation-related misunderstandings.

Shanti Ulfsbjorninn gave a great introduction to transcription, phonemes, and allophones. I especially liked the reference to runes and other scripts and the comparison to the IPA and forms of representation, as well as the very clear explanation of surface and underlying representations.

I personally enjoyed the energy of Paul Carley and the content of his lectures: his very clear introductions to consonants, and a very passionate discussion of pronunciation teaching -including a bit of myth debunking and a couple of uncomfortable truths- were a great contribution to the course.

Sam Wood presented some of English's basic nucleus placement rules, and John Harris discussed changes to SSBE as well as key distinctions between accents in terms of phonological processes (which made me very Yorkshire-sick - as in homesick, of course - with all the references to the FOOT-STRUT merger and the lack of BATH broadening...see Wells (1982) or the Dialect Blog for more info on these!)

Mark Huckvale (https://speechandhearing.net/) entertained and instructed us through a really didactic introduction to speech acoustics. I have to admit I was envious of his clarity, and as a fan of his software, I was sooo glad I was sitting in that lecture! 

Another really welcome addition was the guest lecture by Richard Cauldwell. Having a great talk on listening, and on how phonology may inform the teaching of listening in a way which is different from the models we can use to teach production, was truly valuable for the EFL teachers attending SCEP. Like many of the other lectures in the course, Cauldwell's presentation shook the firm ground of many traditional ways of teaching and even of prescriptive views of applied phonetics, and the fact that students in the tutorials over the following days were likening several phenomena to the Greenhouse, or to the Jungle, also shows how relevant and significant this lecture was to participants.

Credit: Masaki Taniguchi
The very last taught SCEP slot is always a Q&A. I kind of cheekily accepted the challenge to join Jane, Inger, and Luke and tackle all those end-of-course queries. To be honest, I wasn't expecting a personal question, so when asked about what led me to get into phonetics, my reflection on how getting into phonetics and pronunciation teaching may have been a result of a personal "trauma" (tongue-in-cheek) of having to undergo speech therapy as a kid because I could not produce the alveolar trill may have been a bit shocking for the audience to hear, but I think no one can deny that I absolutely love phon and pron and I don't think I'll ever regret having taken this path.

Interesting questions included issues of transcription and levels of detail (in this respect, I would have liked to refer people to the introduction to the Illustrations of the IPA, as well as to chapter 3 of Ogden's Introduction to English Phonetics), syllabification criteria (the words "extra" and "selfish" being a case in point - I had to put my phonology hat on to deal with that one and refer to the Maximal Onset Principle, as well as related issues of Legality and Sonority I had no time to mention), the usefulness of cardinal vowels, and monophthongisation of central diphthongs (I did not have time to answer this one, but someone mentioned it was difficult for Spanish speakers to monophthongise SQUARE, and, adding to the comment by Luke that perhaps speakers actually find the non-rhotic aspect of it tricky, I would say it's a matter of teaching a slightly lower [e]-like sound: in non-technical terms, asking Spanish speakers to start from a Spanish [e], smile a bit more broadly and lower their jaws slightly to produce a long [ɛ] sound.)
Image credit: Masaki Taniguchi
I think the tutorials are at the very centre of SCEP and shape the students' overall experience. There was a fantastic group of tutors making this possible - accent coaches, phon lecturers, sign language researchers - whose different areas of expertise, practice, and research interests made SCEP a really rich experience.
With two fellow "new additions" to SCEP, Luke Nicholson (@ImproveAccent) and Alex Rotatori (@Alex_rotatori) - Picture by Alex. 
But obviously the very core of SCEP is the collective of participants. There was a fabulous group of students, enthusiastic and inquisitive, and at least in my groups, eager to learn and reflect on pronunciation and intonation. Great participants and enthusiastic staff really do make one feel that SCEP is a celebration of phonetics!
SCEP 2018 participants

You will have to come to SCEP to learn about the lectures, but if you want a sneak peek of some of the fun resources used during the lectures in general, I could mention videos of MRIs (of course!), vowel space charts with all these lovely "updated" vowels, some poems (Humpty Dumpty and Betty Botter making special appearances), Cantonese YouTubers, Masaki's Tone Gymnastics, The Music Man Song, Ben Crystal's OP rendering of Romeo and Juliet, among others. (What these were used for, I shall not reveal!)
***

I feel deeply grateful for the chance to tutor at SCEP. I personally tried to make the most of my tutorials to share with my students a bag of tips and tricks, my ears and my expertise, and I learned a lot about the phonology of other languages, as well as about the way L1 speakers of English react to and perceive certain phenomena. I have tried to make people enthusiastic about my research interests in the intonation sessions, showing ways in which intonation contributes to the construction of meaning and social action alongside the whole range of verbal, non-verbal and pragmatic resources, and showing a few tools that may help us visualise intonation on the go (with the possibilities of on-the-spot visualisation that Huckvale's WASP and AMPITCH allow). And in turn, students came up with super interesting questions and observations, and they were all ready to support each other in their production, giving each other their own tips and help, which I think made for the community feeling we created. As with any other teaching experience, you end up realising you have actually learned a lot as you taught!
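(For readers who would like to try this kind of on-the-spot pitch visualisation on their own recordings, here is a minimal sketch of my own using the Python library parselmouth, a wrapper around Praat - not the tools mentioned above, just one freely available alternative; the filename is a placeholder.)

```python
# Minimal pitch-track visualisation sketch using praat-parselmouth.
# "my_recording.wav" is a placeholder for any short mono recording.
import numpy as np
import matplotlib.pyplot as plt
import parselmouth

snd = parselmouth.Sound("my_recording.wav")
pitch = snd.to_pitch()                    # Praat's default pitch analysis

f0 = pitch.selected_array['frequency']    # F0 in Hz, 0 where unvoiced
f0[f0 == 0] = np.nan                      # hide unvoiced stretches in the plot

plt.plot(pitch.xs(), f0, '.')
plt.xlabel("Time (s)")
plt.ylabel("F0 (Hz)")
plt.title("Pitch track")
plt.show()
```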

I personally feel that SCEP is, in many ways, moving with the times, becoming an even more inclusive and diverse space where the whole scope of phonetics is welcome and explored, upholding the importance and seriousness that the study of phonetics requires, providing teachers and professionals alike with an intense overview of the vast phonetics world, and keeping alive the important phonetics story and legacy that UCL is known for.





Sunday, 25 February 2018

Event report: "Pronunciation: The Missing Link" - Chester Uni, Feb 17, 2018 - Part 2


Hello, again! I said I would devote an entire post to my presentation at the PronSIG event in Chester (see full report here), and I hadn't yet done it because I'm currently drowning in deadlines, and also because, as this presentation was part of an article I've been writing, I did not yet want to give it all away. I have used my own data to show the phenomena in question in the presentation, and I cannot distribute those videos or snapshots for ethical reasons (though I have permission to show them in presentations with an anonymising filter, which I did), which is another reason I won't be posting my slides publicly, though I'll add a few captures.



This time I cannot make any disclaimers regarding the representation of other people's presentations, as I'm summarising my own, but there are a few points I need to make: 1) even though this is my own work, I don't claim that my points here are "innovative" or "unique", because I'm sure there must be a lot of people around researching this.  2) And yes, this is about intonation teaching, the theoretical background we develop, how much it responds to real-life interaction, and how we can teach English intonation for interaction, and it is based on L1 English (someone might then want to record conversations using English as a Lingua Franca and do something similar across different L2/FL Englishes)

3) Another remark I always make at the beginning of my presentation is that, at least in Argentina, to the best of my knowledge, the best work on English intonation and speech genres from a SFL perspective has been (and still is) carried out at Universidad Nacional de La Pampa, in a project including researchers and EFL teachers Lucía Rivas, Miriam Germani, and the rest of the team (sorry I cannot name you all). I have collaborated briefly as part of the project, and I'm very grateful to have worked with them, as it's always great to exchange ideas with a group of like-minded academics who want to see intonation teaching theory evolve.

4) And my final disclaimer: because my PhD research is based on language in social interaction, and I listen to everyday conversation all day long, there is a lot I get to notice about "language in the wild" that I cannot do justice to when I make a presentation for ELT. My selection of points to make in this presentation responds to the need that English language teachers have to be presented with theory on intonation that can inform their teaching. Some of us might want to go further and pursue an MA or a PhD because we want to engage in linguistic research, but many teachers will not, and should not have to. So if any of my readers is a discourse or conversation analyst, yes, they will surely gasp in horror when they see some of my simplifications, but I am very respectful of the teachers I have trained and train, and because I am a teacher myself, I know we need something to hold on to, even if it is a "half truth". So this is me using my researcher role to inform my teacher trainer role, to show other language teachers how research on language in the "real" world can help them develop theory for intonation that can empower students across different speech repertoires, and not theory based on introspection or decontextualised examples.

5) And yes, I wrote this in one sitting while I was having my Sunday breakfast because my blog is of secondary importance at the moment (Sorry! Forgive the typos, etc etc)

The framework

I use Systemic Functional Linguistics to inform my view on genre, and as we know, different generic manifestations may have different configurations, which illustrate and also construct different social purposes. Even though texts may exhibit different levels of genre hybridity and blending, there are features that make generic types recognisable, even at a glance.



So I asked the audience to take a quick look at four written texts, and identify which generic types they would associate them with. They did so quickly and successfully, even without reading the texts in full. That already says a lot about the features that we, as members of a culture, associate with particular social purposes.

I then did something I once did in my Discourse Analysis and Phonetics II lessons: I played some snippets of spoken genres, in a distorted form, and asked the audience to identify them. That was harder, especially because of the audio distortion, but the audience guessed quite well. So then, beyond the lexico-grammar, there are features that make spoken text types recognisable as members of a group, as Tench (1996) illustrates:

Prosodic features in different speech genres (Tench, 1996)

The next step is to problematise the role of written vs spoken as a dichotomy...


...and to see these uses of language as part of a continuum, with texts illustrating greater levels of "spokenness" and "writtenness" (Eggins, 2004; Mortoro, 2012, class notes; Flowerdew & Miller, 2005). We can see spokenness and writtenness, and monologism and dialogism, as extremes in a cline, and the organisation of different speech genres (and yes, I know that there are many other ways in which we could organise those types in the continuum, this is just one approximation) as dependent on a number of features including levels of pre-planning, use of formulaic expressions, possibilities for readjustment, contingencies, and whether the goals and trajectories are clear from the start or not:


The truth is that in many ELT lessons, when we say we "do speaking", we are probably working on the monologic, more written-like types of text, and even when we "do conversation", we are mostly teaching lexico-grammatical formulas that contextualise more monologic types of texts (let's be honest: how many times in ordinary interaction do we preface our talk with "In my opinion/view..." or "Firstly...Secondly..."? In my 8 hours of conversational data...zero!).

Some lit-review and data-based findings 

So the core of my presentation was the comparison of two bigger groups of "genres": narrative, and expository (which, as we all know, are made of a number of "sub-genres", each differing in their possible stages and lexicogrammatical configurations...see Eggins & Slade, 2007).  I presented snippets of more monologic types of stories (as in the introduction to a TED talk), and conversational stories (from my own data), and did the same with more expository texts (more difficult to trace in ordinary interaction). I presented a few generalisations as to possible differences we may encounter in terms of narrative and then expository texts (there is a whole body of literature on this, and yes, I went all simplistic because of the purposes of the presentation, heading towards practical advice for teaching...before any conversationalist starts rolling their eyes!), putting together my knowledge of Discourse Analysis and Conversation Analysis:
 










My idea was to show how bundles of features may differ in more written- and more spoken-like texts, and how we can teach them for monologue and for interaction; in particular, how we can teach our EFL learners to be co-interactants, and not monologists.



I discussed, based on data, a few generalisations that we can make in the light of David Brazil's account of intonation sequences and combinations, with rising tones (the "loop" symbol) accompanying contextualising, background information, and falls moving discourse forward (the "play" symbol). These patterns are more common in the Orientation and Complication stages of narratives (later stages are generally made of greater "play" sequences), and they were found in both monologic, and conversational stories. 

In general, the Orientation stage was found to be "neater" and more clearly patterned in monological stories, versus conversational stories, where the background actions and the contextualisation in the Orientation stage left slots for listeners to produce their "go-ahead" response upon their opening, and then continuers and markers of affiliation (Stivers, 2008, and others), not to mention how recipient activity may "derail" the story, leading tellers to find a moment to make their way back into the story.


I also discussed the role of accentuation/tonicity: Event sentences (Gussenhoven, 1984) are often used to make the surprise or contingent nature of the "remarkable event" of the Complication stage explicit (I have also found passive and causative sentences to be doing that in my conversational data).

I made some generalisations as to how we can teach intonation for interaction, and showed how level tones (the "pause" icon) and pauses can also be used by our students to engage their recipients in conversation, and create opportunities for realistic interaction, which will always put students in a position to have to re-adjust, and co-create, instead of thinking of speaking as the clash of two independent, unrelated interactional projects by two separate people (as many of the interactions we hear among students in international exams seem to be!). Tonality blocks of background or foreground information with suitable tones, and the use of pitch height, can also inform the recipients of where their legitimate slots for an incoming response could be, and what type of sympathetic/agreeing/etc response can be expected from them (Again, CA/IL people, if you are reading this, just cover your eyes for the slide that follows!):



In the same way, I examined patterns and configurations in expository texts in both "monologic" and conversational data.

Some ideas

I then moved on to present ideas to create opportunities for students to use intonation patterns for both more "monologic" and more "dialogic" types of texts:

Creating opportunities for the use of tone in signposting & contextualising in more monologic expository text types.

A template for the planning of spoken narrative, together with associated lexico-grammatical & intonational features

Creating opportunities for co-construction in interactional "expositions" (descriptions of states of affairs, or procedures...)

Final remarks

I know this write-up may look confusing and it can hardly give you access to what I demonstrated in my presentation, as I'm not including the data snippets that illustrate all this here. I also know that many of the things I said were not clear enough for all the members of the audience, as I am aware my enthusiasm often leads me to be overgenerous and overwhelming, and people get drowned in my enthusiastic rant, and I end up saying more than people can process in a limited amount of time (one of the things it's high time I learned to overcome!). But at least I wanted to show you a preview of how my current research on prosody in interaction may, at some point, find its way back into language teaching. I wanted to also once again uphold what my research keeps confirming: intonation choices are co-built in interaction, and they reveal, and simultaneously co-construct, context and social action. And all theorising on it, in my very humble view, should take that into consideration.




***

I would like once again to thank PronSIG, and Mark Hancock, for inviting me to share all this with the audience at the "Pronunciation: The Missing Link" event. I would also like to say thanks to Dr. Ogden and Dr. Szczepek-Reed for their advice during my brainstorming period for this presentation, which I think ended up being something different from what I originally envisioned. And of course, a massive thank you to the Department of Language & Linguistic Science for supporting my research in every way.


Sunday, 18 February 2018

Event report: "Pronunciation: The Missing Link" - Chester Uni, Feb 17, 2018 - Part 1

Last Saturday I left the East to cross the Pennines and after a three-hour train journey I arrived in the wonderful and picturesque city of Chester.
Lovely Chester (credit: MNC)

PronSIG were holding a new event at the University of Chester: "Pronunciation: The Missing Link". It was a small but really friendly event, and the audience was really keen, so the atmosphere was right for us pron-thusiasts to share our passion, quandaries, and ideas.
(Credit: Catarina Pontes)


As many of you may have inferred from my posts, I take this whole pronunciation teaching thing really seriously, and I have always been affected by the tension between what I know about pronunciation and intonation in the real world, and what happens in the classroom, and what teachers need to know in order to make all this "real life mess" accessible to learners. I was happy to be in this event, because the talks were all about problematising many "set truths" in ELT, while still providing solutions that fit the reality of our classrooms.

I will be writing up a small report on the event, but if you want to see how it developed, you might want to check the #pronsigchester hashtag, where all the live-tweeting went on. As usual, all potential errors in understanding the claims of the presenters are my own. 

The first in line was the always great Richard Cauldwell, with "Pronunciation and Listening: The Case for Divorce". Richard reminded us of his great metaphor for the world of sounds out there: The Greenhouse, the Garden, and the Jungle. He also refreshed our memory in terms of how the Careful Speech Model, which is privileged in ELT in the teaching of pronunciation for production, does not hold for learners' perception of the "mush" of speech. This is why for the teaching of listening we need to embrace the Spontaneous Speech Model, a model that relishes the sometimes unruly (at least in terms of prescriptivist rules) nature of the "sound substance". The sound substance differs from the "sight substance" in a large number of ways, and traditionally, the teaching of listening has focused on the "logic of meaning", guiding learners to fill in the gaps based on meaning concerns, rather than on decoding the sound substance. Cauldwell invites us to put ourselves in our students' shoes and see how their hearings may actually be "reasonable hearings" in terms of the sound substance (one of the many examples was that of learners hearing "peoples" for "pupils": the speaker was producing GOOSE fronting all the way, and no doubt it could indeed, within the logic of the sound substance, have been a "reasonable hearing". The logic of meaning and grammar would not have allowed it, of course). Richard then presented a number of cases of processes of connected speech ("streamlining processes"), as always going beyond the neat rules of assimilation and elision that we see in textbooks, and introducing a nice catalogue of processes and wordshapes we find for words like "certainly", "obviously", and many others. Many of these points will be tackled in Richard's upcoming book, "A Syllabus for Listening - Decoding".
Richard presented a workshop in the afternoon, but I am afraid I was in another session, so I cannot report on it. I know there was a lot of "mouth gymnastics" involved in the production of different soundshapes...

Gemma Archer was the next presenter, and her focus was on pronunciation assessment. It was an interactive presentation, and there was reflection on the number of reasons why teachers may not do pronunciation assessment in the classroom beyond box-ticking forms in speaking exams. We were invited to analyse different types of pronunciation assessment (passages, minimal pairs...), and their strengths and weaknesses. I would like to focus on the fact that by far one of the most widely criticised aspects was the fact that many pronunciation tests are based on reading aloud, which, as we know, is an altogether different cognitive activity. We know that having a set reading test allows for the narrowing down of what we want to assess, and uniformity in the type of output we get from our learners, but it is always worth remembering that reading aloud is not speaking. A few interesting alternatives were presented: the use of Diapix, and of story boards, to elicit less controlled speech, while still making sure that some of the exponents of what we want to assess are there. It was not mentioned in the presentation, but my favourite form of less controlled pronunciation "test" or practice is role play, and for intonation, at least, Barbara Bradford's Intonation in Context is fabulous.

(I was up next, but I will be writing a separate blog post on my presentation.)

Catarina Pontes led a presentation called "Five Reasons why pronunciation must be included in your lessons". In a pronunciation event, this would sound like preaching to the converted, but as it was planned as an interactive presentation, it ended up being a very useful and engaging forum. Participants shared ideas of activities and resources they used in their classrooms, and some teachers voiced concerns connected to experiences encountered with students, such as reference accent issues, and the exposure to different accents.

Annie McDonald presented some ideas to help students decode spoken language better by listening in chunks. Annie presented a number of mondegreens and how they can be analysed to see what kind of processes students have engaged in to make sense of the sound substance. She moved on to discuss how the regular listening lesson primes students into making sense from their content schemata but does not teach them to decode the actual stream of speech so that next time they encounter a similar instantiation, they can recognise it. Annie tried an informal experiment: she selected a few sentences that students were meant to decode word for word, and she worked out her students' percentages of success (quite low). The following lesson, students listened to the text again and then they were able to decode some chunks of speech more successfully. Annie recommended the use of YouGlish and TubeQuizard to look up regular chunks of word clusters so that students can listen to the many manifestations of the same combination of words. Like Richard Cauldwell in the morning, Annie played collections of words/word clusters together, which enables students to get a taste of inter-speaker variation in the production of the same lexical sequences.

Mark Hancock presented a number of interesting activities to make the teaching of tonic stress "simple". We all know what a nightmare the system of tonicity can be, and personally I always feel soooo guilty teaching the "rules", as I know tonicity is so context-dependent and there are so many exceptions. However, Mark Hancock succeeded in presenting some small contexts for participants to decide on where to put the nuclear accent (which he called tonic stress, as in many other pieces of work). There were some interesting debates, as some participants produced alternative versions (oooh, a flashback to my Phonetics 2 lessons!), and as others were not perhaps aware of where they themselves were placing the nuclear stress (something at times I notice I may have trouble with myself when I analyse my Spanish speech). The activities presented were truly interactive and easy to apply in our lessons, and they centred on the following areas (I'm using the technical names here because I'm a phon-freak, but Mark was very careful in his simplification of these): deaccentuation of Given info, contrastive focus, intonational idioms/fixed tonicity, and stress shift. All in all, an interesting overview of tonicity with simple activities that I personally believe can help English learners become aware of tonic stress.

I once again want to thank PronSIG and Mark Hancock for having invited me to be part of the event. It's been a delight to go back to my first teaching love, pronunciation, and to be around experienced teachers who have so much to share, and who also need someone to tell them that some issues are indeed difficult but that there are ways out. I'm really pleased to see how the teaching of listening is evolving, and how teachers are not being undermined or treated with contempt when it comes to how complex pronunciation can be, and how many thorny issues and sides to it there are. In my humble opinion, finding ways to simplify things for teaching should not mean making people feel dumb (as I have been made to feel in some contexts in the past), and in an event like this, it's clear that no one is treating pronunciation teaching lightly. So I am really happy to be involved in this joint quest for truth and teachability and to be able to share it with like-minded people.

My next post will be about my presentation. Personally, and given the feedback I got during lunch and some of my own hunches as I was designing it, it was also a nice surprise to realise how my research can actually inform English language teaching practices when it comes to the teaching of intonation for conversation (and not for monologue.....brace yourselves for a rant in an upcoming post!), so who knows...once my thesis is ready in two years' time...

Tuesday, 2 January 2018

Bliss & Fear: Teaching an Intro to Phonetics seminar to L1 speakers of English

Hiya! Sorry about my blogging silence. In part it was due to my having finally embraced the fact that I am no longer a pronunciation teacher, nor a lecturer in Applied Phonetics, apart from the fact that this context I am in has humbled me in many ways, and I no longer feel entitled to an opinion on many issues. I guess I have become even more aware of all the things I don't know, and of all the stuff there is out there to learn. However, in this new world I am in, I think I can position myself as an experienced teacher in Applied Phonetics, at least as someone who has held that role for a decade in Higher Education and has learned a lot from success and failure, and at the same time, as a student (re/un)learning a lot of Phonetics, so I think that perhaps the next posts will be written in that "capacity", if you wish.

This time, I will be writing (in one sitting, as usual, so forgive the typos) a few reflections on the most exciting challenge I've faced this last term: teaching a seminar on Intro to Phonetics to three groups of students, most of whom speak English as their L1. I would like to compare this experience to my teaching experience in Buenos Aires, and share with you how I felt in this (terrifying) journey, and what I have learned.

***
This term at York, I have been in charge of three of the nine seminars in Intro to Phonetics and Phonology for undergrads (mostly L1 speakers of English) in the first year of their BA. We've got students from different BA programmes, including degrees in different languages, and in Language and Linguistics. In Buenos Aires, I taught Phonology (though in practice it was Phonetics AND Phonology) at a Translation programme to Spanish-speaking students with a B2 level of English (I've taught a few other courses, most of them at Teacher Training programmes, but I'll be discussing Phon1 as it's the one whose content mirrors my module here at York).

At York students have a lecture every week, taught by the module convenor, and then a couple of days later they come to seminars with their homework and reading (hopefully) done. Apart from these two compulsory hours a week, students can come to backup sessions or office hours for questions or extra practice. My students in Buenos Aires at the Translation programme had three running compulsory hours a week, comprising theory and practice on articulatory phonetics, phonology, transcription, and ear-training.

My task here at York is to help students bridge the gap between theory and practice in the topics of the week, and to guide them in the procedural part of the course, particularly transcription, ear-training, and applied theory. The first term (10 weeks) is all about Phonetics, and the topics covered include: the anatomy of speech; transcription types; the description of cardinal vowels and vowels in general, and the classification of different consonant groups spanning the whole IPA chart; allophonic variations in certain contexts and across accents; an introduction to different visual representation types; a few bits on acoustics; and ways of studying phonetics instrumentally and experimentally. Students are asked to take in a full textbook (the lovely "An Introduction to English Phonetics" by Richard Ogden) in two and a half months, and become familiar with and competent in the use of jargon in order to explain articulation. Apart from self-correcting quizzes on the class website, students were assessed during this first term through an essay, in which they had to describe the articulatory sequence (in detail) of their pronunciation (whatever their accent) of the word "pudding". When they are back from their Christmas break, they will have a test on transcription, ear-training, and theory, to bring this first part of the module to a close.

If I look back on my Phonetics and Phonology I courses in Buenos Aires (8-month-long modules),  there were quite a few coincidences in terms of content: even though over there we focused only on General British (with a few remarks on General American), my students learned the classification of vowels and consonants, explained articulatory processes, learned transcription rules and skills, did ear-training/dictation tasks, and engaged in production quite a lot, since the improvement of their pronunciation of English was, in part, the underlying goal of the module.

My Buenos Aires students were assessed in different ways, including recordings of pronunciation practice materials, phonemic dictations and transcriptions, and tests on theory, most of which were related to recognition, with some exercises devoted to explaining phenomena (the Translation courses were more limited in content than the Teacher Training programmes, where perhaps the accounting of theory was done more thoroughly).

In seminars at York, we discussed sagittal sections and animations of articulation to identify sounds in different languages, drew diagrams illustrating manner of articulation, tried a few simple transcriptions of words in different languages, attempted narrow transcriptions of different accent variations, and we also worked on the production of cardinal vowels and other sounds, to build proprioceptive awareness as a tool towards better perception as well.

***

A first big difference between my Buenos Aires and my York experience was that in Buenos Aires, I mostly focused on a single variety of English, whereas at York, I have had to up my teaching skills for not only different accents of English and Englishes, but also for the phonetics of sounds in different languages, especially in view of the work that as linguistic researchers students may have to do to describe languages and language change when doing fieldwork, for example.

Another key difference was that at York, most of my students could hear the difference between vowel contrasts of English (perhaps not so much between cardinal vowels at first, or vowel contrasts in other languages), and I hardly needed to make any point of spelling-to-sound relationships. So, for example, in terms of a FLEECE-KIT contrast, all my York students needed to know was probably the symbols used to represent what they produced or heard, whereas my Buenos Aires students needed to learn associated spellings, and of course, be trained in perceiving them as different from the Spanish i-like target, and from each other.

Whereas in my own courses in Argentina students needed to integrate the whole awareness-perception-symbol-spelling package, at York it has mostly been a perception-to-symbol challenge, and the building of proprioceptive awareness of what they themselves, as L1 speakers of the language, are producing. I would say my English-speaking students struggled most with transcribing what they themselves were producing. It was fun to produce speech sequences in slow motion to identify aspiration, devoicing, anticipatory rounding, not to mention the comparison between different starting points for diphthongs, and TRAP and STRUT varieties among different students in the class (it was fascinating!). I discovered while doing this how all my teaching of pronunciation of L2 had equipped me with tools that my English-speaking students could use to make sense of their own pronunciation, believe it or not!

Another fascinating and scary difference lay in our (my students' vs my) experience of English. I, of course, have the teaching expertise and the theoretical knowledge of Phonetics, but I certainly do not have the experience in accents and in everyday English that my students here at York have. At home, it was perhaps easier to be in "control" of things, since my experience of English and my knowledge and awareness of phonetics was, in general, vaster than that of my students, and we were on an equal footing as Spanish-speaking learners of English. My task here at York forced me to juggle my knowledge from years of reading and teaching with what my students thought they'd heard, and with what I think they could have meant they heard. All of this, plus my getting to grips with their own accents (a huge variety in each group), which also posed decoding challenges on my part every time a question was asked (oh, yes, I had to tune in very quickly to their accents to make sense of their questions!).

***
In spite of the L1-L2 differential, both my students of Phonetics in Buenos Aires and those in York had similar sets of difficulties in the process:
  • getting to know the symbols and associating them to particular sounds
  • building proprioceptive awareness of what they are producing
  • becoming familiar with jargon and using it appropriately
  • making sense of technical texts
  • writing cohesively and coherently
***

I know I should not make a big deal out of this, but being able to project my slides and show animations and play IPA audio files on the spot for everyone to see/hear, as well as having the opportunity to type IPA symbols, or to show a sagittal section instead of drawing it, has made a big difference. Back in Buenos Aires, thanks to the lack of government investment in educational institutions, booking a projector was virtually impossible (two or three for the whole college!), and I would spend a lot of precious time during my sessions writing transcriptions on the board, or drawing diagrams (with the markers I'd buy with my own money, and getting a harsh voice due to dust whenever I had to clean the chalkboard in some classrooms).  Not to mention the fact that we had no internet connection in some of our colleges, so web resources had to be set as homework. Should I also mention that students would have had no access to up-to-date bibliography if it hadn't been for their lecturer's (let's call it) "good will"?
Yes, my students in Buenos Aires did cognitively "record" a lot of things faster because they were always copying from the board, and engaging in some sort of live-processing of content that some of my slide-staring students at York may not be doing, but the time I have in my hands now to help students experience and see things and read up-to-date stuff rather than to have them copying things from the board, is something I am really grateful for.


***
My experience of having taught Phonetics before has been an advantage despite my linguistic "disadvantage". It has allowed me to sequence the tasks in a way that I think helped students understand the science behind the theory (I'm convinced that it's all about the way we grade content, after all), and it made it possible for me to predict and anticipate some difficulties that students were going to have (which, self-fulfilling prophecy or not, they did have). I obviously do not have control over all aspects of the course as I did back home, as the lectures are planned and delivered by someone else, the seminar tasks are already set (I did add a lot of things of my own, as I could not help myself!), and even though I mark their exam papers, I have no role in the design of assessment. So in a way, I am delivering someone else's "vision" of how Phonetics should be taught, and even though it's been a challenging thing for me, it's also a good way to learn to see the world and the subject differently (after all, I am on the other side of the world now!)

Of course, I think my undeniable strength as a tutor is my passion. I love teaching, and I love Phonetics, so I think that I may have managed to pass that on a little, with my quirkiness and my cheerful slides, and my constant "could you say that again, please?" to my students, as I attempted to draw their attention to differences among the accents in the room, grinning with fascination as I heard them say the words.

It all goes to show that I have learned an awful lot of Phonetics from my students, to be honest, and I think that on my part (based on the good ratings they gave my teaching at the end-of-term feedback surveys), I have made Phonetics accessible and a little bit more understandable to them.

I'll be back at teaching in a couple of weeks, doing Phonology this time (I have to admit I'm not as excited as I was with Phonetics, which I like better....sorry!), and I hope I can have an even better experience helping students appreciate and understand this fantastic world. And as I do that, learn even more Phonetics from them.

Wednesday, 25 October 2017

Brief colloquium report: The Phoneme: new applications of an old concept

Today I poked my head out of my screen to take a break and attend this very interesting talk by Dom Watt, whose Advanced Phonetics student I'm lucky enough to be at the moment.
Here is the abstract:

And below is my own summary, written in one sitting (as usual!). As always, any inaccuracy or misapprehension of what was presented is entirely my fault. Hope this all makes sense to you!

The talk had the notion of the phoneme at its centre, and all the debates around its "existence". The first minutes of the talk were a nice overview of the "phoneme" and the related notions and ideas leading to it through time: from the contributions of the Sanskrit author Patañjali in the 2nd century, recognising abstract categories of sound that present variability at the physical level, and the first Icelandic grammarians in the 12th century, to the writings of Sapir in the 1920s and the "phoneme slices" that people claim to have in their languages.

More modern discussions of what the phoneme came to be understood as were developed by Dufriche-Desgenettes (1873), Louis Havet, and Baudouin de Courtenay (1871), with his psychophonetics and physiophonetics, and of course by Henry Sweet in the 1870s and Daniel Jones as early as 1911. In the US in the early 20th century, the notion of the phoneme came to the surface thanks to Bloomfield.

A few definitions of the phoneme were revisited by Watt, especially those by Jones (1957) and Watt (2009), and a quote by Pike (1947): "Phonetics gathers raw material. Phonemics cooks it".

A very useful metaphor to discuss phonemes and allophones was recalled by Dom, that of Clark Kent and Superman as being in complementary distribution, and Superman and Spiderman for example being two different allophones of two different phonemes. (It reminded me that I used to refer to phonemes as any of us, and allophonic variants as us in our roles and attires: at school, at a party....Lately I've turned to Johnny Depp as the phoneme, and his million characters as his allophones, his "realisations" in films...)
Other interesting comparisons were introduced, such as the grapheme-allograph relations in Arabic, or even the number of ways we can represent a certain letter, say "A", which poses a very interesting question: how far can variation be stretched before a certain sound is no longer the same? Where does the boundary lie?

Alternative analyses of the phoneme included Trubetzkoy's (1939) phonemic oppositions grounded in phonetics, and formal notions of phonemes as bundles of features, such as those put forward by Jakobson, Fant and Halle in 1952, based on acoustic analyses of instantaneous "time slices" (somehow looking for the centre of events in the signal). Watt also mentioned a game-changer, the work of Chomsky and Halle (1968), which abandons binarity and allows for phonetic gradation with the introduction of articulatory features in the description.

Watt continued the presentation by referring to the debates on the nature and existence of the phoneme, including quotes from Ladd (2013:370) and Dresher (2011:241). The work by Fowler, Shankweiler and Studdert-Kennedy (2016), who revisit a paper they themselves wrote in 1967, was given special attention, since it provides nine forms of evidence for the existence of the phoneme as an entity, including issues like phonemic awareness, adult visual word recognition, the presence of systematic phonological and morphological processes, the existence of speech errors (spoonerisms), and the fact that co-articulation does not really eliminate the presence of a phoneme, as was previously claimed.

Of course, as Dom remarks, when we look at MRIs, spectrograms and waveforms, we may not so easily be able to see discrete units, but machines seem to be programmed to see the signal as composed of chunks. It was interesting to see a cochleagram, because as Watt pointed out, it does show perhaps more continuity than a wide-band spectrogram, for instance.
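(If you would like to see the wide-band/narrow-band contrast for yourself, here is a minimal sketch in Python using scipy and matplotlib - not anything shown in the talk, just my own illustration; the filename is a placeholder, and the window lengths are only typical ballpark values: short windows give wide-band spectrograms with good time resolution, long windows give narrow-band ones that resolve individual harmonics.)

```python
# Wide-band vs narrow-band spectrogram sketch (scipy + matplotlib).
# "speech.wav" is a placeholder for a mono speech recording;
# take a single channel first if your file is stereo.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("speech.wav")

fig, axes = plt.subplots(2, 1, sharex=True)
for ax, win_ms, label in [(axes[0], 5, "wide-band (5 ms window)"),
                          (axes[1], 30, "narrow-band (30 ms window)")]:
    nperseg = int(fs * win_ms / 1000)
    f, t, Sxx = spectrogram(x, fs, nperseg=nperseg, noverlap=nperseg // 2)
    ax.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-10))   # rough dB scale
    ax.set_ylabel("Frequency (Hz)")
    ax.set_title(label)
axes[1].set_xlabel("Time (s)")
plt.show()
```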

The second part of the talk discussed phonemes in phonetic work done through speech technology, for forensic and also sociophonetic purposes. It discussed some of the findings by (the absolutely brilliant!) PhD student Georgina Brown, who has adapted the ACCDIST programme by Mark Huckvale at UCL into Y-ACCDIST as part of her PhD research. One of the achievements of Y-ACCDIST is the use of the software for speaker comparison even when the data are not necessarily comparable (ACCDIST works well when all speakers have read the same text). I cannot fully do justice to this part of the talk because there are some technical bits that I am not familiar with, and I don't have a head for statistics, but I'll report on what I could follow:
Some examples of the use of the programme were presented, which include the measurement of distance between possible pairs of phonemes through what is known as a Feature Selection process, in which several features are left out to focus on the ones which are most relevant or least redundant, and which helps modelling.
Comparisons across speakers were run through the programme, and Y-ACCDIST was able to assign speakers to a particular accent with almost 90% accuracy. It was interesting to hear that the programme was more accurate when particular features (and not the whole set) were compared, and also when human intervention in the filtering of features to be compared was added to the speaker accent allocation process.
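(I obviously cannot reproduce Y-ACCDIST here, but just to illustrate the general pipeline described above - per-speaker features, selection of the most informative ones, then classification into accent labels - here is a toy scikit-learn sketch of my own, with made-up placeholder data; the real system's features, models and evaluation are of course far more sophisticated.)

```python
# Toy sketch of the general idea: feature selection + accent classification.
# X and y are placeholders: rows = speakers, columns = acoustic/phonemic
# distance features; y = accent labels. Not Y-ACCDIST itself.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))          # 60 speakers, 40 features (placeholder)
y = rng.integers(0, 3, size=60)        # 3 accent labels (placeholder)

# Keep only the k most discriminative features, then classify.
clf = make_pipeline(SelectKBest(f_classif, k=10), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print("Mean accuracy:", scores.mean())  # ~chance here, since the data are random
```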
All in all, Watt concludes, the discoveries of the application of tools like Y-ACCDIST and the evidence provided in Fowler et al suggest that it is too premature to declare the demise of the phoneme.
The question period was interesting, and it included comments on issues like the fact that perhaps many approaches to speech analysis begin from the notion of the phoneme but fail to see what happens in naturalistic speech and what participants themselves feel is relevant, and that there are considerable phenomena that cannot be explained through the notion of the phoneme. There is always a search for robustness in experimental settings that fails to see that what should be more robust is what is actually done in natural situations.

All in all, a fascinating talk, with a lot of food for thought. If you ask me, does the phoneme exist? I would say that it's like magic, you feel it's there but at times you cannot pinpoint the actual trick that makes it work.

Sunday, 8 October 2017

Brief conf report: English UK North Academic - University of Liverpool, October 7th.

Yesterday I got on the train from York to Liverpool (in what ended up being an endless 3 1/2-hour train-train-bus journey...yes, transport may also fail in Britain!) to attend and present at the English UK North Academic conference (programme here).

It was a really friendly, welcoming gathering of teachers of English working in the North of the UK, and there must have been over 100 attendees. I would like to report very briefly on three of the talks, and then comment on my own presentation as well.

Michaela Seserman from the University of Liverpool discussed the tools she uses in her EAP courses to do pronunciation work. Michaela outlined some important questions we need to ask before deciding to use certain apps, and also weighed some pros and cons of each. Seserman proposed a form of integration of the in-built voice recognition systems that smartphones currently offer, the tools that Quizlet provides, and the messaging possibilities of the WeChat platform. Even though it was perhaps not very clear how pronunciation improvement actually comes about, the idea of teacher and students exchanging audio recordings for practice and dictation via mobile messaging is a very appealing one. As Michaela pointed out, these are tasks that learners can also spontaneously decide to do outside class.

Russell Stannard, the TeacherTrainingVids guy, showed how screen capture software (he recommends SnagIt, but there are free alternatives available) can help you give better feedback on written work. A teacher may video-capture a student's written assignment and give feedback (as we might do face-to-face) by highlighting areas of the essay, for instance, and making oral comments on it, or by showing the assignment instructions on screen to point out what may not have been addressed. It reminds me of the type of recorded feedback I used to give my students, and I agree with Russell that this whole idea of personalising feedback and having a sort of "conversation" with the student and the material really does make a difference. It's a way of "being there" when you cannot "be there", while also showing students that we care for them individually and that we can address each of their specific strengths and challenges (which in writing alone we may fail to do clearly, or which may be misinterpreted).

I particularly enjoyed the workshop on corpus linguistics by Dr. Vander Viana from the University of Stirling. Vander showed us some easily accessible corpora (sorry, readers, but I cannot ensure that these will be freely accessible to you in your context/country) and search engines that we can use to help our students test the frequency, acceptability and likelihood of their lexical choices when writing or speaking. We discussed collocations, colligation, and semantic prosody (which apparently in corpus linguistics is different from how we understand it in SFL!), and we reflected on the claim that we actually process speech in an "idiomatic" way (not referring to idioms, but to chunks...it was such a great intro point to my own talk later, to be honest!). Most of the cited material came from Sinclair (1991), McCarthy et al. (2005), and Tognini-Bonelli (2001), and you can read Viana's work if you visit his webpage.
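(For the curious: this is not from the workshop, but a tiny NLTK sketch of the kind of frequency and collocation check students could run on a corpus they do have access to, such as the Brown corpus that ships with NLTK. The word choices below are mine and purely illustrative.)

# Not from the workshop -- just the kind of query students could run themselves.
import nltk
from nltk.corpus import brown
from nltk.collocations import BigramCollocationFinder, BigramAssocMeasures

nltk.download("brown", quiet=True)           # fetch the corpus on first use

words = [w.lower() for w in brown.words()]

# Frequency as a rough proxy for how "likely" a lexical choice is.
freq = nltk.FreqDist(words)
for candidate in ("entirely", "utterly"):
    print(candidate, freq[candidate])

# Which word pairings recur as "chunks"? (collocations, ranked by pointwise MI)
finder = BigramCollocationFinder.from_words(words)
finder.apply_freq_filter(5)                  # ignore one-off pairings
print(finder.nbest(BigramAssocMeasures.pmi, 10))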

I was invited to give a presentation thanks to the generosity of Mark Hancock, who put my name forward (I've thanked him publicly many times, but I believe we should always be grateful to those who do nice things for us)...and to make it even better, he got me a PronPack t-shirt! Thanks also to Nigel Paramor.
Even though I know my stuff, it is always a bit intimidating to stand in a room full of native speakers of English who teach English and theorise about their own language. I know it is a silly fear, but I know many other non-native teachers of English will sympathise. Anyway.

My talk ("Intonation building blocks for more comprehensive speaking skills training") was based on the type of speaking tasks that I designed for my Lab 3 and Lab 4 lessons at ISP Joaquín V González and Profesorado del Consudec during the last few years. Some background: most of the work that is done during the final two Applied Phonetics modules ("Lab") in teacher training in Buenos Aires (at least) is related to the application of phonological theory to the production of different speech genres (and for this, I am grateful to Prof. Silvina Iannicelli, because I got my first lecturing post filling in for her at ENSLV SBS in 2006, and she had a course planned along a sequence of texts ranging from rehearsed to more spontaneous text type production, and that sort of sparked my interest in the prosodic configuration of speech genres). As I became a bit more experienced, one of the things that usually made me uncomfortable about the type of work we did in these courses was that most of the tasks were based on reading, and there was always an assumption that intonation patterns were easily/automatically transferrable to spoken situations of language use. (It's a bit like doing a million fill-in-the-gap past tense exercises, and then expecting students to automatically and spontaneously use the simple past in their written or spoken stories.)
Plus, at times we also forget that reading aloud is an ability in itself, and that reading aloud after imitating a recorded model of the same text is yet another type of ability that activates other skills and requirements. These are highly useful and valuable steps in the process, but they neither amount to, nor ensure, students' appropriation of intonation patterns. In my tutoring experience, I have had students producing English fall-rises in reading and Spanish rise-falls on the exact same phrase, in a similar context, when speaking.

So at some point in my tutoring/lecturing history, I decided to change that a little: to keep reading aloud as one of the steps of the process, but then also create opportunities for use in slightly more spontaneous speech tasks, in a way that ensures that students need to use intonation patterns that have been found to occur with some regularity in specific speech genres, or in connection with certain lexico-grammatical structures.

So, back to EUKN: my talk was about speech genres, how several speech genres have higher degrees of "writtenness" in them (Eggins, 1994), and how these have perhaps more easily predictable and stable patterns of intonation and chunking, whereas more interactive genres challenge the intonational descriptions as we know them (as in the case of "list intonation", or the intonation of questions).

I put forward the metaphor of building blocks as a means of proposing that for some speech genres, it is useful to see information units (and some lexicogrammatical collocations) as part of the same block that students can monitor as a whole as they plan their next block (rather than worrying about putting together a string of words, one after the next, when they talk).

I have followed a process that goes from breaking the dichotomy between spoken vs written texts into a continuum of levels of writtenness-spokenness (as SFL scholars have done for a couple of decades), to the use of a building block metaphor consisting of LEGO-type blocks, which occur in more written-like spoken genres (where the blocks have a set role and position, and the final goal is clear), and TETRIS-type blocks, which we may encounter in more interactive texts (where trajectories are built as we go along, and there are lower levels of pre-planning).

I will only be able to share a few of my slides here, as I am writing an article/resource on the whole notion and application of intonation blocks (and I'm also seeking psycholinguistic and further classroom evidence), and I owe the English UK North attendees the preview of the full set of slides (I have authorised EUKN to share them).

Some comments on challenging, through corpus study, the notions of "question intonation" and "list intonation", and on how intonation in real life, as manifested in different speech genres, does not easily exhibit the intonation patterns described in ELT textbooks.

Reflection upon the fact that we generally don't do speaking training in an integrated manner, as we may do with written genres.
Possible (though never definitive, nor exhaustive, nor always fixed, because language use.... ;) ) organisation of different speech genres along a cline.
The building block metaphor I propose to inform lexico-grammatical, sequential and intonational choices.

An example of a production task (which we have probably done in our lessons a million times!) that we can exploit to teach step-ups in pitch and contrastive accent.
Examples of lexico-grammatical blocks in initial position that do anticipatory work. These have been found to be quite consistent in LEGO-type texts (the ordering and tone choice work differently in interactive texts).
Example of an outline for student production of short conversational stories that focuses on grammatical choices and on the preparatory (loop) or advancing (increment) contextualisation achieved by rising or falling tones (respectively).

Example of ways in which we can contextualise reported speech through level tones and contrastive stress in TETRIS-like situations of language use (though these are also common, with direct speech, in speeches or lectures, i.e. LEGO text types).

Examples of ways in which we can create opportunities for the use of level tones in conversational lists (vs counting, or sequences of steps, where lists may be found to have rising tones).

During the presentation, I briefly systematised some of these (basically, it was like teaching my whole Phonetics 2 syllabus in 50 minutes!) and presented a number of activities to illustrate how we can generate opportunities for use of these building blocks, and then, of course, it is up to every teacher to find ways of helping students monitor their spoken texts, block by block.

I am sure that the idea of working on speech chunks is not new or revolutionary, but I wish to emphasise how intonation can be an active, essential part of each of these blocks of processing and production, and how the notion of a block can contribute to students' awareness that linguistic structures work together, making different contributions to the contextualisation of meaning and the structural organisation of speech.

(And the refs!)

All in all, this was a really enjoyable event, and very special for me, as I haven't been teaching for a year (starting this week again, yay!) and I spent this whole year trying to find an excuse to write down the principles and ideas that informed my integrative intonation teaching methods when I was lecturing in Buenos Aires. Hope they make sense to you!

(And now...back to my research. Enough productive procrastination!)

P.S.: this post somehow opened up a chest of memories for me, and I forgot to acknowledge another lecturer, Prof. Claudia Gabriele, who in her own way showed me that there are ways of "creating opportunities" for practice of intonation. I was her Lab assistant for a few years, and I was particularly inspired by her use of role plays and other speaking tasks for a more natural application of intonation patterns. Sorry about this unintentional omission in the original post.