Paralanguage

Vocal features or segregates that accompany speech and contribute to communication and to the impression of listening, but are not generally considered part of the language system and do not convey meaning in themselves, such as vocal quality, loudness, glottal stops and tempo; the term sometimes also covers facial expressions and gestures.

Nearly all languages have such sounds, which aid the flow of a conversation and provide comfort by indicating attentiveness.

We have all confirmed what others say with a frequent “mhm” or “uh-huh”, or, if you are Scandinavian or Irish, a wheezy “ya” or “uh” made while breathing in rather than out.

The importance of these sounds is often underestimated, particularly by software engineers.

“We use the sounds to show that we are listening and that the message from the person we are talking to is getting across. It creates common ground within the conversation,” says Mattias Heldner, a professor of phonetics at Stockholm University.

When a lack of visual information makes either party unsure of whether they are being understood, as on the telephone or radio, these non-verbal assurances can be critical; in two-way radio use, explicit words such as “over” and “over and out” have been made mandatory.

Heldner has led a project called Prosody in Communication, which ended last year. In linguistics, prosody is the study of the rhythm, stress, intonation and melody of speech. Linguists think that melody and rhythm are so vital to communication that children learn them before they ever utter a word.

We are humming

Heldner explains that most languages have sounds that aid the flow of a conversation.

Some are listed in dictionaries, others are just sounds.

Heldner calls it humming when we say “mmm”, “mhm” or “uh-huh” in a conversation.

“Maybe ‘mmm’ should be in the dictionary. It has a big function in a conversation,” says Heldner.

Heldner and colleagues analysed 120 Swedish conversations lasting a half hour each to see how the sounds were used.

The researchers noticed that conversational partners have a tendency to mimic the person they are speaking with. Their humming was at the same pitch.

They also observed that this humming arises through interplay between the conversational partners.

The Swedish researchers also looked into how long people wait for such feedback.

“The person talking gives room for the humming with occasional halts,” says Heldner, and pauses a little so that the journalist can also let out an “mmm-hmm.”

Waiting for those who don’t understand

But the listener in a conversation already knows, before the break in speech, what is coming and perceives that it is time for humming.

“The melody rises in the last syllable before the speaker’s pause. We also signal this to the other person with eye contact,” says Heldner.

Jan Svennevig, a professor of linguistics at the University of Oslo (UiO), doesn’t wholly agree.

“As a rule, we don’t wait for the humming. It often comes while we talk. The person speaking only waits when he thinks the other person isn’t picking up what’s being said. It’s to ensure that the message has been received,” he says.

Norwegian variations

Jan Svennevig says Norwegians have many of these sounds too. He prefers to call them acceptance signals.

“We actually need to say ‘mhm’ when the other speaks for a long time, to signal that they can continue,” says Svennevig.

His colleague Hanne Gram Simonsen, another professor of linguistics at UiO, concurs.

“I think these sounds are totally necessary. We have become so used to them that we get nervous when they are left out,” says Simonsen.

Keeping the ball in your court

Such sounds are not always confirming, explains Svennevig. They just say something about how the communication is progressing. If you get a “huh?” that means it isn’t going so well.

Other sounds like this are signals to keep the verbal ball in your court, to continue speaking if you don’t want to let the other have a turn yet. You can utter an “uhhh” or “ehhhh”. But these are not confirmations.

The sounds mustn’t be used inappropriately. Such sounds and this conversational behaviour are not supposed to attract real attention or be consciously noticed. If the sound deviates from the pitch of the conversation, or comes at the wrong time, it will disturb the discussion.

“The person who hums incorrectly can be perceived as a nuisance,” says the Swedish researcher Heldner.

Making robots more human

The finer aspects of perception are among the things that separate a machine from a fellow interlocutor of skin and bone.

The goal of Heldner’s research is to create computers which can converse with us in a human way. But it’s currently impossible to program a computer with all the nuances found in a normal conversation.

“We have far to go before computers comprehend as much as people. But we can get them to act as if they understand,” says Heldner.

Computers have already been talking to us for some time in games and in customer service centres. Heldner thinks a little humming interjected in the right places will make it easier to conduct conversations with them.

Service telephones should be able to gather information better if the computers ask open questions and let the customer talk, following up with the occasional “mhm”.
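As a rough illustration of the kind of logic such a system might use (a minimal sketch under assumed thresholds, not Heldner's actual method), a dialogue agent could decide when to interject an “mhm” by listening for the cues described above: a short pause in the caller's speech preceded by rising pitch. All names and numbers below are hypothetical.

    # A minimal sketch (not Heldner's system) of a rule-based backchannel trigger
    # for a spoken dialogue agent. It follows the two cues described above:
    # the caller falls briefly silent, and the pitch rose toward the end of the
    # last stretch of speech. All names and thresholds here are assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SpeechFrame:
        """One short analysis frame of the caller's audio."""
        is_voiced: bool   # True if the caller is speaking in this frame
        pitch_hz: float   # estimated fundamental frequency, 0.0 if unvoiced

    def should_backchannel(frames: List[SpeechFrame],
                           frame_ms: int = 10,
                           min_pause_ms: int = 200,
                           pitch_rise_hz: float = 10.0) -> bool:
        """Return True if now is a plausible moment to interject an 'mhm'."""
        pause_frames = min_pause_ms // frame_ms
        if len(frames) < pause_frames + 4:
            return False

        # 1. Require a short pause at the end of the buffer.
        if any(f.is_voiced for f in frames[-pause_frames:]):
            return False

        # 2. Collect pitch values from the speech preceding the pause.
        voiced = [f.pitch_hz for f in frames[:-pause_frames] if f.is_voiced]
        if len(voiced) < 4:
            return False

        # 3. Did the pitch rise over the final few voiced frames?
        tail = voiced[-4:]
        return (tail[-1] - tail[0]) >= pitch_rise_hz

In practice the pause length and pitch-rise thresholds would have to be tuned on recorded conversations, and the agent would emit its “mhm” without taking the turn.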

Heldner and his colleagues plan to do further research on the unspoken parts of a conversation – things that happen face to face. This includes the direction of eye focus, head motions, facial expressions and the ways we breathe. It could be much harder to make machines that can duplicate these aspects of conversations.

Source: "Mhm and other sounds help conversations" - Ida Kvittingen, sciencenordic.com


Non-Lexical Conversational Sounds in American English

Sounds like h-nmm, hh-aaaah, hn-hn, unkay, nyeah, ummum, uuh, um-hm-uhhm, um and uh-huh occur frequently in American English conversation but have thus far escaped systematic study.

This article reports a study of both the forms and functions of such tokens in a corpus of American English conversations.

These sounds appear not to be lexical, in that they are productively generated rather than finite in number, and in that the sound-meaning mapping is compositional rather than arbitrary.

This implies that English bears within it a small specialized sublanguage which follows different rules from the language as a whole.

The functions supported by this sublanguage complement those of main-channel English; they include low-overhead control of turn-taking, negotiation of agreement, signaling of recognition and comprehension, management of interpersonal relations such as control and affiliation, and the expression of emotion, attitude, and affect.

Source: Non-Lexical Conversational Sounds in American English - Nigel Ward


Backchannel (linguistics)

In linguistics, backchannels are listener responses in a primarily one-way communication.

These can be both verbal and non-verbal in nature, and are frequently phatic expressions, primarily serving a social or meta-conversational purpose, rather than involving substantial two-way communication.

The term "backchannel" was designed to imply that there are two channels of communication operating simultaneously during a conversation.

The predominant channel is that of the speaker who directs primary speech flow.

The secondary channel of communication (or backchannel) is that of the listener which functions to provide continuers or assessments, defining a listener's comprehension and/or interest.

Due to developments in research in recent years, backchannel responses have been expanded to include sentence completions, requests for clarification, brief statements, and non-verbal responses, and now fall into three categories: non-lexical, phrasal, and substantive.

A non-lexical backchannel is a vocalized sound that has little or no referential meaning but still verbalizes the listener's attention.

In English, sounds like "uh-huh" and "hmm" serve this role.

Phrasal backchannels most commonly assess or acknowledge a speaker's communication with simple words or phrases (for example, "Really?" or "Wow!" in English).

Substantive backchannels consist of more substantial turn-taking by the listener and usually manifest as asking for clarification or repetitions.
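As a toy illustration of this three-way taxonomy (not code from the article; the token lists are invented examples), the categories could be represented and crudely assigned by surface form like this:

    # A toy sketch of the three categories above as an enum plus a naive
    # surface-form classifier. The token lists are invented examples, not
    # taken from the article.

    from enum import Enum

    class BackchannelType(Enum):
        NON_LEXICAL = "non-lexical"   # vocalized sounds: "uh-huh", "hmm"
        PHRASAL = "phrasal"           # short assessments: "Really?", "Wow!"
        SUBSTANTIVE = "substantive"   # fuller turns: clarification requests

    NON_LEXICAL_TOKENS = {"mhm", "uh-huh", "hmm", "mmm"}
    PHRASAL_TOKENS = {"really?", "wow!", "right", "yeah", "okay"}

    def classify_backchannel(utterance: str) -> BackchannelType:
        """Crudely assign a listener response to one of the three categories."""
        text = utterance.strip().lower()
        if text in NON_LEXICAL_TOKENS:
            return BackchannelType.NON_LEXICAL
        if text in PHRASAL_TOKENS or len(text.split()) <= 2:
            return BackchannelType.PHRASAL
        # Longer listener turns, e.g. "Sorry, could you repeat that?"
        return BackchannelType.SUBSTANTIVE

A real annotator would of course use context and prosody rather than word form alone; this only shows how the categories differ in granularity.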

The term was coined by Victor Yngve in 1970, in the following passage: "In fact, both the person who has the turn and his partner are simultaneously engaged in both speaking and listening.

This is because of the existence of what I call the back channel, over which the person who has the turn receives short messages such as 'yes' and 'uh-huh' without relinquishing the turn."

Backchannel communication is present in all cultures and languages, though frequency and use may vary.

Confusion or distraction can occur during an intercultural encounter if participants from both parties are not accustomed to the same backchannel norms.

Source: Backchannel (linguistics) - Wikipedia, the free encyclopedia


Nodding, aizuchi, and final particles in Japanese conversation:

How conversation reflects the ideology of communication and social relationships

It has been noted that Japanese differs markedly from languages like English and Mandarin in the use of head nods and aizuchis (short utterances roughly equivalent to English “uh huh” and “yeah”).

In Japanese conversation, such behaviors are extremely frequent, and their placement is often unexpected from the viewpoint of speakers of languages like English and Mandarin.

For example, these behaviors often occur in non-transition relevant places.

Sometimes aizuchis can even be uttered by the turn-holder.

In such cases, the conventional technical terms such as “back-channel”, “continuer”, and “reactive token” are hardly applicable.

Furthermore, the turn-holder often actively elicits aizuchis from the listener.

Final particles, which are very frequent in spoken discourse, play an important role in the elicitation.

Finally, there is a discussion of how the Japanese ideology of communication and social relationships may provide motivations for the above phenomena.

Source: Journal of Pragmatics - ScienceDirect