Emotion through Motion!

•October 5, 2011

The act of touching has always been part of our society. Every day, we see or hear the word "touch" in the movies we watch and the songs we play. No matter where we go, interacting with others through touch is inevitable. From brushing shoulders with someone on the jeepney to holding another person's hand tightly, touch can never be removed from our society today. Even on Facebook, the act of "poking" is possible, for the creators recognize that as human individuals, we need the feeling of touch, even if it is only virtual.

We learned in class that emotions are best detected through visual and auditory means. In fact, a great deal of research has already shown that our eyes and ears are our best tools for interpreting the emotions other people are feeling. One can use facial expressions, as well as differences in the pitch, tone, and content of a person's speech, to interpret these emotions. But is it possible for us to detect emotions through the perception of motion and touch alone?

There isn't really a lot of research that discusses the relationship between touch and emotions. In fact, when I was first faced with the question above, I couldn't immediately see how it was possible. Fortunately, some researchers have shown that it is.

In one experiment by Hertenstein, Holmes, McCullough, and Keltner (2009), touching in relation to emotions was studied. The researchers tried to see whether emotions expressed by one individual could be clearly perceived by another through the mere act of touching. They recruited 248 participants, each randomly paired with another. For each pair, the experimenters assigned one as an encoder and the other as a decoder. The encoder's first task was to think about how he wanted to communicate each of eight emotions: anger, disgust, fear, happiness, sadness, sympathy, love, and gratitude. Afterwards, he was asked to portray each emotion by making contact with the decoder's body through touch. Of course, he was instructed to touch only body parts deemed appropriate, since the decoder might otherwise feel sexually harassed. The decoder's task, in turn, was to choose which of the eight emotions he felt was being presented to him; a "none of the above" option was also given. Throughout the entire experiment, the decoder was blindfolded and the encoder was reminded not to speak at all.

Their results show that the emotions of anger, fear, disgust, love, gratitude, sympathy, happiness, and sadness were all detected and decoded accurately at significant levels. It is important to note that the last two emotions, happiness and sadness, had not been decoded at significant levels in previous studies. Though the researchers don't offer a reason why their results differ from those earlier studies, the research still makes a new contribution to the field of emotion and perception.

The guy probably feels love.

The researchers were also able to show that there are specific tactile behaviors associated with each of the emotions. In short, specific types of touching serve as cues for the perception of different emotions. For example, "fear was communicated by holding the other, squeezing, and contact without movement, whereas sympathy was communicated by holding the other, patting, and rubbing." This shows that the tactile system is very complex and that different factors can affect this perception, such as intensity, temperature, duration, location, and velocity. The research also points to the possibility of equipotentiality, the "idea that the same type of touch can be assigned very different meanings or consequences." This further illustrates how complex the mind is and how it can process very complex stimuli.

Though I said I couldn't really imagine how emotions could possibly be perceived through touch, this research has proven otherwise. It shows how the brain can perceive through different senses: even if the perceptual process begins in different ways, the result can still be the same.

What better way to describe this research than through the lyrics of a song? The line "Sometimes when we touch, the honesty's too much, and I have to close my eyes..." is probably the best embodiment of this study. The decoder had to close his eyes, with the help of a blindfold, in order to feel the other's touch, and this touch let him feel the person's true emotions. So the next time someone sings this song, you'll remember the link between emotions and touching.

 

Reference:

Hertenstein, M. J., Holmes, R., McCullough, R., & Keltner, D. (2009). The communication of emotion via touch. Emotion, 9(4), 566–573.

My Heart’s a Stereo, It Beats for You so Listen Close

•October 4, 2011

On September 29, 2011, one of my childhood dreams came true. It came ten years late, but it was still such a sweet and unbelievably epic experience: I finally got to watch Westlife live. Westlife has been my all-time favorite boy band! I can break into any of their songs in an instant, and I can even name all their songs by album and order. What I love most about Westlife, aside from the fact that their songs are all so easy to sing (a big deal for a frustrated singer like me), is that every song evokes a certain memory and emotion from my growing-up years. Just like what Nicky, one of the members of Westlife, said at their concert: "The 23-year-old receptionist in our hotel told me that she made it through grade school listening to Westlife." I feel exactly the same way.

Whenever "When You're Looking Like That" plays, I get nostalgic about the first time I had eye contact with my crush at Rockwell mall. He was a high school sophomore and I was just in grade 5, but I was instantly smitten by his killer charm. Every time I hear "World of Our Own", I remember our school fair and the time my crush requested this song for another girl. I get sentimental whenever I hear "Fool Again" and "If I Let You Go" because they bring me back to the time all 40 of us were belting these songs out on our way to Antipolo for a class field trip. I'm pretty sure we gave our teachers and chaperones a migraine from too much Westlife singing. Nostalgia is an affective process that can accompany autobiographical memories. Nostalgic moments can be triggered by anything, be it an old photo, a crumpled note on scratch paper, or, in this case, music.

 

front row action during the Westlife concert in Manila

In their study on music-evoked nostalgia, Barrett et al. (2010) proposed a heuristic model in which both context-level and person-level constructs contribute to the nostalgic experience. Context-level constructs refer to aspects of the relationship between a person and a particular song, including the attributes of the person's experience while listening to that song. Person-level constructs, on the other hand, refer to individual differences between listeners. For example, some people are more prone to experiencing nostalgia, and people react differently to the same song depending on their personality traits.

Nicky, one of my favorite members of Westlife

In the study, Barrett et al. (2010) looked at the interaction between context-level and person-level constructs. They had 226 students from the University of California, Davis listen to 30 fifteen-second music excerpts. These songs were downloaded from iTunes' lists of the top 100 pop, hip-hop, and R&B songs from when the participants were aged 7 to 19. The participants were then asked to answer a survey measuring their mood, nostalgia proneness, arousal, and familiarity with each song.

 

No One in Araneta has Swagger Like Westlife

The results of their experiment showed that songs that were more autobiographically salient, familiar, and arousing made participants feel more nostalgic. Participants also felt more nostalgic when a song evoked both positive and negative emotions. Interestingly, the study showed that positive emotions influenced nostalgia far more than negative ones: love was the most common nostalgia trigger, followed by sadness.

try putting your iTunes on shuffle and see how good it is at figuring out what emotion to evoke

 

Maybe the reason I love music is not that I am fond of the different beats that excite my eardrums, but that songs evoke memories that are sometimes forgotten. These memories, whether good or bad, have in their own way shaped the person I am today. I guess it's safe to say that music, in its own twisted way, really does reflect life.

Currently the song that evokes the most positive nostalgic moment

Reference:

Barrett et al. (2010). Music-evoked nostalgia: Affect, memory, and personality. Emotion, 10(3), 390–403.

photos and videos from:

personal collection of Westlife Gravity Tour 2011

http://macamour.com/blog/2008/04/28/new-ipod-itunes-ad/

youtube.com

 

Can you hear me?

•September 28, 2011

A couple of years ago, two of my youngest cousins, who happen to be identical twins, were diagnosed with a hearing disorder. My aunt first noticed it when the twins didn't respond whenever she called them or made any noise. Because they were just infants then, the family first thought the reason was that, given their age and cognitive ability, they still didn't know their names. My aunt then tried something different: she called the twins with her voice and tried various sounds to see if they would respond in any way. Neither of them did. But when she slammed the door (which made a very loud sound), both twins were startled and reacted to it. The family then consulted a doctor, who recommended hearing aids.

Hearing aids are electronic devices mainly composed of four parts, namely (1) one or more microphones, (2) an amplifier, (3) a receiver, and (4) a battery. More advanced hearing aids also include (1) a computer chip, (2) an on-off switch, and (3) a program that lets the user choose among various listening environments. Hearing aids work by picking up and amplifying sounds; because these sounds are increased in volume, the user can hear what they normally would not. Basically, sounds are gathered by the microphone and converted into electrical impulses, the amplifier increases their strength, and the receiver converts the electrical impulses back into sound waves, which are directed into the user's ear.
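In very simplified terms, the microphone–amplifier–receiver chain described above amounts to a gain stage applied to sampled audio. Here is a minimal sketch of that idea; the function name, gain value, and clipping limit are purely illustrative, not taken from any real hearing-aid design:

```python
def amplify(samples, gain=4.0, limit=1.0):
    """Toy hearing-aid amplifier stage: scale each audio sample,
    then clip so the output never exceeds a safe loudness limit."""
    out = []
    for s in samples:
        boosted = s * gain                          # amplifier: raise the volume
        boosted = max(-limit, min(limit, boosted))  # cap output to protect the ear
        out.append(boosted)
    return out

# Quiet speech samples come out louder, but never beyond the limit.
quiet_speech = [0.05, -0.10, 0.20, -0.02]
print(amplify(quiet_speech))
```

A real device does this digitally per frequency band (boosting only the frequencies where hearing is impaired), but the basic pick-up, boost, and re-emit loop is the same.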

Mini Hearing Aid

Hearing loss is considered one of the most common birth defects around the globe, affecting about 3 out of 1,000 newborn babies. But hearing loss does not only affect babies; it can strike at any age. One of its most common forms is presbycusis, meaning "old hearing," which is usually caused by factors related to aging.

Elderly with hearing aid

Hearing loss may be hereditary, or may be caused by an illness or injury, such as infections, head injuries, lack of oxygen, or diabetes. Listening to very loud sounds, like music through headphones or machinery in factories, can also damage the ears. Some types of hearing impairment are temporary and may be treated with medication or surgery, but permanent damage to the ears cannot be cured. People with permanent hearing impairments therefore use hearing aids.

In earlier years, it was believed that hair cell regeneration was not possible after birth. Hair cells are the auditory receptor cells of the cochlea (a structure in the inner ear); they are primarily responsible for auditory transduction and the perception of pitch. It was only in the 1980s that scientists, experimenting on mature bird cochleae, found evidence of hair cell growth after damage. Other researchers later discovered that the new hair cells were functional and restored hearing (as well as the sense of balance). Research is ongoing to find out whether hair cell regeneration can occur in the mammalian cochlea and, most especially, in humans. Hearing aids cannot fully alleviate hearing impairments, so researchers hope that by studying hair cell regeneration in humans, people with such deficiencies can be cured and function normally.

Hair Cell Regeneration: Is it possible?

Having functional auditory sensation has proven to be very important for our survival. There is even a term, auditory fitness for duty (AFFD), which refers to the possession of hearing abilities sufficient for safe and effective job performance. Employers use AFFD tests to assess people's hearing and ensure that they are functional, which is crucial in jobs where hearing is critical to safety and performance. The most common such jobs are the operation of motor vehicles or aircraft, mining, firefighting, law enforcement, and the military.

Fireman

It is very important that we know how to protect our ears. To end this blog, I would like to share some ways we can take good care of our ears and prevent hearing impairment. Studies have shown that some foods, such as sweet potatoes, almonds, and salmon, can actually lower the risk of age-related hearing impairment. One such study found that people who ate the most carotenoid-rich food, fish high in omega-3 fatty acids, and food with high vitamin E content cut their risk of hearing loss in half.

Almonds

Salmon

Sweet potato

Other ways to prevent hearing loss are the following: (1) get your hearing checked once in a while, (2) use noise-cancelling headphones, (3) pay attention to your genes, (4) quit smoking, (5) turn down the volume, (6) use protection like earplugs and earmuffs around loud noises, and (7) put sound-absorbing materials like rubber mats under noisy kitchen appliances.

Earmuffs

Written by: Justine Ng


Subtitles: The Reason for Understanding

•September 21, 2011

With only three weeks left until the end of the semester, I find myself being crushed by alternating waves of panic and procrastination. The former comes whenever I catch myself thinking of the looming, growing pile of academic requirements I have yet to accomplish. The latter creeps up on me at the oddest moments: halfway through a chapter of Comparative Anatomy of the Vertebrates, while scrolling through a journal article, and just seconds after I open a new Word document to start working on a paper.

Honestly, these are the days that I feel nostalgic for summer. Summer was the time I spent countless hours glued to the bed watching movies and television shows on my laptop. It was during this period that I discovered foreign treats, so to speak. I got hooked on the UK version of The Office and tried watching a couple of episodes of Skins as well. There was also a time I went on an Australia's Next Top Model marathon. One thing held true for all of these shows: everything was spoken in an accented version of English, a foreign language to my native Filipino self. Now, even if I consider myself to have an adequate comprehension of English, once the speakers' accents were factored in, there were times I found myself unable to understand the words spoken. Perhaps I would have caught the words better had subtitles been used. However, research has shown that not all subtitles aid speech perception.

Ricky Gervais, star of BBC America's "The Office"

The girls of the fourth cycle of "Australia's Next Top Model"

Mitterer and McQueen (2009) investigated whether subtitles in foreign audiovisual media (e.g., television shows and movies) support perceptual learning of non-native, regionally accented speech. Specifically, they hypothesized that the subtitles should be in the foreign language itself, not the listener's native language. Because subtitles in the language of the media indicate which words are being spoken, they should boost lexically guided learning about accented foreign speech sounds. With the combination of foreign-language subtitles and the heard speech, listeners are believed to adapt better to the talker's unusual speech, thereby allowing them to understand the talker better.

To test this idea, 121 participants from the subject pool of the Max Planck Institute for Psycholinguistics were gathered. All were native speakers of Dutch with a good command of spoken and written English. Additionally, none had been to Scotland or Australia for longer than two weeks, so they were unfamiliar with Scottish and Australian English. Participants watched either the 25-minute fifth episode of the first season of Kath & Kim (an Australian sitcom) or a shortened 25-minute version of Trainspotting (a movie about a Scottish drug addict). For both videos, participants were assigned to one of three conditions: no subtitles, Dutch subtitles, or English subtitles.

After viewing the video, participants were asked to repeat back 80 audio excerpts spoken by the main characters they had seen on screen; these were called old items. Another set of 80 audio excerpts, called new items, was taken either from unused parts of Trainspotting or from another episode of Kath & Kim (the second episode of the first season). In total, participants had to repeat 160 audio excerpts, though they did not need to imitate the speaker's accent. The old and new items were presented in random order, and repetition accuracy was scored by two judges who were blind to the purpose of the experiment.

Results showed three significant findings. First, audiovisual exposure to an unfamiliar regional accent improves speech understanding, as shown by participants correctly repeating words even when no subtitles were used. This demonstrates a form of learning by rapid perceptual adaptation to an unfamiliar accent in a foreign language, which is noteworthy because previous studies had only established this type of adaptation for accented speech in the listener's native language.

Second, the native-language (Dutch) subtitles helped the recognition of previously heard words (old items) but harmed the recognition of new words (new items), as indicated by participants being unable to correctly repeat new excerpts presented after the video. The researchers claim that Dutch subtitles aided recognition of old items because they helped participants decipher which English words had been uttered, contributing to better processing and accurate repetition of those words. However, the Dutch subtitles also provided lexical and phonological information inconsistent with the English words being spoken in the video, which weakened the influence of English lexical-phonological knowledge on perceptual learning. When English subtitles were used, by contrast, the consistency between what was read and what was heard facilitated perceptual learning. This is the rationale behind the third finding: foreign-language subtitles improved repetition of both previously heard and new words, demonstrating perceptual learning.

Undoubtedly, the results of this study are significant for education, specifically for those learning or teaching a foreign language. Learners are often told that constant conversational practice will help develop their understanding of a foreign language, but this requires a partner who is fluent in it. Such a partner may not always be present at home, and more often than not, conversational practice is confined within classroom walls. With this study, individuals may look into using movies as learning tools. For example, a young Filipino child who wants to improve his or her understanding of English could watch English movies with English subtitles, and a college student with an adequate comprehension of Spanish may watch Spanish films with Spanish subtitles to the same end. However, it should be remembered that all of the participants had a good grasp of English beforehand. This matters for those teaching a foreign-language course: they should not expect that exposing students who have no background in the language whatsoever to foreign subtitles in foreign films will make them understand the language better.

However, I think the study's findings are more applicable to Romance languages such as Spanish and Italian, since these languages are written in essentially the same Latin alphabet. It's my belief that lexically guided learning will be more difficult when Asian subtitles are used with Asian films, because there is a wider range of sounds and writing systems across Asian languages. Taking Chinese and Korean alone, there are Chinese phonemes and characters not encountered in Korean, and vice versa.

My train of thought always seems to get carried away at the end of every blog entry. Writing my previous entry made me want to watch Happy Feet; writing this one on different cultures and languages suddenly makes me want to look at possible places to visit for the upcoming break. So, if you'll excuse me, I think I'm going to call my travel buddies to look at cheap airfare tickets to Bangkok. I'm in need of serious retail therapy.

Reference:

Mitterer, H., & McQueen, J. M. (2009). Foreign subtitles help but native-language subtitles harm foreign speech perception. PLoS ONE, 4(11), 146-150.

Image Sources:

http://2.bp.blogspot.com/_wtobNaYA4JY/SLHAB-aHUuI/AAAAAAAAApE/MfiioRTrtAs/s400/e215acacc2.jpg

http://www.popmag.com.au/resources/IMGDETAIL/230611110705_next-top-model.jpg

Do NOT Drink and Drive: Why?

•September 20, 2011

Everyone has heard the saying "don't drink and drive" and has accepted it as a seeming law of nature: YOU DON'T DRINK AND DRIVE, unless you want the vehicle you're driving to end up on a curb somewhere.
This raises a question: other than making you feel groggy, how exactly does drinking alcohol affect your driving? What is it about a drunken person's perception that gets them into trouble?

"Is my parking good enough?"

As Nawrot puts it, many studies have already established the effect of alcohol on both fast and slow eye movements. Fast eye movements are saccadic: they foveate (and thus focus on) an object of interest. Slow eye movements, on the other hand, maintain fixation on an object when the perceiver and/or the object of interest is in motion. Alcohol intoxication reduces the effectiveness of both systems. Specifically, fast eye movement initiation and velocity are slowed, while slow eye movement gain (eye velocity / target velocity) becomes too low to be effective. The latter forces fast eye movements to compensate for the inability of slow eye movements to track an object of interest. This compensatory movement is called "gaze nystagmus" and is an important component of field sobriety tests.
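The gain mentioned above is just a ratio of two velocities. A toy sketch of the idea, with made-up example numbers (none of these values are clinical thresholds from the study):

```python
def pursuit_gain(eye_velocity, target_velocity):
    """Slow (smooth-pursuit) eye-movement gain: eye velocity divided
    by target velocity. A gain near 1.0 means the eye keeps up with
    the moving target; a low gain means the eye lags behind and
    catch-up saccades (nystagmus) are needed to re-fixate."""
    return eye_velocity / target_velocity

# Sober-ish tracking: the eye nearly matches a 20 deg/s target.
print(pursuit_gain(19.0, 20.0))   # 0.95
# Intoxicated tracking: the eye lags badly behind the same target.
print(pursuit_gain(12.0, 20.0))   # 0.6
```

The lower the gain, the larger the tracking error that accumulates each moment, which is why the eye must keep jumping to catch up.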

"I am not the police officer you are looking for"

Nawrot's study focused on the influence of alcohol on motion parallax, which is affected by the lowered gain of slow eye movement. Slow eye movement is an essential component of motion parallax: fixation on the object of interest must be maintained in order to infer accurate depth, with the discrepancies in objects' apparent velocities serving as a cue for depth. Using a repeated measures design, he asked participants to follow a dot moving in a sinusoidal pattern while they themselves moved, either actively (active motion parallax) or passively (passive motion parallax), and contrasted the results of those under the influence of 100-proof vodka and orange juice with those who weren't. The results suggested that the alcohol-impaired slow eye movement significantly affected depth perception: it weakened tracking ability and elicited gaze nystagmus, which indicates that you can't focus well on an object when drunk! This means that drunk people either fail to see how far away objects are because they cannot track the objects' apparent movement, or simply don't see them at all because they're just too drunk (given alcohol's effect on fast eye movement).

 

Reference:

Nawrot, M. Depth perception in driving: Alcohol intoxication, eye movement changes and the disruption of motion parallax. Department of Psychology, North Dakota State University, Dakota, USA.

Start Walking

•September 16, 2011

Gaydar. A lot of people say they have the ability to detect those who are homosexual. Even without contact, they can readily judge whether individuals are straight or gay; in fact, they just have to look at them to make an accurate judgment. It's fun to think that people actually have a special ability to detect those who are gay, in short, a gay radar. But I think what's more interesting is finding out how, scientifically, they are actually able to detect these people.

Johnson, Gill, Reichman, and Tassinary (2007) share my sentiments. They wanted to see which aspects of an individual are used as guides in perceiving sex and sexual orientation (male or female; heterosexual or homosexual). Specifically, they tested body shape and body motion as determinants of the perception of a person's sexuality. Body shape was captured by the waist-to-hip ratio: men tend to have a higher ratio and women a lower one. In simpler terms, hourglass figures are hypothesized to be generally perceived as women, while more tubular figures are generally perceived as men. As for body motion, swaggering shoulders are hypothesized to be perceived as more masculine, while swaying hips are more likely to be perceived as feminine.
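The body-shape hypothesis boils down to a threshold on the waist-to-hip ratio. A toy sketch of that shape cue; the function name and the 0.85 cutoff are my own illustrative choices, not values from the study:

```python
def perceived_sex_from_shape(waist_cm, hip_cm, cutoff=0.85):
    """Toy model of the shape cue: lower waist-to-hip ratios
    (hourglass figures) tend to be perceived as female, and higher
    (more tubular) ratios as male. The cutoff is illustrative only."""
    whr = waist_cm / hip_cm
    return "female" if whr < cutoff else "male"

print(perceived_sex_from_shape(65, 95))   # hourglass figure -> "female"
print(perceived_sex_from_shape(85, 90))   # tubular figure  -> "male"
```

The study's point, of course, is that this single cue is not enough: near the cutoff the shape is ambiguous, and perceivers fall back on motion cues instead.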

Gay Pride!

Sex is perceived from body motion when other sexually dimorphic cues (e.g., body shape) cannot be seen or are ambiguous/non-diagnostic (e.g., when the body's shape sits at the boundary between a man's and a woman's).

Sexual orientation, however, is believed to be based on the two aspects working together rather than on their own. It is hypothesized that a unique combination of body shape (e.g., a woman's shape) and motion (e.g., swaggering shoulders) will yield the perception of a gender-atypical woman (Johnson & Tassinary, 2007, as cited in Johnson et al., 2007). This gender atypicality may then be the basis for judging a person's sexual orientation as heterosexual or homosexual (Herek, 1984; Sirin et al., 2004, as cited in Johnson et al., 2007).

The researchers conducted three experiments to test their point. In the first, they manipulated body motion and body shape through a "walker," a computer-generated animation. The walker varied in both dimensions: there were five different degrees of hip sway/shoulder swagger and several levels of waist-to-hip ratio. In the second experiment, they used all five body motions but only the most androgynous waist-to-hip ratio (hard to identify as male or female) for the participants to judge. They also manipulated participants' social cognition by presenting the walkers as male, female, or unspecified. For the third study, they played 32 movies depicting the motions of 16 people (8 males and 8 females, 4 of each sex self-identified as gay and 4 as straight), recorded twice at different speeds as they walked on a treadmill. These clips were transformed into dynamic figural outlines using the Find Edges feature in Adobe Premiere. The 32 video clips were then played for the participants, who were asked to categorize each walker by gender and sexual orientation.

Point Light Walker

Results show that when judging the sexual orientation of men, participants relied primarily on body motion, whereas when judging the sexual orientation of women, they relied on both motion and body shape. Although the body's shape and motion are equally diagnostic of sexual orientation for men and women, the cues appear to be frequently misinterpreted in female targets. And although both cues are useful in identifying sexual orientation, the ability to extract meaningful, reliable information from real-life video was greater for male targets. For both animated and real-life stimuli, gender-typical combinations of body shape and motion were more likely to be judged heterosexual, while gender-atypical combinations were more likely to be judged homosexual. In general, gender-atypical motion (i.e., hip sway exhibited by men and shoulder swagger exhibited by women) yielded more accurate perceptions of both men's and women's sexual orientations.

I think this article is a very big step for the field of perception as well as social psychology. The researchers were able to construct valid measures of how people perceive others' gender and sexual orientation, especially because they tested both virtual and real-life stimuli.

The results are also important because of their implications for how sexual orientation is identified in the real world. They show that people can easily single out those who look gay from those who look straight, which may make them more capable of engaging in discrimination and prejudice.

These results imply that beyond a person's static appearance, motion is really important in determining both the gender and the sexual orientation of an individual. Thinking about it, maybe there isn't really such a thing as gaydar after all; it's just that some people are more sensitive to the social cues present in the environment.

Main Reference:

Johnson, K. L., Gill, S., Reichman, V. & Tassinary, L. G. (2007). Swagger, sway, and sexuality: Judging sexual orientation from body motion and morphology. Journal of Personality and Social Psychology, 93(3), 321–334.

Other References:

Johnson, K. J., & Tassinary, L. G. (2007). Compatibility of basic social perceptions determines perceived attractiveness. Proceedings of the National Academy of Sciences, USA, 104, 5246–5251.

Herek, G. M. (1984). Beyond “homophobia”: A social psychological perspective on attitudes toward lesbians and gay men. Journal of Homosexuality, 10, 1–21.

Sirin, S. R., McCreary, D. R., & Mahalik, J. R. (2004). Differential reactions to men and women’s gender role transgressions: Perceptions of social status, sexual orientation, and value dissimilarity. Journal of Men’s Studies, 12, 119–132.

Mirror Mirror on the Wall

•September 16, 2011

Just a few days ago, I was with Juli in Casaa, looking for something to eat for merienda. It was already 5 in the afternoon and many of the stalls were closing, so Juli asked if I wanted to get corn instead (from the manong on the narrow road right in between Casaa and Palma Hall). While we were waiting for the corn we ordered, I heard a man cry out "Oooooh" in dismay. I looked around to see where the sound came from: a student who had just bought some pasta from Casaa. He was on his way out, and his food had spilled all over the floor. He looked really pissed, yet his face showed he knew there was no one to blame but himself. He looked around and finally cleaned up his mess.

 

Spilled Pasta (This is just a sample image, not the actual one I experienced)

 

At that time, I don't know why, but I really felt for him. I told Juli, "Shocks kawawa naman siya." ("Oh no, poor guy.") I even remember Juli saying, "Ang mahal na kaya ng pasta ngayon." ("Pasta is so expensive these days.") After the incident, we walked back to the Palma Hall Annex (PHAn) for the event of our organization, PUGAD Sayk.

Back in PHAn, PUGAD Sayk held an eating contest featuring spicy squid balls. I watched as some of the contestants' faces turned red. Some of my friends in PUGAD Sayk said, "Sobrang anghang! Grabe." ("So spicy! Unbelievable.") I honestly love spicy food, but just looking at the contestants' reactions made me want to avoid spicy food for a while. I felt as if their tongues were burning from the heat. Some even offered to let me try, but I declined.

 

Spicy Squid Balls. Yum?

 

Today I was just thinking about what to write for my blog, and these two events hit me. I realized... MIRROR NEURONS. I thought to myself, "Ito na ba yun?" ("Is this it?") Did my mirror neurons make me feel for the guy who spilled his food and for those contestants in the spicy eating contest?

Mirror neurons were first discovered by accident in 1992 by a team led by the Italian neuroscientist Giacomo Rizzolatti. The researchers were studying the brains of monkeys, looking at how they organize motor behaviors. Interestingly, they found that the areas of the brain that lit up while a monkey performed a specific action also lit up when it watched someone else perform the same action. Specifically, the neurons that fired when the monkey reached out for a nut also fired when the monkey watched someone else reach out for a nut. Rizzolatti's team coined the term "mirror neurons" for these cells, defining them as neurons that fire both when an action is performed and when that same action is observed. Over the years, researchers found evidence that mirror neurons are also present in the human brain, a possible reason why I felt the way I did for the guy who spilled his food and for the contestants that day.

 

Giacomo Rizzolatti, the man behind the discovery of mirror neurons

 

I stumbled upon some studies on mirror neurons, and I was surprised that many scientists have been studying this topic. In relation to my own experience, mirror neurons really do play a role in our emotions. One example is a study by Laurie Carr, Marco Iacoboni, Marie-Charlotte Dubeau, John C. Mazziotta, and Gian Luigi Lenzi, who found that the same brain areas were active when subjects imitated emotional facial expressions and when they merely observed them.

 

Different facial expressions

 

Rizzolatti and his colleagues performed a study in which participants inhaled odorants that produced strong feelings of disgust. The subjects were then asked to look at images of people whose facial expressions suggested disgust. What were the results? You've guessed it. The same areas of the brain were activated by the actual experience and by merely viewing the images.

 

Disgusting!

 

Another study, by Singer, Seymour, O'Doherty, Kaube, Dolan, and Frith, found evidence that neurons in the bilateral anterior insula, rostral anterior cingulate cortex, brainstem, and cerebellum were activated both when subjects received pain themselves and while they watched a loved one experience pain in the same room. (This study seemed a bit unethical to me.)

 

Man in Pain

 

In these situations, it seems that mirror neurons really do play a role in empathy. And this is what I experienced when I watched the guy spill his food and the contestants eat the spicy squid balls. It must have been the mirror neurons at work.

Before I end my blog entry, I want to add some interesting information I found about mirror neurons and buying. According to Lindstrom, mirror neurons may also be responsible for how we respond to things we see online. He tells the story of a seventeen-year-old boy named Nick Bailey from Michigan who, after purchasing his Nintendo Wii back in 2006, set up his camera and microphone and videotaped himself opening his new toy. Just a couple of hours after he uploaded the video online, it had more than 70,000 views. It seemed that by simply watching Nick open his Wii set, other Nintendo fans aspiring to grab their own felt the same pleasure and excitement as he did. There are actually some sites, like http://unboxing.gearlive.com/ and http://unbox-it.com/, that people can check out. These websites feature videos of people across the world opening their various purchases, from cellular phones to speakers to other hot gadgets. Check out the sites for the latest gadgets available in the market today.

 

Wii set

 

Written by: Justine Ng

References:

http://www.pnas.org/content/100/9/5497.full.pdf

http://www.sciencemag.org/content/303/5661/1157.abstract

http://www.cell.com/neuron/abstract/S0896-6273(03)00679-2

Lindstrom, M. (2008). Buyology. New York: Broadway Books.