
SENSORY LAB REPORT

Karina Jaime

MARCH 21, 2016


NUTRITION 205

Abstract

This study's purpose was to teach Nutrition 205 students how to conduct a sensory lab and serve as panelists in tests that examined color and beverage association, the use of descriptive terms, differentiating samples with and without knowledge of a standard, and ranking or rating samples based on intensity. The beverage and color association test had panelists judge characteristics of five different beverages based solely on their colors. The descriptive test had panelists use terms from a provided list to describe numerous qualities in four different samples. The differentiating tests consisted of the duo-trio and triangle tests. The duo-trio test used cookies and a known standard; the triangle test used apple juice with varying levels of added citric acid and provided no standard. The ranking and rating tests all used apple juice with varying levels of citric acid and used differently formatted scales. Results showed that color strongly affects how beverages are perceived. Among a list of terms describing appearance, flavor, texture, aroma, mouthfeel, and consistency, only a few select terms were chosen by most panelists. The differentiating tests showed that small differences in citric acid content are easily detected by panelists. Finally, ranking or rating samples based on the intensity of sourness becomes more difficult as more samples are used in the test. Overall, this study showed the usefulness of different sensory tests and gave Nutrition 205 students a beginner's taste of the world of food evaluation through sensory testing.

Introduction

Food evaluation tests are very important in the food science world. It is because of these tests that companies can provide nutrition facts labels, ensure food safety, improve their products, refine their marketing, and generally make sure that consumers are satisfied with what they buy. These tests include objective tests and sensory (subjective) tests. Objective tests evaluate indisputable facts about food, such as the information on labels and safety, and are carried out with instruments such as microscopes, scales, calorimeters, and the line-spread test. Sensory tests, in contrast, all use panelists to evaluate food and can be broken down into effective tests and affective tests. Effective tests are used to evaluate food based on apparent differences; examples include the threshold test, triangle test, dilution test, duo-trio test, and ranking test, all of which require human involvement. Affective tests, on the other hand, evaluate food based on personal preference; examples include the hedonic test and the paired preference test (Brown).

The association between color and beverages has been tested in the food science world. It has been found that people associate different characteristics with beverages when they are, or are surrounded by, different colors. These tests are analytic in nature. A sensory test was conducted by Nicolas Guéguen and Céline Jacob. Their findings were published as "Coffee Cup Color and Evaluation of a Beverage's 'Warmth Quality'". In order to conduct their sensory test,

Guéguen and Jacob gathered 120 university students to use as panelists. Half of the students

were male and half were female. They were all studying different disciplines. At the start of the

sensory lab, they were each given four cups of coffee in differently colored cups. When asked

which coffee was the warmest, 38.3% of the panelists voted it was the red cup’s coffee, 28.3% of

the panelists voted that it was the yellow cup’s coffee, 20.0% of the panelists voted that it was

the green cup’s coffee, and 13.3% of the panelists voted that it was the blue cup’s coffee.

(Guéguen). From this test, it can be established that the color assigned to a beverage has an effect on how people characterize it; an association is made between the color and what is assumed about the beverage.

The texture of food plays a big role in how much it is liked by consumers. The crispness, softness, breakability, elasticity, or tenderness of food can be measured with different

instruments through objective food evaluation. However, through subjective testing, the

preference and intensity of characteristics can be measured. Michael H. Tunick conducted a

study to see what specific characteristics made food more likable and why. In his paper, “Food

Texture Analysis in the 21st Century”, he looked at the fracture, acoustics, microstructure,

muscle involvement, and acceptability of food. His research on fracture found that food is expected to make, or not make, specific noises when bitten into and broken down. While

talking about microstructure, the physical structure and friction are both found to be important.

For muscles, Tunick wrote about how some people can’t bite the same way as others and how

that plays a role in food selection. Finally, when talking about acceptability, he wrote about how

age, culture, psychology, and time of the day all play key roles in how people select the foods

that they find acceptable. (Tunick). All in all, food texture has an effect on how consumers

perceive their food and thus should be looked at during sensory testing.

Descriptive tests are very important in food evaluation. In these tests, samples are

provided and panelists use descriptive terms to characterize the sample. This type of testing falls

under the category of analytical tests. It is helpful for producers to know what terms are associated with their food to see whether they match the target characteristics. Marketing can also be improved when this data is collected using average consumers as panelists. Chueamchaitrakun and his team ran a sensory descriptive and texture profile study. They then

published an article, “Sensory Descriptive and Texture Profile Analyses of Butter Cakes Made

from Composite Rice Flours”, with their findings. Eleven total butter cakes were used in their

test. Four were store-bought and the other seven were made from varying Hom-mali to glutinous rice flour ratios. Seven trained panelists were taken to a room and given samples of the eleven

cakes, filtered water, and unsalted crackers. They were then told to examine for the “springiness,

compressibility, softness, cohesiveness, wetness, roughness, cohesiveness-of-mass, chewiness,

chew-count, and powdery” of the cakes (Chueamchaitrakun). Each panelist received a list of

terms and corresponding definitions to use during the testing period. Results showed that

increasing the amount of glutinous rice flour in the ratio for the cakes made them softer, moister,

and more cohesive. With that knowledge, food manufacturers can adjust their products knowing they will turn out soft, moist, and cohesive; this is why descriptive tests are very helpful (Chueamchaitrakun).

Duo-trio and triangle tests are two analytical difference tests that are key in food

evaluation. In a duo-trio test, three samples are provided. One of the samples is said to be the

standard. Panelists must decide which of the other two samples is the most similar to the

standard. In triangle tests, three samples are again provided, but no standard is pointed out.

Instead, panelists must decide which of the three is different from the other two. Rousseau and

others ran an experiment using these tests. It was then published as “Power and Sensitivity of the

Same-different Test: Comparison with Triangle and Duo-trio Methods” in the Journal of Sensory

Studies. The experimenters gathered 16 untrained panelists. Vanilla flavored yogurt with added

sugar and without added sugar were the two samples that were used. Panelists then participated

in the two tests. Results showed that both tests had high accuracy, and most panelists were able to find the different sample in both types of tests (Rousseau). Both tests assure uniqueness of the

product, paving the way for food manufacturers to compare it with the competition. With the addition of a paired preference test, food manufacturers could see how different their product is from the competition and which form of it is most liked.
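Because the chance of guessing correctly differs between the two formats (one in two for the duo-trio test, which names a standard, versus one in three for the triangle test, which does not), the number of correct identifications needed before a real difference can be claimed also differs. The short sketch below is only a hypothetical illustration, not part of any study described here; it shows how an exact binomial probability could be used to judge whether a panel's correct-response count exceeds what guessing alone would produce.

```python
from math import comb

def prob_at_least(correct: int, panelists: int, p_chance: float) -> float:
    """Probability of at least `correct` right answers out of `panelists`
    responses if every panelist were purely guessing at rate p_chance."""
    return sum(
        comb(panelists, k) * p_chance**k * (1 - p_chance)**(panelists - k)
        for k in range(correct, panelists + 1)
    )

# Hypothetical panel: 14 of 16 panelists identify the odd sample correctly.
duo_trio_p = prob_at_least(14, 16, 1 / 2)   # chance level 1/2 (standard named)
triangle_p = prob_at_least(14, 16, 1 / 3)   # chance level 1/3 (no standard)
print(f"duo-trio: P(>=14/16 by guessing) = {duo_trio_p:.4f}")
print(f"triangle: P(>=14/16 by guessing) = {triangle_p:.6f}")
```

The smaller the probability, the less plausible it is that the panel was merely guessing; because its chance level is lower, the same score is more convincing evidence of a real difference in a triangle test than in a duo-trio test.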

Ranking and rating tests are two more important analytical difference tests. In ranking tests, samples are presented and ranked from lowest to highest based on specific characteristics. In rating tests, samples are provided and rated using a numbered scale with a

standard sample given a reference rating. For both, many samples can be given simultaneously,

but for accurate results, there should not be more samples than an average panelist can handle.

This was written about in Vindras-Fouillet and her team’s article, “Sensory Analyses and

Nutritional Qualities of Hand-Made Breads with Organic Grown Wheat Bread Populations”. The

panelists in this study were trained bakers, farmers, researchers, and technicians. During the test,

panelists were asked to rank the samples of bread based on crust color, crumb color, air cell size,

aroma, saltiness, bitterness, acidity, crispiness, smoothness, and the long lasting taste. (Vindras-

Fouillet). With the results from this test, researchers were able to determine which bread types displayed specific characteristics more strongly. This information is useful when manufacturers desire specific qualities in their products; with this test, they can see how intensely their product displays those qualities.

Food evaluation tests of all kinds are important for the food world. Companies want to

ensure customer satisfaction and provide accurate information with every product. Consumers

want to make sure that they like what they buy and know what they are putting in their bodies.

This is done partially with sensory labs. This is why Nutrition 205 lab students from San Diego State participated as panelists in class-run sensory labs. The goal was to teach them how to

administer sensory tests, see how color and beverages are associated, evaluate a spectrum of

characteristics in food, be able to recognize differences in samples both when compared to a

standard and not, and to be able to rank samples for intensity of characteristics or by preference.

The following are the methods, results, and a discussion of their sensory lab experience.

Methods

Panelists: The sensory lab had four participating sections with 69 total participants; each section had between 16 and 18 participants. The following data includes the panelists from all sections. Ages were as follows: 31% were 18 or 19 years old, 30% were between 20 and 23 years old, 20% were between 24 and 29 years old, and 9% were over the age of 30. Females made up 85% of the participants, while males made up the other 15%. Participants who were married at the time of the lab made up 8%, divorced participants made up 4%, and the remaining 88% had never been married. Only 3% of the participants were not in the Food Science and

Nutrition major. Of the participants, 91% were undergraduate students. Most, 71%, of the

participants had two or more roommates. Only 25% had one roommate, and 4% lived alone.

Smokers made up 4% of the participants. Finally, 19% of the students had a food allergy. These

allergies were one or more of the following: mango, pineapple, melon, banana, avocado, tomato,

raw carrots, raw celery, truffles, shrimp, shellfish, raw seafood, red meat, glucose, and lactose.

Environment: This sensory lab took place in the West Commons building, room #203, on the San Diego State University campus. The brightly lit room had five rows of seats in its center, with six desks in each row and cooking areas on either side. In front of the seats, there was a counter with a computer and other lab materials. Students sat in the seats while the professor and the assistant stood behind the counter facing the classroom. The lab coordinator stood in one of the cooking areas on the right side of the room.

Throughout the entire lab period, there was a constant background buzz coming from either the

lights or the refrigerators. The room was otherwise silent; the doors were closed and panelists

were instructed to remain silent.



Color Association/ Perception of Beverages: To begin this test, students were instructed to use

a specific sheet of paper that contained areas to write down answers and contained the

instructions. The test administrator began this test by asking the panelists if they drink apple

juice. Panelists were instructed to raise their hands to answer “yes” or “no”. The raised hands

were counted and the data was then entered into the computer.

Students were then presented with five beakers filled with colored liquids ranging from

light yellow to emerald green on the counter in the front of the classroom. The liquids were in

600ml beakers filled to the 400ml mark and remained in the front of the class throughout the test.

Each beaker was labeled with the color of the liquid inside. Panelists were told to ignore the instructions on the document for this test and to instead decide which of the five liquids

seemed to best fit the quality being evaluated. The categories were: sweetness, sourness,

artificiality, naturalness, preference, dislike, at what temperature would you drink it, and would

you drink it. The instructor asked panelists which liquid they thought would be the sweetest and then listed the colored liquids; panelists raised their hands to cast their vote for the liquid they thought was the sweetest. The same method was used for the remaining characteristics being evaluated. The number of raised hands was counted for each drink after each characteristic, and the data was entered into the computer.

Descriptive test: Panelists were instructed to open their books to a page in the laboratory manual with a chart and to write down the names of the samples on one axis and the characteristics appearance, aroma, flavor, texture, and consistency on the other axis. Panelists were then instructed to add "mouthfeel" to the characteristic axis and to fill out the sample names as goldfish, raisin, almond, and marshmallow. Panelists then received a small cup of deionized water so

they could cleanse their palate between tasting food samples. The lab instructor and coordinator

walked throughout the room with trays that held one-ounce cups containing bite-sized samples. Students were instructed to take one of each sample from the trays. One student did not take any of the goldfish sample because of a food allergy, which brought the total number of participants (n) for that section from n=18 to n=17 for that sample. Once all panelists received their samples, they

were told to take out the list of descriptive words that were provided. The test then began and

panelists observed, smelled, and ate the products. Then, they assigned one descriptive term to each of the characteristics: appearance, aroma, flavor, texture, consistency, and mouthfeel. Once

everyone was done, the data collection started. Each descriptive word on the list was read and the

participants raised their hands to vote for the term they used. The hands were counted. Before moving on to the next characteristic, the total number of answers (n) was counted and, if it did not equal 17, a recount was done. When the correct number of hands was counted, the data was entered into the computer.

Duo-trio test: Students were instructed to take out the correct document for the duo-trio test.

Upon reading that the samples for the duo-trio test were going to be cookies, two panelists announced they would not be able to participate in this section because of allergies, dropping n

down to 16. The instructions were to choose one of the two sample cookies that was different

from the standard, and use “dry”, “crunchy”, or “less vanilla” to describe why it was different.

The instructor and lab coordinator walked around the classroom with three different plates full of cookies; each plate was labeled with a code differentiating the samples. The codes were 8175, 6104, and 1108. The standard sample cookie, 8175, was a Nilla Wafer; sample 6104 was a Kroger cookie; and sample 1108 was a Nilla Wafer. Participants were instructed to take one of each sample. The first

cookie to be eaten was the standard cookie. Panelists observed and ate each of the cookies.

Participants then wrote down which one they thought was the different cookie from the standard

and what the major difference was given the choices of dryness, crunchiness, and having less

vanilla. The codes of each sample were then announced and students raised their hands for the

one they picked to be the different one. The three possible reasons for determining the different

cookie were then read, and students raised their hands for the one they picked. All votes were

counted and entered in the computer. Panelists were then told which cookie was the different

one, that it was a Kroger, and that the other two samples were Nilla Wafers.

Paired Comparison test: For this test, the student in the first seat of their row helped prepare

and distribute the samples. The containers that held the samples were coded 635T1 and 573T2. These codes were read to the participants so they could write them on their papers and keep the samples they received labeled. Trays that held the samples in one-ounce cups made their way down the rows of panelists. Each panelist was instructed to take one of each sample. They were then instructed to fill in the chart on their document for this test, which had room to write down the intensity level, whether greater or lesser, and the characteristic observed in the sample. Once everyone had their samples and knew what to do, they drank the liquids. They were encouraged not to make any faces or gestures that would sway the opinions of anyone around them; this was in an effort to make this sensory lab more similar to a real one. After everyone wrote down their results, each sample code was announced, and panelists were asked whether it was lesser or greater in intensity and what characteristic they observed. Data was entered into the computer. Panelists were then told that the test was for sourness and were given the percentages of

citric acid in the two samples. Sample 635T1 was apple juice with no citric acid added and

sample 573T2 was apple juice with 1% citric acid added.

Triangle Test: This test also required the students sitting at the front of each column of seats to help prepare and distribute the samples. The containers that held the samples were coded with the

numbers 777C1, 542E2, and 112H9. Participants were again encouraged to write the numbers on separate areas of their paper so they could place each sample over its code and keep them organized. The tray containing the three samples in one-ounce cups reached every student. Participants grabbed one of each and kept them organized. They were told to wait for everyone

to start at the same time and with the same sample. They were also reminded to not make any

faces or suggestive gestures to others. After all three samples were tasted, the codes were

announced one at a time. Students raised their hands for the one they believed was different.

Raised hands were counted and entered into the computer. The percentage of citric acid added to

each sample was then disclosed to the panelists. Sample 777C1 was apple juice with 0% citric

acid, sample 542E2 was apple juice with 0% citric acid added, and sample 112H9 was apple

juice with 1% citric acid added.

Ranking Test: Students opened their books to the correct document for the test. They were

given five codes to write down: 555D7, 192L3, 695F8, 543K8, 495P2. They were instructed to

write them down in the chart on their documents in order of the one they thought was most sour

to the one they thought was the least sour. The ranking of their preference was also to be written

in the chart. The students sitting in the first seat of each column were again told to help prepare and distribute the samples. Each student grabbed one-ounce cups that contained small amounts of each sample. Panelists were in charge of keeping the five samples organized with their

codes. Panelists all began at the same time with the same coded sample. Once the ranking

process was complete, the data collection started. Every code was called for each ranking

position. Students raised their hands when the code they had picked was read out for the

matching rank number. The same was repeated for preference. After all the data was collected

and entered in the computer, panelists were told the citric acid percentage in each of the five

samples: sample 555D7 was apple juice with 10% citric acid added, sample 192L3 was apple

juice and had 5% citric acid added, sample 695F8 was apple juice with 2.5% citric acid added,

sample 543K8 was apple juice with 1% citric acid added, and sample 495P2 was apple juice with

no citric acid added.

Rating Test: The student sitting in the first seat in the column helped prepare and distribute three

samples in one-ounce cups. They were coded 420M, 0110, and S723. The tray with samples made its way back down the columns of panelists. Every participant grabbed the three samples. The document said 0110 was to be the reference. On the sheet, there were seven possible positions in which to score the samples. Sample 0110 was given position 4. Panelists had to place the other two

samples in the position they believed to be appropriate. Participants all started to drink their

samples in the same order at the same time. Each position number was then read, and students had an opportunity to vote on which sample, if any, they had rated as belonging in that position. Votes

were counted and entered into the computer. The percentages of citric acid in each sample were

then shared with the students. Sample 420M was apple juice with 1% citric acid added, sample

S723 was apple juice with 5% citric acid added, and the reference sample 0110 was apple juice

with 2.5% citric acid added.

Statistical Analysis: The number of participating panelists for every test was recorded as an n-value, where n was the total number of participants. The lab instructor asked each participant to raise their hand to vote on each question after each test. Data was entered into an Excel sheet, and the following lab sections added their data to the same sheet. The Excel sheet had a section for every component of every test and for the demographics. All data was reported as vote counts and percentages.
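The conversion from raw hand counts to the reported percentages is simple enough to sketch. The snippet below is a hypothetical illustration (the class used Excel, not Python, and the counts shown are invented), but it performs the same calculation: divide each option's vote count by that question's n and express the result as a percentage.

```python
def vote_percentages(votes: dict[str, int]) -> dict[str, float]:
    """Convert raw hand counts for one question into percentages of n,
    where n is the total number of votes cast on that question."""
    n = sum(votes.values())
    return {option: round(100 * count / n, 1) for option, count in votes.items()}

# Invented hand counts for one yes/no question with 69 voting panelists.
example_counts = {"yes": 54, "no": 15}
print(vote_percentages(example_counts))  # {'yes': 78.3, 'no': 21.7}
```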

Results

Color Association/Perception of Beverages:

As seen in Chart 1, when panelists were asked if they drank apple juice, 78% of panelists

voted yes and 22% voted no.



Chart 2 displays the votes from panelists on which colored liquid they found the sweetest, most sour, most artificial, most natural, most preferred, and most disliked. Panelists were allowed to vote for a specific drink multiple times or not at all.

Light Yellow was the most preferred, most natural, and the sourest drink according to

panelists with 65%, 77%, and 49% of the votes respectively. It only received 4% of votes for

being the most disliked. It received 12% of the votes for being the sweetest and zero for being

the most artificial. Dark Yellow was perceived to be the sweetest drink with 45% of the votes. It received 13% of the votes for sourest, 3% for most artificial, 22% for most natural, 20% for being preferred the most, and 17% for being disliked the most. Chartreuse received 28% of the votes for sourest, 7% for being the most disliked, 7% for being the most artificial, 3% for being the sweetest, 3% for being the most preferred, and zero votes for being the most natural.

Dark Chartreuse received 19% of the votes for being the most artificial, 16% for being disliked

the most, 12% for sweetest, 10% for sourest, 9% for being the most preferred, and zero for being

the most natural looking. Emerald got the most votes for being disliked and for being the most artificial, with 55% and 72% of the votes respectively. It received 29% of the votes for being the

sweetest, 3% for being the sourest, 3% for being the most preferred, and 1% of the votes for

being the most natural.

Chart 3 shows the answers from panelists when asked if they would drink each of the

colored liquids. Light yellow received 88% of the votes for yes and 12% for no. Dark Yellow

received 61% of the votes for yes and 39% for no. Chartreuse received 59% of the votes for yes

and 41% for no. Dark Chartreuse received 26% of the votes for yes and 74% for no. Emerald

received 20% of the votes for yes and 80% for no.

Chart 4 shows the percentage of votes for the temperature the colored drink must be in

order to be consumed. Light Yellow would be consumed by 97% of the panelists if it was cold,

by 12% if it was tepid, by 6% if it was warm, and by 4% if it was hot. Dark Yellow would be

consumed by 84% of the panelists if it was cold, by 13% if it was tepid, by 72% if it was warm,

and by 13% if it was hot. Chartreuse would be consumed by 96% of the panelists if it was cold,

by 10% if it was tepid, by 1% if it was warm, and by 4% if it was hot. Dark Chartreuse would be consumed by 80% of the panelists if it was cold, by 16% if it was tepid, by 1% if it was warm, and by 1% if it was hot. Emerald would be consumed by 81% of the panelists if it was cold, by 10% if it was tepid, by 3% if it was warm, and by 3% if it was hot.



Descriptive Test:

Chart 5 shows the votes that the top three terms received to describe each quality in

regards to goldfish. For appearance, the term dry received 24% of the votes, golden brown

received 43% of the votes, and grainy received 9% of the votes. For flavor, the term salty

received 81% of the votes, sharp received 10%, and pasty received 6%. For texture, crisp received 49% of the votes, crunchy received 37%, and gritty received 9%. For aroma, burnt received 32% of the votes, nothing received 7%, and flavor received 28%. For consistency, brittle received 56% of the votes, thin received 17%, and cheesy received 20%. For mouthfeel, crisp received 40% of the votes, sticky and gritty both received 5% of the votes, and crunchy received 47%.

Chart 6 shows the votes that the top three terms received to describe each quality in

regards to raisins. For appearance, sunken received 19% of the votes, sticky received 23%, and

dry received 15%. For flavor, sweet received 42% of the votes, bitter received 12%, and fruity

received 33%. For texture, rubbery received 13% of the votes, gummy received 23%, and chewy

received 33%. For aroma, sweet received 30% of the votes, fruity received 35%, and nothing

received 22%. For consistency, gummy received 30% of the votes, chewy received 61%, and rubbery received 7%. For mouthfeel, sticky received 54% of the votes, slimy received 15%,

and smooth received 16%.



Chart 7 shows the votes for the top three terms received to describe each quality in

regards to almonds. For appearance, dry received 19% of the votes, golden brown received 30%,

and light brown received 19%. For flavor, flat received 16% of the votes, nutty received 78%,

and stale received 6%. For texture, hard received 27% of the votes, firm received 22%, and

crunchy received 19%. For aroma, burnt and sweet both received 3% of the votes, flowery

received 9%, and none received 85%. For consistency, butter received 6% of the votes, chewy

received 28%, and thick received 60%. For mouthfeel, gritty received 27% of the votes, slick and

smooth both received 5%, and crunchy received 62%.



Chart 8 shows the votes for the top three terms received to describe each quality in

regards to marshmallows. For appearance, smooth received 4% of the votes, puffy received 91%,

and symmetrical, rounded, and dry all received 2% each. For flavor, sweet received 76% of the

votes, floury received 10%, and pasty received 12%. For texture, velvety received 19% of the

votes, springy received 19%, and gummy received 25%. For aroma, sweet received 78% of the

votes, flowery received 3%, and nothing received 19%. For consistency, chewy received 32% of

the votes, thin received 7%, and gummy received 46%. For mouthfeel, sticky received 21% of

the votes, slimy received 21%, and smooth received 52%.



Duo Trio Test:

Chart 9 shows the votes for each sample when asked which was different from the

standard cookie. Sample 6104, the Kroger cookie, was voted as the different one by 95% of the

panelists. Sample 1108, the Nilla Wafer cookie, was voted as the different one by 5% of the

panelists.

Chart 10 shows the votes for the reasons panelists picked the sample to be different. The

dryness of the cookies made 31% of the panelists decide on which sample cookie was different,

crunchiness made 40% of the panelists decide, and the vanilla amount made 29% of the panelists

decide.

Paired Comparison:

Chart 11 shows the votes from panelists regarding which of the two samples they believed was higher in intensity (sourer). Sample 573T2, the apple juice with 1% citric acid, was

thought to be sourer by 98.5% of the panelists, and Sample 635T1, the apple juice with no citric

acid, was thought to be sourer by 1.5% of the panelists.



Triangle Test:

Chart 12 shows the results for which of three samples panelists believed was different

from the other two. 100% of the panelists believed Sample 112H9, the sample with 1% citric

acid, was different from samples 777C1 and 542E2 which each had no citric acid.

Ranking Test:

Chart 13 shows the votes for the sample panelists ranked in each position based on sourness. Rank position 1 corresponds to the sample believed to be the most sour and position 5 to the least sour. Rank position 1 had 97%

of the votes for the sample with 10% citric acid, 0% for the sample with 2.5% citric acid, 0% for

the sample with 1% citric acid, 1% for the sample with 0% citric acid, and 1% for the sample

with 5% citric acid. Rank position 2 had 0% of the votes for the sample with 10% citric acid, 6%

for the sample with 2.5% citric acid, 3% for the sample with 1% citric acid, 1% for the sample

with 0% citric acid, and 90% for the sample with 5% citric acid. Rank position 3 had 0% of the

votes for the sample with 10% citric acid, 90% for the sample with 2.5% citric acid, 4% for the

sample with 1% citric acid, 2% for the sample with 0% citric acid, and 4% for the sample with

5% citric acid. Rank position 4 had 1% of the votes for the sample with 10% citric acid, 7% for

the sample with 2.5% citric acid, 82% for the sample with 1% citric acid, 7% for the sample with

0% citric acid, and 3% for the sample with 5% citric acid. Rank position 5 had 3% of the votes

for the sample with 10% citric acid, 0% for the sample with 2.5% citric acid, and 6% for the

sample with 1% citric acid, 90% for the sample with 0% citric acid, and 1% for the sample with 5% citric acid.

Chart 14 shows the votes for the sample panelists ranked in each position based on preference. Rank position 1 corresponds to the most preferred sample. Rank position 1 had 0% of the votes for

the sample with 10% citric acid, 10% for the sample with 2.5% citric acid, 31% for the sample

with 1% citric acid, 57% for the sample with 0% citric acid, and 2% for the sample with 5%

citric acid. Rank position 2 had 0% of the votes for the sample with 10% citric acid, 4% for the

sample with 2.5% citric acid, 60% for the sample with 1% citric acid, 34% for the sample with

0% citric acid, and 2% for the sample with 5% citric acid. Ranking position 3 had 3% of the

votes for the sample with 10% citric acid, 81% for the sample with 2.5% citric acid, 8% for the

sample with 1% citric acid, 4% for the sample with 0% citric acid, and 4% for the sample with

5% citric acid. Rank position 4 had 3% of the votes for the sample with 10% citric acid, 4% for

the sample with 2.5% citric acid, 2% for the sample with 1% citric acid, 3% for the sample with

0% citric acid, and 88% for the sample with 5% citric acid. Rank position 5 had 92% of the votes

for the sample with 10% citric acid, 3% for the sample with 2.5% citric acid, 1% for the sample

with 1% citric acid, 0% for the sample with 0% citric acid, and 4% for the sample with 5% citric

acid.

Rating Test:

Chart 15 shows the votes for where each sample was rated by panelists relative to a reference that was placed in position 4. Sample S723, the sample with 5% citric acid, was rated in position 1 by 50% of the panelists, position 2 by 38%, position 3 by 10%, position 4 by 1%, position 5 by 1%, and positions 6 and 7 by 0%. Sample 420M, the sample with 1% citric acid, was rated

in position 1 by 0%, position 2 by 0%, position 3 by 0%, position 4 by 0%, position 5 by 21%,

position 6 by 64%, and position 7 by 15%.



Discussion

A possible reason why most people answered yes to drinking apple juice is that the question was asked vaguely, and panelists may have interpreted it as asking whether they would drink apple juice if it were offered to them. The no answers probably came from panelists who interpreted the question as asking whether they drank it regularly, or who simply disliked the drink.

In the beverage association test, light yellow and dark yellow were considered to be the most natural drinks. Since apple juice had been mentioned in the previous question, it is possible that

panelists picked the yellows as natural since those are the colors of apple juice found in stores.

Another possibility is that panelists didn't believe yellow was a color typically used to artificially dye beverages. This theory is further backed up by the fact that dark chartreuse and emerald, the

darkest colors, were both voted highly as being the most artificial. Panelists again may have

thought that darker or greener colors are the ones that are typically found in beverages because of

artificial colorings. Sweetness may have peaked in votes at both dark yellow and emerald

because darker colors are found to be sweeter in nature. Sourness may have decreased with darkness because lighter colors are perceived to be sourer. Preference also declined as the beverages darkened, with the exception of dark chartreuse. The later peak in a darker color may be due to a liking for the color itself. Dislike for the beverages, however, was strongest

with dark yellow, dark chartreuse, and emerald. It is possible that darker colors aren’t preferred

in beverages, but since chartreuse was lighter than dark chartreuse, it was considered to be light

even though it was darker than dark yellow. Another possible influence on any of the votes is that participants were able to see each other vote and may have wanted to follow the trend instead of becoming an outcast.



As the color of the beverages got darker, fewer panelists voted yes when asked if they would drink it, and vice-versa. This may be because of the artificiality factor: since most believed lighter meant more natural, perhaps because 97% of the voters were Food Science and Nutrition majors, more panelists chose to drink only the beverages that they perceived to be more natural. Another possibility is that, again, participants were able to see the votes of other panelists and followed the trend of votes instead of being different.

Cold temperatures for the beverages were preferred the most, possibly because colder drinks are thought to taste better. This may tie back to the belief, implied by the opening question, that the liquids were apple juice. Votes for all other temperatures dropped dramatically, meaning that most people wouldn't drink the beverages any other way.

An error that occurred while taking this test was that panelists followed the instructions

given on the sheet even when they were told not to. This had no major effect on the integrity of the test, but it made the test take longer and created the need for recounts of the data.

The overall conclusions drawn from the color association/perception of beverages test are that most people would drink apple juice; that lighter colors were perceived to be more natural and more sour, were typically preferred, and would be consumed by more panelists; that darker colors were seen as sweeter, more artificial, and less preferred, and were thus less likely to be consumed; and that, among the beverages that would be consumed, the colder the beverage, the greater the odds of consumption. All of these qualities and preferences were perceived simply from the colors of the liquids, just as the coffee cup colors influenced how warm panelists in Guéguen and Jacob's test found their coffee. They were very different tests, but the color and beverage association still took place in both (Guéguen). Next time this sensory test is conducted, correct instructions should be provided from the start and data collection could be done more slowly.

Goldfish are advertised as salty and cheesy. Possibly because of this, “salty” was the

most voted for flavor and “cheesy” was among the top three terms to describe the consistency.

Crisp and crunchy are obvious characteristics of goldfish which may be why they were among

the top terms used to describe both their mouthfeel and texture. The "golden brown" appearance and lack of a clear aroma were also apt descriptions.

“Sticky” was the top voted quality for raisins in both the appearance and mouthfeel

categories. The related terms "chewy" and "gummy" were both top-voted qualities for consistency. These results could be because of the way raisins stick to teeth

when chewed. As for appearance, memories of raisins sticking to teeth probably caused panelists

to perceive them to look sticky.

It is common knowledge that almonds are a nut, which is probably why "nutty" received so many votes for their flavor. Panelists probably couldn't think of a better description than what it

is. “Hard”, “firm”, “crunchy”, “chewy”, and “thick” are all very similar characteristics and were

all used to describe the consistency, mouthfeel, and texture of the almonds. Since biting into

them was the first real interaction and it was hard, that descriptive term seemed accurate and

synonyms of it were then used to describe everything possible.

“Puffy” is a common term used in marshmallow advertisement. It is on the bags they are

packaged in and in recipes, which is probably why it received the most votes by far to describe

the appearance. Sweetness is an obvious flavor in marshmallows and a reason they are used, which may be why panelists also perceived them to smell sweet.

As for why the descriptive test's data had to be recounted so many times, there are two main theories. For one, a lot of different data was collected quickly, meaning that losing one's train

of thought for a second mattered here; it didn't help that the lab was early in the morning, before many panelists were fully awake and alert. The second theory is that panelists got confused because the order of the qualities for each sample in their books was different from the order in which the data was called for; this may have led to confusion over which set was being asked about. Next time,

the order of the qualities on the sheet should be made to match the order in which they are called

for in data collection.

Just as the results of the glutinous rice flour butter cake test could be used by manufacturers to improve their recipe based on which ratio of flours received the most votes for the qualities they wanted in their product, these results could be used to see which terms best describe each sample's qualities (Chueamchaitrakun).

There was 95% accuracy in picking which sample was the odd one out in the duo-trio test. The three incorrect votes could have been made by panelists whose palates were flavor-fatigued, who didn't keep the sample cups matched with the correct sample codes, or who simply were not able to distinguish any flavor or texture differences. Since the reasoning for why panelists picked their sample as the different one was fairly equally distributed among having less vanilla, being drier, and being crunchier, it can be concluded that there was no single obvious reason; instead, each played a significant part. Unlike in Rousseau, Meyer, and O'Mahony's test, Nutrition 205 panelists weren't told whether they could retry any sample. Since it was a less strict test, the results might not be as reliable. Next time, the rule for re-trying samples should be clarified. Regardless, results in both were mostly accurate.

Only one panelist did not vote for the correct sample in the paired comparison test. This may have been because they did not place their sample with the correct code, or because the difference between zero and one percent citric acid added to the sample

was too small to detect, because their palate was flavor-fatigued, or because they were nervous during the test.

The triangle test resulted in 100% of the panelists picking the correct sample that was

different. This accuracy could be explained by assuming that in this case the difference between

the zero and one percent added citric acid was significant enough, that panelists now knew

sourness was what was being tested for, or simply that this test format was easier. In the research done by Rousseau and others, the triangle test also produced accurate results, which reinforces the theory that this test format is easier. However, they were stricter with their rules in that experiment, and the same approach should be adopted by the Nutrition 205 class in the future. Besides that, the tests were conducted the same way; since results were accurate for both, a reasonable conclusion is that the triangle test is reliable (Rousseau).

The ranking test had a clear majority of votes for one sample in each ranking position. Panelists who did not follow the trend could have been overwhelmed by tasting all five samples in one test and not remembered exactly which sample was how sour, they could

have been overwhelmed with having to rank for both sourness and preference, they could have

not cleansed their palate with enough DI water, they could have done it too fast, they could have

found it hard to distinguish the citric acid levels (0%, 1%, 2.5%, 5%, and 10%), or they could

have not understood that rank number one meant the sample was the most sour while rank five meant the least. Having five samples and two characteristics to observe is a known limitation; there are only so many samples panelists can retain information about and only so many characteristics they can apply it to. Other tests, like the one done by Vindras-Fouillet, Ranke, Anglade, Taupier-Letage, Véronique, and Goldringer, push this limitation: they not only use a much larger number of samples, but also rank more than two characteristics. In their sensory test, 11 characteristics were ranked. Their limit was farther than the Nutrition 205 panelists' because their panelists were trained (Vindras-Fouillet). Future

panels done for Nutrition 205 should keep five samples and two characteristics as the limit.
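Reporting vote percentages per rank position shows the trend clearly, but it discards the per-panelist structure of the data. One common follow-up for ranked data, offered here only as a hypothetical sketch with invented ranks rather than anything the class performed, is a Friedman test, which asks whether panelists' rankings differ across the five samples more than chance alone would predict.

```python
from scipy.stats import friedmanchisquare

# Invented sourness ranks (1 = most sour ... 5 = least sour) from six panelists,
# grouped by sample (citric acid level).
ranks_10_pct  = [1, 1, 1, 2, 1, 1]
ranks_5_pct   = [2, 2, 2, 1, 2, 2]
ranks_2_5_pct = [3, 3, 4, 3, 3, 3]
ranks_1_pct   = [4, 4, 3, 4, 5, 4]
ranks_0_pct   = [5, 5, 5, 5, 4, 5]

stat, p_value = friedmanchisquare(
    ranks_10_pct, ranks_5_pct, ranks_2_5_pct, ranks_1_pct, ranks_0_pct
)
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.4f}")
```

A small p-value would indicate that the panel as a group genuinely distinguished the citric acid levels rather than ranking at random.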

The rating test showed clear trends; the sample with 5% citric acid added was rated in the smaller numbers, indicating it was sourer, while the sample with 1% citric acid added was rated in the larger numbers, indicating it was less sour. Only two votes didn't follow the trend, rating the 5% sample equal to or one position lower than the standard, which had 2.5% citric acid added. This error could again have been due to taste fatigue, inability to recognize the difference, tiredness from being in the sensory lab, or difficulty with the format of the rating test. Future sensory tests should be treated the same, however, because the trend was still overall accurate, meaning the test was reliable and well done.

References
Brown A. 2015. Understanding Food: Principles and Preparation. 5th ed. 1-10.

Chueamchaitrakun P, Chompreeda P, Haruthaithanasan V, Suwonsichon T, Kasemsamran S, Prinyawiwatkul P. 2011. Sensory descriptive and texture profile analyses of butter cakes made from composite rice flours. International Journal of Food Science & Technology 46: 2538-2365.

Guéguen N, Jacob C. 2012. Coffee cup color and evaluation of a beverage's "warmth quality". Color Research & Application 39: 79-81.

Rousseau B, Meyer A, O'Mahony M. 2007. Power and sensitivity of the same-different test: comparison with triangle and duo-trio methods. Journal of Sensory Studies 13: 149-173.

Tunick MH. 2011. Food texture analysis in the 21st century. Journal of Agricultural and Food Chemistry 59(5): 1477-1480.

Vindras-Fouillet C, Ranke O, Anglade J, Taupier-Letage B, Véronique C, Goldringer I. 2014. Sensory analyses and nutritional qualities of hand-made breads with organic grown wheat bread populations. Food and Nutrition Sciences 5: 1860-1874.


