Take it or leave it: Cognitive rules and satisfying choices.
Wahida Chowdhury ([email protected])
Institute of Cognitive Science, Carleton University
Ottawa, ON, K1S 5B6, Canada
Warren Thorngate ([email protected])
Department of Psychology, Carleton University
Ottawa, ON, K1S 5B6, Canada
Abstract
We frequently face decision situations (selecting a mate, accepting a job offer, etc.) presenting only one alternative at a time and requiring us to "take it or leave it" (TIOLI). These situations force us to adopt some kind of satisficing rule: setting minimum standards and accepting the first alternative that meets or exceeds them. One such rule sets initial standards and does not change them as alternatives are sampled. Another kind of rule modifies the standards in light of sampled alternatives. The present simulation examined how the level of initial standards, the quality of alternatives, and the speed of rule modification influenced search length and the quality of choice outcomes. Results show that search length grows exponentially when standards remain fixed and declines drastically even when modifications are slow. Results also support the speculation that people adopting lower standards are far more likely to choose alternatives exceeding their expectations than are people adopting higher standards. Implications for the shift from idealism to realism are discussed.
Keywords: decision-making, satisficing rules, search length, take it or leave it
Decision-Making Situations
The hardest thing to learn in life is which bridge to cross and which to burn. David Russell
Most research in human decision-making examines situations where a chooser must select one of two or more alternatives presented simultaneously (e.g., Botti & Hsee, 2010; Schwartz, 2005). This allows the chooser to compare the alternatives before making a choice. However, many decision situations present only one alternative at a time, and a chooser is constrained to either take it or leave it (TIOLI). If she/he takes it, no more alternatives are presented; if she/he leaves it, the rejected alternative is not presented again. For example, most of us cannot ask prospective mates to wait for a few years while we search for someone better, or ask a prospective employer to hold a job open for a few months while we look for a better one. Also, we are rarely guaranteed alternatives in the future. We might, for example, reject prospective mates until we are too old to marry, or decline job offers until our skills are no longer required. The present study presents a computer model of TIOLI situations, and investigates the consequences of possible cognitive rules people might use to make choices in these situations.
Cognitive rules for making decisions can range from sophisticated evaluations of each alternative, to seeking advice, to flipping a coin. The rules we employ depend not only on our personal characteristics (such as temperament, cognitive ability, practice, and habit), but also on the decision situations we face (Simon, 1956). Psychologists and economists have studied the consequences of varying cognitive rules used in decision situations with simultaneous alternatives (e.g., Fasolo, McClelland, & Todd, 2007; Gourville & Soman, 2005; Thorngate, 1980). However, the literature on cognitive rules used in TIOLI decision situations is scarce.
Simon (1956) argued that, when choosing an alternative, people tend to satisfice (set a minimum standard of satisfaction and take the first alternative that meets this standard) rather than optimize (take the best alternative after evaluating all). TIOLI situations force decision makers to use some form of satisficing cognitive rule. We might begin our search by setting a fixed minimum standard: to be acceptable, for example, an alternative must have at least seven out of ten desired features. We might then take the first alternative that meets or exceeds our standard, or we might lower our standard if we examine several alternatives and none meets it.
There are several variations of satisficing rules, prompting a few simple questions: How do variations of the rules compare in the choice outcomes they generate? How simple can cognitive rules be and yet make satisfying choices? We attempt to answer such questions by running a computer model of TIOLI situations with variations of cognitive rules.
The TIOLI Computer Model
Drawing on Thorngate's (2000) original simulation, we wrote a program in "R" that (1) created TIOLI situations, (2) applied variations of satisficing rules for making choices in these situations, and (3) examined how the variations affected the number of alternatives examined and the expected outcome. Each TIOLI situation was represented as a matrix of 10 rows and 50,000 columns, its cells filled randomly with 0s and 1s in varying proportions. Each column in the TIOLI matrix
represented an alternative that could be chosen, and each
row represented a feature that was present (=1) or absent
(=0). Thus, for example, Column 3 represented the third
alternative a chooser would see if she/he rejected the first
alternative (column 1) and the second (column 2). If
Column 3 contained the vector {0 1 0 1 1 0 0 1 1 0}, it
would indicate that alternative 3 had features 2, 4, 5, 8
and 9 (where the 1s were) but not features 1, 3, 6, 7, and
10 (where the 0s were). Setting a maximum of 10 features per alternative was arbitrary; it could have been 5 or 50, but 10 seemed a reasonable number. Creating 50,000 alternatives allowed the simulated chooser to reject 49,999 of them – a theoretical possibility when standards are high and the chances of alternatives meeting them are low.
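The matrix representation above is easy to sketch in code. The paper's simulation is written in R (see the Appendix); the following Python fragment is our own illustration of the same data structure, using the example column vector from the text.

```python
import random

# One TIOLI situation: a list of 50,000 alternatives, each a vector of
# ten 0/1 features, where each feature is present with probability p.
def make_tioli_matrix(n_alternatives=50000, n_features=10, p=0.6):
    return [[1 if random.random() < p else 0 for _ in range(n_features)]
            for _ in range(n_alternatives)]

def present_features(alternative):
    # 1-based indices of the features an alternative possesses
    return [i + 1 for i, bit in enumerate(alternative) if bit == 1]

# The example column from the text: alternative 3 has features 2, 4, 5, 8 and 9
column3 = [0, 1, 0, 1, 1, 0, 0, 1, 1, 0]
print(present_features(column3))  # [2, 4, 5, 8, 9]
```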
Independent Variables
We ran our simulation with many possible combinations of three independent variables. Our first independent variable was the initial setting of a chooser's minimum standard -- whether, for example, the chooser would take the first alternative with at least 3 features, or at least 4 features, or at least 9 features. We used six levels of this initial setting: 10 (the chooser only accepts an alternative if it has all ten features), 8, 6, 4, 2, and 0 (the chooser has no fixed standard and so accepts the first alternative encountered).
Our second independent variable was the probability that a feature would be present – thus varying whether, for example, the expected number of features in any alternative was 5 or 3 or 8. We used four levels of this probability: 0.8 (80% chance that a feature would be present), 0.6, 0.4, and 0.2 (20% chance that a feature would be present).
The third independent variable was the rate at which choosers adjusted their minimum standard as they sampled and rejected alternatives. We varied this rate by giving complementary weights to (a) the current minimum standard and (b) the number of features in the most recently rejected alternative, where a + b = 1.0. Suppose, for example, a = 0.7 and b = 0.3, and suppose the initial minimum standard of a chooser was 6 out of 10. If the first alternative sampled had only two of 10 features, then the alternative would be rejected and a new minimum standard would be calculated:
New standard = (0.7*6) + (0.3*2) = 4.8.
If the 2nd alternative had four features, it too would be rejected and a second adjustment of the standard would be made:
New standard = (0.7*4.8) + (0.3*4) = 4.56.
If the third alternative had five features, it would exceed the new minimum standard of 4.56 and thus be accepted.
We used six levels of the 'a' weight: 1.0 (100% weight on the current standard and 0% weight on the number of features in the most recently rejected alternative), 0.8, 0.6, 0.4, 0.2, and 0 (no weight on the current standard, i.e. 100% weight on the number of features in the previously encountered alternative).
We ran the TIOLI simulation for each of the 6x4x6 = 144 combinations of the three variables (six levels of initial standard * four levels of probability * six levels of weight to current standard). For each combination, our computer programme generated a sample of 0s and 1s in a 10x50,000 TIOLI matrix. Then our simulated chooser examined these alternatives, one by one, looking for the first alternative that met or exceeded the chooser's initial/adjusted minimum standard.
Dependent Variables
The programme was iterated 100 times for each of the 144 combinations of independent variables. Each of these 100 iterations contained a new, randomly generated set of 50,000 ten-feature alternatives. When the 100 iterations for each of the 144 combinations of independent variables were completed, the programme printed: 1) the average number of alternatives rejected before finding the first one that met or exceeded the chooser's current minimum standard, and 2) the average number of features in the accepted alternative.
Cognitive Rules to Make a Choice
Different levels of the third independent variable, rate of adjustment, represented two different cognitive rules of satisficing: when a = 1.0 and b = 0.0, the chooser never changed her/his initial minimum standard. When a < 1.0 and b > 0.0, the initial minimum standard was adjusted in light of the number of features encountered as each alternative was sampled and rejected. We report the results separately.
Fixed Minimum Standard
We first considered a chooser with a simple, if rigid, cognitive rule: set an initial minimum standard and stick with it until a choice is made. Figure 1 shows four curves plotting the relationship among minimum standards, probability of features, and search length (average number of alternatives examined).
Figure 1: Minimum standards, probability of features and search lengths. [Figure omitted: average number of alternatives examined, on a log scale from 10 to 100,000, plotted against minimum standard (0 to 10), with one curve per feature probability.]
Visual inspection of Figure 1 shows that the search length needed to find an acceptable alternative increases exponentially as the minimum standard rises, and that this exponential effect is amplified as the probability of a feature declines. Choosers who stick to high initial minimum standards, and do not adapt to the chances of features in their TIOLI situation, face very long search
tasks. For example, a chooser who will accept nothing
less than 8 out of 10 features when the probability of a
feature is 0.40 can expect to examine about 100
alternatives before finding one with the minimum; an
idealistic, uncompromising chooser who wants 10 out of
10 features can expect to examine about 1,000
alternatives before the perfect one comes along.
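These search lengths can also be approximated analytically: with a fixed standard s and feature probability p, each alternative independently meets the standard with probability P = P(X >= s) for X ~ Binomial(10, p), so the expected search length is the geometric expectation 1/P. The following Python check is our own sketch, not part of the paper's simulation; it yields values of the same order as the simulated curves.

```python
from math import comb

def p_meets_standard(s, p, n=10):
    # Probability that a single alternative has at least s of its n features
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(s, n + 1))

def expected_search_length(s, p, n=10):
    # Geometric expectation: mean number of alternatives examined
    return 1.0 / p_meets_standard(s, p, n)

# e.g., a fixed standard of 8 out of 10 with feature probability 0.4
print(round(expected_search_length(8, 0.4)))  # 81
```

For a standard of 10 the expectation reduces to 1/p^10, which grows very rapidly as p falls, matching the exponential curves in Figure 1.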
Figure 2 plots the relationship between a chooser’s
minimum fixed standard and the average number of
features she/he gets in her/his accepted alternative.
Figure 2: Effects of standards and feature probabilities on average number of features obtained. [Figure omitted: average number of features obtained (0 to 10) plotted against minimum standard (0 to 10), with one curve per feature probability.]
As expected, Figure 2 shows that the number of features obtained was equal to or higher than the minimum standard. Moreover, the lower the initial minimum standard, the more the number of features obtained exceeded it. This suggests that choosers are more likely to obtain an alternative with more features than they desired if their initial minimum standard was low.
Adjusted Minimum Standard
We next considered a chooser with a more complex cognitive rule: the chooser still set a minimum standard before beginning to search, but would adjust the standard in light of the number of features in each rejected alternative. For example, a chooser might begin with the standard that an acceptable alternative must have at least eight out of ten features. If the first encountered alternative meets or exceeds this standard, the chooser takes it without changing the initial standard; if the first alternative has, say, five features, then the chooser would compromise by lowering the standard below 8 but keeping it above 5. The chooser would then evaluate the second alternative against the new minimum standard. Again, if the second alternative met or exceeded the adjusted standard, the chooser would take it; otherwise the chooser would compromise further. The chooser would continue this adjustment cycle until she/he found an alternative that met or exceeded her/his current minimum standard.
The rule for adjusting the current standard was based on a simple formula:
S(t+1) = ws*S(t) + wf*F(t)
where
S(t) is the current standard employed;
S(t+1) is the adjusted standard employed for the next trial;
F(t) is the number of features in the most recently rejected alternative;
ws is the weight given to the current standard;
wf is the weight given to the encountered number of features;
and ws + wf = 1.0.
Our Fixed Minimum Standard simulation, above, was run by setting ws = 1 and wf = 0, so 100% weight was given to the current minimum standard and no weight was given to the number of features in previously encountered alternatives. In this second study we varied ws to be 0.8, 0.6, 0.4, 0.2, and 0.0.
Because the chooser adjusts or decreases her/his standard towards the number of features found in encountered alternatives, the lower the weight a chooser gives to her/his current standard, the faster the standard adjusts to new information. Also, the faster the standard adjusts, the fewer alternatives the chooser should need to search to find an acceptable one, and the fewer features the chooser should obtain in the accepted alternative. We conducted the simulation to examine the shape of this relationship in more detail.
Results. Using 0.6 for the probability of a feature, Figure 3 shows six curves plotting the relationship between the initial minimum standard and the average length of search to find an acceptable alternative for various values of ws (weight given to the current standard). For comparison, Figure 3 also shows the plot when ws = 1.0, duplicated from Figure 1.
Figure 3: Effects of initial standards and weights to initial standards on average number of alternatives searched. [Figure omitted: average number of alternatives examined, on a log scale from 1 to 100, plotted against initial standard (0 to 10), with one curve per value of ws (1.0, 0.8, 0.6, 0.4, 0.2, 0.0).]
Visual comparison of the top data series (ws = 1.0) in Figure 3 with the other data series shows a dramatic decline in the number of alternatives searched when the initially set minimum standard was 10; decreasing ws from 0.8 towards 0 also reduced search length, but not as noticeably as the initial drop from ws = 1.0 to ws = 0.8. This suggests that large reductions in search length do not require rapid adjustments of initial standards to situational features; a slow adjustment can also decrease search length substantially.
Figure 3 also shows that the decline in search length became less pronounced as the initial standard declined. This suggests there is little to be gained by adjusting standards that were initially set moderate or low.
To examine how a decline in ws (weight given to the current standard) affected the number of features in the chosen alternative, we plotted the average number of features obtained across the six levels of the initially set minimum standard (0, 2, 4, 6, 8, and 10) for each of the six levels of ws (1, 0.8, 0.6, 0.4, 0.2, and 0). The plots are shown in Figure 4. As with Figure 3, we report the plot obtained when the probability that a feature would be present was 0.6.
Figure 4: Effects of initial standards and weights to initial standards on average number of features obtained. [Figure omitted: average number of features obtained (0 to 12) plotted against initial standard (0 to 10), with one curve per weight (1.0, 0.8, 0.6, 0.4, 0.2, 0.0).]
As expected, Figure 4 shows that the average number of features in the chosen alternative declined as ws declined. However, this decline was not as dramatic as the decline in search length. For example, Figure 4 shows that, when the initial minimum standard was 10, adjusting this minimum in light of search experience reduced the average number of features obtained from 10 to between 6 and 8, while the average number of alternatives searched declined from about 100 to fewer than 10 (see Figure 3). This suggests that a small adjustment to an initially high standard reduces search length considerably while reducing the number of obtained features only minimally.
Figure 4 also shows that the decline in the average number of features obtained became even less pronounced as the initial standard declined. This again suggests that there is little to be gained by adjusting standards that were initially set moderate or low.
Plots for other probabilities of a feature showed similar shapes and are not reported here for the sake of space.
Discussion
We created our simulation to explore some consequences of adopting plausible cognitive rules for satisficing (Simon, 1956) in take-it-or-leave-it situations. One rule set an initial minimum standard and stuck to it regardless of the chances of desirable features in each alternative examined. A second rule modified the initial minimum standard according to information about the chances of desirable features gleaned from examining rejected alternatives.
The simulation results teach at least three lessons about setting standards and adjusting them. The first lesson is that raising standards leads to an exponential increase in the number of alternatives examined before an acceptable one is found. Unless desirable features of alternatives are abundant, perfectionists must be prepared to be very patient.
The second lesson suggested by our results is that even slow adjustments of high standards to match more closely the features of alternatives will save considerable search costs. It will also save the disappointment associated with examining additional alternatives that do not meet choosers' high standards.
The third lesson is that lowering standards will not only decrease search costs but also increase the chances of pleasant surprises. Choosers with extremely high standards will face long and costly searches, and will rarely if ever find an alternative that greatly exceeds their standards. Choosers with lower standards will face far shorter and less costly searches, and will frequently choose alternatives that do greatly exceed their standards.
Our results suggest that the tradeoff between search costs and rewards is likely to be optimized by quickly estimating the average number of features in alternatives, then setting a minimum standard somewhere between that average and perfection. The suggestion reflects the adjustment of youthful idealism as life experience nudges people towards realism. It is nicely summarized in the anonymous life prescription, "If all else fails, lower your standards!"
References
Botti, S., & Hsee, C. K. (2010). Dazed and confused by choice: How the temporal costs of choice freedom lead to undesirable outcomes. Organizational Behavior and Human Decision Processes, 112(2), 161-171.
Fasolo, B., McClelland, G. H., & Todd, P. M. (2007). Escaping the tyranny of choice: When fewer attributes make choice easier. Marketing Theory, 7(1), 13-26. DOI: 10.1177/1470593107073842.
Gourville, J. T., & Soman, D. (2005). Overchoice and assortment type: When and why variety backfires. Marketing Science, 24(3), 382-395.
Schwartz, B. (2005). The paradox of choice: Why more is less. USA: Harper Perennial.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129-138.
Thorngate, W. (1980). Efficient decision heuristics. Behavioral Science, 25(3), 219-225.
Thorngate, W. (2000). Teaching social simulation with Matlab. Journal of Artificial Societies and Social Simulation, 3(1).
Appendix
1. Download and install from http://www.r-project.org/ the programming language "R" (64-bit).
2. Start R. The R Console window will load.
3. Under the "File" tab, select "New script". An "untitled - R editor" window will open.
4. Copy the entire code below and paste it into the "untitled - R editor" window.
5. Click the floppy disk icon at the top and save the code under the name "tioli.r".
6. Close the "R editor" window so that you are back at the "R Console" window.
7. Under the "File" tab, select "Source R code" and then "tioli.r" (the code you just saved will load in the R Console window).
8. Type in tioli()
# InitialStandardNF = how many features did you initially want in an alternative?
# prob = what is the probability that a feature would be present in an alternative?
# a = how much weight from 0 to 1 do you give to your current standard?
# b = how much weight from 0 to 1 do you give to the number of features in the current alternative?
simulation = function(InitialStandardNF, prob, a) {
  b = 1 - a
  # searchLength is the number of the alternative that meets your standard
  # adjStandard is your standard after you adjusted it
  # CurrentNumFeatures is the number of features in the current alternative
  # Set all these variables to 0 before you start your search
  searchLength = 0
  adjStandard = 0
  CurrentNumFeatures = 0
  # Generate 100 TIOLI situations
  for (trial in 1:100) {
    # set your standard to what you specified
    standardnf = InitialStandardNF
    # generate the first alternative in a TIOLI situation and count its features
    altnum = 1
    alt = rbinom(10, 1, prob)
    currentnf = sum(alt)
    # adjust your standard after each rejection, according to the weight you
    # gave it, until an alternative meets the (possibly adjusted) standard
    for (step in 1:50000) {
      if (currentnf >= standardnf) break
      standardnf = a * standardnf + b * currentnf
      altnum = altnum + 1
      alt = rbinom(10, 1, prob)
      currentnf = sum(alt)
    }
    # keep running totals of search length, adjusted standard, and number of
    # features obtained across TIOLI situations
    searchLength = searchLength + altnum
    adjStandard = adjStandard + standardnf
    CurrentNumFeatures = CurrentNumFeatures + currentnf
  }
  # report the averages across the 100 TIOLI situations
  return(c(InitialStandardNF, prob, a, searchLength/100, adjStandard/100, CurrentNumFeatures/100))
}
# run the TIOLI model with the following levels of "InitialStandardNF", "prob", and "a"
tioli = function() {
  InitialStandardNF = c(10, 8, 6, 4, 2, 0)
  prob = c(0.8, 0.6, 0.4, 0.2)
  a = c(1.0, 0.8, 0.6, 0.4, 0.2, 0)
  print(c("InitialStandardNF", "prob", "a", "searchLength", "adjStandard", "CurrentNumFeatures"))
  for (x in InitialStandardNF) {
    for (y in prob) {
      for (z in a) {
        result = simulation(x, y, z)
        print(result)
      }
    }
  }
}
# end of "tioli" function
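For readers without R, the model is straightforward to port. The following Python sketch (our own translation, not the authors' code) runs the chooser once with a fixed standard (a = 1.0) and once with slow adjustment (a = 0.8), illustrating the drop in search length the simulation reports.

```python
import random

def tioli_search(initial_standard, prob, a, n_features=10, max_alts=50000):
    # Examine alternatives one by one; accept the first whose feature count
    # meets the current (possibly adjusted) minimum standard.
    b = 1.0 - a
    standard = initial_standard
    for altnum in range(1, max_alts + 1):
        n_found = sum(1 for _ in range(n_features) if random.random() < prob)
        if n_found >= standard:
            return altnum, n_found
        standard = a * standard + b * n_found  # satisficing adjustment
    return max_alts, n_found

def average_search_length(initial_standard, prob, a, iterations=200):
    random.seed(1)  # fixed seed so the comparison is reproducible
    total = 0
    for _ in range(iterations):
        length, _ = tioli_search(initial_standard, prob, a)
        total += length
    return total / iterations

fixed = average_search_length(10, 0.6, a=1.0)  # never adjusts the standard
slow = average_search_length(10, 0.6, a=0.8)   # adjusts slowly
print(fixed, slow)  # the fixed-standard search is far longer
```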