Personality and Social Psychology Review
1998, Vol. 2, No. 3, 184-195
Copyright © 1998 by Lawrence Erlbaum Associates, Inc.
Why's My Boss Always Holding Me Down? A Meta-Analysis of Power Effects on Performance Evaluations

John C. Georgesen and Monica J. Harris
Department of Psychology, University of Kentucky

Requests for reprints should be sent to John C. Georgesen, Department of Psychology, 125 Kastle Hall, University of Kentucky, Lexington, KY 40506-0044. E-mail: jcgeorl@pop.uky.edu.
One factor with potential links to performance evaluation is evaluator power. In a meta-analytic review of the available literature, the relation between power and performance evaluation was examined. Results indicate that as power levels increase, evaluations of others become increasingly negative and evaluations of the self become increasingly positive. We examined moderators of these relations; methodological variables accounted for the most differences in effect sizes across studies. The article addresses implications of these findings for businesses and for social psychological theories of power.
From being threatened with additions to one's "permanent record" in grade school to the review process for a tenured position, individuals encounter evaluative situations throughout their lives. Performance evaluations are omnipresent in many careers and show no signs of lessening in either frequency or popularity (Budman & Rice, 1994). Periodic evaluations occur throughout the employment of most individuals in professional jobs, and the topic has been of interest to psychologists for more than 50 years (Landy & Farr, 1980). As one might expect of such a well-established practice, performance evaluations offer tangible benefits: They may yield concrete measures of employee performance, often provide useful information for improving weak areas, and may facilitate careers (Queen, 1995; White, 1979). Although these factors illustrate only a few of the total benefits, it is obvious why organizations rely on the wealth of information stemming from evaluations. Some members of organizations, however, perceive evaluative ratings as biased and unreflective of their true abilities (Gioia & Longenecker, 1994; Landy & Farr, 1980).
A variety of factors may bias the performance evaluation process. Executives often complain about politics pervading others' evaluations of them (Gioia & Longenecker, 1994). As their position in an organization rises, executives increasingly feel that evaluations of their performance are driven by factors such as issues of executive control and ensuring the continuance of the current structure in the organization (Gioia & Longenecker, 1994). This dissatisfaction with the evaluation process is not limited to executives, however. Individuals lower in the organizational hierarchy also may view performance evaluations as inaccurate and biased (Kleiman, Biderman, & Faley, 1987).
Given that many employees feel their evaluations are overly negative and inaccurate, it is interesting to note that substantial differences in evaluation results exist between differing types of evaluators such as the self, supervisors, and peers (M. M. Harris & Schaubroeck, 1988). Performance evaluations traditionally have been conducted by one's supervisor, although increasing numbers of organizations use multiple raters in conducting evaluations (Budman & Rice, 1994). Self-evaluations and peer-employee evaluations are becoming increasingly common. Frequently, however, evaluations of the same individual differ significantly between raters (M. M. Harris & Schaubroeck, 1988; Landy & Farr, 1980). Furthermore, only moderate agreement exists between self-evaluations and supervisor evaluations, although the relation between peer and supervisor evaluations is somewhat stronger (M. M. Harris & Schaubroeck, 1988). What causes these discrepancies in evaluation?
Egocentric bias has been suggested as one explanation of the discrepancies between supervisor, self-, and peer evaluations (M. M. Harris & Schaubroeck, 1988). This theory suggests that individuals inflate self-ratings to improve their evaluation and secure their employment. Within this theory, discrepancies are also believed to result from attributional differences; peers and supervisors, serving as observers, attribute poor performance to internal personality factors, whereas self-evaluators, or actors, attribute poor performance to their environment (M. M. Harris & Schaubroeck, 1988). Furthermore, peers and supervisors attribute good performance to the environment, whereas self-evaluators attribute good performance to internal factors. Other variables, however, also affect performance evaluation, cause discrepancies between raters, or both (M. M. Harris & Schaubroeck, 1988; Landy & Farr, 1980). For example, the general job performance of the rater affects the quality of that rater's evaluations (Landy & Farr, 1980), as does the rater's opportunity to observe the target's performance (M. M. Harris & Schaubroeck, 1988; Landy & Farr, 1980).
Another factor potentially biasing performance evaluation is the power level of the evaluator. Although addressed in one research program (Kipnis, 1972; Kipnis, Castell, Gergen, & Mauch, 1976; Kipnis, Schmidt, Price, & Stitt, 1981; Wilkinson & Kipnis, 1978) and in other isolated studies, the effects of power on evaluation have not been systematically explored. In addition, except for a recent resurgence of interest, power phenomena have been irregular targets of social psychological inquiry and theorizing (Fiske & Morling, 1996; Ng, 1980). Therefore, a general definition of power may be useful before addressing its potential effects on performance evaluation.
In the past, power has been defined as the amount of influence that one person can exercise over another person (Dahl, 1957; Huston, 1983; Kipnis et al., 1976). More recent definitions of power (Fiske, 1993; Fiske & Morling, 1996), however, have focused on actual control rather than influence as the key component of power. The reason for this change in focus has been the realization that one may possess influence without possessing actual control over the outcomes of others. For example, one might be a moral leader in the community and influence others' decisions through persuasive rhetoric but possess no actual control over others' outcomes. Thus, in current conceptualizations, power is the difference in the amount of control possessed by members of a dyad (Fiske, 1993; Fiske & Morling, 1996). The member of a dyad with the most control over the other's outcomes has more power.
Defining power in this way is also congruent with other traditional conceptions of power. For example, in the classic French and Raven (1959; Raven, 1992) typology, a distinction is made between bases of power that involve control of others' outcomes (e.g., reward and coercive power) and bases of power that do not (e.g., expert, informational, referent). The latter forms of power are more indirect and involve the willing cooperation and deference of the target. These power bases involve influence over others, but it is not clear whether they address the actual ability of the power holder to control the outcomes of the other. For example, an individual high in referent power may not be able to actually control another by reward, coercion, or both. Therefore, we prefer to adopt a definition of power that encompasses the idea of unilateral control, because making this distinction allows the creation of theoretically unambiguous operational definitions of power. Defining power in other ways requires the assumption that the target actively participates in the influence attempt.
With its relation to the control of others, power is an obvious candidate for an examination of its potential effects on evaluation. Two models address the potential relation between power and performance evaluation.

The first model of how power may affect evaluations stems from the research of Kipnis and his colleagues (Kipnis, 1972; Kipnis et al., 1976, 1981; Wilkinson & Kipnis, 1978). They argued that as an evaluator's power increases, that evaluator will make more attempts to influence others. As more influence attempts are made, the evaluator comes to believe that he or she controls the behavior(s) of other people. This portion of Kipnis's model is similar to Kelley and Thibaut's (1978) conception of fate control. This belief of responsibility for others' behaviors consequently causes the individual to devalue the performance of others; that is, the higher power person comes to believe that he or she is the causal agent in producing relevant outcomes. Furthermore, this bias conceivably may cause an evaluator to take responsibility for any successes associated with the work of others. Applied to a performance evaluation situation, the model suggests that as the power level of evaluators increases, the positivity of their evaluations of subordinates will decrease.
Another model that addresses the power-evaluation relation less directly, but that is theoretically promising in its implications, concerns the association between power and stereotyping (Fiske, 1993; Fiske & Morling, 1996). The general premise of this model is that persons in positions of power are especially vulnerable to stereotyping subordinate others. Fiske and Morling (1996) argued that individuals in powerful positions attend less to subordinates for three reasons: they lack cognitive capacity, their outcomes are not controlled by their subordinates, and they may not want to attend because of their dominance level and associated beliefs.
Individuals in powerful positions may suffer a lack of cognitive capacity due to increased attentional demands (Fiske, 1993). For example, a powerful individual's position may involve the supervision of 20 subordinates, whereas an individual lower in the organizational hierarchy may supervise only 1 or 2 employees. Also, because their outcomes often do not depend on their subordinates, individuals with power may focus the brunt of their attention elsewhere. Finally, dominance orientation may lead to a lack of attention in that individuals with a dominant personality may attempt to control their interactions with others and consequently ignore the actions and motivations of others during the interaction (Fiske & Morling, 1996). Regardless of its cause, decreased attention makes powerful individuals more likely to depend on stereotypes in interacting with subordinates (Fiske & Morling, 1996).
This model has not been directly tested on a range of performance evaluation situations but is intuitively appealing in its implications. If a powerful individual in an evaluator role does not attend to a subordinate and holds negative stereotypes about subordinates generally, then an evaluation of that subordinate might be negatively affected by stereotyping. This reasoning assumes that individuals in positions of power hold negative stereotypes about subordinates, an argument that may be supported in part by Western beliefs that power is given to people on the basis of their talents and skills (Goodwin, Fiske, & Yzerbyt, 1997). Thus, on the basis of their belief that power is earned, powerful individuals may be more likely to negatively stereotype those individuals with lower power than themselves.
Both of the previous models hold promise for explaining potential relations between power and performance evaluation. However, the basic question of what exactly the effect of power is on performance evaluation remains as yet unanswered. Some researchers have found negative effects of power on evaluation (Kipnis, 1972; Kipnis et al., 1976, 1981; Wilkinson & Kipnis, 1978), whereas other researchers have found power effects to be negligible (Lightner, Burke, & Harris, 1997; Pandey & Singh, 1987; Wexley & Snell, 1987). The range of findings in this area suggests that a synthesis is necessary, both to ascertain whether power actually affects performance evaluation and to lay a foundation for later testing of the mechanism(s) by which it affects evaluation.
Our purpose here was to conduct the first quantitative review of research on the effects of power on performance evaluation. We addressed three areas of primary interest in this meta-analysis:
1. Does the power position of the evaluator affect performance evaluation? We asked this question from two perspectives: (a) What is the effect of power on evaluations of a lower ranked other? and (b) What is the effect of power on self-evaluations? What is the size and direction of the effect? Based on both of the power theories mentioned earlier, one might expect that as power increases, evaluations of lower power others become increasingly negative. The available literature, however, provides no clear predictions concerning the effects of power on self-evaluation.
2. What theoretical variables moderate the effect of power on performance evaluation? For the purposes of this meta-analysis, a variable was considered a theoretical moderator if it would add to a more refined theoretical understanding of the nature of power. Theoretical moderators examined in this study include location of study, participant gender, and participant age. For example, location of study is considered a theoretical variable in that various cultural factors associated with different geographic regions may affect patterns of power usage.
3. Can differences in effect sizes between studies be explained by methodological variables? Methodological moderators studied were the laboratory responsible for the research, quality of study, strength of power manipulation, type of research, type of study, and year of study.
Method
Sample
The literature search began with a computer-based strategy. Searches were conducted on the PsycLIT (American Psychological Association, 1974-present) database for articles published since 1974. Also, computer searches were conducted on ABI Inform (University Microfilms International, 1975-present), a business database, for articles published since 1975. After these searches, the first author manually inspected the Journal of Applied Psychology (1960-1996) to ensure both that articles had not been missed in the computer searches due to improper keyword usage and that earlier articles in the area had been uncovered.
Following the collection of articles elicited in the previous searches, an ancestry approach of exploring their references was used to collect any previously undiscovered articles. The ancestry approach involves using the references of relevant articles to retrieve more articles. Then, the references of the newly retrieved articles are used to retrieve yet more articles, in a continuing process until no further useful references are discovered.
As a final step, the Social Sciences Citation Index (Institute for Scientific Information, 1981-1998) was used in a descendancy search involving several often-cited articles (Kipnis et al., 1976; Wexley & Snell, 1987; Wilkinson & Kipnis, 1978) in the already obtained literature. The descendancy approach involves perusing the Social Sciences Citation Index to retrieve a list of all the articles that cite a particular well-known study. The retrieval of articles discovered in the ancestry and descendancy searches was not limited by year of publication.
Criteria for inclusion and exclusion. Articles collected in the previous searches were chosen for inclusion or exclusion on the basis of both publication and content criteria. Studies included in this meta-analysis were published journal articles and available unpublished experiments. The content of a study also affected its inclusion or exclusion. To be included in this analysis, a study must have conceptualized power in a manner similar to the following definition: Power is the ability of an individual to exert more control or influence over another's outcomes than the other can exert over the controlling individual's outcomes (Fiske, 1993; Fiske & Morling, 1996). The study also had to address the effects of power on evaluations explicitly and either manipulate positional power differences or use preexisting power differences. Studies were included whether they measured real or perceived power differences. Investigations exploring status without mention of power were excluded from this review because individuals may possess differing status without the ability to exercise unequal amounts of influence over one another (Fiske, 1993). Finally, the study had to contain an actual evaluation of a subordinate, the self, or both by an individual(s) possessing a power advantage over the outcomes of others.
Coding
The first author (Georgesen) coded studies selected for inclusion along the following dimensions to investigate the moderating effects of both theoretical and methodological variables on the relation between evaluator power and performance evaluation. In addition, a second rater also coded the subjective variables included in this study: overall quality, strength of manipulation, and type of study. Initial interrater reliabilities, as measured by interrater correlations, varied: .40 for the strength of manipulation variable, .68 for overall quality, and 1.00 for the type of study variable. Due to the somewhat low reliabilities associated with the strength of manipulation and quality variables, these rating discrepancies were explored and resolved by conference.
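For readers who wish to compute such agreement indices, an interrater correlation of this kind reduces to a Pearson correlation between the two coders' ratings. The following is a minimal Python sketch of ours, not the authors' code; the rating vectors are invented for illustration.

```python
import numpy as np

# Hypothetical 1-4 ratings from two coders over the same set of studies
# (invented values for illustration; not the actual codes from this review).
rater1 = np.array([4, 3, 2, 4, 1, 3, 2, 4])
rater2 = np.array([3, 3, 2, 4, 2, 3, 1, 4])

# Interrater reliability as the Pearson correlation between the two coders.
r = np.corrcoef(rater1, rater2)[0, 1]
print(f"interrater r = {r:.2f}")
```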
The coding process began with each study being coded as to whether it measured self- or other-performance evaluation. Studies were also coded for location, percentage of male participants, and participants' age. Location, defined as whether the study was conducted in the United States or elsewhere, may moderate the effects of power on evaluation via unique cultural factors. For example, power may have larger effects on evaluation in individualistic rather than collectivistic cultures due to the emphasis placed on individual achievement in individualistic cultures. Coding for the percentage of male participants allows for examining potential gender effects on power roles.¹ The consideration of age is important in that one may become increasingly likely to hold more powerful positions with increasing age, which may strengthen any effects of power on other-evaluation. When possible, the median age of study participants was coded. Age was dropped as a potential moderator after initial coding, however, due to its complete redundancy with the type of study (in all cases, experimental investigations of power took place on college campuses). Also redundant with type of study, the type of power measured may moderate the effects of power on other-evaluation. Participants who actually possess power in a workplace setting may be more likely to derogate others in their evaluations than participants experiencing a contrived power difference stemming from an experimental manipulation.

¹Studies were not coded for gender more specifically (i.e., gender of supervisor) because the vast majority of the retrieved studies did not contain the necessary information. Available evidence from several studies in our own laboratory, which did code and analyze for supervisor gender, suggests that the supervisor's gender does not affect self- and other-evaluation.
Studies were also coded for several methodological variables that may moderate the relation between evaluator power and evaluation. Each study was coded by the raters on a 4-point scale ranging from 1 (very poor overall) to 4 (very good overall) for both overall study quality and strength of experimental manipulation (if applicable). The overall quality of the study was rated by considering issues such as statistical power, sample size, adequate controls, randomization, and other methodological factors that can affect study outcomes (Rosenthal, 1991).
For example, a study that had a large enough sample to investigate the question of interest with at least a moderate degree of power, randomly assigned participants to groups, conceptualized measures clearly, and performed the appropriate analyses would receive a quality rating of 4. A study that met all of the previous requirements except for one criterion would receive a quality rating of 3, and so forth down the rating scale, until a study that met only one of the previous criteria would receive a quality rating of 1.
The strength of an experimental manipulation was rated by considering the realism, believability, and participant involvement associated with the power manipulation. For example, a manipulation that led the participants to believe they were part of a supervisor-subordinate dyad in which the supervisor could negatively affect the subordinate's outcomes, directly involved both participants in their respective roles, and made it seem as if the power difference was affecting the results of the interactions would receive a strength rating of 4. A study that contained two of these elements but not three would be downrated to a strength of 3, a trend that could continue downward until a manipulation was rated as having a strength of 1 if it contained none of the previous elements.
The studies were also coded for author (laboratory responsible for the research), overall number of participants, nature of research (psychology or business publication), type of study (correlational, quasi-experimental, or experimental), and the year in which the article was published. These variables were chosen for inclusion because of their demonstrated effects on study outcomes in other meta-analyses (Rosenthal, 1991).
In addition to being coded for theoretical and methodological variables, each study was coded to yield one index of effect size and one significance level. This meta-analysis used the Pearson correlation coefficient r as the estimate of effect size. In studies with multiple outcome measures of other-evaluation, self-evaluation, or both, the effect size for each measure was transformed to its associated Fisher Zr and combined with the other Fisher Zrs to yield the mean Fisher Zr. This mean was then transformed back to r to yield the mean effect size for the study. In the case of one of the other-evaluation studies, not enough statistical information was included to calculate an effect size, so in line with Rosenthal's (1995) recommendations, that study was assigned an r of .00 and a p of .50.
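The within-study combination described above is straightforward to reproduce. Below is a minimal Python sketch of the Fisher Zr averaging procedure; the example correlations are invented for illustration and are not data from the review.

```python
import numpy as np

def mean_effect_size(rs):
    """Average several within-study correlations via Fisher's z.

    Each r is transformed to z_r = arctanh(r), the z_r values are
    averaged, and the mean is transformed back with tanh, following
    the procedure described in the text (Rosenthal, 1991).
    """
    zs = np.arctanh(np.asarray(rs, dtype=float))
    return np.tanh(zs.mean())

# Hypothetical study with three outcome measures of other-evaluation.
print(round(mean_effect_size([0.25, 0.30, 0.40]), 2))
```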
Results
The literature search yielded a total of 25 codable studies; 7 studies examined the effects of power on self-evaluation, and 18 studies examined the effects of power on other-evaluation. Tables 1 and 2 list the studies included in each category, their scores on the coded methodological and theoretical variables, and the effect size and significance level extracted from each study.
As mentioned, we analyzed these two groups of studies separately due to the differing predictions associated with their relation to power. For both groups of studies, all effects were in the predicted direction, with the exception of the other-evaluation study coded as having an effect size of r = .00 and p = .50 due to insufficient statistical information. Following the calculation and coding of individual effect sizes and significance levels (see Tables 1 and 2), these results were combined to yield a mean effect size and a mean significance level for each of the two groups of studies.
General Analyses
The overall effect size for each set of studies was calculated by obtaining the weighted mean Fisher Zr and transforming it back to the weighted mean r (Rosenthal, 1991). Due to the disparate sample sizes of the articles included in this meta-analysis, each study was weighted by its total degrees of freedom as a means of taking study size, and the reliability associated with larger samples, into account. Where possible, this weighting was used throughout the analyses.
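This weighting scheme can be sketched as follows. The function below is our illustration of a degrees-of-freedom-weighted mean effect size; the (r, df) pairs are invented, not the actual study data.

```python
import numpy as np

def weighted_mean_r(rs, dfs):
    """Combine study-level rs into one weighted mean effect size.

    Each r is converted to Fisher's z, the zs are averaged with each
    study's total degrees of freedom as the weight, and the weighted
    mean z is converted back to r (Rosenthal, 1991).
    """
    zs = np.arctanh(np.asarray(rs, dtype=float))
    w = np.asarray(dfs, dtype=float)
    return np.tanh(np.average(zs, weights=w))

# Invented study results: (r, df) pairs for illustration only.
rs, dfs = [0.45, 0.30, 0.55], [38, 120, 60]
print(round(weighted_mean_r(rs, dfs), 2))
```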
The weighted mean effect size associated with the combined self-evaluation studies was r = .45, indicating a medium to large effect of power on self-evaluation. As power levels increase, self-evaluations become increasingly positive in tone. When unweighted, the mean effect size for this set of studies was somewhat smaller, r = .38 (see Table 3). The median effect size, minimum and maximum effect sizes, quartile scores, and 95% confidence intervals for this set of studies were also computed and are reported in Table 3. Throughout the analyses, confidence intervals were calculated according to the procedures recommended by Rosenthal (1991), using weighted mean effect sizes. A stem-and-leaf plot (see Table 3) was constructed using the effect sizes of the self-evaluation studies to check for possible outliers affecting the results, but none were found.
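For concreteness, here is one common fixed-effect procedure for such confidence intervals, sketched in Python under the assumption of N - 3 weights. It follows the general logic of Rosenthal (1991) but is not necessarily the exact variant used in the article, and the inputs are invented.

```python
import numpy as np

def ci_for_mean_r(rs, ns, z_crit=1.96):
    """95% CI for a mean correlation via Fisher's z.

    A minimal fixed-effect sketch: each study's z gets weight N - 3,
    the standard error of the weighted mean z is 1/sqrt(sum(N - 3)),
    and the interval endpoints are transformed back to r.
    """
    zs = np.arctanh(np.asarray(rs, dtype=float))
    w = np.asarray(ns, dtype=float) - 3
    z_bar = np.average(zs, weights=w)
    se = 1 / np.sqrt(w.sum())
    return np.tanh(z_bar - z_crit * se), np.tanh(z_bar + z_crit * se)

# Invented (r, N) pairs for illustration only.
lo, hi = ci_for_mean_r([0.45, 0.30, 0.55], [40, 122, 62])
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")
```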
The weighted mean effect size associated with the combined other-evaluation studies was r = .29, indicating a roughly medium effect of power on other-evaluation.² As power levels increase, one's evaluations of others become more derogatory. When unweighted, the mean effect size did not vary, r = .29 (see Table 4). The median effect size, minimum and maximum effect sizes, quartile scores, and 95% confidence intervals were also calculated for this set of studies (see Table 4). Confidence intervals were calculated using the weighted mean effect size. A stem-and-leaf plot did not reveal any outlying effect sizes.
The combined significance levels for each of the two sets of studies were calculated with the Stouffer method (Rosenthal, 1991), again weighting by degrees of freedom. The combined significance level for the self-evaluation studies was significant, Z = 12.88, p < .001. The combined significance level for the other-evaluation studies was significant as well, Z = 11.07, p < .001. Thus, it is extremely unlikely that no relation exists between power and evaluation for either set of studies.
Following the previous computations, a chi-square test of heterogeneity of variance was computed for each set of studies to examine the consistency of results across studies (Rosenthal, 1991). The chi-square test for the self-evaluation studies indicated significant differences in effect sizes across studies, χ²(6, N = 7) = 21.26, p < .05. Significant differences in effect sizes were also found for the other-evaluation studies, χ²(17, N = 18) = 62.62, p < .01. These results suggested that various factors may moderate the effects of power on evaluation and prompted closer examination of the data.
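Rosenthal's (1991) heterogeneity test can be sketched as below; again, the (r, N) pairs are invented for illustration and the exact weights are our assumption.

```python
import numpy as np
from scipy.stats import chi2

def heterogeneity_test(rs, ns):
    """Chi-square test for heterogeneity of effect sizes.

    Following Rosenthal (1991): with z_j = arctanh(r_j) and weights
    N_j - 3, the statistic sum((N_j - 3) * (z_j - z_bar)**2) is
    referred to a chi-square distribution with k - 1 df.
    """
    zs = np.arctanh(np.asarray(rs, dtype=float))
    w = np.asarray(ns, dtype=float) - 3
    z_bar = np.average(zs, weights=w)
    q = np.sum(w * (zs - z_bar) ** 2)
    return q, chi2.sf(q, len(rs) - 1)

# Invented (r, N) pairs for illustration only.
q, p = heterogeneity_test([0.45, 0.10, 0.60], [40, 122, 62])
print(f"chi-square(2) = {q:.2f}, p = {p:.3f}")
```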
Effects of Moderating Variables
The moderating effects of the coded methodological and theoretical variables were assessed using both the contrast Z and correlational approaches (Rosenthal, 1991). Recall that one of the other-evaluation studies was assigned an effect size of zero due to insufficient statistical information. A set of analyses was also conducted excluding this study, thus treating it as missing data. The weighted mean effect size r for the other-evaluation studies was nearly identical, r = .30, with an unweighted mean effect size of r = .30 and a median of r = .31. Therefore, leaving out the effect size r of .00 did not alter our pattern of results, and we decided to continue its inclusion in subsequent analyses, the more conservative approach recommended by Rosenthal (1991).