Denison Survey Validation Study

2a. Denison Culture Survey Validation Study 0704-AALO 5.9.18.pdf

DLA Culture/Climate Survey

OMB: 0704-0575


European Journal of Work and Organizational Psychology
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/pewo20

Diagnosing organizational cultures: A conceptual and empirical review of culture effectiveness surveys
Daniel Denison (a), Levi Nieminen (b) & Lindsey Kotrba (b)
(a) International Institute for Management Development, Lausanne, Switzerland
(b) Denison Consulting, Ann Arbor, MI, USA

Version of record first published: 28 Aug 2012.

To cite this article: Daniel Denison, Levi Nieminen & Lindsey Kotrba (2012): Diagnosing organizational cultures: A conceptual and empirical review of culture effectiveness surveys, European Journal of Work and Organizational Psychology, DOI: 10.1080/1359432X.2012.713173
To link to this article: http://dx.doi.org/10.1080/1359432X.2012.713173


EUROPEAN JOURNAL OF WORK AND ORGANIZATIONAL PSYCHOLOGY
2012, 1–17, iFirst article

Diagnosing organizational cultures: A conceptual and empirical review of culture
effectiveness surveys
Daniel Denison1, Levi Nieminen2, and Lindsey Kotrba2
1 International Institute for Management Development, Lausanne, Switzerland
2 Denison Consulting, Ann Arbor, MI, USA
This review traces the development of survey research methods within the organizational culture tradition and focuses
specifically on those instruments that measure the aspects of culture that are related to organizational effectiveness. Our
review suggests that the reliability and validity of most instruments in this category is quite limited. This review outlines
the recommended logic for the development and validation of culture effectiveness surveys and identifies three key
challenges for future culture researchers to address: (1) the confirmatory testing of nested models, (2) the guidelines for
aggregating data to the organizational level, and (3) the establishing of criterion-related validity. Using data from the
Denison Organizational Culture Survey, we present an empirical illustration of the three challenges identified above and
conclude by considering limitations and opportunities for future research.
Keywords: Cross-level analysis; Organizational culture; Organizational effectiveness; Scale validation; Survey.

Since the early days of organizational culture research,
scholars interested in the impact of culture on
organizational effectiveness have faced a dilemma:
Case studies and theoretical models are plentiful, but
many of the core measurement issues required to do
comparative research on culture and effectiveness have
remained relatively undeveloped. Nonetheless, over the
past decade the number of instruments has grown
significantly (Jung et al., 2009), and research on the
link between culture and effectiveness has continued to
develop (Hartnell, Ou, & Kinicki, 2011; Lim, 1995;
Siehl & Martin, 1990; Wilderom, Glunk, & Maslowski,
2000). In a recent review, Sackmann (2011) identified
55 empirical studies, 45 of which had been published
during the last decade. Importantly, the review also
traces growing evidence supporting the direct effects of
organizational culture on organization-level financial
performance and effectiveness.
Given the progress that has occurred in the culture-effectiveness domain and with no signs of declining
interest for the foreseeable future (Ashkanasy,
Wilderom, & Peterson, 2011; Sackmann, 2011), it is
crucial that methodological research keeps pace with
the field’s substantive development. The goals of this
study were to closely examine the set of instruments
that have been advanced to better understand the
culture-effectiveness relationship and to highlight the
key issues and gaps in the research that has been
conducted to establish their reliability and validity.
Accordingly, our focus in this study was on effectiveness profiling instruments. By assessing facets of
culture directly linked to organizational effectiveness
outcomes, surveys of this type are the most direct,
diagnostic assessments of organizational culture.
However, prior reviews suggest that relatively little
attention has been paid to their systematic evaluation
(Ashkanasy, Broadfoot, & Falkus, 2000). It is
critically important that this limitation be addressed
in order to clarify the current state of measurement in
the culture-effectiveness domain, as well as to clarify
the appropriate set of methodological considerations
that future studies should attempt to satisfy.

Please address all correspondence to Daniel Denison, International Institute for Management Development, Chemin de Bellerive, 23,
Lausanne, Switzerland 1001. Email: denison@imd.ch
The authors gratefully acknowledge the support for this work provided by IMD Business School in Lausanne, Switzerland and by
Denison Consulting in Ann Arbor, MI, USA.
© 2012 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business
http://www.psypress.com/ejwop
http://dx.doi.org/10.1080/1359432X.2012.713173


This article begins by presenting an overview of the
use of survey instruments within the organizational
culture research tradition and examines the role of
surveys in the investigation of the link between
organizational culture and effectiveness. Next, we
summarize the conclusions of several key methodological reviews and provide an update, focusing
specifically on effectiveness profiling surveys. Based
on our update, we identify and describe several key
considerations that must be addressed in the validation process: (1) the confirmatory testing of nested
models, (2) the guidelines for aggregating data to the
organizational level, and (3) the establishing of
criterion-related validity. The final section provides
an empirical illustration of our approach to each of
these challenges using archival data from the Denison
Organizational Culture Survey.

MEASURING THE CHARACTERISTICS OF ORGANIZATIONAL CULTURES
Organizational culture was first described by Elliott
Jaques in his 1951 book, The Changing Culture of a
Factory. Jaques invoked culture—described as informal social structures—as a way to explain the
failure of formal policies and procedures to resolve
the unproductive dynamic between managers and
employees at the Glacier Metal Company. Andrew
Pettigrew (1979) reintroduced the concept to the field
by pointing to culture as the ‘‘social tissue’’ that
contributes to collective sense making in organizations (p. 574). Informal social structures and collective sense making are still reflected in modern
definitions of organizational culture. Schein’s (1992)
definition of culture is probably the most widely
accepted, but nearly all organizational scholars agree
that the core content includes the values, beliefs, and
assumptions that are held by the members of an
organization and the way in which they guide
behaviour and facilitate shared meaning (Alvesson,
2011; Denison, 1996; Smircich, 1983). The potential
for multiple cultures (or subcultures) within a single
organization is also generally acknowledged in
definitions (Martin, 1992; Martin & Meyerson, 1988).
Measurement perspectives on organizational culture have evolved greatly over time. Early scholarship
reflected the anthropological origins of the culture
concept and therefore emphasized qualitative, ethnographic research methods (Rousseau, 1990). Similarly, culture was conceptualized mainly from an emic
perspective, in which cultures are viewed as unique,
rather than etic perspective, in which cultures are
viewed as comparable. Hence, the historical and
epistemological forces guiding early scholarship
mainly discounted the possibility that organizational
cultures could be studied within a nomothetic
framework using standardized survey instruments
(Martin & Frost, 1996; Martin, Frost, & O’Neill,
2006; Trice & Beyer, 1993). The culture–climate
‘‘debate’’ also shaped researchers’ thinking about
the appropriateness of surveys, with methodological
preference seen as one factor in differentiating
culture, as a qualitative tradition, from climate, as a
quantitative tradition (Denison, 1996; Ostroff,
Kinicki, & Tamkins, 2003).
More recently, these ‘‘culture wars’’ have given
way to a more eclectic, multi-method perspective
among culture researchers (Ashkanasy et al., 2011;
Hofstede, Neuijen, Ohayv, & Sanders, 1990; Ostroff
et al., 2003; Rousseau, 1990; Sackmann, 2006). For
comparative research, surveys most often provide the
foundation for quantitative assessment and cross-organization comparison (Xenikou & Furnham,
1996). Additionally, surveys are less resource intensive than clinical or ethnographic methods, can
provide normative information about an organization’s culture, facilitate the benchmarking and
organizational change process, and allow for direct
replication (Ashkanasy et al., 2000; Cooke &
Rousseau, 1988; Tucker, McCoy, & Evans, 1990).
Researchers have generally acknowledged two
main limitations of survey methodologies: their
inability to access ‘‘deeper’’ cultural elements such
as symbolic meaning, semiotics, and fundamental
assumptions (Rousseau, 1990; Schein, 1992; Smircich,
1983; Van Maanen, 1988), and their use of a priori
content—predefined, standardized questions—which
may fail to capture the most relevant aspects of
culture in a given situation. In addition, the survey
approach also assumes that respondents’ perceptions
of the culture are meaningful when aggregated to the
group level (Sackmann, 2006). Thus, culture surveys
are most appropriate when the focus is on the
‘‘observable and measurable manifestations of culture’’, such as values and behavioural norms, and
when the research purpose calls for making comparisons across organizations using the same set of
culture concepts (Ashkanasy et al., 2000, p. 132). In
her review, Sackmann (2011) describes how the wide
variety of survey instruments used makes it difficult to
establish clear patterns across studies, instead creating
‘‘a rather broad and colorful picture of the link
between different culture dimensions and performance measures’’ (p. 196). This diversity is a healthy
form of pluralism, but it also represents several
challenges.

Theoretical diversity: the wide-ranging
content of organizational culture
Most culture surveys assess specific behavioural
norms and values to characterize an organization’s
culture (Ashkanasy et al., 2000). These specific norms


and values are grouped into meaningful themes or
dimensions, and often integrated into a model that
describes the interrelationship among those dimensions. But surveys differ significantly in their nominal
categorizations of the content of culture. Ott (1989)
revealed 74 unique dimensions, and van der Post, de
Coning, and Smit (1997) identified 114! Beyond the
superficial differences in labelling, several studies
have sought to determine the conceptual overlap
of culture dimensions across surveys (Delobbe,
Haccoun, & Vandenberghe, 2002; Detert, Schroeder,
& Mauriel, 2000; Ginevicius & Vaitkūnaitė, 2006;
Xenikou & Furnham, 1996).
For example, Xenikou and Furnham (1996) used a
quantitative approach to test the convergent validity
of the dimensions of the Organizational Culture
Inventory (OCI; Cooke & Lafferty, 1989), the Culture Gap Survey (Kilman & Saxton, 1983), the
Organizational Beliefs Questionnaire (OBQ; Sashkin,
1984), and the Corporate Culture Survey (Glaser,
1983). Their findings showed that the 30 dimensions
clustered into six factors. Detert et al. (2000)
expanded on this study by examining the dimensional
overlap among 25 culture frameworks. The resulting
model included eight broadly defined themes. These
studies provide support for the idea that the
dimensions assessed by different culture surveys can
often be described in terms of a simpler set of higher
order culture dimensions. Higher order frameworks
seem particularly useful in light of the difficulty of
accumulating research findings based on different
survey instruments (Sackmann, 2011). Xenikou and
Furnham also suggest that the broad themes identified in their research may provide a useful basis for
developing new scales. Ashkanasy et al. (2000)
describe some of the tradeoffs between simple and
more complex models.
One possible solution to this dilemma may be the
use of ‘‘nested’’ factor structures, in which survey
results are interpretable at more than one level of
specificity. First-order dimensions can be specific
enough to facilitate clear statements about behavioural norms and values, whereas the higher order
factors are broad enough to allow conceptual
linkages to other instruments and theoretical models.
Examples of culture surveys with a nested structure
include Cooke and Lafferty’s (1989) OCI, Woodcock
and Francis’s (1989) Organizational Values Questionnaire, and the Denison Organizational Culture
Survey (Denison & Neale, 1996).

The importance of research purpose
Differences in the content of the instruments often reflect the specific purpose of the research (Rousseau, 1990). For example, Ashkanasy et al. (2000) distinguished between typing and profiling instruments.
Typing instruments categorized organizations into
mutually exclusive culture types. For example, the
competing values framework identified four types of
cultures—clans, adhocracies, hierarchies, and markets (Quinn & Rohrbaugh, 1981). Ashkanasy and his
colleagues critiqued the typing approach, arguing
that it could lead to overly simplistic, stereotypical
views of culture. Furthermore, the proposition that
culture types are discrete has not received much
empirical support. A recent meta-analysis by Hartnell
et al. (2011) demonstrated moderate to strong
positive interrelationships among descriptors of
culture. These authors concluded that, ‘‘the CVF’s
culture types in opposite quadrants are not competing or paradoxical. Instead, they coexist and work
together’’ (p. 687).
Consistent with these findings, profiling instruments describe culture using a set of nonorthogonal
dimensions within a profile. Organizations can be
high or low on each dimension assessed, and the
pattern of scores across dimensions provides a
complex representation of an organization’s culture.
Ashkanasy et al. (2000) identified three types of
profiling instruments, each with a unique research
purpose. Person–culture fit measures are designed to
understand the value congruence between an individual and the organization and better understand how
these factors influence individual-level outcomes such
as effectiveness and turnover (e.g., O’Reilly, Chatman, & Caldwell, 1991). Descriptive measures focus
on differences in organizational cultures without
defining the impact that these differences have on
organizational effectiveness. Effectiveness instruments capture cultural differences that can help to
explain differences in the effectiveness of organizations (Sparrow, 2001).
Descriptive instruments typically focus on the
internal reliability and validity of the survey measures. In addition to this form of validity, effectiveness instruments must also demonstrate that the
dimensions are linked to organizational effectiveness.
Thus, effectiveness measures are generally more
focused than descriptive measures, retaining only
those dimensions with a strong theoretical or
empirical linkage to effectiveness outcomes (Ginevicius & Vaitkūnaitė, 2006; van der Post et al., 1997).
Effectiveness instruments are also normative. Purely
descriptive instruments may remain value-neutral,
but effectiveness instruments must be rooted in a
theory of how specific behavioural norms and values
lead to higher effectiveness.

Reliability and validity
Several authors have reviewed the reliability
and validity of culture surveys (e.g., Ostroff et al.,
2003; Sackmann, 2006; Scott, Mannion, Davies, &


Marshall, 2003; Walker, Symon, & Davies, 1996).
Ashkanasy et al. (2000) reviewed a sample of 18
culture surveys, finding that evidence of reliability
and validity was generally lacking for most instruments. No evidence was found for 10 of the
instruments, and two others reported only minimal
support. Only two instruments—the Organizational
Culture Profile (O’Reilly et al., 1991) and the OCI
(Cooke & Lafferty, 1989)—were supported in all
evidence types reviewed. Among the three effectiveness surveys reviewed, two possessed no evidence of
reliability or validity and the third possessed minimal
evidence. Overall, the most common evidence type
was criterion-related validity (available for 33% of
surveys) and the least common was consensual
validity (available for 22% of surveys).
More recently, Jung et al. (2009) presented a
comprehensive review of 70 culture instruments, 48 of
which were quantitative survey measures. The results
showed that evidence was available for only a
minority of the key reliability and validity criteria
by which culture surveys are evaluated. For example,
60% of all ‘‘judgements’’—across all surveys reviewed and all evaluative criteria considered—indicated that no statistical analyses could be located,
27% of judgements fell into the marginal or mixed
support category, and 13% of all judgements
indicated that an adequate level of evidence had
been attained. Across all surveys, predictive validity
was reported for 54% of surveys, and internal
consistency was reported for 46% of surveys. The
least commonly reported type of evidence included
test–retest reliability and convergent/discriminant
validity. These types of evidence were available for
only 10% of surveys.

LINKING CULTURE AND EFFECTIVENESS: A REVIEW OF EFFECTIVENESS PROFILING MEASURES
For this review, we identified six effectiveness surveys
from prior reviews and three additional surveys from
our review of the recent literature. Instruments were
reviewed according to the criteria of validity
described by Jung et al. (2009). Table 1 describes
the structure of the nine instruments and summarizes
reliability evidence. Table 2 summarizes validity
evidence.
This review shows more research evidence for
effectiveness instruments than identified by Ashkanasy et al. (2000), but it also points to several
problematic trends. Five of nine instruments had
little or no research following the initial publication,
including the three reviewed by Ashkanasy et al. (i.e.,
OBQ, Organizational Values Questionnaire, and
Organizational Culture Survey or OCS) and two
others—the OASIS Culture Questionnaire (Cowherd
& Luchs, 1988) and the Organization Assessment
Survey (OAS: Usala, 1996a, 1996b). Aside from the
two studies we located—one by Muldrow et al. (2002) reporting use of the OAS as part of a culture change intervention with two government agencies and the previously described study of the OBQ's convergence with other culture surveys (Xenikou & Furnham, 1996)—research interest in these five instruments seems to have halted altogether.

TABLE 1
Summary of reliability evidence for culture effectiveness profiling instruments

Instrument | Structure | Internal consistency (a) | Test–retest | Aggregation
Denison Organizational Culture Survey (Denison & Neale, 1996) | 60 items, 12 dimensions, 4 traits | .70 (Fey & Denison, 2003); .88 to .97 (Gillespie, Denison, Haaland, Smerek, & Neale, 2008) | n/a | Adequate rwg, ICC(1), and ICC(2) (Gillespie et al., 2008)
OASIS Culture Questionnaire (Cowherd & Luchs, 1988) | 33 items, 5 dimensions | n/a | n/a | n/a
Organizational Assessment Survey (Usala, 1996a) | 100 items, 17 dimensions | n/a | n/a | n/a
Organizational Beliefs Questionnaire (Sashkin, 1984) | 50 items, 10 dimensions | .35 to .78 (Xenikou & Furnham, 1996) | n/a | Low within-organization variance (Sashkin & Fulmer, 1985)
Organizational Culture Survey (van der Post et al., 1997) | 97 items, 15 dimensions | .79 to .93 (van der Post et al., 1997) | n/a | n/a
Organizational Culture Survey Instrument (Harris & Moran, 1984) | 99 items, 7 dimensions | n/a | n/a | n/a
Organizational Values Questionnaire (Woodcock & Francis, 1989) | 60 items, 12 values | n/a | n/a | n/a
Questionnaire of Dimensions of Organizational Culture (Ginevicius & Vaitkūnaitė, 2006) | 48 items, 12 dimensions | n/a | n/a | n/a
Value Performance Index (Schönborn, 2010) | 105 items, 13 dimensions | .71 to .94 (Schönborn, 2010) | n/a | n/a

References shown in italics are unpublished sources. (a) Values shown indicate lower and upper bounds of alphas reported across dimensions or factors.

TABLE 2
Summary of validity evidence for culture effectiveness profiling instruments

Instrument | Dimensionality | Convergent/discriminant validity | Cross-cultural application | Predictive validity | Sensitivity to change
Denison Organizational Culture Survey (Denison & Neale, 1996) | Factor analytic support for indexes (Bonavia, Gasco, & Tomás, 2009; Fey & Denison, 2003; Taylor, Levy, Boyacigiller, & Beechler, 2008); factor analytic support for second-order model (Gillespie et al., 2008) | Leadership (Block, 2003); commitment (Taylor et al., 2008); knowledge management, org. structure, strategy (Zheng, Yang, & McLean, 2010) | Asia, Australia, Brazil, Japan, Jamaica, and South Africa (Denison, Haaland, & Goelzer, 2003); Russia (Fey & Denison, 2003); Spain (Bonavia et al., 2009) | Longitudinal evidence linking culture to sales and customer satisfaction (Boyce, 2010); cross-sectional with "hard" performance metrics (Denison, 1984; Denison & Mishra, 1995; Gillespie et al., 2008); cross-sectional with perceived effectiveness outcomes (Denison et al., 2003; Fey & Denison, 2003) | Longitudinal study of 95 car dealerships (Boyce, 2010)
OASIS Culture Questionnaire (Cowherd & Luchs, 1988) | n/a | n/a | n/a | Case study demonstrating link between culture gap scores and profitability (Cowherd & Luchs, 1988) | n/a
Organizational Assessment Survey (Usala, 1996a) | Factor analytic support (Usala, 1996a, 1996b) | n/a | n/a | n/a | Two case studies demonstrating change over time (Muldrow, Buckley, & Schay, 2002)
Organizational Beliefs Questionnaire (Sashkin, 1984) | n/a | Other culture questionnaires (Xenikou & Furnham, 1996) | n/a | n/a | n/a
Organizational Culture Survey (van der Post et al., 1997) | Factor and content analysis (van der Post et al., 1997) | Job satisfaction, personality (Liebenberg, 2007; Strydom & Roodt, 2006); mentoring (Rieker, 2006) | Australia (Erwee, Lynch, Millett, Smith, & Roodt, 2001) | 15/15 dimensions correlated with financial performance composite (van der Post, de Coning, & Smit, 1998) | n/a
Organizational Culture Survey Instrument (Harris & Moran, 1984) | n/a | n/a | n/a | n/a | n/a
Organizational Values Questionnaire (Woodcock & Francis, 1989) | n/a | n/a | n/a | n/a | n/a
Questionnaire of Dimensions of Organizational Culture (Ginevicius & Vaitkūnaitė, 2006) | EFA with little support for dimensional structure (Aydin & Ceylan, 2008, 2009; Ginevicius & Vaitkūnaitė, 2006) | Employee satisfaction (Aydin & Ceylan, 2008; Ginevicius & Vaitkūnaitė, 2006) | n/a | 2/4 factors correlate with overall performance index (Ginevicius & Vaitkūnaitė, 2006); 10/10 dimensions correlate with perceived performance composite (Aydin & Ceylan, 2009) | n/a
Value Performance Index (Schönborn, 2010) | EFA to define dimension structure (Schönborn, 2010) | n/a | n/a | 7/13 dimensions correlated with dichotomous performance composite (Schönborn, 2010) | n/a

References shown in italics are unpublished.


Research interest appears to have been somewhat
stronger for the OCS (van der Post et al., 1997). The
OCS was developed through an extensive literature
review and synthesis of 114 dimensions of culture. A
preliminary version of the survey, including 169 items
along 15 synthesized dimensions, was administered to
408 employees from eight organizations. Item reliability analyses were used to reduce the total number of
items to 97. Factor analysis of these items supported
the presence of 15 correlated factors (van der Post
et al., 1997). A second study by van der Post et al.
(1998) provided evidence of criterion-related validity
between the OCS dimensions and financial performance in 49 organizations. Unfortunately, few details
regarding the factor analysis, the sampling methods,
the number of survey respondents, and the aggregation of culture scores were provided in the second
study. Furthermore, this second study relied on two or
three managers per organization to provide a representative assessment of the organization’s culture.
Four studies have used the OCS since its development. Erwee et al. (2001) used the OCS, which was
developed in South Africa, with a sample of 326
managers from the Australian Institute of Management. Based on their analysis of reliability, the authors
concluded that the OCS was valid in the Australian
context. But an exploratory factor analysis and an
alpha coefficient of .99 supported a single-factor
solution rather than the 15-factor solution proposed
by van der Post et al. (1997). More recent studies by
Strydom and Roodt (2006) and Liebenberg (2007) have
used the OCS to link employee satisfaction and affect
with perceptions of organizational culture. One additional individual-level study by Rieker (2006) linked
the OCS dimensions to the quality of formal mentorship relationships in two US Airforce organizations.
Clearly, additional research is needed to establish
the validity of the OCS and clarify the number of
factors. The high internal consistency and single-factor solution reported by Erwee et al. (2001) call
into question whether multiple concepts are indeed
measured (Boyle, 1991). The model’s predictive
validity also requires a larger and more representative
sample than that reported by van der Post et al.
(1998). The OCS also raises questions about validity
at the aggregate level. With the exception of the
original study by van der Post et al., none of the other
studies assessed organizational culture at the aggregate level. They focused on individuals’ perceptions
of organizational culture.
Two other instruments reviewed were in early
stages of development and validation. The
Questionnaire of Dimensions of Organizational
Culture developed by Ginevicius and Vaitkūnaitė
(2006) is based on a comprehensive review of the
dimensions from other instruments that were correlated with effectiveness outcomes. The authors used
the 12 dimensions and 48 items from their review to
produce the final instrument. A preliminary factor
analysis based on individual respondents from 23
organizations supported a four-factor model, and
correlational analyses provided mixed support for the
four factors predicting subjective performance ratings
and employee satisfaction. Subsequent studies by
Aydin and Ceylan (2008, 2009) reported significant
positive correlations between overall culture and
employee satisfaction, and between dimensions of
culture and perceived organizational effectiveness.
The Value Performance Index (VPI: Schönborn,
2010) was constructed to assess the three levels of
culture specified by Schein (1992). The initial survey
with 135 items was administered to 2873 managers
from 46 companies in three European countries.
Based on an exploratory factor analysis, 13 dimensions were identified. Correlational analyses demonstrated significant predictive relationships with a
dichotomous composite index of financial performance for seven of the 13 dimensions. As with
Ginevicius and Vaitkūnaitė's (2006) instrument, the
VPI has significant potential as a predictive tool, but
also underscores several key challenges that warrant
further attention, including the use of individual
rather than organization-level analysis and the use of
manager-only samples that may not be representative
of the total organizations studied.
The final instrument reviewed in our update is the
Denison Organizational Culture Survey (DOCS;
Denison & Neale, 1996). Based on the amount of
research that the DOCS has generated, it is clear that
this instrument has advanced well beyond the initial
stages of scale development. Reviewing the high
volume of unpublished dissertations and technical
reports—we count over 30 dissertations alone—is
beyond the scope of this manuscript, so we have
focused primarily on the published research in our
discussion in this article.

The Denison Organizational Culture Survey
The development of the DOCS occurred in tandem
with the development of a theory linking four key
cultural traits to organizational effectiveness: involvement, consistency, adaptability, and mission
(Denison & Mishra, 1995). These traits, presented
in Table 3, grew from a line of research by Denison
and colleagues that combined qualitative and
quantitative methods to examine the cultural characteristics of high and low performing organizations (Denison, 1984, 1990; Denison et al., 2003;


TABLE 3
Definitions of culture traits and indexes from the DOCS
Effectiveness traits and corresponding index definitions
Involvement concerns the personal engagement of individuals within the organization and reflects a focus on the internal dynamics of the
organization and on flexibility.
Empowerment—Individuals have the authority, initiative, and ability to manage their own work. This creates a sense of ownership
and responsibility towards the organization.
Team orientation—Value is placed on working cooperatively towards common goals for which all employees feel mutually
accountable. The organization relies on team effort to get work done.
Capability development—The organization continually invests in the development of employees’ skills in order to stay competitive
and meet ongoing business needs.
Consistency refers to shared values, and efficient systems and processes and reflects an internal and stable focus.
Core values—Members of the organization share a set of values which create a sense of identity and a clear set of expectations.
Agreement—Members of the organization are able to reach agreement on critical issues. This includes both the underlying level of
agreement and the ability to reconcile differences when they occur.


Coordination and integration—Different functions and units of the organization are able to work together well to achieve common
goals. Organizational boundaries do not interfere with getting work done.
Adaptability refers to employees’ ability to understand what the customer wants, to learn new skills, and to change in response to demand. The
focus of adaptability is external and flexible.
Creating change—The organization is able to create adaptive ways to meet changing needs. It is able to read the business
environment, react quickly to current trends, and anticipate future changes.
Customer focus—The organization understands and reacts to their customers and anticipates their future needs. It reflects the
degree to which the organization is driven by a concern to satisfy their customers.
Organizational learning—The organization receives, translates, and interprets signals from the environment into opportunities for
encouraging innovation, gaining knowledge, and developing capabilities.
Mission refers to an organization’s purpose and direction, and reflects a focus external to the organization and on stability.
Strategic direction and intent—Clear strategic intentions convey the organization’s purpose and make it clear how everyone can
contribute and ‘‘make their mark’’ on the industry.
Goals and objectives—A clear set of goals and objectives can be linked to the mission, vision, and strategy, and provide everyone
with a clear direction in their work.
Vision—The organization has a shared view of a desired future state. It embodies core values and captures the hearts and minds of
the organization’s people, while providing guidance and direction.

Denison & Mishra, 1995; Fey & Denison, 2003).
These studies support the idea that the highest
performing organizations find ways to empower and
engage their people (involvement), facilitate coordinated actions and promote consistency of behaviours
with core business values (consistency), translate the
demands of the organizational environment into
action (adaptability), and provide a clear sense of
purpose and direction (mission).
These four individual characteristics have a long
history among organizational researchers interested
in the characteristics of high performance organizations (e.g., Katz & Kahn, 1978; Kotter & Heskett,
1992; Lawler, 1986; Mintzberg, 1989; Selznick, 1957;
Spreitzer, 1995, 1996). In Denison’s model, these
traits are organized into a framework that draws on
both classic and contemporary theories of the
dynamic tensions underlying organizational functioning and effectiveness (Denison & Spreitzer, 1991;
Katz & Kahn, 1978; Lawrence & Lorsch, 1967;
Parsons, 1951; Quinn & Cameron, 1988). As Schein
(1992) has noted, effective organizations need to solve
two problems at the same time: external adaptation
and internal integration. The dimensions of stability
and flexibility and internal and external focus are
used to frame these four concepts in a way that
captures how organizations balance these dynamic
tensions. For example, mission and consistency
provide support for stability, whereas adaptability
and involvement provide support for flexibility
(Denison & Mishra, 1995).
This framework is based on the same dimensions as
the competing values framework (CVF) advanced by
Quinn and colleagues (Quinn & Cameron, 1988;
Quinn & Rohrbaugh, 1981), but maintains a few
important differences. One key difference is that the
CVF, originally developed as a leadership framework,
has led primarily to assessments of culture types, in
contrast to the DOCS’s use of a profile approach. This
key choice has several implications. The CVF is
designed to identify the organizational type, whereas
the trait model developed by Denison and colleagues
focuses on the balance among cultural elements. Their
model proposes that it is not only possible for an


organization to display strong internal and external
values and the capabilities for both stability and
flexibility, but that the most effective organizations
are those that display ‘‘full’’ profiles as indicated by
high levels of all four traits (Denison, 1990).
Another important difference is the second-order
measurement model. Each trait is assessed by three
indexes, each of which operationalizes a specific facet
of the trait at the measurable level of manifest behaviours and values. The survey consists of 60 items or
five items per index. With the second-order model,
information is provided at two levels of abstraction.
The indexes are designed to measure 12 understandable and actionable content areas (e.g., team orientation, customer focus, goals and objectives), whereas
the traits organize these concepts into broader
principles that are portable across organizational
contexts and support the theoretical grounding of
the model and instrument (Denison & Mishra, 1995).
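To make the two-level scoring logic concrete, the sketch below (in Python) computes index and trait scores as simple means. The trait-to-index mapping follows Table 3; the assignment of the 60 items to indexes in blocks of five consecutive items is a hypothetical ordering used only for illustration.

```python
# Illustrative sketch only: computes DOCS-style index and trait scores as simple
# means from a respondents-by-items DataFrame with columns item_1 .. item_60.
# The trait-to-index mapping follows Table 3; the item-to-index assignment
# (five consecutive items per index) is a hypothetical ordering for illustration.
import pandas as pd

TRAITS = {
    "involvement": ["empowerment", "team_orientation", "capability_development"],
    "consistency": ["core_values", "agreement", "coordination_integration"],
    "adaptability": ["creating_change", "customer_focus", "organizational_learning"],
    "mission": ["strategic_direction", "goals_objectives", "vision"],
}

# Hypothetical item blocks: the k-th index covers items 5k+1 .. 5k+5.
INDEX_ITEMS = {
    name: [f"item_{5 * k + i}" for i in range(1, 6)]
    for k, name in enumerate(idx for members in TRAITS.values() for idx in members)
}

def score_docs(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one row per respondent with 12 index scores and 4 trait scores."""
    scores = pd.DataFrame(index=responses.index)
    for index_name, items in INDEX_ITEMS.items():
        scores[index_name] = responses[items].mean(axis=1)  # index = mean of its 5 items
    for trait, members in TRAITS.items():
        scores[trait] = scores[members].mean(axis=1)        # trait = mean of its 3 indexes
    return scores
```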
The link between the culture measures and effectiveness outcomes was central in the early development
of the survey. Qualitative research helped focus
attention on the cultural characteristics of effective
organizations and helped to develop the quantitative
measures. The earliest research focused mainly on the
bottom-up aspects of culture and their connection to
bottom-line financial performance metrics (Denison,
1984). These concepts evolved into the involvement and
consistency traits. Further qualitative research helped
balance these internal traits, leading to the addition of
the externally focused traits of mission and adaptability.
Denison and Mishra (1995) provided the first empirical
test of these four traits with data from 764 organizations. This study provided initial evidence of the
predictive validity of the four culture traits with a
variety of performance indicators and also supported
the idea that different cultural traits influence different
aspects of effectiveness. Profitability outcomes were
strongest in stable cultures with a strong sense of
mission and consistency, and growth outcomes were
strongest in flexible cultures with high levels of
involvement and adaptability. No other effectiveness
profile that we could find has established differential
prediction of effectiveness outcomes.
More recent studies have demonstrated predictive
validity across industry and national boundaries.
Gillespie et al. (2008) and Boyce (2010) showed a link
to customer satisfaction and sales growth over time
among home construction firms and franchise car
dealerships. Fey and Denison (2003), Denison et al.
(2003), Denison, Lief, and Ward (2004), and Bonavia
et al. (2009) examined the survey’s validity with
organizational samples from nine countries outside
the USA. For example, comparisons between Asian
organizations and ‘‘the rest of the world’’ indicated
similar mean levels and predictive patterns between
the indexes and effectiveness outcomes, although the
authors also provided examples of how the expression of specific values and behaviour can vary
somewhat across contexts (Denison et al., 2003). A
second study comparing US and Russian organizations demonstrated the importance of all four traits in
both contexts but also indicated that flexibility and
involvement were more highly correlated with overall
perceptions of effectiveness than was mission in the
dynamic Russian environment (Fey & Denison,
2003). Together, these studies provide initial evidence
that the DOCS has been translated to several other
languages and applied with similar support for
reliability and validity. Nonetheless, there are of
course many unresolved issues regarding application
in different national contexts.
Despite the strong empirical support for the
validity of this survey there are also a number of
‘‘gaps’’ in the evidence. Several of the studies applied
different versions of the current 60-item DOCS
(Denison & Mishra, 1995; Fey & Denison, 2003).
Gillespie et al. (2008) and Kotrba et al. (2012) have
presented the best evidence of the second-order factor
solution, providing evidence of a good fit to the data.
Several of these studies also used single-respondent
samples or manager-only samples (Fey & Denison,
2003; Denison & Mishra, 1995). Although the
literature reveals many studies that rely on a small
number of respondents (e.g., Birkinshaw, Hood, &
Jonsson, 1998; Delaney & Huselid, 1996; Delery &
Doty, 1996; Geringer & Hebert, 1989), it raises
obvious questions about the representativeness of
these samples and cannot capture the level of
agreement throughout the organization.
This discussion has identified three key challenges
for diagnostic assessments of organizational culture.
First, they must pass a psychometric test to make
certain that individual respondents can discern the
underlying structure proposed by the theory. Second,
the respondents within each organization must show
a high level of agreement in order to claim that
organizational characteristics are being measured.
And third, the organizational level characteristics
must show a close link to the organizational level
outcomes suggested by the model. Next, we evaluate
the DOCS with respect to these considerations.

AN EMPIRICAL ILLUSTRATION OF THE THREE KEY CHALLENGES
This section presents a set of analyses based on data
from 160 companies from a variety of industries and
geographic locations. These organizations completed
the DOCS between 1997 and 2001. The organizations
in the sample were generally large, ranging from 10
organizations with more than 200,000 employees to
11 organizations with fewer than 1000 employees.


The annual revenue also varied, ranging from 11 with
more than 50 billion US dollars to seven with under
100 million US dollars. A number of smaller private
firms were also included. In total, 35,474 individuals
completed the DOCS, with at least 25 respondents
sampled per organization. Response rates ranged
from 48% to 100%, with an average of 60%, well


within the range recommended in the management
literature (Baruch, 1999). The specific samples drawn
from each organization varied. Some organizations
surveyed all members and others surveyed specific
divisions, locations, and levels. Table 4 summarizes
the organizational characteristics and demographics
for individuals in the final sample.

TABLE 4
Demographic characteristics of organizational and respondent sample

Organizational category: n (% of sample)
Country: Australia 3 (1.9); Canada 5 (3.1); France 2 (1.3); Germany 4 (2.5); Great Britain 8 (5.0); India 2 (1.3); Japan 5 (3.1); The Netherlands 2 (1.3); Norway 1 (0.6); Sweden 1 (0.6); Switzerland 8 (5.0); United States 119 (74.4)
Industry: Basic materials 23 (14.4); Consumer cyclical 19 (11.9); Consumer staples 22 (13.8); Health care 17 (10.6); Energy 1 (0.6); Financials 17 (10.6); Capital goods 17 (10.6); Technology 25 (15.6); Pharmaceuticals 1 (0.6); Communication services 10 (6.3); Utilities 7 (4.4); Transportation 1 (0.6)
Employee population (a): Fewer than 1000: 11 (7.2); 1000 to 5000: 26 (17.0); 5001 to 10,000: 12 (7.8); 10,001 to 20,000: 16 (10.5); 20,001 to 50,000: 30 (19.6); 50,001 to 100,000: 28 (18.3); 100,001 to 200,000: 20 (13.1); More than 200,000: 10 (6.5)
Organizational revenue (b): Under $100 million: 7 (5.3); $100 million–$1 billion: 17 (13.0); $1 billion–$5 billion: 35 (26.7); $5 billion–$10 billion: 14 (10.7); $10 billion–$20 billion: 15 (11.5); $20 billion–$30 billion: 18 (13.7); $30 billion–$50 billion: 14 (10.7); More than $50 billion: 11 (8.4)

Demographic category: n (% of sample)
Age: <20: 22 (0.1); 20–29: 3,006 (8.5); 30–39: 8,034 (22.6); 40–49: 7,680 (21.6); 50–59: 3,650 (10.3); >60: 283 (0.8); No response: 12,799 (36.1)
Gender: Male 14,104 (39.8); Female 8,369 (23.6); No response 13,001 (36.6)
Educational level: High school 2,059 (5.8); Some college 3,983 (11.2); Associate degree 1,910 (5.4); Bachelor's degree 7,231 (20.4); Some graduate work 1,894 (5.3); Master's degree 4,115 (11.6); Doctoral degree 710 (2.0); Other 266 (0.7); No response 13,306 (37.5)
Function: Finance and accounting 2,033 (5.7); Engineering 1,863 (5.3); Manufacturing and production 1,928 (5.4); Research and development 1,548 (4.4); Sales and marketing 5,083 (14.3); Purchasing 864 (2.4); Human resources 917 (2.6); Administration 1,031 (2.9); Support staff 1,973 (5.6); Professional staff 1,820 (5.1); No response 16,414 (46.3)
Organizational level: Nonmanagement 9,018 (25.4); Line management 4,960 (14.0); Middle management 4,765 (13.4); Senior management 1,031 (2.9); Executive/Senior Vice President 280 (0.8); CEO/President 71 (0.2); Owner 12 (0.0); No response 15,337 (43.2)
Years with organization: Less than 6 months 1,042 (2.9); 6 months to 1 year 1,432 (4.0); 1 to 2 years 2,315 (6.5); 2 to 4 years 3,093 (8.7); 4 to 6 years 2,017 (5.7); 6 to 10 years 2,952 (8.3); 10 to 15 years 2,998 (8.5); More than 15 years 5,989 (16.9); No response 13,636 (38.4)

(a) Information on employee population was unavailable for seven organizations. (b) Information on organizational revenue was unavailable for 29 organizations.


Surveys with missing data on any of the 60 items
were excluded from this analysis. All items used a
5-point Likert-type scale ranging from 1 = "strongly disagree" to 5 = "strongly agree". Respondents also
rated the organization on the following six dimensions of effectiveness relative to similar companies:
sales/revenue growth, market share, profitability/
ROA, quality of goods and services, new product
development, and employee satisfaction. These items
were rated on a 5-point Likert-type scale ranging
from 1 = "low performer" to 5 = "high performer".
Although less attractive than objective indicators,
past researchers have demonstrated that subjective
measures of organizational effectiveness can be useful
proxies for objective sales or profitability data (Baer
& Frese, 2003; Guthrie, 2001; Wall et al., 2004).

The confirmatory testing of nested models
We considered two key pieces of evidence to test the
nested models. First, we examined the internal
consistency of the 12 indexes to determine if the
5-item subsets held as reliable scales. Second, we used
confirmatory factor analysis to see if the pattern of
relationships between the observed variables and
latent traits support the hierarchical structure of the
proposed model.
Table 5 presents the results for the first step in the
analysis. Alpha coefficients for the indexes ranged
from .70 to .85 indicating an acceptable level of
internal consistency (Nunnally, 1978). Item-total
correlations exceeded .50 for over two-thirds of the
60 items in the survey. Item 15 from the capability
development index (‘‘Problems often arise because we
do not have the skills necessary to do the job’’)
showed an unusually low item-to-total correlation of
.23. This negatively worded item was retained
because (1) the alpha coefficient for the index itself
still reaches an acceptable level of .70, and (2) the
item was judged to have adequate content validity
based on its fit with the definition provided for this
index. Table 6 presents the correlations between
indexes. Values ranged from .45 to .74 with an overall
mean correlation of .59.
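A minimal sketch of these internal-consistency checks, assuming a respondents-by-items DataFrame that holds the five items of one DOCS index (column names are hypothetical):

```python
# Sketch of the internal-consistency checks reported in Table 5: Cronbach's alpha
# for one five-item index and corrected item-total correlations (each item against
# the sum of the remaining four items). `index_items` is assumed to be a DataFrame
# whose columns are the five items of a single DOCS index (hypothetical names).
import pandas as pd

def cronbach_alpha(index_items: pd.DataFrame) -> float:
    k = index_items.shape[1]
    item_variances = index_items.var(axis=0, ddof=1)
    total_variance = index_items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total(index_items: pd.DataFrame) -> pd.Series:
    total = index_items.sum(axis=1)
    return pd.Series({col: index_items[col].corr(total - index_items[col])
                      for col in index_items.columns})
```

Applied to each of the 12 five-item subsets in turn, this yields alpha coefficients and corrected item-total correlations of the kind summarized in Table 5.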
Next, a second-order confirmatory factor model
was tested using the 60 items from the DOCS as
observed variables, the 12 indexes as first-order
factors, and the four higher order traits as second-order factors. Figure 1 presents the second-order
model with the best fit to the data. Item loadings
generally fell in the .60 to .75 range, indicating
considerable shared variance within those items
intended to measure the same underlying concepts.
Second-order factor loadings (indexes loading on
traits) and intercorrelations range from the low .70s
to the mid-.90s, indicating overlap in the variance
explained by the first-order factors (indexes) and

TABLE 5
Alpha coefficients and descriptive statistics for the DOCS

Item | Item-total correlation | Mean | SD

Involvement: Empowerment (α = .76)
1 | .43 | 3.94 | 0.81
2 | .59 | 3.13 | 1.01
3 | .57 | 3.11 | 1.07
4 | .56 | 3.24 | 0.98
5 | .51 | 3.13 | 1.04
Involvement: Team orientation (α = .82)
6 | .56 | 3.53 | 1.00
7 | .70 | 3.47 | 1.02
8 | .61 | 3.31 | 1.06
9 | .63 | 3.46 | 1.01
10 | .54 | 3.24 | 0.98
Involvement: Capability development (α = .70)
11 | .43 | 3.39 | 1.03
12 | .54 | 3.31 | 0.95
13 | .56 | 3.45 | 1.05
14 | .56 | 3.62 | 0.98
15 | .23 | 3.30 | 1.08
Consistency: Core values (α = .71)
16 | .47 | 3.13 | 1.03
17 | .39 | 3.34 | 0.94
18 | .61 | 3.47 | 1.01
19 | .36 | 3.74 | 0.94
20 | .51 | 3.84 | 0.92
Consistency: Agreement (α = .74)
21 | .54 | 3.42 | 0.94
22 | .41 | 3.50 | 0.94
23 | .60 | 2.94 | 0.91
24 | .47 | 3.09 | 0.96
25 | .50 | 3.15 | 0.97
Consistency: Coordination and integration (α = .78)
26 | .43 | 3.22 | 1.00
27 | .60 | 3.03 | 1.00
28 | .62 | 2.70 | 0.98
29 | .53 | 3.01 | 1.08
30 | .59 | 3.20 | 0.93
Adaptability: Creating change (α = .76)
31 | .56 | 2.82 | 1.04
32 | .53 | 3.29 | 0.99
33 | .61 | 3.37 | 0.96
34 | .46 | 2.82 | 0.99
35 | .48 | 3.21 | 0.87
Adaptability: Customer focus (α = .74)
36 | .57 | 3.34 | 0.91
37 | .60 | 3.48 | 0.93
38 | .49 | 3.01 | 1.03
39 | .53 | 3.44 | 1.01
40 | .36 | 3.57 | 1.00
Adaptability: Organizational learning (α = .74)
41 | .52 | 3.34 | 0.98
42 | .52 | 3.04 | 1.04
43 | .46 | 2.79 | 1.08
44 | .46 | 3.73 | 0.93
45 | .56 | 2.76 | 1.02
Mission: Strategic direction and intent (α = .86)
46 | .70 | 3.63 | 0.99
47 | .51 | 3.24 | 0.96
48 | .75 | 3.48 | 0.96
49 | .80 | 3.44 | 1.00
50 | .67 | 3.29 | 1.15
Mission: Goals and objectives (α = .80)
51 | .60 | 3.24 | 0.92
52 | .56 | 3.38 | 0.97
53 | .58 | 3.70 | 0.86
54 | .56 | 3.67 | 0.91
55 | .60 | 3.37 | 0.97
Mission: Vision (α = .79)
56 | .63 | 3.05 | 0.98
57 | .65 | 3.32 | 1.00
58 | .41 | 2.59 | 0.99
59 | .60 | 3.02 | 0.99
60 | .60 | 3.10 | 0.93

N = 35,474.


TABLE 6
Correlation matrix for the indexes of the DOCS

Indexes: 1. Empowerment; 2. Team orientation; 3. Capability development; 4. Core values; 5. Agreement; 6. Coordination and integration; 7. Creating change; 8. Customer focus; 9. Organizational learning; 10. Strategic direction and intent; 11. Goals and objectives; 12. Vision

Index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
2 | .74
3 | .64 | .66
4 | .61 | .61 | .57
5 | .63 | .65 | .61 | .64
6 | .61 | .63 | .55 | .57 | .65
7 | .57 | .58 | .57 | .47 | .58 | .60
8 | .49 | .50 | .48 | .45 | .49 | .48 | .54
9 | .65 | .66 | .65 | .58 | .66 | .63 | .65 | .54
10 | .58 | .58 | .58 | .58 | .57 | .58 | .56 | .50 | .61
11 | .61 | .61 | .59 | .60 | .60 | .61 | .57 | .52 | .63 | .74
12 | .60 | .60 | .60 | .57 | .61 | .62 | .61 | .52 | .68 | .73 | .71

N = 35,474. All correlations are statistically significant, p < .01.

Model fit was evaluated using several fit indices,
including RMSEA (Hu & Bentler, 1998), GFI
(Jöreskog & Sörbom, 1989), NFI (Bentler & Bonnett,
1980), and CFI (Bentler, 1990). These results are also
presented in Figure 1.
In general, these values indicate good fit for the
second-order model, with RMSEA, NFI, and CFI
values meeting recommended guidelines. GFI was
slightly lower than the recommended cutoff, but the
collection of indices as a whole suggest that the model
closely fits the data. We also tested two alternative
models, to confirm that this second-order model provided the best fit. The first alternative model excluded
the 12 first-order factors—the culture indexes—so
that the 60 items were forced to load directly onto the
four latent traits. The second alternative model forced
60 items to load directly onto a single latent factor,
eliminating the four culture traits. As shown in
Figure 1, both of these alternative models produced
a worse fit, indicating that the second-order hierarchical model represents the best fit with the data.
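The comparison of nested models rests on a chi-square difference test; a minimal sketch using the fit statistics reported in the Figure 1 caption:

```python
# Sketch of the nested-model comparison: the chi-square difference between the
# second-order model and the first alternative (items loading directly on four
# trait factors), referred to a chi-square distribution with the difference in
# degrees of freedom. The values are those reported in the Figure 1 caption.
from scipy.stats import chi2

chisq_second_order, df_second_order = 122_715.83, 1692
chisq_alternative, df_alternative = 157_276.98, 1704

delta_chisq = chisq_alternative - chisq_second_order  # 34,561.15
delta_df = df_alternative - df_second_order           # 12
p_value = chi2.sf(delta_chisq, delta_df)              # effectively zero, i.e., p < .001

print(f"Delta chi-square({delta_df}) = {delta_chisq:,.2f}, p = {p_value:.3g}")
```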

Evidence for aggregation to the
organizational level
Aggregating individual responses to create an organizational-level variable requires that those ratings
are sufficiently homogeneous (Dansereau & Alutto,
1990; Klein et al., 2000). Several statistical methods
are available for assessing the homogeneity of
responses within groups (Peterson & Castro, 2006),
such as a within and between analysis (WABA;
Markham, Dansereau, Alutto, & Dumas, 1983), rwg
for single-item measures or rwg(j) for multi-item
measures (James, Demaree, & Wolf, 1984), and
indices of reliability such as ICC(1) and ICC(2)
(Shrout & Fleiss, 1979). As is routinely reported in
the culture domain, we focus on agreement and
reliability statistics (e.g., Gillespie et al., 2008; Kotrba
et al., 2012). rwg(j) was computed for each organization as a function of the five items in each index of the
DOCS and based on deviation from the uniform
response distribution (Lindell, Brandt, & Whitney,
1999). Values greater than .70 have generally been
recognized as sufficient response consistency to justify
aggregating individual responses to the group level
(Klein et al., 2000). ICC(1) and ICC(2) were computed as omnibus indexes of intraorganizational
reliability at the index level. ICC(1) indicates the
proportion of total variance attributable to organization membership, and ICC(2) indicates the extent to
which organizations are reliably differentiated by the
measure (Bryk & Raudenbush, 1992). F-values from
random effects one-way ANOVAs provide a statistical significance test for the ICC(1) values.
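A compact sketch of these aggregation statistics, assuming a long-format DataFrame with an organization identifier, the five item columns of one index, and that index's scale score (column names are hypothetical); rwg(j) uses the uniform-distribution null for a 5-point scale, and ICC(1)/ICC(2) follow the one-way random-effects ANOVA formulas:

```python
# Sketch of the aggregation statistics described above: rwg(j) per organization for
# a five-item index (James, Demaree, & Wolf, 1984, with a uniform null for a
# 5-point scale), and ICC(1)/ICC(2) from a one-way random-effects ANOVA on the
# index score. Column names and group-size handling are simplifying assumptions.
import pandas as pd

SIGMA2_EU = (5 ** 2 - 1) / 12.0  # expected variance of a uniform 5-point response

def rwg_j(group_items: pd.DataFrame) -> float:
    """Within-group agreement for one organization on a multi-item index."""
    j = group_items.shape[1]
    mean_obs_var = group_items.var(axis=0, ddof=1).mean()
    ratio = mean_obs_var / SIGMA2_EU
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)

def icc_1_2(index_score: pd.Series, org: pd.Series):
    """ICC(1), ICC(2), and the one-way ANOVA F-value for organization effects."""
    groups = index_score.groupby(org)
    k = groups.size().mean()  # average group size (approximate when sizes are unequal)
    grand_mean = index_score.mean()
    ss_between = (groups.size() * (groups.mean() - grand_mean) ** 2).sum()
    ss_within = ((index_score - groups.transform("mean")) ** 2).sum()
    ms_between = ss_between / (groups.ngroups - 1)
    ms_within = ss_within / (len(index_score) - groups.ngroups)
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between
    return icc1, icc2, ms_between / ms_within
```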
The agreement and reliability indices for each
index of the DOCS are shown in Table 7. Mean rwg(j)
across organizations and culture indexes ranged from
.85 to .89. The rwg(j) values observed for individual
organizations all reached the recommended minimum, but ranged quite a bit from the mid-.70s to the
mid-.90s. ICC(1) ranged from .06 to .10 across
culture indexes indicating that between 6 and 10%
of the variance in culture ratings can be accounted for
by organization membership. Corresponding F-values demonstrated that this proportion of variance
was statistically significant in all cases (p < .001).
ICC(2) ranged from .93 to .96, demonstrating high
reliability for the organization-level means on each
index. These results support the aggregation of
individual ratings of culture to the organization level.
These results also suggest that positioning interrater agreement as a threshold to justify aggregation is
somewhat misguided. Our results suggest that nearly
all of the organizations met the minimal criteria to
justify aggregation. Nonetheless, there are still significant variations between the organizations. Thus,
internal consistency may be more important to consider as a variable rather than as a threshold.

Figure 1. Factor structure of the DOCS. Item loadings, second-order factor loadings, and trait intercorrelations are shown. All loadings and intercorrelations are significant, p < .01. The 12 culture indexes (from left to right) are: empowerment, team orientation, capability development, core values, agreement, coordination and integration, creating change, customer focus, organizational learning, strategic direction and intent, goals and objectives, and vision. Model fit was best for the second-order factor solution shown here, χ²(1692) = 122,715.83, p < .01, GFI = .88, NFI = .98, CFI = .98, and RMSEA = .04. The chi-square and fit indices for the first alternative model specifying four first-order trait factors were: χ²(1704) = 157,276.98, p < .01, GFI = .85, NFI = .98, CFI = .98, and RMSEA = .05. Comparison to the second-order model shown here indicated significantly worse fit as evidenced by a significant change in chi-square, Δχ²(12) = 34,561.15, p < .001, higher RMSEA (.05 vs. .04), and lower GFI (.85 vs. .88). The second alternative model specifying a single latent factor resulted in a further decline in model fit and indicated poor fit overall, χ²(1710) = 173,663.25, p < .01, GFI = .78, NFI = .78, CFI = .79, and RMSEA = .06.



In addition, as Kotrba et al. (2012) have shown,
internal consistency can be an important measure of
culture strength that is closely linked to performance.


Evidence of criterion-related validity
Criterion-related validity holds special importance
for effectiveness profiling instruments. In this section,
we evaluate the criterion-related validity of the
indexes from the DOCS as organization-level predictors of effectiveness using subjectively rated
indicators. Analyses of objectively defined effectiveness outcomes have been presented in a number of
the aforementioned studies. Correlations between the
culture indexes and ratings of sales growth, market
share, profitability, quality of products and services,
new product development, and employee satisfaction
are presented in Table 8. As the table shows, most of
these validity coefficients were statistically significant
at the .01 level and had magnitudes of at least .30.
The strongest relationships were observed between

culture measures and employee satisfaction, with
correlations ranging from .42 to .79 (mean r = .63).
Slightly weaker correlations were observed for
organizational ratings of new product development
(mean r = .37), quality (.36), sales growth (.26), and
profitability (.25). The weakest relationships were
observed for culture predicting ratings of market
share, with correlations ranging from .04 to .26
(mean r = .13). When the six effectiveness indicators
were combined into a unit-weighted composite,
correlations between the culture indexes/traits and
effectiveness ratings ranged from .44 to .68 (mean
r = .58). Overall, these results support previous
studies demonstrating positive linkages between the
DOCS culture indexes and aspects of organizational
effectiveness. It is important that this evidence be
weighted alongside prior studies with objective
effectiveness criteria, given that correlations based
on same-source raters are known to be inflated to a
degree by common method variance (Spector &
Brannick, 1995).
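To make the composite analysis concrete, the sketch below shows one way a unit-weighted effectiveness composite of the kind described above could be formed and correlated with an organization-level culture index. It is an illustration only: the data are simulated and the column names are assumptions, not the study's variables.

```python
# Illustrative sketch only: forming a unit-weighted effectiveness composite
# and correlating it with an organization-level culture index. The data are
# simulated and the column names are assumptions, not the study's variables.
import numpy as np
import pandas as pd

effectiveness_cols = ["sales_growth", "market_share", "profitability",
                      "quality", "new_product_dev", "employee_satisfaction"]

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(155, 7)),
                  columns=effectiveness_cols + ["empowerment"])

# Unit-weighted composite: standardize each rating, then take the simple mean.
z = (df[effectiveness_cols] - df[effectiveness_cols].mean()) / df[effectiveness_cols].std()
df["overall_performance"] = z.mean(axis=1)

# Organization-level validity coefficient for one culture index.
print(round(df["empowerment"].corr(df["overall_performance"]), 2))
```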

TABLE 7
Descriptive statistics and aggregation evidence for the indexes of the DOCS

Index                            Mean   SD     Mean rwg(j)   Min rwg(j)   Max rwg(j)   ICC(1)   ICC(2)   F-value
Empowerment                      3.31   0.71   .87           .74          .94          .10      .96      25.32
Team orientation                 3.40   0.77   .86           .73          .95          .08      .95      19.99
Capability development           3.41   0.69   .86           .75          .94          .08      .95      18.85
Core values                      3.50   0.66   .88           .73          .94          .08      .95      21.31
Agreement                        3.22   0.66   .88           .81          .94          .07      .94      17.96
Coordination and integration     3.03   0.73   .86           .78          .95          .09      .95      21.62
Creating change                  3.10   0.69   .87           .75          .95          .06      .94      16.18
Customer focus                   3.37   0.69   .87           .76          .95          .06      .93      15.33
Organizational learning          3.13   0.71   .86           .74          .96          .06      .94      15.89
Strategic direction and intent   3.41   0.82   .85           .67          .95          .08      .95      20.91
Goals and objectives             3.47   0.69   .89           .77          .96          .08      .95      20.10
Vision                           3.30   0.67   .87           .74          .94          .08      .95      19.51

N = 35,474. All F-values are statistically significant, p < .001.
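For readers unfamiliar with the aggregation statistics reported in Table 7, the sketch below shows how ICC(1), ICC(2), and the F-value can be obtained from a one-way analysis of variance of individual index scores across organizations (Shrout & Fleiss, 1979; Bliese et al., 2002). It is a simplified illustration on simulated data, not the authors' code, and it uses the average group size in the ICC(1) formula.

```python
# A simplified illustration (not the authors' code) of the one-way ANOVA
# quantities behind the ICC(1), ICC(2), and F values reported in Table 7.
# `scores` holds individual index scores and `org` identifies each respondent's
# organization; both are simulated here, and the average group size is used in
# the ICC(1) formula.
import numpy as np
import pandas as pd

def aggregation_stats(scores: pd.Series, org: pd.Series):
    groups = scores.groupby(org)
    k = groups.size().mean()                      # average organization size
    grand_mean = scores.mean()
    ss_between = (groups.size() * (groups.mean() - grand_mean) ** 2).sum()
    ms_between = ss_between / (groups.ngroups - 1)
    ss_within = (groups.var(ddof=1) * (groups.size() - 1)).sum()
    ms_within = ss_within / (len(scores) - groups.ngroups)
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between
    return icc1, icc2, ms_between / ms_within     # ICC(1), ICC(2), F

rng = np.random.default_rng(2)
org = pd.Series(np.repeat(np.arange(100), 50))    # 100 organizations x 50 raters
org_effect = 0.2 * rng.normal(size=100)           # small between-organization differences
scores = pd.Series(rng.normal(3.3, 0.7, size=org.size) + org_effect[org.to_numpy()])
print([round(x, 2) for x in aggregation_stats(scores, org)])
```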

TABLE 8
Correlations between the DOCS and indicators of organizational effectiveness

Trait/index                      Sales growth   Market share   Profit   Quality   New product   Employee satisfaction   Overall performance
Involvement                      .24**          .13            .23**    .39**     .41**         .79**                   .61**
Empowerment                      .20*           .11            .21**    .37**     .36**         .74**                   .57**
Team orientation                 .17*           .11            .20*     .32**     .36**         .70**                   .51**
Capability development           .33**          .16            .26**    .41**     .43**         .77**                   .65**
Consistency                      .20**          .12            .28**    .42**     .26**         .62**                   .58**
Core values                      .20**          .15            .27**    .36**     .21**         .52**                   .53**
Agreement                        .26**          .13            .29**    .43**     .32**         .66**                   .60**
Coordination and integration     .11            .07            .21**    .36**     .17*          .53**                   .48**
Adaptability                     .29**          .10            .24**    .34**     .45**         .66**                   .60**
Creating change                  .35**          .13            .24**    .31**     .49**         .63**                   .57**
Customer focus                   .21**          .08            .16*     .31**     .27**         .42**                   .44**
Organizational learning          .20*           .04            .21**    .27**     .39**         .65**                   .54**
Mission                          .36**          .19*           .31**    .38**     .47**         .62**                   .68**
Strategic direction and intent   .40**          .26**          .32**    .38**     .53**         .55**                   .66**
Goal orientation                 .26**          .15            .27**    .35**     .39**         .57**                   .60**
Vision                           .34**          .10            .29**    .34**     .41**         .66**                   .65**

N = 155. *p < .05, **p < .01.


As in past research, these results also show that
some features of organizational culture are better
predictors of specific effectiveness criteria than others
(Denison & Mishra, 1995; Gillespie et al., 2008). The
pattern of correlations observed here indicates that
the internally focused traits involvement and consistency are generally better predictors of operating
performance such as quality and profitability,
whereas the externally focused traits mission and
adaptability are generally better predictors of sales
growth. Similarly, mission—and particularly, strategic direction and intent—was the only significant
predictor of market share. Other noteworthy trends
were that new product development was least
strongly correlated with the consistency trait, and
involvement was clearly the strongest predictor of
employee satisfaction. Together, these findings indicate that the aspects of culture assessed within the
DOCS likely contribute to overall organizational
effectiveness in complementary ways.

DISCUSSION
Perspectives on the measurement of the cultures of
work organizations have shifted over time (Martin
et al., 2006). Researchers and practitioners have
adopted surveys as a useful tool for understanding
the behaviours and values that characterize an
organization’s culture. Growing evidence of the link
between culture and bottom-line performance also
supports the role of surveys in culture research
(Sackmann, 2011). The value of surveys in the
diagnostic process also supports the more practical
objectives of organizational development and change
by serving as a means of feedback and benchmarking.
Effectiveness profiling instruments are the type of
culture surveys most directly aligned with these
applications (Ashkanasy et al., 2000).
This review assessed the progress in the development and validation of instruments in this category.
Despite the lack of continued research interest for five
of the nine instruments, many of their key concepts
have been borrowed or adapted by the four remaining instruments. Ginevicius and Vaitkūnaitė (2006) and van der Post et al. (1997) in particular have synthesized the dimensions of other culture surveys. In contrast, the approaches taken by the two remaining instruments represent a blend of inductive and theory-driven components (Denison & Neale, 1996; Schönborn, 2010). This approach allows the
researchers to draw more direct connections between
their measurement models and the theories from
which they follow, which will be increasingly
important as more theory-based perspectives emerge
over time.
Criterion-related validity has always been a central
concern in this literature. Despite recent progress, the

limitations are familiar ones. The need for more
longitudinal research, better effectiveness measures
and more of them, larger and more representative
samples of organizations, and cross-cultural validation remain at centre stage. Researchers will still need
to address these issues in future studies, and this will
strengthen the evidence presented for observed
relationships between survey ratings of culture and
effectiveness outcomes. At the same time, a handful
of the other limitations identified reflect specific and
perhaps underappreciated considerations in the
validation of effectiveness instruments.

Generalizability across contexts
Unlike descriptive approaches to measuring culture,
the diagnostic approach generally leads to an
inference about cultural effectiveness without necessarily considering all of the possible contingency
factors. Showing that a predictive relationship exists
in a single context is a major achievement. Nonetheless, effectiveness instruments ought to be able to
demonstrate that the culture–effectiveness relationship is robust across a range of contexts. Support can
be demonstrated through a variety of strategies that
test measurement and predictive equivalence across
national cultures, industries, or types of organizations (Vandenberg & Lance, 2000).
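As one simple illustration of a predictive-equivalence check, and a far coarser screen than the measurement invariance procedures reviewed by Vandenberg and Lance (2000), a culture–effectiveness correlation can be compared across two contexts with Fisher's r-to-z test. The sketch below is hypothetical: the correlations and sample sizes are invented for illustration.

```python
# Illustrative sketch only: a coarse check of predictive equivalence across two
# contexts (e.g., two national settings) using Fisher's r-to-z test to compare
# culture-effectiveness correlations. The correlations and sample sizes are
# invented for illustration.
import numpy as np
from scipy.stats import norm

def compare_correlations(r1: float, n1: int, r2: float, n2: int) -> float:
    """Two-sided p-value for H0: the two population correlations are equal."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)            # Fisher transformation
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return 2 * norm.sf(abs(z1 - z2) / se)

# e.g., a mission -> sales growth correlation in two hypothetical subsamples
print(round(compare_correlations(0.40, 80, 0.25, 75), 3))
```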
The stream of research cited for the DOCS
illustrates some of the complexity involved in cross-cultural comparative work. For example, these studies
have demonstrated that while the culture concepts
assessed retain similar meanings across national
settings, the specific manifestations of these concepts
can differ (Denison et al., 2003). This suggests that
effectiveness instruments may need to be versatile
enough to accommodate information about culture at
varying levels of specificity. The second-order framework underlying the DOCS provides one possible
solution. The same line of research also indicates that
although all four traits contribute to organizational
effectiveness across national cultures, the rank ordering of traits in terms of the magnitude of predictive
relationships also varies somewhat across cultures and
contexts. Thus, generalizability is clearly a multifaceted issue that requires a programmatic research
effort in order to fully elucidate the boundaries.

Multilevel considerations
Another set of issues has to do with the shift from
individuals to organizations as the primary unit of
analysis (Chan, 1998). Several studies cited in our
review committed an atomistic fallacy by inferring
organization-level relationships on the basis of
regressions or correlations with individuals (Diez-Roux, 1998). Unfortunately, this type of evidence does little to substantiate criterion-related validity for
an organizational assessment. Instead, an appropriate test involves examining the relationships between
aggregated culture ratings and firm-level effectiveness
criteria. There are a number of methods available for
handling multilevel data including the approach
illustrated here (Bliese, Halverson, & Schriesheim,
2002). Whichever analysis strategy researchers adopt,
there are two main points of interest: first, demonstrating that individual ratings can be used to
represent the overall culture of organizations in a
valid and reliable manner, and second, testing the
culture–effectiveness linkages at the organization
level.
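In code, the two steps described above might look like the following sketch, which is illustrative only and uses simulated data rather than the study's: individual ratings are first averaged within organizations (after agreement and reliability have been established, as in Table 7), and the culture–effectiveness linkage is then tested with the organization as the unit of analysis.

```python
# A minimal sketch (not the authors' code) of the two steps described above,
# using simulated data: average individual ratings within organizations, then
# test the culture-effectiveness linkage at the organization level.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
individual = pd.DataFrame({
    "org_id": np.repeat(np.arange(155), 30),            # 30 respondents per firm
    "mission": rng.normal(3.4, 0.7, size=155 * 30),
})
firms = pd.DataFrame({"org_id": np.arange(155),
                      "sales_growth": rng.normal(size=155)})

# Step 1: organization means of the culture index.
org_culture = individual.groupby("org_id", as_index=False)["mission"].mean()

# Step 2: organization-level criterion-related validity.
merged = org_culture.merge(firms, on="org_id")
r, p = pearsonr(merged["mission"], merged["sales_growth"])
print(f"r = {r:.2f}, p = {p:.3f}")
```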


CONCLUSION
In conclusion, our review has identified a total of nine
published survey instruments whose objective is to
diagnose organizational cultures by assessing those
values and behavioural norms that are most directly
related to organizational effectiveness. The review
indicated a number of problematic trends and
remaining gaps in the types of reliability and validity
evidence that support these instruments, underscoring
the need for additional methodological research. Each
of the ‘‘active’’ instruments reviewed appear to be in
varying stages of development and evidence gathering, and research on several others appears to have
fallen off. Our review also identified the DOCS as the
most well-researched effectiveness instrument to date.
We therefore provided a more in-depth discussion of
the background, strengths, and limitations that
differentiate this particular instrument from other
culture effectiveness surveys. And finally, our empirical illustration helps to clarify several key challenges
extracted from our review, while attempting to close
some of the remaining gaps found for the DOCS.

REFERENCES
Alvesson, M. (2011). Organizational culture: Meaning, discourse,
and identity. In N. Ashkanasy, C. Wilderom, & M. Peterson
(Eds.), The handbook of organizational culture and climate (2nd
ed., pp. 11–28). Thousand Oaks, CA: Sage.
Ashkanasy, N., Broadfoot, L., & Falkus, S. (2000). Questionnaire
measures of organizational culture. In N. Ashkanasy, C.
Wilderom, & M. Peterson (Eds.), The handbook of organizational culture and climate (2nd ed., pp. 131–162). Thousand
Oaks, CA: Sage.
Ashkanasy, N., Wilderom, C., & Peterson, M. (2011). Introduction
to the handbook of organizational culture and climate, second
edition. In N. Ashkanasy, C. Wilderom, & M. Peterson (Eds.),
The handbook of organizational culture and climate (2nd ed., pp.
3–10). Thousand Oaks, CA: Sage.
Aydin, B., & Ceylan, A. (2008). The employee satisfaction in
metalworking manufacturing: How do organizational culture
and organizational learning capacity jointly affect it? Journal of
Industrial Engineering and Management, 1, 143–168.


Aydin, B., & Ceylan, A. (2009). The role of organizational culture
on effectiveness. Ekonomika A Management, 3, 33–49.
Baer, M., & Frese, M. (2003). Innovation is not enough: Climates
for initiative and psychological safety, process innovations
and firm performance. Journal of Organizational Behavior, 24,
45–68.
Baruch, Y. (1999). Response rate in academic studies—A
comparative analysis. Human Relations, 52, 421–438.
Bentler, P. M. (1990). Comparative fit indexes in structural models.
Psychological Bulletin, 107, 238–246.
Bentler, P. M., & Bonett, D. G. (1980). Significance tests and
goodness of fit in the analysis of covariance structures.
Psychological Bulletin, 88, 588–606.
Birkinshaw, J., Hood, N., & Jonsson, S. (1998). Building firm-specific advantages in multinational corporations: The role
of subsidiary initiative. Strategic Management Journal, 19,
221–241.
Bliese, P. D., Halverson, R. R., & Schriesheim, C. A. (2002).
Benchmarking multilevel methods: Comparing HLM, WABA,
SEM, and RGR. Leadership Quarterly, 13, 3–14.
Block, L. (2003). The leadership-culture connection: An exploratory investigation. Leadership and Organization Development
Journal, 24, 318–334.
Bonavia, T., Gasco, V. J., & Tomás, D. B. (2009). Spanish
adaptation and factor structure of the Denison Organizational
Culture Survey. Psicothema, 21, 633–638.
Boyce, A. S. (2010). Organizational climate and performance: An
examination of causal priority. Unpublished dissertation,
Michigan State University, Lansing, MI.
Boyle, G. J. (1991). Does item homogeneity indicate internal
consistency or item redundancy in psychometric scales?
Personality and Individual Differences, 12, 291–294.
Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical
linear models for social and behavioural research: Applications
and data analysis methods. Newbury Park, CA: Sage
Publications.
Chan, D. (1998). Functional relations among constructs in the
same content domain at different levels of analysis. Journal of
Applied Psychology, 83, 234–246.
Cooke, R., & Rousseau, D. (1988). Behavioral norms and
expectations: A quantitative approach to the assessment of
organizational culture. Group and Organizational Studies, 13,
245–273.
Cooke, R. A., & Lafferty, J. C. (1989). Organizational culture
inventory. Plymouth, MI: Human Synergistics.
Cowherd, D. M., & Luchs, R. H. (1988). Linking organization
structures and processes to business strategy. Long Range
Planning, 21, 47–53.
Dansereau, F., & Alutto, J. (1990). Levels of analysis issues in
climate and culture research. In B. Schneider (Ed.), Organizational climate and culture (pp. 193–236). San Francisco, CA:
Jossey-Bass.
Delaney, J. T., & Huselid, M. A. (1996). The impact of human
resource management practices on perceptions of organizational performance. Academy of Management Journal, 39, 949–
969.
Delery, J., & Doty, D. (1996). Modes of theorizing in strategic
human resource management: Tests of universalistic, contingency, and configurational performance predictions. Academy
of Management Journal, 39, 802–835.
Delobbe, N., Haccoun, R. R., & Vandenberghe, C. (2002).
Measuring core dimensions of organizational culture: A review
of research and development of a new instrument. Unpublished
manuscript, Université catholique de Louvain, Belgium.
Denison, D. R. (1984). Bringing corporate culture to the bottom
line. Organizational Dynamics, 13, 4–22.
Denison, D. R. (1990). Corporate culture and organizational
effectiveness. New York, NY: Wiley.


Denison, D. R. (1996). What IS the difference between organizational culture and organizational climate? A native’s point of
view on a decade of paradigm wars. Academy of Management
Review, 21, 619–654.
Denison, D. R., Haaland, S., & Goelzer, P. (2003). Corporate
culture and organizational effectiveness: Is there a similar
pattern around the world? Advances in Global Leadership, 3,
205–227.
Denison, D. R., Lief, C., & Ward, J. L. (2004). Culture in family-owned enterprises: Recognizing and leveraging unique
strengths. Family Business Review, 17, 61–70.
Denison, D. R., & Mishra, A. (1995). Toward a theory of
organizational culture and effectiveness. Organization Science,
6, 204–223.
Denison, D. R., & Neale, W. S. (1996). Denison organizational
culture survey. Ann Arbor, MI: Aviat.
Denison, D. R., & Spreitzer, G. (1991). Organizational culture and
organizational development: A competing values approach.
Research in Organizational Change and Development, 5, 1–21.
Detert, J. R., Schroeder, R. G., & Mauriel, J. J. (2000). A
framework for linking culture and improvement initiatives in
organizations. Academy of Management Review, 25, 850–863.
Diez-Roux, A. V. (1998). Bringing context back into epidemiology:
Variables and fallacies in multilevel analyses. American Journal
of Public Health, 88, 216–222.
Erwee, R., Lynch, B., Millett, B., Smith, D., & Roodt, G. (2001).
Cross-cultural equivalence of the organisational culture survey
in Australia. Journal of Industrial Psychology, 27, 7–12.
Fey, C., & Denison, D. R. (2003). Organizational culture and
effectiveness: Can an American theory be applied in Russia?
Organization Science, 14, 686–706.
Geringer, M., & Hebert, L. (1989). Control and performance of
international joint ventures. Journal of International Business
Studies, 20, 235–254.
Gillespie, M., Denison, D., Haaland, S., Smerek, R., & Neale, W.
(2008). Linking organizational culture and customer satisfaction:
Results from two companies in different industries. European
Journal of Work and Organizational Psychology, 17, 112–132.
Ginevicius, R., & Vaitkūnaitė, V. (2006). Analysis of organizational culture dimensions impacting performance. Journal of
Business Economics and Management, 7, 201–211.
Glaser, R. (1983). The corporate culture survey. Bryn Mawr, PA:
Organizational Design and Development.
Guthrie, J. P. (2001). High-involvement work practices, turnover,
and productivity: Evidence from New Zealand. Academy of
Management Journal, 44, 180–190.
Harris, P. R., & Moran, R. T. (1984). Managing cultural
differences. Houston, TX: Gulf.
Hartnell, C. A., Ou, A. Y., & Kinicki, A. (2011). Organizational
culture and organizational effectiveness: A meta-analytic
investigation of the competing values framework’s theoretical
suppositions. Journal of Applied Psychology, 96, 677–694.
Hofstede, G., Neuijen, B., Ohayv, D., & Sanders, G. (1990).
Measuring organizational cultures: A qualitative and quantitative study across twenty cases. Administrative Science Quarterly,
35, 286–316.
Hu, L. T., & Bentler, P. M. (1998). Fit indices in covariance
structure modeling: Sensitivity to underparameterized model
misspecification. Psychological Methods, 3, 424–453.
Jaques, E. (1951). The changing culture of a factory. London, UK:
Tavistock.
James, L. R., Demaree, R. G., & Wolf, G. (1984). Estimating
within-group interrater reliability with and without response
bias. Journal of Applied Psychology, 69, 85–98.
Jung, T., Scott, T., Davies, H. T., Bower, P., Whalley, D.,
McNally, R., & Mannion, R. (2009). Instruments for exploring
organizational culture: A review of the literature. Public
Administration Review, 69, 1087–1096.

Jöreskog, K. G., & Sörbom, D. (1989). LISREL 7: A guide to the
program and applications. Chicago, IL: SPSS Publications.
Katz, D., & Kahn, R. (1978). The social psychology of organizations. New York, NY: Wiley.
Kilman, R. H., & Saxton, M. J. (1983). The Kilman-Saxton culture-gap survey. Pittsburgh, PA: Organizational Design Consultants.
Klein, K. J., Griffin, M. A., Bliese, P. D., Hofmann, D. A.,
Kozlowski, S. W. J., James, L. R., et al. (2000). Multilevel
analytical techniques: Commonalities, differences, and continuing questions. In K. Klein & S. Kozlowski (Eds.), Multilevel
theory, research, and methods in organizations (pp. 512–553).
San Francisco, CA: Jossey-Bass.
Kotrba, L. M., Gillespie, M. A., Schmidt, A. M., Smerek, R. E.,
Ritchie, S. A., & Denison, D. R. (2012). Do consistent
corporate cultures have better business performance? Exploring
the interaction effects. Human Relations, 65, 241–262.
Kotter, J., & Heskett, J. (1992). Corporate culture and performance.
New York, NY: Free Press.
Lawler, E. (1986). High involvement management. San Francisco,
CA: Jossey-Bass.
Lawrence, P., & Lorsch, J. (1967). Organization and environment:
Managing differentiation and integration. Boston, MA: Harvard
University Division of Research.
Liebenberg, J. S. (2007). Factors influencing a customer service
culture in a higher education environment. Unpublished dissertation, Rand Afrikaans University, South Africa.
Lim, B. (1995). Examining the organizational culture and
organizational performance link. Leadership and Organization
Development Journal, 16, 16–21.
Lindell, M. K., Brandt, C. J., & Whitney, D. J. (1999). A revised
index of interrater agreement for multi-item ratings of a single
target. Applied Psychological Measurement, 23, 127–135.
Markham, S. E., Dansereau, F., Alutto, J. A., & Dumas, M.
(1983). Leadership convergence: An application of within and
between analysis to validity. Applied Psychological Measurement, 7, 63–72.
Martin, J. (1992). Cultures in organizations: Three perspectives.
New York, NY: Oxford University Press.
Martin, J., & Frost, P. (1996). Organizational culture war games: A
struggle for intellectual dominance. In S. Clegg, C. Hardy, &
W. Nord (Eds.), Handbook of organization studies (pp. 599–
621). Thousand Oaks, CA: Sage.
Martin, J., Frost, P., & O’Neill, O. A. (2006). Organizational
culture: Beyond struggles for intellectual dominance. In S.
Clegg, C. Hardy, T. Lawrence, & W. Nord (Eds.), The Sage
handbook of organization studies (2nd ed., pp. 725–753).
Thousand Oaks, CA: Sage.
Martin, J., & Meyerson, D. (1988). Organizational cultures and the
denial, channeling and acknowledgment of ambiguity. In L.
Pondy (Ed.), Managing ambiguity and change (pp. 93–125).
New York, NY: Wiley.
Mintzberg, H. (1989). Mintzberg on management. New York, NY:
Free Press.
Muldrow, T. W., Buckley, T., & Schay, B. W. (2002). Creating
high-performance organizations in the public sector. Human
Resource Management, 41, 341–354.
Nunnally, J. (1978). Psychometric theory. New York, NY:
McGraw-Hill.
O’Reilly, C., Chatman, J., & Caldwell, D. (1991). People and
organization culture: A profile comparison approach. Academy
of Management Journal, 34, 487–516.
Ostroff, C., Kinicki, A. J., & Tamkins, M. M. (2003). Organizational culture and climate. In W. C. Borman, D. R. Ilgen, & R.
J. Klimoski (Eds.), Handbook of psychology: Industrial and
organizational psychology, Vol. 12 (pp. 565–593). Hoboken, NJ:
John Wiley & Sons.
Ott, J. S. (1989). The organizational culture perspective. Pacific
Grove, CA: Brooks-Cole.

Parsons, T. (1951). The social system. London, UK: Routledge &
Kegan Paul.
Peterson, M. F., & Castro, S. L. (2006). Measurement metrics at
aggregate levels of analysis: Implications for organization
culture research and the GLOBE project. Leadership Quarterly,
17, 506–521.
Pettigrew, A. M. (1979). On studying organizational cultures.
Administrative Science Quarterly, 24, 570–581.
Quinn, R., & Cameron, K. (1988). Paradox and transformation:
Toward a theory of change in organization and management.
Cambridge, MA: Ballinger.
Quinn, R., & Rohrbaugh, J. (1981). A competing values approach
to organizational effectiveness. Public Productivity Review, 5,
122–140.
Rieker, D. J. (2006). An evaluation of how organizational culture can
perpetuate a formal mentoring relationship. Unpublished thesis,
Air Force Institute of Technology, Wright-Patterson Air Force
Base, OH.
Rousseau, D. M. (1990). Assessing organizational culture: The case
for multiple methods. In B. Schneider (Ed.), Organizational
culture and climate (pp. 153–192). San Francisco, CA: Jossey-Bass.
Sackmann, S. A. (2006). Assessment, evaluation, improvement:
Success through corporate culture. Gütersloh, Germany: Bertelsmann Stiftung.
Sackmann, S. A. (2011). Culture and performance. In N.
Ashkanasy, C. Wilderom, & M. Peterson (Eds.), The handbook
of organizational culture and climate (2nd ed., pp. 188–224).
Thousand Oaks, CA: Sage.
Sashkin, M. (1984). Pillars of excellence: Organizational beliefs
questionnaire. Bryn Mawr, PA: Organizational Design and
Development.
Sashkin, M., & Fulmer, R. (1985). Measuring organizational
excellence culture with a validated questionnaire. Paper presented at the Academy of Management, San Diego, CA.
Schein, E. (1992). Organizational culture and leadership (2nd ed.).
San Francisco, CA: Jossey-Bass.
Schönborn, G. (2010). Value performance: On the relation between
corporate culture and corporate success. Journal of Psychology,
218, 234–242.
Scott, T., Mannion, R., Davies, H., & Marshall, M. (2003). The
quantitative measurement of organizational culture in health
care: A review of the available instruments. Health Services
Research, 38, 923–945.
Selznick, P. (1957). Leadership in administration. Evanston, IL:
Row & Peterson.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in
assessing rater reliability. Psychological Bulletin, 86, 420–428.
Siehl, C., & Martin, J. (1990). Organizational culture: A key to
financial performance? In B. Schneider (Ed.), Organizational
climate and culture (pp. 241–281). San Francisco, CA: Jossey-Bass.
Smircich, L. (1983). Concepts of culture and organizational
analysis. Administrative Science Quarterly, 28, 339–358.
Sparrow, P. (2001). Developing diagnostics for high performance
cultures. In C. Cooper, S. Cartwright, & C. Earley (Eds.), The
international handbook of organizational culture and climate (pp.
85–106). New York, NY: Wiley.
Spector, P. E., & Brannick, M. T. (1995). The nature and effects of
method variance in organizational research. In C. L. Cooper &
I. T. Robertson (Eds.), International review of industrial and
organizational psychology (pp. 249–274). Chichester, UK:
Wiley.
Spreitzer, G. (1995). Psychological empowerment in the workplace:
Dimensions, measurement, validation. Academy of Management Journal, 38, 1442–1466.


Spreitzer, G. (1996). Social structural characteristics of psychological empowerment. Academy of Management Journal, 39, 483–
504.
Strydom, A., & Roodt, G. (2006). Developing a predictive model of
subjective organisational culture. Journal of Industrial Psychology, 32, 15–25.
Taylor, S., Levy, O., Boyacigiller, N., & Beechler, S. (2008).
Employee commitment in MNCs: Impacts of organizational
culture, HRM and top management orientations. International
Journal of Human Resource Management, 19, 501–527.
Trice, H. M., & Beyer, J. M. (1993). The culture of work
organizations. Englewood Cliffs, NJ: Prentice Hall.
Tucker, R. W., McCoy, W. J., & Evans, L. C. (1990). Can
questionnaires objectively assess organizational culture? Journal
of Managerial Psychology, 5, 4–11.
Usala, P. (1996a). Clerical/technical study technical report: Psychometric analysis of the organizational assessment items: Work unit
experiences and organizational experiences scales. Unpublished
technical report by the US Office of Personnel Management,
Personnel Resources and Development Center, Washington
DC.
Usala, P. (1996b). Psychometric analysis of the organizational
assessment items: Work unit experiences and organizational
experience scales. Unpublished technical report by the US
Office of Personnel Management, Personnel Resources and
Development Center, Washington DC.
Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis
of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research.
Organizational Research Methods, 3, 4–70.
van der Post, W. Z., de Coning, T. J., & Smit, E. M. (1997). An
instrument to measure organizational culture. South African
Journal of Business Management, 28, 147–168.
van der Post, W. Z., de Coning, T. J., & Smit, E. M. (1998). The
relationship between organizational culture and financial
performance: Some South African evidence. South African
Journal of Business Management, 29, 30–41.
Van Maanen, J. (1988). Tales of the field. Chicago, IL: University
of Chicago Press.
Walker, H., Symon, G., & Davies, B. (1996). Assessing organizational culture: A comparison of methods. International Journal
of Selection and Assessment, 4, 96–105.
Wall, T. D., Michie, J., Patterson, M., Wood, S. J., Sheehan, M.,
Clegg, C. W., & West, M. (2004). On the validity of subjective
measures of company performance. Personnel Psychology, 57,
95–118.
Wilderom, C. P., Glunk, U., & Maslowski, R. (2000). Organizational culture as a predictor of organizational performance. In
N. Ashkanasy, C. Wilderom, & M. Peterson (Eds.), The
handbook of organizational culture and climate (2nd ed., pp.
193–209). Thousand Oaks, CA: Sage.
Woodcock, M., & Francis, D. (1989). Clarifying organizational
values. Aldershot, UK: Gower.
Xenikou, A., & Furnham, A. (1996). A correlational and factor
analytic study of four questionnaire measures of organizational
culture. Human Relations, 49, 349–371.
Zheng, W., Yang, B., & McLean, G. (2010). Linking organizational culture, structure, strategy, and organizational effectiveness: Mediating role of knowledge management. Journal of
Business Research, 63, 763–771.
Original manuscript received October 2011
Revised manuscript received July 2012
First published online August 2012

