Friday, April 8, 2011

Performance Indicators: Definition, applications, and concerns

Source: www.cshe.nagoya-u.ac.jp/publications/journal/no1/07.pdf
-----
The excerpt below is copied from the article titled "On the Use of Performance Indicators in Japan's Higher Education Reform Agenda" in the Nagoya Journal of Higher Education, No. 1, 2001. The link is given above. The full article runs 32 pages; I have copied here only the part of greatest interest.
------
Definition
Performance measures or indicators are typically defined as factual or opinion information, usually in quantitative form (e.g., ratios, percentages, ranks, and so forth) but also in qualitative form, about various aspects of the functioning of higher education institutions and for various purposes, e.g., monitoring, evaluation, and resource allocation (see, for example, Kells, 1992; Sizer, Spee & Bormans, 1992; Cave et al., 1997). Performance measurement reflects the view that higher education needs to be more responsive to state concerns and more accountable to a broader constituency that includes students, employers, parents, and the general public.

Measures usually provide information about the resources (inputs), characteristics of the educational production (process), and outputs or outcomes at various levels of the higher education system (e.g., system, institutional, or college) and allow institutions to compare their relative position in key strategic areas to peers, to past performance, or to some standard or reference point. Selected indicators range from simple quantifiable indicators such as student/faculty ratios or costs per student to more qualitative indicators such as student satisfaction measured by surveys or assessments of the quality of research and scholarship activity. Types, numbers, and purposes of measures vary greatly by nation or state and by institution. For example, while some institutions report about 250 specific measures, others report fewer than a dozen.
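
To make the simple quantifiable indicators above concrete, here is a minimal sketch in Python of how an institution might compute a student/faculty ratio and a cost per student and set them against a peer reference point. All figures, field names, and the peer benchmark values are hypothetical illustrations, not data from the article.

# All numbers below are invented for illustration.
institution = {
    "students": 18500,                     # enrolled headcount
    "faculty": 950,                        # full-time faculty
    "instructional_spend": 412_000_000.0,  # annual instructional budget
}

peer_benchmark = {"student_faculty_ratio": 17.2, "cost_per_student": 21500.0}

def student_faculty_ratio(inst):
    # Input indicator: students per faculty member.
    return inst["students"] / inst["faculty"]

def cost_per_student(inst):
    # Input indicator: instructional expenditure per enrolled student.
    return inst["instructional_spend"] / inst["students"]

indicators = {
    "student_faculty_ratio": student_faculty_ratio(institution),
    "cost_per_student": cost_per_student(institution),
}

# Compare each value with the peer reference point, as the text describes;
# the same pattern works against past performance or any other standard.
for name, value in indicators.items():
    gap = value - peer_benchmark[name]
    print(f"{name}: {value:,.1f} (peer: {peer_benchmark[name]:,.1f}, gap: {gap:+,.1f})")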

Nonetheless, the actual number of measures typically ranges between 15 and 25, and most nations, states, and institutions usually share a common core of measures despite individual differences. It should be noted that this consistency has more to do with the availability of certain data than with broad consensus about what is most important across institutions. An extensive, though by no means complete, list of potential performance measures in higher education in the United States is provided in Borden and Banta (1994). The list includes about 280 specific measures in 21 areas, ranging from admissions to teaching/learning. Another useful source is the 1997 annual report on performance indicators by the University of Minnesota (Office of the Executive Vice President, 1997). Similar lists and examples of such performance measures are provided by Burke (1997) and Ruppert (1994) for the United States, by Cave and his colleagues (1997) and Johnes and Taylor (1990) for the United Kingdom, and by Lord and her colleagues (1998) for New Zealand.
[...]
Applications
One should not lose sight of the multiple purposes for which performance measures might be employed within higher education. Many institutions, for example, use individual faculty performance indicators to assist in making decisions about annual salary increases in those institutions with a merit pay system. Other institutions use unit or program performance indicators for resource allocation decisions within colleges (Lewis & Kallsen, 1995) or between colleges within universities (Dolence & Norris, 1994; Massy, 1996). Still others use performance indicators for reporting to their boards of trustees (Office of the Executive Vice President, 1997). And, of course, the most common use of performance indicators has been reporting to the state or nation on those measures that are important to the institution's mission and the state's priorities.

In almost all cases, the development of performance measures has led to a transition from an essentially regulatory role of internal review and resource reallocation to one of providing information to the consuming public. Cave and his colleagues (1997) have pointed out that this has had several consequences. First, performance measurement has ceased to be a centralized, monolithic system serving only the institution and the state; it has become a joint product of the institution, public sector funding organizations, and a variety of more specialized, possibly private sector, organizations. Consumer guides such as the U.S. News and World Report annual issue on colleges and universities, Barron's Guide to American Colleges, and The Times Good University Guide in the United Kingdom are examples of this tendency.

It is also likely that the new purchasers of information about higher education institutions will want it in a variety of different forms. The information needs of prospective students will differ materially from those of public and private funders of research and of parties interested in external services and outreach programs. Thus, the development of performance measures must be broadly conceived, with the understanding that its elements must be accessible to multiple constituencies for multiple purposes.
Concerns
Despite their increasing popularity, numerous concerns exist about whether performance measures can accomplish their broad and ambitious goals. These concerns need to be addressed before a set of performance indicators is adopted. They include the following issues.

First, and most important, there are serious issues of validity and reliability in the selection and application of performance indicators (Kells, 1993). The most prevalent concern about validity is that the measures selected are often those most readily available or easiest to collect, rather than those most important to the mission or goals of the program or institution. Moreover, some performance indicators might deflect attention from more critical issues or produce other unintended outcomes, since considerable time and resources are often spent collecting data through a variety of surveys and other instruments.

There may also be a lack of a common set of measures for similar institutions, since each institution usually attempts to develop its own measures. This makes it difficult to compare institutions on the basis of their performance. Certainly, for accountability to parents and students there needs to be some minimum number of common indicators so that public judgments can be made when comparing across institutions (Linke, 1992).
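
The comparability problem can be illustrated with a small Python sketch: if each institution reports its own home-grown indicator set, a public comparison is only possible over whatever common core the sets happen to share. The institutions, indicator names, and values below are invented for illustration.

# Each institution reports a partly different set of indicators.
reports = {
    "University A": {"student_faculty_ratio": 16.4, "completion_rate": 0.71,
                     "library_volumes": 2_100_000},
    "University B": {"student_faculty_ratio": 19.8, "completion_rate": 0.64,
                     "research_income": 88_000_000},
}

# Only indicators reported by every institution can support a public comparison.
common_core = set.intersection(*(set(r) for r in reports.values()))

# Order institutions on each shared indicator. Note the sort is by raw value;
# whether a higher value is actually "better" depends on the indicator.
for name in sorted(common_core):
    ranked = sorted(reports, key=lambda inst: reports[inst][name], reverse=True)
    print(f"{name}: " + " > ".join(ranked))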

Types of performance measures also vary greatly depending on the unit developing the measures. For example, most legislators or governing boards generally focus on input and output measures drawn from readily available quantifiable data, such as the number of students served, degrees granted, retention and completion rates, and per-student expenditure, whereas institutions are more interested in process- and outcome-related measures, such as student experiences and faculty scholarship and research accomplishments, which are difficult to quantify in terms of simple statistics, counts, or indicators. Since easily quantifiable information is not necessarily the most informative and useful for decision-making and quality improvement, such a focus is a particular concern in areas for which no readily available data exist.
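
As an illustration of the readily available output measures mentioned above, the following Python sketch computes retention and completion rates for a single entering cohort; the cohort counts are hypothetical, not taken from the article.

cohort_size = 2400        # first-year students entering in a given fall
returned_year_two = 2064  # of those, still enrolled one year later
completed_in_six = 1512   # of those, earned a degree within six years

retention_rate = returned_year_two / cohort_size
completion_rate = completed_in_six / cohort_size

print(f"First-year retention rate: {retention_rate:.1%}")   # 86.0%
print(f"Six-year completion rate:  {completion_rate:.1%}")  # 63.0%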

The purposes of performance measures also vary depending on the unit that develops them. For example, while legislatively mandated performance measures are usually intended to require higher education institutions to demonstrate accountability and achievement of their missions and goals, institutionally developed measures are often designed to influence the institution's priorities, monitor the process, communicate its achievements and success, and improve its quality.

Moreover, there is often a tension between the demand for accountability and institutional autonomy. Striking an appropriate balance between the legitimate need for information and public accountability, on the one hand, and institutional autonomy, on the other, is a particular concern to many institutional administrators and faculty, who are skeptical of performance measures because they fear state intrusion into institutional autonomy.

While there is increasing use of quantitative (objective) measures, the use of qualitative measures also needs to be emphasized. However, the difficulty of measuring and collecting such data remains a major impediment to using these measures. At the least, every developed nation that conducts graduate programs needs an external agency to assess the quality of each of its programs. We are not recommending accreditation standards here, although in some program areas it may be worthwhile to have minimum standards; rather, we are recommending a national assessment of the quality of each graduate program based both on national rankings by external peers and on quantitative measures similar to those found in the National Research Council's doctoral program assessments in the United States (Goldberger, Maher & Flattau, 1995).

The publication of performance data (particularly in the form of ranked and comparative data) can become a controversial issue in higher education, since such data can be taken out of context and misused. Nevertheless, the publication of performance data is essential to ensure appropriate public accountability.

Finally, there might be disagreement between the government and institutions on the use of the indicators. While there is increasing interest among public policymakers in the use of performance measures for funding, many in higher education institutions oppose such uses. A set of nationwide measures may not be consistent with individual institutional goals and directions. The connection between indicators and selective funding, even within an institution, is not as clear as is often assumed.
