Steven Gordon is an Associate Professor of Information Systems
Center for Information Management Studies (CIMS)
Babson Park, Massachusetts 02157-0310
Phone 617-239-4531 FAX 617-239-6416
Professionals in the computer industry have used the term benchmarking since the early 1960s. Initially, benchmarking meant comparing the processing power of products produced by competing manufacturers within a realistic business or scientific environment. Computer manufacturers published their benchmarking results in their sales brochures and marketing literature. To resolve competing claims, academic and commercial concerns developed benchmark standards such as Whetstone and Dhrystone. Software companies whose products' speed was an important competitive consideration, such as vendors of database management software and sort programs, also used the term "benchmarking" to describe the comparison of their products' performance.
By the late 1970s, information systems (IS) professionals began to broaden the meaning of benchmarking to extend beyond the product-to-product speed comparison of individual hardware and software components to the comparison of total throughput and efficiency. Initially, since they could not easily compare their data with that of competitors, they compared it to their own experience from year to year, tracking such measures as the total MIPS of their processing systems and the number of transactions per second they could handle. Later, IS professionals also sought practical measures of productivity and output that could be compared to industry norms. Generally, organizations used broad measures, such as the ratio of IS budget to sales or expenses, the return on investments in information technology (IT) and systems, and the percentage of IS projects completed on time and within budget. Some organizations complemented these measures with more group-specific or task-specific ones to evaluate individual services or products. For example, to evaluate the help desk, they might use measures such as the ratio of help-desk workers to the number of help-desk calls and the percentage of problems reported to the help desk that are resolved on the initial call.
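Measures like the help-desk examples above are simple ratios computed from operational data. As a rough sketch (the function names and sample figures below are illustrative assumptions, not drawn from any study cited here):

```python
# Sketch: computing two common help-desk benchmarking metrics.
# Sample figures are hypothetical.

def first_call_resolution_rate(resolved_on_first_call, total_calls):
    """Percentage of reported problems resolved on the initial call."""
    return 100.0 * resolved_on_first_call / total_calls

def staff_to_call_ratio(help_desk_workers, total_calls):
    """Ratio of help-desk workers to the number of help-desk calls."""
    return help_desk_workers / total_calls

calls = 4000           # help-desk calls received this quarter
resolved_first = 2600  # of those, resolved on the initial call
workers = 8            # full-time help-desk staff

print(f"First-call resolution: {first_call_resolution_rate(resolved_first, calls):.1f}%")
print(f"Workers per call: {staff_to_call_ratio(workers, calls):.4f}")
```

The resulting figures only become benchmarks when set against a reference point, such as a prior period, an industry norm, or a best-in-class organization.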
Norms for broad measures such as the ratio of IS expenses to sales were established by trade organizations that collected such data from their membership. Additionally, many organizations, unhappy with the broad comparisons published by trade organizations, formed consortia of relatively similar companies to share one another's measures confidentially. Consultants who collected metrics from clients and other sources also, for a fee, sold such information, aggregated to protect confidentiality, to interested parties.
The term "benchmarking" was first applied to business practices by Xerox Corporation circa 1979. As the term gained popularity among the general business community, many IS professionals began to wonder if non-IS professionals defined it the same way, and if not, whether and how IS professionals should react to the new meaning. Most of this confusion was caused by the fact that in the popular press "benchmarking" retained two meanings. One, commonly called metric benchmarking, is indeed the practice IS professionals were accustomed to. Metric benchmarking is the use of quantitative measures as reference points for comparison against prior experience, industry norms, or best-in-class organizations. The other meaning, commonly called best practice benchmarking, is the identification, and potentially the adoption, of best practices or techniques for performing common tasks. The next section compares the pros and cons of these two types of benchmarking, recognizing, of course, that one type of benchmarking does not preclude the other. In subsequent sections we focus on best practice benchmarking, although not to the exclusion of metric benchmarking. This focus reflects the large volume of existing research and publication concerning metric benchmarking and the relative absence of, and need for, similar information about best practice benchmarking, particularly in the field of information systems.
Metric vs. Best Practice Benchmarking
The major drawback to metric benchmarking is that it fails to identify the cause of and possible solutions for sub-par performance on any measure. For example, suppose a company finds that, relative to the norm, a low percentage of its help-desk calls are resolved on the initial call. It may conclude that its help-desk staff is insufficiently trained, that its systems are relatively complex and hard to diagnose, or that the norm has been established at companies whose help desk is so poor that users, in frustration, have learned to work around it when faced with complex problems. Each of these conclusions implies a different solution. In contrast, best practice benchmarking of the help-desk function, rather than relying on statistical measures, would involve a detailed study of help-desk processes at other organizations. This type of benchmarking, rather than simply identifying areas for improvement, is more likely to produce a plan for continuous improvement or radical reengineering of the process or processes under study.
Several drawbacks to best practice benchmarking may limit its usefulness. First, the level of effort required to study even a few processes is high relative to metric benchmarking. As a result, the returns to the benchmarking effort have to be substantial in order to justify its undertaking. Second, as this form of benchmarking studies only a small number of organizational units, there is no guarantee that it will uncover exemplary, or even representative practices. Third, benchmarking partners may be wary of sharing their knowledge, especially if they believe it to be a source of competitive advantage. Finally, considerable judgment is required to ascertain whether the practices that work well in one organization can be effectively transplanted to another organization, especially when industry, culture, size, and function may differ.
Although companies benchmark IS/IT functions for a variety of reasons, the reasons most commonly cited include justifying the company's investment in IT, evaluating the performance of the IS group and its management, and improving the IS functions within the organization. Benchmarking also often occurs as one component of a more extensive cost assessment or cost reduction effort, a total quality management (TQM) program, or a strategic planning effort.
The budgeting process periodically motivates IS managers to perform some benchmarking. Most organizations subject development and acquisition of new systems to stringent return-on-investment hurdles. With the increasing popularity and availability of outsourcing services, many organizations require a justification of existing systems as well. Metric benchmarking allows a company to compare its investment in IT and IS to that of other similar companies. A company that spends less than similarly sized companies in the same industry may be operating more efficiently than its competition. Alternatively, it may be spending less because it has neglected to use IT to achieve competitive advantage, to match its competitors' services, or simply to save more money elsewhere in its budget. Benchmarking might spur such a company to increase spending in IT, or it may help the company identify a low-cost IT strategy that works effectively. Conversely, a company that spends more than similar companies in the same industry may be operating less efficiently than its competition, using IT to achieve competitive advantage, or investing in IT to reduce other expenses.
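The spending comparison described above can be reduced to a simple calculation. The sketch below uses hypothetical figures and a hypothetical industry norm; as the text notes, any deviation from the norm admits several competing interpretations, so the classification is a coarse signal, not a diagnosis:

```python
# Sketch: comparing a company's IT spending ratio to an industry norm.
# All figures are hypothetical; real norms would come from trade
# organizations, consortia, or consultants.

def it_expense_ratio(it_expense, revenue):
    """IT expense as a percentage of revenue."""
    return 100.0 * it_expense / revenue

def compare_to_norm(ratio, norm):
    """Coarse classification relative to the industry norm; each case
    has more than one possible explanation."""
    if ratio < norm:
        return "below norm: efficient, or under-investing in IT"
    if ratio > norm:
        return "above norm: inefficient, or investing for advantage"
    return "at norm"

ratio = it_expense_ratio(12_000_000, 500_000_000)  # 2.4%
print(f"IT expense ratio: {ratio:.1f}% -> {compare_to_norm(ratio, 3.0)}")
```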
Another reason to benchmark is to assess job performance and to set performance goals. The satisfaction of supervisors, subordinates, and peers is, of course, a key measure of job performance. However, if a company is complacent, its urge to achieve satisfaction is likely to entice it to set goals that are too easy to reach. In the absence of objective measures and external comparisons, poor performance may not be noticed until it is too late to recover. Complacency is a potential problem at all levels of management. Every manager should be asking the question, "how high can I realistically set goals for my direct reports?" Internal benchmarking helps to identify trends relevant to answering this question. However, changes in technology reduce the value of such trend analyses. For example, the use of software productivity tools makes historical records of software development productivity obsolete; replacement of mainframe systems with LAN-based systems may render historical statistics on down-time obsolete. One reliable way to assess your performance during a period of rapid change is to compare it to that of others in a similar state.
Finally, benchmarking is a cornerstone of continuous improvement. It supports answering questions such as "what functions are most in need of improvement" and "how are others doing the same thing better?" The hallmark of a good manager is a healthy level of dissatisfaction with the status quo. Benchmarking enables this dissatisfaction to be channeled into productive change.
What Do IS Groups Benchmark?
Fairly significant differences exist between the types of processes benchmarked by companies doing metric benchmarking and those searching for best practices. Companies doing metric benchmarking seek out processes that are easily measured and for which comparisons with representative companies are likely to be available and meaningful. Figure 1 provides a list of such processes and some common measures on which they may be benchmarked.
Benchmarking for best practices has been shown to be most profitable when applied to functions that are semi-stable and repeatable. Processes that are done only once or twice a year, such as budgeting the IS function, and those that are not repeatable, such as the development or purchase of a particular piece of software or equipment, are not likely candidates for benchmarking.
Xerox uses the ten questions listed in Figure 2 to identify areas for best practices benchmarking. First, and most importantly, benchmarkers should identify what factors are most critical to the success of the IS function and the organization as a whole. These factors are not necessarily the same. For example, the factor most critical to the success of the IS function might be to keep costs low while the factor most critical to the success of the organization might be to keep customers satisfied. These critical success factors might point to several different processes to benchmark, and all should be considered. In selecting among these, preference should be given to those that have the most potential for improvement and those that currently cause the greatest problems. The questions in Figure 2, by focusing on the combination of importance and potential improvement, can help companies identify processes to benchmark.
Figure 3 displays the results of a 1992 study by the Society of Information Management (SIM) and Ernst & Young of the benefits, by industry segment, of improving different practices. Prototyping, cross-functional teams, joint application design, and business process reengineering were the top four practices most often identified in this study for providing value and improvement. These and the other items in this figure offer some suggestions as to possible benchmarking opportunities. However, this study captured the state of the industry at just one point in time. As industry experience with many of these processes grows and as technology and tools evolve, different processes and products come to the forefront. Furthermore, what is most important for the majority of companies may not be important for your company, or your company may already have achieved superior performance in the areas identified.
Who Initiates Benchmarking?
The motivation for benchmarking often determines who in the organization begins the process. For example, if the motivation is to justify the IS/IT budget, the CIO or the manager responsible for the budget will likely initiate the process. Alternatively, the mandate may come from someone such as the President or CEO who has the ultimate authority for allocating budgetary resources. Questioning the CIO's budget, he or she may ask, "how does this compare to what our competitors are spending in IT?" or "can't the proposed initiatives and operations be accomplished without such a large expenditure?"
When benchmarking is done as part of a continuous improvement effort, it may be initiated by a "Quality Office" or "Quality Officer" either within the IT organization or outside it. Once TQM becomes embedded in the organizational culture, benchmarking will likely become part of the problem-solving toolset of all managers. These managers may then initiate a benchmarking effort as needed to address problems they observe. In addition, metric benchmarking will likely be institutionalized and performed periodically without any apparent champion.
Who Performs The Benchmarking?
Benchmarking is generally performed by a team of employees, sometimes with the assistance of an outside consultant who has had previous experience with benchmarking and the process or processes being benchmarked. The team usually includes a project manager, data collectors and analysts, a facilitator trained in benchmarking who may or may not have expertise in the area being benchmarked, and various support personnel who work only part-time with the team. Among the support personnel, the benchmarking team should probably include a lawyer for dealing with the legal issues surrounding the sharing of competitive information, personnel from library services or others specifically trained in searching for information outside the organization, clerical and administrative workers, and senior management.
Who Do IS Organizations Benchmark Against?
IS organizations can benefit in different ways from different types of benchmark partners. In this section we look at the advantages of benchmarking within the IS organization, within the company but outside IS, against competitive organizations, and against the best of breed (also known as best in class or BIC).
The benefits of benchmarking within the IS organization itself are that such benchmarking establishes a baseline, data are readily available, cooperation can likely be assured, and priorities for external benchmarking can be developed. Figure 4 identifies some other reasons to benchmark internally.
Some processes that occur within the IS organization, such as purchasing and quality assurance, may also occur in other divisions, other business groups, or other business units within the same company. Other processes may be analogous to processes that occur elsewhere within the company. For example, the operation of the help desk may be similar to the operation of a customer support desk for the products that the company manufactures. The existence of parallel or nearly parallel processes at many places within an organization provides an opportunity for benchmarking within organizational boundaries. Such benchmarking can be performed more expeditiously than benchmarking across organizational boundaries, and the benefits of the lessons learned are magnified because they affect so many parties internally. They also offer an opportunity to recognize and reward excellence within the organization.
Another source of benchmarking is the competition. Indeed, in some sense, the competition is the best source for determining how well you are doing. But who competes for the services you provide? Don't make the mistake of looking to the IS organization of your company's competitors. The competition for internal information services groups consists of outsourcers of IS/IT services. If they can provide the services you provide more efficiently than you can, then you have something to learn from what they do. If you fail to learn, you may not survive -- your function, too, may be outsourced. Unfortunately, the companies that provide IS outsourcing services are relatively few; most are large, and most feel that they have little to learn from companies whose business is not the provision of IS services. As a result, it may be hard to find an outsource provider to benchmark against.
Finally, best of breed benchmarking looks at superior IS organizations in other companies. It really doesn't matter whether or not the other company is a competitor. However, the other company should be one that faces similar information systems needs and provides similar services. It should also be reasonably similar in size, degree of globalization, and management complexity.  Surprisingly, research indicates that if you are just beginning to benchmark, looking at the best organization may not be as satisfactory as looking at organizations that are better than you but not too far ahead. Apparently, looking too far ahead may be demoralizing and result in attempts at change that the organization is not prepared to make.
How Can You Find External Benchmark Partners?
Organizations searching for benchmarking partners most commonly consider those that have received special awards, citations, or media attention; those referred to or cited by professional associations and independent reports; and those recommended by other professionals, associates, and consultants. Among formal awards and citations, the Malcolm Baldrige National Quality Award, the Deming Application Prize, and the European Quality Award are given to organizations committed to total quality that excel at implementing a TQM approach. Even though recipients of these awards have not been judged exclusively on their information systems, their IS processes likely reflect the organizational focus on quality, increasing the chances that they would be good partners for best practice benchmarking. However, receipt of a Baldrige or Deming award, or a similar award for business practice excellence, does not guarantee excellence in overall IS practice and certainly does not guarantee excellence in every phase of information processing and technology. Benchmarking companies must carefully evaluate the potential contribution of partners identified in this fashion.
One award more narrowly focused on IS is the Partners in Leadership Award from SIM International. This award recognizes the joint efforts of a CEO or senior line manager and a senior IT executive in such areas as "improving the quality and speed of customer service, shortening cycle times and reducing costs, differentiating products and services, and improving business processes leading to improved financial and business performance." Another award focused on IS is Computerworld's annual Premier 100, a ranking and rating of the IS effectiveness of publicly traded corporations earning over $300 million. Unfortunately, the judges for Computerworld assess companies largely on statistical measures, and some of these are inconsistent with measures normally used to benchmark excellent performance. For example, Computerworld considers a high ratio of IS budget to total revenue to be a positive factor in its award (reportedly as evidence that the company is committed to technology), whereas most organizations consider low values of this ratio to be indicative of IS efficiency. CIO, in conjunction with Booz-Allen & Hamilton, confers the ESPRIT Award honoring companies for "excellence in strategic partnering for return from information technology." Computerworld's Smithsonian Award recognizes companies for their innovative application of technology.
Trade and professional journals such as Computerworld, Datamation, CIO, Information Week, Client/Server Computing, DBMS, Journal of Systems Management, and others often highlight organizations that have applied information technology in a new, unusual, or exemplary fashion. Often these experiences relate to fairly narrow applications or processes, such as software quality testing, database backup procedures, the application of CASE tools, or the tuning of LAN servers. You may need to read widely or use an electronic index such as ABI/Inform or Dialog or a printed index such as the Computer Literature Index to find companies noted for the processes that you wish to benchmark. However, this effort may be worthwhile because it allows you to select partners who have worked hard on the areas you wish to benchmark.
Professional associations such as Babson's CIMS and Boston SIM provide a forum where practitioners and consultants with expertise in one area of IS relate and evaluate their experiences with new processes and tools in that area. Attendees at such conferences can easily identify companies that are potential benchmark partners and consultants who, because of their experience, can help them identify such partners. In addition, reports produced by these organizations often identify metrics, best practices, or both in specific areas.
Finally, the American Productivity & Quality Center (APQC), based in Houston, Texas, maintains a database for its members of companies that have done benchmarking in a variety of areas. This database, including abstracts of the benchmarking studies, can be searched online, and additional information about prior studies and best practices can be obtained through face-to-face meetings. Similar databases are being developed by other companies for whom benchmarking is central to their TQM efforts.
How Do Companies Measure The Success Of A Benchmarking Effort?
Surprisingly little has been written about how organizations judge the success of their benchmarking efforts. Many, in their evaluation, subsume benchmarking under the umbrella of a broader TQM or reengineering effort and make no attempt to individually assess its contribution. Others measure the success of benchmarking by the extent to which it produces change. Unfortunately, while change can be measured fairly easily, the impact of change is measured over time and can rarely be attributed to the success of a single initiative. If benchmarking is viewed as a learning process, its success must be measured individually by its participants as well as collectively for the organization.
How Likely Are Companies To Achieve Success?
According to one study, companies have been successful by their own measures in approximately 70 to 90 percent of their benchmarking efforts. The highest rates of success come when performing metric benchmarks of the IS/IT infrastructure, and the lowest come when performing metric benchmarks on strategic issues. Best practice benchmarking achieved success rates between 80 and 88 percent, again depending on what types of practices were benchmarked.
What Are the Keys to Success?
Perhaps the most important key to the success of a benchmarking effort is to view it primarily as a learning process. The implication of this perspective is that the IT process owners should come away from the benchmarking effort with new insights about their own practices. These insights may or may not immediately lead to specific change, but they should prepare participants for understanding when such change is appropriate and enable them to recognize which alternatives are applicable.
Experienced practitioners often note that one key to success is to start small. The major danger of starting with too large an effort is that too many resources are consumed in the benchmarking effort before any results can be realized. In addition, the demand for change may be more than the organization can assimilate in a short period of time. Companies that start small build a history of success and gain the experience required to undertake more substantial efforts.
Another key to success is to have the commitment of top management. Benchmarking can be costly and time consuming. Upon completion of a benchmarking effort, more time and possibly more financial resources are needed to implement recommendations that come from the benchmarking team. Even more time passes before those recommendations produce a return. If management is not committed to benchmarking, initiatives may be cut short before they can have an impact on the organization.
Finally, success requires that organizations act on their benchmarking results. Benchmarking studies should not be sitting on bookshelves. They should contain concrete recommendations that can be translated into action. Lack of action leads to demoralization among the benchmark team members and leaves future benchmarking teams without any incentive to find new opportunities.
Figure 1. Commonly Benchmarked Processes and Measures

Communications
   Percentage of cost for telecommunication
   LAN contention in peak periods
   WAN cost per packet, per byte, and per message

Customer Satisfaction
   Overall satisfaction of users/managers with info svcs
   User satisfaction with contacts with IS organization
   User satisfaction with response to problems
   Manager satisfaction with cost & speed of development

Financial
   IT expense as a percent of revenue
   IT investment as a percent of assets
   Total system cost
   Average cost per job
   Average cost per input screen
   Average cost per report produced

Help Desk
   Percentage of problems solved by 1st contact
   Average time to problem solution
   Number of problems handled per FTE
   Number of problems handled

Operations
   Availability (% of time)
   Mean time between failure
   CPU usage (% of capacity)
   Disk usage (% of capacity)
   Average MIPS
   Number of jobs handled

Quality Assurance
   Defects found per 1000 lines of code
   Percentage of erroneous keystrokes on data entry

Staffing
   Percentage of professional staff with college degree
   Percentage of staff with advanced degrees
   Payroll as percent of IS budget

System Development
   Projects completed in period
   Average function points per employee per period
   Lines of code per employee per period
   Fraction of projects done on time & on budget

Technology
   Percent of IS expense in R&D
   Percent of employees having a workstation

Training
   Courses taken per IS employee per year
   Average courses taken per IS employee
   Average IS courses taken per non-IS employee
Source: Michael J. Spendolini, The Benchmarking Book (New York: American Management Association, 1992): 71.
Source: Richard W. Swanborg, Jr., "Benchmarking IS Leading Practices," American Programmer (Arlington, MA: Cutter Information Corp., 1993): 5.
Source: Kathleen H.J. Leibfried and C.J. McNair, Benchmarking: A Tool for Continuous Improvement (New York: HarperCollins, 1992): 61.