----------------------------------------------------------------------------
The Florida SunFlash

                            TPC Benchmark B
                   Summary of Full Disclosure Report

SunFLASH Vol 24 #11                                            December 1990
----------------------------------------------------------------------------

                         Sun SPARCserver 490
                         Sun SPARCserver 2
                         Sybase SQL Server 4.0.1

                         November 5, 1990

                         Sun Microsystems, Inc.
                         2550 Garcia Avenue
                         Mountain View, CA 94043


Table of Contents

  I.   Quick Summary of Benchmark Results
  II.  Introduction to RDBMS Performance Testing
  III. Transaction Processing Performance Council
       A. What is TPC?
       B. What is TPC Benchmark B?
       C. How does TPC-B differ from TP1?
       D. What is Think Time?
  IV.  Benchmark Results
       A. SPARCserver 490
       B. SPARCserver 2
       C. 1000 User Test
  V.   Test Configuration and Costs
       A. SPARCserver 490
       B. SPARCserver 2
       C. 1000 User Test
  VI.  TPC-B Benchmark Summary

--------------------------------------------------------------------------------
I. Quick Summary of Benchmark Results
--------------------------------------------------------------------------------

Sun and Sybase are the first vendors to jointly perform a new DBMS benchmark,
TPC Benchmark B, as defined by the Transaction Processing Performance Council.
This benchmark measures not only raw database throughput, but also the
throughput delivered to a large number of users. In this benchmark, the Sun
SPARCserver 490 and SPARCserver 2 were tested with Sybase SQL Server 4.0.1
over a wide range of client sizes. The test results are summarized below:

                  Table 1. TPC-B SPARCserver Test Results
                  ----------------------------------------

                                              Response
System                  # Users      TPS      Time          Cost/TPS
===================================================================
SPARCserver 490              40      56.3     0.7 sec       N/A
                             50      57.2     0.9 sec       $7.8K
                            200      52.1     3.6 sec       N/A

SPARCserver 2                24      51.2     0.5 sec       N/A
                             30      51.9     0.6 sec       $3.7K
                             36      51.6     0.7 sec       N/A

SPARCserver 490            1000      31.2     13.4 sec      N/A
--------------------------------------------------------------------------------

On the SPARCserver 490, Sybase SQL Server reached a peak throughput of 57.2
transactions per second (tps) with an average response time of 0.9 seconds.
Over 95% of these transactions had response times of less than 2 seconds. At
this performance level, cost/tps was $7.8K. With 200 users on a SPARCserver
490 running Sybase SQL Server, performance was recorded at 52.1 tps with an
average response time of 3.6 seconds. In addition, Sybase SQL Server
demonstrated its robustness in very large enterprises by supporting 1000
simultaneous users at 31.2 tps on a single SPARCserver 490.

On the SPARCserver 2, Sybase SQL Server reached a peak throughput of 51.9 tps
with an average response time of 0.6 seconds. Over 99% of these transactions
had response times of less than 2 seconds, with a cost/tps of $3.7K.

Both the SPARCserver 490 and SPARCserver 2 tests were audited by Tom Sawyer,
Senior Consultant, Codd and Date, Inc. The 1000 user test was witnessed by
Tom Sawyer.

--------------------------------------------------------------------------------
II. Introduction to RDBMS Performance Testing
--------------------------------------------------------------------------------

RDBMS performance has always been critical for on-line applications. However,
typical benchmarks performed by database vendors did not measure the
performance of an RDBMS in an on-line environment with hundreds of users
distributed across a network.
Instead, traditional database benchmarks focused simply on the peak
transactions per second (tps), or throughput, of a database engine as a proxy
for actual performance with a large number of users. However, peak throughput
is only one aspect of performance. A fast database engine may not necessarily
be able to deliver that performance to a large number of users. A faulty
client/server architecture, high operating system overhead, or poor memory
management can negate the performance of a fast database engine. The ability
to deliver a high level of throughput to a large number of users with rapid
response times is a more comprehensive and meaningful measure of RDBMS
performance.

--------------------------------------------------------------------------------
III. Transaction Processing Performance Council
--------------------------------------------------------------------------------

A. What is TPC?

The Transaction Processing Performance Council (TPC) is composed of
representatives from all the major hardware and database vendors and was
established to define a set of fair and comparable database benchmarks. The
goal of these benchmarks is to make it easier for customers to compare the
performance of DBMS products in a variety of different environments, on any
hardware platform.

The TPC has defined two benchmarks to date: TPC Benchmark A and TPC Benchmark
B. These benchmarks are the first of a new generation of DBMS benchmarks that
will determine the performance delivered in a variety of database
environments. The TPC-B Benchmark replaces the TP1 benchmark. Sun will no
longer perform or publish any TP1 results.

B. What is TPC Benchmark B?

The TPC Benchmark B (TPC-B) determines the throughput performance of a DBMS
on a hardware platform, measured over a range of transaction generators.
These transaction generators represent any kind of connection to the server
that generates transactions, such as users, batch jobs, other servers,
real-time data feeds, etc. For the purposes of data presentation, the term
"users" is used in this report.

The result of TPC-B is a performance curve which characterizes the
performance of a DBMS within a range of users. A minimum range of this
performance curve is defined by the TPC-B specification, but the vendor may
choose to report results with as large a number of users as can be supported.
Sun and Sybase viewed TPC-B as a vehicle to determine not just the peak
throughput performance, but also the capacity performance of Sybase on the
Sun SPARC platform with a large number of users. Therefore, Sun and Sybase
chose an extremely demanding test range (200 and 1000 users) for the TPC-B
benchmark.

TPC-B is based on a banking transaction, which consists of the following
steps (a sketch of this transaction appears at the end of this section):

  1. An account record is randomly retrieved, updated and rewritten to
     reflect a deposit or withdrawal.
  2. The new account balance is retrieved.
  3. A branch totals record is retrieved, updated and rewritten to reflect
     the transaction.
  4. A teller record is retrieved and updated.
  5. A history record of the transaction is written.

TPC-B specifies that 3 performance points must be measured in order to define
a performance curve. The 3 points are defined as follows:

  1. A peak throughput performance figure, measured at a certain number of
     users, defined as X.
  2. A throughput performance figure measured at between 70% and 80% of X
     users.
  3. A throughput performance figure measured at 120% of X users or more.
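
For illustration only, the sketch below expresses the five steps of this
transaction in Python against a generic, DB-API-style database connection.
The table and column names (account, teller, branch, history and their
balance columns) are assumptions that follow the TPC-B logical design; the
actual schema, SQL and client library used in the audited Sun/Sybase tests
are documented in the Full Disclosure Report, not here.

    # Sketch of the TPC-B transaction profile (illustration only).
    # Assumes a DB-API style connection object; the parameter
    # placeholder style ("%s" here) varies from driver to driver.
    import time

    def tpcb_transaction(conn, account_id, teller_id, branch_id, delta):
        """Apply one TPC-B banking transaction of amount `delta`."""
        cur = conn.cursor()

        # 1. Retrieve, update and rewrite the account record.
        cur.execute("UPDATE account SET balance = balance + %s"
                    " WHERE account_id = %s", (delta, account_id))

        # 2. Retrieve the new account balance.
        cur.execute("SELECT balance FROM account WHERE account_id = %s",
                    (account_id,))
        new_balance = cur.fetchone()[0]

        # 3. Retrieve, update and rewrite the branch totals record.
        cur.execute("UPDATE branch SET balance = balance + %s"
                    " WHERE branch_id = %s", (delta, branch_id))

        # 4. Retrieve and update the teller record.
        cur.execute("UPDATE teller SET balance = balance + %s"
                    " WHERE teller_id = %s", (delta, teller_id))

        # 5. Write a history record of the transaction.
        cur.execute("INSERT INTO history"
                    " (account_id, teller_id, branch_id, delta, ts)"
                    " VALUES (%s, %s, %s, %s, %s)",
                    (account_id, teller_id, branch_id, delta, time.time()))

        conn.commit()
        return new_balance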
C. How does TPC-B differ from TP1?

  o TPC-B is rigorously defined and documented; therefore, TPC-B results are
    highly comparable.
  o TPC-B measures the performance of a DBMS over a range of clients, not
    just at the peak level.
  o A variety of tests are performed to ensure the integrity of the data in
    an on-line transaction.
  o Log mirroring and checkpointing must be enabled throughout the length of
    the test.
  o All hardware, software and 5-year maintenance costs must be compiled and
    published, along with a cost-per-tps rating.
  o Test results are closely scrutinized by the TPC members. This discourages
    inaccurate or faulty data from being published.
  o All test results and test configurations must be published in a Full
    Disclosure Report that is available to the public.
  o Think-time is set to zero, as required by the TPC-B specification.

D. What is Think Time?

Traditional DBMS benchmarks have had little relation to actual performance
because of their "zero think-time" assumptions. Think-time is the human
element of a transaction. Database users need to interact with a database:
visualize results on a display, think about their next action and physically
initiate a transaction through a keyboard. All of these activities take time;
hence, in an actual application, a single user's transactions are not
initiated continuously one after the next.

The human element of database performance is taken into account in DBMS
benchmarks by setting the think-time to some number greater than zero (e.g.,
10 to 100 seconds). Consequently, each user initiates transactions with a
fixed period of time between the completion of one transaction and the
initiation of the next.

While a non-zero think-time more accurately represents actual application
performance, most DBMS vendors have continued to perform benchmarks with a
zero think-time. This represents an unrealistically high load for the number
of users involved in the benchmark, and consequently makes it difficult to
support many users in a benchmark of this kind. In the TPC Benchmark B
specification, the TPC defined think-time to be zero, thereby making support
of a large number of users even more demanding. Sun and Sybase demonstrated
capacity performance with TPC-B by testing well beyond the peak performance
range (200 and 1000 users) with zero think-time. This represents a much more
demanding workload than 200 or 1000 real users would create.

By itself, raw throughput performance is irrelevant without the ability to
spread that performance across a large number of users. Conversely, the
ability to physically support a large number of users is of little value
unless the DBMS can provide rapid response time to each user.
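
To make the role of think-time concrete, the sketch below shows a single
simulated user driving transactions in a loop, with a configurable pause
between the completion of one transaction and the start of the next. Setting
think_time to zero, as TPC-B requires, makes each simulated user present a
continuous load. This is a generic Python sketch under those assumptions, not
the transaction generator used in these tests.

    import time

    def simulated_user(run_transaction, think_time, duration):
        """Drive one simulated user for `duration` seconds.

        `run_transaction` is any callable that performs one complete
        transaction.  `think_time` is the pause, in seconds, between
        the completion of one transaction and the initiation of the
        next; TPC-B fixes it at zero.  Returns the per-transaction
        response times measured by this user.
        """
        response_times = []
        end = time.time() + duration
        while time.time() < end:
            start = time.time()
            run_transaction()                     # one TPC-B transaction
            response_times.append(time.time() - start)
            if think_time > 0:
                time.sleep(think_time)            # the human "think time"
        return response_times

With many such loops running concurrently and think_time set to zero,
reported throughput is simply the total number of completed transactions
divided by the length of the measurement interval.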
--------------------------------------------------------------------------------
IV. Benchmark Results
--------------------------------------------------------------------------------

A. SPARCserver 490 TPC-B Test Results

The SPARCserver 490 is Sun's largest and most powerful database server. Sun
and Sybase believed that they could demonstrate the powerful performance of
both the SPARCserver 490 and the Sybase SQL Server by running the TPC-B
benchmark with a very large number of users. The configuration used for these
tests is listed in Section V of this document.

The test points chosen and their corresponding throughput performance and
average response times are summarized below:

              Table 2. TPC-B SPARCserver 490 Test Results
              ___________________________________________

                         # of                 Response
System                   Users       TPS      Time          Cost/TPS
==========================================================================
SPARCserver 490             40       56.3     0.7 sec       N/A
                            50       57.2     0.9 sec       $7.8K
                           200       52.1     3.6 sec       N/A

The test results clearly demonstrate the scalability of Sybase running on
Sun. As the number of users was increased from 50 to 200, the performance of
the Sybase database engine stayed relatively constant, with only a modest
degradation in performance. The average response time also scaled linearly
with the addition of more users: from sub-second average response times at
fewer than 100 users, the average response time was 3.6 seconds at 200 users
with zero think-time. For the peak figure of 57.2 tps, over 95% of the
transactions had response times of less than 2 seconds.

B. SPARCserver 2 TPC-B Test Results

The SPARCserver 2 was tested over a much smaller range of users than the
SPARCserver 490. This is because the SPARCserver 2 is a smaller, departmental
server and is intended to support a smaller number of users.

              Table 3. TPC-B SPARCserver 2 Test Results
              _________________________________________

                         # of                 Response
System                   Users       TPS      Time          Cost/TPS
====================================================================
SPARCserver 2               24       51.2     0.5 sec       N/A
                            30       51.9     0.6 sec       $3.7K
                            36       51.6     0.7 sec       N/A

The SPARCserver 2 delivered excellent throughput performance over this
smaller test range. Peak throughput was 51.9 tps. At this peak level, average
response time was 0.6 seconds, and over 99% of the transactions had response
times of less than 2 seconds.

C. The 1000 User Test

In order to demonstrate the robustness of the Sybase SQL Server on Sun in a
very large user environment, an additional test was performed supporting 1000
users. To support this large number of users, 64MB of additional system
memory and 3 front-end machines were added to the single SPARCserver 490, as
listed in Section V. Since the purpose of this test was to demonstrate
robustness, a cost analysis was not performed on this configuration. All
other components of the test were identical to the other TPC-B tests.
Specifically, the identical database, scaling factor, and TPC-B transaction
workload were used. The 1000 user test was also witnessed by Tom Sawyer of
Codd and Date, Inc.

The benchmark results at 1000 users were exceptional. The Sybase SQL Server,
running on a single SPARCserver 490, supported 1000 simultaneous users at
31.2 tps. At this level of performance, the average response time per user
was 13.4 seconds with zero think-time for all 1000 users.

                  Table 4. The 1000 User Test Results
                  ___________________________________

                         # of                 Response
System                   Users       TPS      Time
==================================================================
SPARCserver 490           1000       31.2     13.4 sec

This test further validates the Sybase Client/Server Architecture and proves
that Sybase has the robustness to support a very large number of users while
still providing excellent performance. The Multi-Threaded Server technology
allows a large number of users to be added to the Sybase SQL Server without
spawning additional operating system processes. The Sybase server itself
managed 1000 separate user tasks and the resources those tasks required:
scheduling, locking and managing concurrent queries.
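
The general technique this paragraph describes can be illustrated with a
small event-loop server: a single operating system process accepts many
client connections and multiplexes work among them itself, rather than
dedicating one process per user. The Python sketch below, using the standard
selectors module, shows the idea only; it is not a description of the SQL
Server's internal implementation, and the port number and reply message are
arbitrary.

    import selectors
    import socket

    # One process, many client connections: a minimal event-loop server
    # illustrating service for N users without N operating system
    # processes (not Sybase internals).
    sel = selectors.DefaultSelector()

    def accept(listener):
        conn, _ = listener.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, handle_request)

    def handle_request(conn):
        data = conn.recv(4096)             # one client request
        if not data:                       # client disconnected
            sel.unregister(conn)
            conn.close()
            return
        conn.sendall(b"done\n")            # reply when the work is finished

    def serve(port=7000):
        listener = socket.socket()
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("", port))
        listener.listen(1024)
        listener.setblocking(False)
        sel.register(listener, selectors.EVENT_READ, accept)
        while True:                        # the scheduling loop
            for key, _ in sel.select():
                key.data(key.fileobj)      # dispatch to accept/handle

    if __name__ == "__main__":
        serve()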
--------------------------------------------------------------------------------
V. Test Configuration and Costs
--------------------------------------------------------------------------------

TPC-B was performed with the configurations below. Sybase SQL Server 4.0.1
was run under SunOS 4.1 with SunDBE on the SPARCserver 490, and on the
SPARCserver 2. The transaction generators were each a separate process and
resided on SPARCstation front-ends. For the SPARCserver 490 and SPARCserver 2
tests, only one front-end workstation was used to generate the transactions.

A. SPARCserver 490 as the server and SPARCstation 2 as the client
------------------------------------------------------------------

Component                Product                            Quantity
=====================================================================
Server Processor         SPARCserver 490                    1
Client Processor         SPARCstation 2                     1
Server Memory            32MB
Client Memory            16MB
Disk Controller          IPI                                3
Server Disks             IPI                                10
Client Disks             SCSI                               1
Tape Drive                                                  1
Operating System         SunOS 4.1
SunDBE                   SunDBE 1.0                         1
Database                 Sybase SQL Server 4.0.1 EBF396     1

Cost Summary:

  SPARCserver 490 system and Sybase SQL Server              $276,100
  SPARCserver 490 and Sybase SQL maintenance                 144,915
  SPARCstation 2 client system and Sybase Open Client         18,975
  SPARCstation 2 and Sybase maintenance                        7,185
                                                            --------
                                                            $447,175

  Price per TPS (K$/tps) @ 57.2 TPS = 7.8

The SPARCserver 490 test was run with 5,800,000 Accounts, 580 Tellers and 58
Branches. The benchmark ran for 16 minutes at a steady state.

B. SPARCserver 2 as the server and SPARCstation 1+ as the client
-----------------------------------------------------------------

Component                Product                            Quantity
=====================================================================
Server Processor         SPARCserver 2                      1
Client Processor         SPARCstation 1+                    1
Server Memory            16MB
Client Memory            16MB
SBus Expansion Boards                                       2
Server Disks             SCSI                               12
Client Disks             SCSI                               2
Tape Drive                                                  1
Operating System         SunOS 4.1.1
SunDBE                   SunDBE 1.1                         1
Database                 Sybase SQL Server 4.0.1 EBF396     1

Cost Summary:

  SPARCserver 2 and Sybase SQL Server                        $94,180
  SPARCserver 2 and Sybase SQL maintenance                    71,940
  SPARCstation 1+ client system and Sybase Open Client        18,975
  SPARCstation 1+ and Sybase SQL maintenance                   8,325
                                                            --------
                                                            $193,420

  Price per TPS (K$/tpsB-Local) @ 51.9 TPS = 3.7

The SPARCserver 2 test was run with 5,400,000 Accounts, 540 Tellers and 54
Branches. This benchmark also ran for 16 minutes at steady state.

C. The 1000 User Test Configuration
-----------------------------------

With the exceptions noted below, the 1000 user test configuration was
identical to that used for the SPARCserver 490 benchmark. That is, the
identical database scaled for 58 TPS, the identical TPC-B workload and the
identical hardware configuration were used, with the following exceptions:

  o 96MB of memory on the SPARCserver 490, compared with 32MB for the audited
    TPC-B results on the SPARCserver 490.
  o 3 additional front-end machines, which ran only client sessions.
  o A fillfactor of 10% on the teller table, compared with 30% for the
    audited TPC-B results on the SPARCserver 490.

This benchmark also ran for 16 minutes at steady state. The 1000 User Test
was witnessed by Tom Sawyer from Codd and Date, Inc.
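
For reference, the price-per-TPS figures quoted in the cost summaries above
are the total five-year system cost divided by the measured throughput at the
reported point. The short calculation below simply reproduces the published
numbers; the cost and throughput figures are taken directly from this
section.

    # Reproduce the published price-per-TPS figures: total cost / tps.
    configs = {
        "SPARCserver 490": (447175, 57.2),   # total cost ($), reported tps
        "SPARCserver 2":   (193420, 51.9),
    }

    for name, (total_cost, tps) in configs.items():
        print("%s: $%.1fK per TPS" % (name, total_cost / tps / 1000.0))
        # -> approximately $7.8K and $3.7K per TPS, respectively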
--------------------------------------------------------------------------------
VI. TPC-B Benchmark Summary
--------------------------------------------------------------------------------

Sun and Sybase are the first systems vendors to perform and publish the TPC-B
Benchmark, with Sybase SQL Server running on the SPARCserver 490 and the
SPARCserver 2. Sun and Sybase chose to perform the benchmark over a large
range of users in order to represent the load of a large user environment,
thereby demonstrating the power of the SPARCserver 490 and SPARCserver 2 as
database servers. The TPC-B Benchmark shows that, with the Client/Server
architecture, Sun and Sybase can support a large number of users with a high
level of throughput performance and excellent response times.

Sun and Sybase offer the following advantages:

  * Large Environment Solution - Sun SPARCservers running the Sybase RDBMS
    have the capacity to meet the needs of the most demanding customer
    environments by delivering a high level of throughput to a large number
    of users.

  * Scalability - As companies grow, more users can be added without losing
    productivity.

  * Cost per User - Support for a large number of users means that the cost
    per user is very low.

  * Solid Foundation - Sun SPARCservers, from the speedy, low-cost
    SPARCserver 2 to the high-performance SPARCserver 490, span a broad range
    of performance, expandability, packaging and price to support the most
    demanding database server applications.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

For information send mail to info-sunflash@sunvice.East.Sun.COM. Subscription
requests should be sent to sunflash-request@sunvice.East.Sun.COM.

All prices, availability, and other statements relating to Sun or third party
products are valid in the U.S. only. Please contact your local Sales
Representative for details of pricing and product availability in your
region. Descriptions of, or references to, products or publications within
SunFlash do not imply an endorsement of that product or publication by Sun
Microsystems.

John McLaughlin, SunFlash editor, flash@sunvice.East.Sun.COM. (305) 776-7770.