Performance Management 33rd Edition


Esri has implemented distributed GIS solutions since the late 1980s. For many years, distributed processing environments were not well understood, and customers relied on the experience of technical experts to identify hardware requirements to support their implementation needs. Each technical expert had a different perspective on what hardware infrastructure might be required for a successful implementation, and recommendations were not consistent. Many hardware decisions were made based on the size of the project budget, rather than a clear understanding of user requirements and the appropriate hardware technology. Many GIS implementation projects would fail due to poor system design and lack of performance management.

Esri started developing simple system performance models in the early 1990s to document our understanding about distributed processing systems. These system performance models have been used by Esri system design consultants to support distributed computing hardware solutions since 1992. These same performance models have also been used to identify potential performance problems with existing computing environments.

The Capacity Planning Tool (CPT) was introduced in 2008, incorporating the best of the traditional client/server and web services sizing models and providing an adaptive sizing methodology to support future enterprise GIS operations. The new capacity planning methodology is much easier to use and provides metrics to manage performance compliance during development, initial implementation, and system delivery.

This chapter introduces how these design models can be used for performance management.

System performance factors

Figure 10.1 Several key system performance factors work together to provide required user workflow productivity. A properly balanced resource investment will provide the optimum user performance.

Figure 10.1 identifies some key components that contribute to overall system performance. Software technology selection and application design drive the processing loads and network traffic requirements. Hardware and architecture selection establishes processing capabilities and how the processing loads are distributed. Network connectivity establishes infrastructure capacity for handling the required traffic loads.

Warning: Weakest system component determines overall system performance (performance chain).
Best practice: Balanced system design provides optimum user performance at lowest system cost.

Software technology factors

Software design efficiency and level of analysis establish the complexity of the application functions. Data source structure and the size and composition of the data contribute to the complexity of the information the application must work with.

Application:

  • Core software and client application efficiency.
  • Display complexity includes layers per display, features per display extent, functions used to complete the display, and display design for each map scale.
  • Display traffic
  • User workflow activity including user productivity, implementation of heavy workflow tasks, and workflow efficiency (mouse clicks to final display, communication chatter)

Data source:

  • Data source technology including DBMS (data types, indexing, tuning, scalability), file source (File format, structure, indexing, scalability), imagery (Image format, file size, indexing, pre-processing, on-the-fly processing), or cached data source.
  • Geodatabase design including table structure, dependencies, and relationship classes.
  • Data connection including SDE (direct connect, applications server connect) or file source (internal disk, direct attached, network attached).

Hardware technology factors

Hardware design and performance characteristics determine how fast the servers can do work and the volume of work they can handle at one time.

  • Workstation/application server/GIS server including processor core performance, platform capacity (servers), physical memory, network connection, graphics processing unit.
  • Data server including processor core performance, platform capacity, physical memory, and network connection.
  • Network communications including bandwidth, traffic, latency, and application communication chatter.

The system design solution must provide sufficient platform and network capacity to process software loads within peak user performance needs.

Best practice: CPT Standard Workflows provide proper processing load profile.


How is performance managed?

System architecture design provides a framework for identifying a balanced system design and establishing reasonable software processing performance budgets. Performance expectations are established based on selected software processing complexity and vendor published hardware processing capacity. System design performance expectations can be represented by established software processing performance targets. These performance targets can be translated into specific software performance milestones which can be validated during system deployment. Software processing complexity and/or hardware processing capacity can be reviewed and adjusted as necessary at each deployment milestone to ensure the system is delivered within the established performance budget.

Our understanding of GIS processing complexity and how this workload is supported by vendor platform technology is based on more than 20 years of experience. A balanced software and hardware investment, with capacity based on projected peak user workflow loads, can reduce cost and ensure system deployment success.

Figure 10.2 Performance management involves building a design solution based on appropriate workflow performance targets and managing compliance throughout design and implementation to deliver within those targets.

Most project managers clearly understand the importance and value of a project schedule in managing deployment risk associated with cost and schedule. The same basic project management principles can be applied to managing system performance risk. Figure 10.2 shows some basic concepts that can be used in managing performance.

System architecture design framework:

  • CPT provides balanced standard and custom workflow load profiles.
  • Workflow complexity assessment is used to assign reasonable software processing performance budgets.

Workflow complexity assessment:

  • Light complexity represents simple user displays with minimum functional analysis (light processing loads).
  • Medium complexity represents standard workflow performance targets that satisfy most workflows that apply best practice design standards. Medium complexity is roughly twice light complexity processing loads.
  • Heavy complexity represents workflows that include more complex map displays or data models that generate 50 percent more processing than medium complexity workflows.
  • Additional complexity selections (medium light, medium heavy, 2x medium, and 3x medium) are available for establishing more refined performance targets.

Workflow complexity guidelines:

  • Light complexity is the minimum loads expected based on software technology selection.
  • Medium complexity would support up to 80 percent of selected software technology deployments.
  • Heavy complexity may increase system cost or reduce user productivity to levels that do not satisfy business needs.

Faster hardware processing allows more complex analysis to be included in the user workflows. These heavier complexity workflows (2x medium, 3x medium) may not handle a large number of concurrent users, but with today's technology they can deliver map display results in a reasonable response time.

Best practice: Performance expectations are established based on selected software processing complexity and vendor published hardware processing speed (per core performance).

Performance management:

  • System design performance expectations can be represented by established software processing performance targets.
  • These performance targets can be translated into specific performance validation milestones.
  • Performance can be validated at established software development and system deployment milestones.
  • Software processing complexity and/or hardware processing capacity can be adjusted at each milestone to deliver within established performance budget.
Best practice: Esri understanding of GIS processing complexity and how this workload is supported by vendor platform technology is based on more than 20 years of experience. A balanced software and hardware investment, with capacity based on projected peak user workflow loads, can reduce cost and ensure system deployment success.


Six blind men and the elephant

Computer platforms must be configured properly to support system performance requirements. There are many factors that contribute to user performance and productivity. Enterprise GIS solutions include distributed processing environments where user performance can be the product of contributions from several hardware platform environments. Many of these platform resources are shared with other users. Understanding distributed processing technology provides a fundamental framework for deploying a successful enterprise GIS.

Figure 10.3 The fable of 'Six blind men and the elephant' demonstrates the value of working together to understand a complex discipline.

The importance of working together to understand the technology is illustrated in Figure 10.3. A famous poem by John Godfrey Saxe tells the story of six blind men who went to see an elephant. As they touched the elephant, they each formed their own understanding of what the elephant looks like.

  • The first blind man approached the side of the elephant, which felt like a wall.
  • The second felt the tusk, which was very like a spear.
  • The third held the trunk, which felt like a snake.
  • The fourth felt about the knee, which was like a tree.
  • The fifth touched the ear, which felt like a fan.
  • The sixth grabbed the tail, which felt like a rope.

After examining the elephant, the six blind men met to discuss their findings. Each shared what he learned about what an elephant looks like based on his experience. They were all partly right, and at the same time they were all wrong.

It has been my experience that even the least of us can contribute something unique and important to help us all better understand the technology. We build better technical solutions when we learn to work together.

Each blind man experienced only a part of what the elephant really looked like. The whole elephant was a combination of the parts each blind man visualized from their own discovery. Sharing what each knew helped them all better understand the whole.

Best practice: System performance models can help you better understand the technology by bringing the component pieces together into a comprehensive whole.

User workflow terminology

Figure 10.4 Light, casual, and power are adjectives often used to represent the level of application use by different GIS user types. Service time is a function of workflow complexity.

The study of work performance is not new, and there are a considerable number of theories and ideas published on this topic. Understanding the fundamental terms and relationships that define work performance and applying these fundamentals to computer processing helps you better understand the technology and make more appropriate design choices.

A display transaction is the unit of work used to model system performance. Display complexity is used to determine the amount of work required for a single display transaction. Once you define the work required for a single display (traffic and processing load), the estimated transaction throughput can be used to define the assigned system loads. Once you know the system loads, you can identify the system capacity.

Often within work environments we know the number of concurrent users on the system, but we do not know the peak throughput. An estimate of the peak throughput can be derived by multiplying the number of concurrent users by the average user productivity. Figure 10.4 shows the relationship between the types of users (light, casual, power) and how they are represented by user productivity. The figure also shows the relationship between display complexity and service time (processing time for a display transaction).
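To make the relationship concrete, here is a minimal Python sketch (illustrative numbers, not CPT defaults) that estimates peak throughput from concurrent users and average productivity.

# Estimate peak display throughput from concurrent users and productivity.
# Values are illustrative assumptions, not CPT defaults.

def peak_throughput_dpm(concurrent_users, displays_per_minute_per_user):
    """Peak throughput (displays per minute) = users x average productivity."""
    return concurrent_users * displays_per_minute_per_user

# Example: 100 concurrent web users working at 6 DPM each.
print(peak_throughput_dpm(100, 6))   # 600 DPM (36,000 displays per hour)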

User productivity

Proper use of the following terms is important when using peak concurrent users as an estimate for display transaction throughput. It is more accurate to use projected throughput estimates when they are available.

The user is the person that interfaces with a software application through a computer display.

Three common types of users:

  • Light user productivity of one display per minute (DPM) might be a work activity where the application display is used as a reference while talking with customers on the phone, completing paperwork, or doing an activity that does not require a new display.
  • Casual user productivity of 6 DPM might involve medium-level application use while doing other tasks.
  • Power user productivity of 10 DPM would represent an experienced data entry or design user who uses the application as the primary tool for doing work.

During a user-needs assessment the user workflow is often represented by a use case.

Rule of thumb:

  • 10 DPM is maximum user productivity for a power desktop user.
  • 6 DPM is maximum user productivity for a web client workflow.


Service time and display complexity

Service time is the total transaction processing time.

Workflow processing time is determined by software technology selection and display complexity.

  • Light complexity is half the processing time of medium complexity.
  • Medium complexity is the workflow processing baseline used to represent software workflows.
  • Heavy complexity is 50 percent more processing than the medium complexity baseline.

Work transaction service time is a key term used to measure software performance.

Best practice: Medium standard workflow is a reasonable initial planning performance target for most software technology selections.


System performance terminology

Most of the performance factors used for system design capacity planning involve simple terms and relationships.

  • A work transaction (display) is an average unit of work.
  • Throughput is a measure of the average work transactions completed over a period of time (displays per minute or transactions per hour).
  • Capacity is the maximum rate at which a platform can do work.
  • Utilization is the percentage of capacity represented by a given throughput rate.
Figure 10.5 Display transaction and system throughput are ways to describe a workflow processing load. Processor utilization and server capacity are ways to measure platform workflow processing loads.

Figure 10.5 shows the display transaction and workflow throughput, and the relationship between throughput, capacity, and utilization.

How do you define processing load?

Display transaction is the processing load to render a new user display.

  • The software program provides a set of instructions that must be executed by the computer to complete a work transaction.
  • The processor core executes the instructions defined in the computer program to complete the work transaction.

Transactions with more instructions represent more work for the computer, while transactions with fewer instructions represent less work for the computer.

Throughput is a measure of the average work transactions completed over a period of time.

  • Expressed in displays per minute or transactions per hour.
  • Average processing load is applied to the server.

As workflow throughput increases, the processing load on the server increases.

How do you measure processing load?

Platform capacity is the transaction rate when processors are busy 100 percent of the time (100 percent utilization).

  • Expressed in displays per minute or transactions per hour.
  • Maximum server throughput is always less than full platform capacity.

Platform utilization is the percentage of time the processor is busy.

  • Processor is busy whenever it is servicing a transaction request.
  • Processor is not busy when waiting for a service transaction request.

Utilization of 60 percent means the processor is busy 60 percent of the time.

Platform utilization is reported as an average value over a period of time.

  • Short sampling periods (1 sec) result in a graph of very high and low utilization spikes (processor on and off record) and is not very useful.
  • Longer sampling periods (30 sec) will average out the processor transaction load and provide an easier to read result.
  • If the sampling period is too long (30 minutes), it is possible to underestimate peak utilization values.
Best practice: The sampling period should be long enough to include a statistically significant large random sample of transaction arrivals (100 or more transaction arrivals per sample period).

System workflow terminology

Most of the performance factors used for system design capacity planning involve simple terms and relationships. A work transaction (display) is an average unit of work, throughput is a measure of the average work transactions completed over a period of time (displays per minute or transactions per hour), capacity is the maximum rate at which a platform can do work, and utilization is the percentage of capacity represented by a given throughput rate. You can calculate display service time if you know the platform throughput and corresponding utilization, calculated at any throughput level.

Calculating user display response time for shared system loads is a little bit more difficult. Only one user transaction can be serviced at a time on each processor core. If lots of user transaction requests arrive at the same time, some of the transactions must wait in line while the others are processed first. Waiting in line for processing contributes to system processing delays. User display response time must account for all the system delays, since the display is not complete until the final processing is done.

Fortunately, computing transaction service response time is a common problem for many business applications. The theory of queues, or waiting in line, has its origin in the work of A. K. Erlang, starting in 1909. There are a variety of different queuing models available for estimating queue time, and I went back to one of the textbooks used during my graduate school days to incorporate these models for use in system design capacity planning. The simplest models were for large populations of random arrival transactions, which should certainly be the case in a high-capacity computing workload (we are dealing with thousands of random computer program instructions being executed within a relatively small period of time, i.e., minutes).

Figure 10.6 Queue time is a function of service time and server utilization. Response time is the sum of all component service times and queue times.

Figure 10.6 shows the relationship between queue time, service time, platform utilization and response time. It is important to recognize that the accuracy of the queue time calculation impacts only the expected user response time, and does not reduce the accuracy of the platform capacity calculations provided by the earlier simple relationships. For many years, Esri capacity planning models did not include estimates for user response time.

Workflow response time is important, since it directly impacts user productivity and workflow validity. If display response times are too slow, the peak throughput estimates would not be achieved and the capacity estimates would not be conservative. Including user response time in the capacity planning models provides more accurate and conservative platform specifications, and gives customers a better understanding of user performance and productivity.

So in summary, queue time is any time the software program instructions must wait in line to be processed. Queue time is based on a statistical analysis of the transaction (processing request) arrival time distribution. Simply stated, this is the probability of having to wait in line when arriving for a service. For very large populations with random arrival times, the probability distribution for having to wait for service is predictable.

Queue time

System queue time is predictable for large random populations.

  • Large population: Peak throughput rates normally involve several thousand transactions per hour with each map service transaction including hundreds of program instructions.
  • Random distribution: Hundreds of transactions are generated by each user to generate the throughput loads.
  • Large random distribution of display transactions follow a predictable random arrival distribution.

Queue time varies based on system utilization and transaction service time.

  • Queue time increases as platform utilization approaches full capacity.
  • Queue time increases with increasing transaction service times.

Response time

Response time is the total time to complete an average display refresh.

Response time is important because it can impact user productivity.

  • As queue time increases, response time will increase and user productivity may decrease.
  • Power users can be very sensitive to minor changes in display response times.
  • User productivity can have a direct impact on business operations and system ability to meet user needs.
Best practice: The faster a user can work, the more work the user can do within a specified period of time.


What is workflow productivity?

Figure 10.7 User productivity determines average display cycle time. Think time gives the user a chance to review the display and input the next display request.

Figure 10.7 shows workflow productivity in terms of transaction cycles per minute and shows the relationship between cycle time, response time, and think time. Productivity and cycle time are a measure of user activity in doing work.

Productivity is expressed in user displays per minute (DPM/client).

User productivity rule of thumb:

  • Maximum Desktop user productivity = 10 DPM/client (power user)
  • Maximum server user productivity = 6 DPM/client
  • Casual user productivity = 4–6 DPM/client
  • Light user productivity = 1 DPM/client


User productivity should be evaluated for each user workflow (establish average user DPM for each business workflow).

  • Workflow productivity is defined on the CPT Workflow tab.
  • Productivity can be adjusted on the CPT Design tab (allows use of single workflow with different user productivities).
Best practice: Conservative productivity estimates are provided for each CPT Standard Workflow.



Cycle time

Cycle time is the average time between each display request.

  • Cycle time (sec/display) = 60 sec / user productivity (DPM)

Cycle time rule of thumb:

  • Minimum desktop user cycle time = 6 sec/display (power user), productivity is 10 displays per minute.
  • Minimum Web server user cycle time = 10 sec/display, productivity is 6 displays per minute.
  • Casual user cycle time = 10–15 sec/display, productivity is 4–6 displays per minute.
  • Light user cycle time = 60 sec/display, productivity is 1 display per minute.

Think time

Think time is the average available user input time.

Capacity planning models use two types of think time.

  • Computed think time is cycle time minus response time.
  • Minimum think time is the minimum acceptable user input time.
  • Margin is the computed think time minus minimum think time.
Warning: Minimum think time must not exceed computed think time for a valid user workflow.
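A minimal Python sketch of the cycle time, think time, and validity relationships above (illustrative values; the CPT performs these calculations in Excel):

# Cycle time, think time, and workflow validity check (illustrative sketch).

def cycle_time(productivity_dpm):
    """Cycle time (sec/display) = 60 sec / productivity (DPM)."""
    return 60.0 / productivity_dpm

def computed_think_time(productivity_dpm, response_time):
    """Computed think time = cycle time - response time."""
    return cycle_time(productivity_dpm) - response_time

def is_valid_workflow(productivity_dpm, response_time, min_think_time):
    """Workflow is valid when computed think time >= minimum think time."""
    return computed_think_time(productivity_dpm, response_time) >= min_think_time

# Example: power user at 10 DPM, 1.5 sec response time, 3 sec minimum think time.
print(cycle_time(10))                    # 6.0 sec/display
print(computed_think_time(10, 1.5))      # 4.5 sec
print(is_valid_workflow(10, 1.5, 3.0))   # True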


What is a valid user workflow?

Figure 10.8 A valid workflow provides sufficient time for the user to review the display and enter the next display request.

Figure 10.8 shows a valid workflow. All user workflow performance terms work together during each display transaction to satisfy business performance requirements.

Workflow specifications:

  • User productivity = 10 DPM/client (user workflow performance needs)
  • Display cycle time = 6 sec (60 seconds in a minute divided by 10)

For a given display executed on a given platform:

  • Display service time is a constant value.
  • In a shared server environment, queue time increases with increasing user loads (increasing server utilization).
  • As queue time increases, display response time increases.
  • For a fixed user productivity (10 displays per minute), computed user think time will decrease with increasing display response time.

Computed user think time must be greater than minimum think time for a valid workflow.

Warning: At some point, computed user think time will be less than minimum think time (invalid user workflow).


User productivity adjustment

Figure 10.9 The CPT Design identifies an invalid workflow when computed think time is less than minimum think time. The CPT Adjust function reduces the user productivity value until computed think time = minimum think time. The workflow is valid once computed think time is equal to or greater than minimum think time.

During peak system loads, queue time can increase to a point where computed think time is less than minimum think time as shown in Figure 10.9. The user productivity must be adjusted (reduced) to represent a valid user productivity.

The CPT identifies an invalid workflow so that the workflow productivity can be adjusted.

  • Workflow productivity must be reduced to identify a valid workflow.
  • CPT includes a RESET ADJUST function that will automatically reduce workflow productivity to the proper reduced value.

CPT Design ADJUST function:

  • Valid system solution is reached when computed user think time is equal to or greater than minimum think time for all workflows.
  • Valid solution is identified on the CPT display once valid workflow is established.

CPT Design ADJUST process:

  • Iterative calculation that reduces user productivity for all invalid workflows and then re-computes the system solution.
  • If adjusted productivity provides minimum think time less than computed think time, the next iteration will increase productivity slightly and re-compute the system solution.
  • Iterations continue until the most critical adjusted computed think time = minimum think time.
Best practice: Enable iterative calculations in Excel Options > Formulas.
  • Maximum Iterations: 500
  • Maximum Change: 0.001
Warning: Excel will display a Circular Reference Warning if Enable iterative calculation is not selected. Iterative calculations are required for many of the CPT sizing calculations.
CPT Design user workflow productivity adjustment
Best practice: System design should be upgraded to satisfy user productivity needs.
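The ADJUST behavior can be approximated with a simple iteration that lowers productivity until the computed think time satisfies the minimum. The Python sketch below assumes a fixed response time for simplicity; in the CPT, response time itself changes with system load, which is why the full calculation is iterative.

# Simplified productivity adjustment (assumes response time is fixed; the CPT
# recomputes response time from system loads on every iteration).

def adjust_productivity(productivity_dpm, response_time, min_think_time, step=0.1):
    """Reduce productivity until computed think time >= minimum think time."""
    while productivity_dpm > step:
        think_time = 60.0 / productivity_dpm - response_time
        if think_time >= min_think_time:
            return productivity_dpm
        productivity_dpm -= step
    return productivity_dpm

# Example: 10 DPM target, 5 sec response time, 3 sec minimum think time.
print(round(adjust_productivity(10.0, 5.0, 3.0), 1))   # 7.5 DPM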


What is a batch process?

A batch process is a workflow that does not require user interaction. User inputs are provided before the process is executed. The process then runs without user input until the job is done. Figure 10.10 shows a diagram representing a batch process.

Figure 10.10 Batch process loads are sequential in nature and productivity depends on computed response time.

Most heavy GIS functions can be modeled as a batch process. GIS heavy batch processes, when deployed on Server, are often called geoprocessing services.

Geoprocessing functions can be deployed as a network service configured to handle multiple user service work requests.

  • Geoprocessing function runs as a sequential batch process.
  • Each concurrent geoprocessing instance consumes a single platform core.

Advantages of configuring geoprocessing functions as a network service:

  • Service work request is sent to a processing queue to await execution.
  • Specific number of server cores can be allocated to execute the service.
  • User can do other work while waiting for the work request to be serviced.

Batch process loads are modeled as a workflow with zero (0) think time (no user input between display transactions).

  • Batch productivity is calculated based on computed response time (60 seconds/response time = batch DPM).
  • Batch process queue time is limited to service contention (no random arrival queue time).
  • Displays are requested sequentially following each refresh.
  • Batch processes deployed on a single platform with local data source tend to consume a single processor core.
  • The CPT Design tab will distribute loads across available core resources based on the batch workflow profile (the limiting system component will determine peak batch productivity).

Batch processing examples:

  • Map caching
  • Geodatabase reconcile and post
  • Geodatabase replication
  • Heavy map printing jobs
  • Heavy routing analysis
  • Heavy imagery processing
  • Heavy geospatial analysis
  • Heavy network analysis
Best practice: Any heavy system-level geoprocessing function that may be requested by more than one user at a time should be separated from the user application workflows and executed as separate network batch process work request services.
Warning: The CPT Design productivity adjust function must be used to compute system loads and batch process productivity. Each concurrent batch process is identified in the CPT Design as a user (column C) or client (column D) instance.
CPT Design tab configured with a batch process
Best practice: The selected workflow should have the same load profile (client, web, GIS server, SDE, DBMS) as the batch process you wish to model. Total processing time is not important for modeling the load profile.

The batch process productivity must be computed to identify a valid workflow. Productivity will depend on the server loads and available system resources. A single batch process can take advantage of only one processor core.

Best practice: Recommended design practice - any heavy function (runs more than 30 seconds) that might be requested by several users at a time should be configured as a batch process (network service). A processing queue must be established for user work request input. Each batch instance (network service) will process requests sequentially based on available processor resources. The user can be notified once their work request is serviced.
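Because a batch workflow has zero think time, its productivity is driven entirely by the computed response time. A minimal sketch (illustrative values only):

# Batch productivity: zero think time, so displays per minute = 60 / response time.

def batch_productivity_dpm(response_time_sec):
    """Batch DPM for a sequential process with no user think time."""
    return 60.0 / response_time_sec

# Example: a 2.5 sec computed response time supports 24 batch displays per minute,
# all generated by a single process consuming one processor core.
print(batch_productivity_dpm(2.5))   # 24.0 DPM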

Platform throughput and service time

Figure 10.11 The relationship between utilization and throughput; a simple relationship that can be used to identify platform capacity.

The most important system performance terms define the average work transaction (display), work throughput, system capacity, and system utilization. Figure 10.11 provides a chart showing the relationship between utilization and throughput; a simple relationship that can be used to identify platform capacity.

Capacity (DPM) = Throughput (DPM)/Utilization

Best practice: If you know the current throughput (users working on the system) and you measure the system utilization (average computer CPU utilization), then you can calculate the capacity of the server.

The relationships between throughput, capacity, and utilization hold based on how these terms are defined.

  • Throughput is the number of work transactions being processed per unit time.
  • Capacity is the maximum throughput that can be supported by a specific hardware configuration.
  • Utilization is the ratio of the current throughput to the system capacity (expressed as percentage of capacity).

The processor core is the hardware that executes the computer program instructions.

  • The number of processor cores identifies how many instances can be serviced at the same time.
  • Service time is a measure of the average work transaction processing time.

Work transaction service time is a key term used to measure software performance.

  • The software program provides a set of instructions that must be executed by the computer to complete a work transaction.
  • The processor core executes the instructions defined in the computer program to complete the work transaction.

Transactions with more instructions represent more work for the computer, while transactions with fewer instructions represent less work for the computer.

The complexity of the computer program workflow can be defined by the amount of work (or processing time) required to complete an average work transaction.

  • Service time on the CPT Workflow tab is presented relative to a platform performance baseline.
  • Faster platform processor cores execute program instructions in less time than slower processor cores.
  • Service time can be computed using a simple formula based on number of processor cores and platform capacity.

Service time (sec) = 60 sec x #core/Capacity (DPM)

Service time can be computed based on measured throughput and utilization.

Figure 10.12 Service time calculations for peak loads generated at each web service instance configuration.

Figure 10.12 shows service time results for five different throughput loads.

  • The number of deployed service instances determines peak loads.
  • Throughput and utilization are measured for each of the five separate test configurations.
  • Capacity of 714 DPM was calculated from each test load.
  • Service time of 0.34 sec was calculated from each test load.
Best practice: You can calculate capacity from throughput and utilization measurements at any system load.
Note: Real operational environments can provide a very good measure of capacity.

Once you know the platform capacity, you can compute the platform service time.
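The two relationships above (Capacity = Throughput/Utilization and Service time = 60 sec x #core/Capacity) are simple enough to check in a few lines of Python. The values below are illustrative and assume a 4-core test platform, which is consistent with the 714 DPM and 0.34 sec results shown in Figure 10.12.

# Derive platform capacity and service time from measured throughput and
# utilization. The 4-core assumption is illustrative.

def capacity_dpm(throughput_dpm, utilization):
    """Capacity (DPM) = Throughput (DPM) / Utilization (fraction)."""
    return throughput_dpm / utilization

def service_time_sec(cores, capacity):
    """Service time (sec) = 60 sec x #cores / Capacity (DPM)."""
    return 60.0 * cores / capacity

# Example: 500 DPM measured at 70 percent utilization on a 4-core server.
cap = capacity_dpm(500, 0.70)
print(round(cap))                          # ~714 DPM
print(round(service_time_sec(4, cap), 2))  # ~0.34 sec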


Platform performance and response time

Figure 10.13 Display response time increases with increased platform loads.

Figure 10.13 provides a chart showing the relationship between utilization and response time.

You can calculate display service time if you know the platform throughput and corresponding utilization, calculated at any throughput level. Calculating user display response time for shared system loads is a little bit more difficult.

Calculating user response time:

  • Only one user transaction can be serviced at a time on each processor core.
  • If many user transaction requests arrive at the same time, some of the transactions must wait in line while the others are processed first.
  • Waiting in line for processing contributes to system processing delays.
  • User display response time must include time for all the system component processing times and system delays, since the display is not complete until the final processing is done.

Any system time where a transaction request must wait in line for processing is called queue time.

Response time is the sum of the total service times (processing times) and queue times (wait times) as the transaction request travels across system components to the server and returns to deliver the final user display.

Response time (sec) = Service time (sec) + Queue time (sec)

Warning: Queue time increases to infinity as any processing component of the system approaches full capacity.

Response time is important, since it directly contributes to user productivity.

Productivity = 60 sec/(response time + think time)

Warning: As queue time increases response time will increase and productivity will decrease.

Platform queue time

Computing response time is a common problem for many business applications. To get it right, you have to understand queue time. The theory of queues or waiting in line has its origin in the work of A. K. Erlang, starting in 1909.

Figure 10.14 Transaction request queue time will vary with platform utilization and number of platform core.

Figure 10.14 shows a formula for queue time and also a graph showing the relationship between queue time and platform utilization. The number of platform processor cores determines the sensitivity of queue time to platform utilization.

The simplest queuing models work for large populations of random arrival transactions, which should certainly be the case when modeling computer computations (thousands of random computer program instructions being executed within a relatively small period of time—e.g., seconds).

The queue time calculation used in the Capacity Planning Tool is a simplified model developed from operations research queuing theory.

  • The second half of the model (single-core section) is quite straightforward, and there is general agreement that this simple model identifies wait times in the case of a single service provider (single-core platform or single network connection).
  • The multi-core case is a little more complicated, and unfortunately it is the more common capacity planning case we need to deal with in multi-core server platform configurations.

Queue time model

The single-core platform queue time increases with increasing service time and platform utilization.

Queue time (single-core) = service time (sec) x utilization/(1 - utilization).

Queue time is zero (0) when utilization is zero (0) and increases to infinity as utilization approaches 100 percent.

In the multi-core platform case, it is important to include the probability of a processor core being available to service the request on arrival (not busy).

  • The more processor cores in the server, the more likely one of these cores will be available for processing when the service transaction arrives.
  • The equation simplifies to the simple single-core formula when the number of processor cores = 1.

Multi-core availability = 1/{1 + utilization x (cores - 1)}

Queue time = Multi-core availability x Queue time (single-core)
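The queue time and response time formulas above translate directly into code. This is a minimal sketch of the simplified model described here, not the exact CPT implementation.

# Simplified multi-core queue time model (illustrative sketch of the formulas above).

def queue_time_sec(service_time, utilization, cores=1):
    """Queue time grows with utilization; multi-core platforms queue less often."""
    single_core_queue = service_time * utilization / (1.0 - utilization)
    multicore_availability = 1.0 / (1.0 + utilization * (cores - 1))
    return multicore_availability * single_core_queue

def response_time_sec(service_time, utilization, cores=1):
    """Response time = service time + queue time."""
    return service_time + queue_time_sec(service_time, utilization, cores)

# Example: 1 sec service time at 80 percent utilization.
print(round(response_time_sec(1.0, 0.8, cores=1), 2))   # 5.0 sec on a 1-core server
print(round(response_time_sec(1.0, 0.8, cores=4), 2))   # 2.18 sec on a 4-core server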

The derived queue time formula provided above has been compared against several benchmark test results, and the computed response time was reasonably close to the measured test results (showing conservative response times, slightly higher than measured values).

It is important to recognize that the accuracy of the queue time calculation impacts only the expected user response time, and does not reduce the accuracy of the platform capacity calculations provided by the earlier simple relationships.

  • For many years, Esri capacity planning models did not include estimates for user response time.
  • Workflow response time is important, since it directly impacts user productivity and workflow validity.
  • If display response times are too slow, the peak throughput estimates would not be achieved and the capacity estimates would not be conservative.
Best practice: Including user response time in the capacity planning models provides more accurate and conservative platform specifications, and gives customers a better understanding of user performance and productivity.

Queue time derivatives

Peak system loads with display response time = 2 seconds

Multi-core servers provide better quality of service than single-core servers during heavy loads.

  • Eight 1-core servers provide throughput of 14,400 TPH with two-second response time.
  • Four 2-core servers provide throughput of 17,856 TPH with two-second response time.
  • Two 4-core servers provide throughput of 22,176 TPH with two-second response time.
  • One 8-core server provides throughput of 25,344 TPH with two-second response time.
Warning: More cores per server improves throughput only when display service times are the same for all configurations.
CPT Design evaluation of physical and virtual multi-core performance
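The throughput figures above can be reproduced approximately from the queue time model by finding the utilization that yields a two-second response time. The sketch below assumes a 1-second display service time (an assumption consistent with the listed numbers) and searches for the limiting utilization numerically.

# Find peak throughput at a 2-second response time target for different
# platform configurations (assumes a 1 sec service time; illustrative only).

def response_time(service_time, utilization, cores):
    queue = service_time * utilization / (1.0 - utilization)
    queue *= 1.0 / (1.0 + utilization * (cores - 1))
    return service_time + queue

def throughput_tph_at_target(service_time, cores, servers, target_sec, step=0.0001):
    """Step utilization up until the response time target is reached."""
    utilization = 0.0
    while response_time(service_time, utilization + step, cores) <= target_sec:
        utilization += step
    capacity_tph = cores * 3600.0 / service_time        # per-server capacity
    return servers * utilization * capacity_tph

for cores, servers in [(1, 8), (2, 4), (4, 2), (8, 1)]:
    print(cores, "core x", servers, "servers:",
          round(throughput_tph_at_target(1.0, cores, servers, 2.0)), "TPH")
# Roughly 14,400 / 17,800 / 22,100 / 25,300 TPH, close to the values listed above.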

Virtual Server processing overhead

Virtual server processing loads (service times) increase with a larger number of cores (vCPU) per node.

Best practice: Multiple 2-core virtual server configurations provide the best overall per-core throughput.

Based on initial test results, CPT releases applied 10 percent processing overhead per core for virtual server environments. More recent test results show newer virtual environments require less overhead, and CPT virtual server overhead planning factors were reduced with the July 2013 release.

Arc13CapacityPlanning0701 applied the following virtual server processing overhead factors (applied in the sketch after this list):

  • 1-3 core/node, 10 percent overhead
  • 4-6 core/node, 20 percent overhead
  • 7 or more core/node, 30 percent overhead
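A sketch of how per-core overhead factors like those above might be applied to a physical service time (the factors are from the release above; applying them directly to service time is an illustrative assumption):

# Apply a virtual server processing overhead factor to a physical service time.
# Overhead percentages follow the Arc13CapacityPlanning0701 factors listed above.

def virtual_overhead_percent(cores_per_node):
    if cores_per_node <= 3:
        return 10
    if cores_per_node <= 6:
        return 20
    return 30

def virtual_service_time(physical_service_time_sec, cores_per_node):
    """Inflate service time by the virtual server overhead factor."""
    return physical_service_time_sec * (1 + virtual_overhead_percent(cores_per_node) / 100.0)

# Example: a 0.34 sec physical service time on a 4-core virtual node.
print(round(virtual_service_time(0.34, 4), 3))   # 0.408 sec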

How to size the network

Figure 10.15 Display response time increases with increased network loads.

Figure 10.15 provides a chart showing the relationship between network utilization and response time. Performance models used to support network communications follow the same type of terms and relationships identified for server platforms.

Some of the same performance terms are referenced by different names.

  • Network transaction = display
  • Network throughput = traffic
  • Network capacity = bandwidth
  • Network utilization = utilization

The network connection (switch port, router port, network interface card, host bus adapter, etc.) is the hardware that processes the network traffic.

  • Most local networks are identified as single path systems.
  • Multiple NIC cards or multiple network paths can improve throughput utilization.

Additional performance terms:

  • Network service time = network transport time
  • Network queue time = network congestion delays
  • Network latency delay time = measured latency (round trip travel time) x chatter (round trips)
Best practice: CPT includes network as additional system component when computing system performance.
Warning: Network performance can be the most critical design constraint for many distributed system design solutions.
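Using the network terms above, a minimal sketch of the network contribution to display response time (formulas consistent with the terms above; values illustrative, not from the CPT):

# Network contribution to display response time (illustrative sketch).

def transport_time_sec(display_traffic_mbits, bandwidth_mbps):
    """Network service time = traffic per display / available bandwidth."""
    return display_traffic_mbits / bandwidth_mbps

def latency_delay_sec(round_trip_latency_sec, chatter_round_trips):
    """Latency delay = round trip travel time x number of round trips."""
    return round_trip_latency_sec * chatter_round_trips

def network_utilization(throughput_dpm, display_traffic_mbits, bandwidth_mbps):
    """Fraction of bandwidth consumed by the peak display traffic."""
    traffic_mbps = throughput_dpm * display_traffic_mbits / 60.0
    return traffic_mbps / bandwidth_mbps

# Example: 2 Mb per display over a 45 Mbps WAN link, 20 ms latency, 10 round trips.
print(round(transport_time_sec(2.0, 45.0), 3))          # 0.044 sec transport time
print(round(latency_delay_sec(0.020, 10), 3))           # 0.2 sec latency delay
print(round(network_utilization(300, 2.0, 45.0), 2))    # 0.22 (22 percent) at 300 DPM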

What is system performance?

Figure 10.16 System performance must consider service time and queue time contributions for components across the distributed system environment.

Figure 10.16 shows the information provided by the CPT Workflow Performance Summary. Workflow service times and queue times are shown in a stacked bar chart. Response time, shown as the height of the stack, is the total time required to complete the work transaction.

The Workflow Performance Summary chart shows the performance of 10 separate benchmark tests.

  • Tests were performed on 2-core servers.
  • Number of concurrent batch processes was increased with each test run.
  • First two tests (1 and 2 batch processes) response time was about the same.
  • Response time increased linearly for tests with more than 2 batch processes.

Response time includes all of the processing times and queue times experienced in completing an average work transaction.

  • Platform service and queue times
  • Network transport and queue times
  • Latency travel time delays
  • Client service time

Performance Validation

Planning provides the first opportunity for building successful GIS operations. Getting started right, understanding your business needs, understanding how to translate business needs to network and platform loads, and establishing a system design that will satisfy peak user workflow requirements is the first step on your road to success.

Planning is an important first step – but it is not enough to ensure success. If you want to deliver a project within the initial planning budget, you need to identify opportunities along the way to measure progress toward your implementation goal. Compliance with performance goals should be tracked throughout initial development, integration, and deployment - integrate performance validation measurements along the way. Project success is achieved by tracking step by step progress toward your implementation goal, making appropriate adjustments along the way to deliver the final system within the planned project budget. The goal is to identify problems and provide solutions along the way - the earlier you identify a problem the easier it will be to fix. System performance can be managed like any other project task. We showed how to address software performance in Chapter 3, network performance in Chapter 5, and platform performance in Chapter 7. If you don’t measure your progress as these pieces come together, you will miss the opportunity to identify and make the appropriate adjustments needed to ensure success.

There are several opportunities throughout system development and deployment where you can measure progress toward meeting your performance goals. The CPT Test tab includes four tools you can use to translate live performance measurements to workflow service times – the workflow performance targets used to define your initial system design.

Map display render times

In Chapter 3 we shared the important factors that impact software performance. For Web mapping workflows, map complexity is the primary performance driver. Heavy map displays (lots of dynamic map layers and features included in each map extent) contribute to heavy server processing loads and network traffic. Simple maps generate lighter server loads and provide users with much quicker display performance. The first opportunity for building high performance map services is when you are authoring the map display.

There are two map rendering tools available on the CPT Test tab that use measured map rendering time to estimate equivalent workflow service times. One tool is available for translating ArcGIS for Desktop map rendering times (MXD) and the other tool is for translating ArcGIS for Server map service rendering times (MSD). With both tools, measured map rendering time is translated to workflow services times that can be used by the CPT Calculator and Design tabs for generating your platform solution. The idea is to validate that your map service will perform within your planned system budget by comparing the workflow service times generated from your measured rendering times with your initial workflow performance targets. If the service times exceed your planned budget, you should either adjust the map display complexity to perform within the initial planning budget or increase your system performance budget. The best time to make the map display complexity adjustment is during the map authoring process. Impacts on the project budget can be evaluated and proper adjustments made to ensure delivery success.

Measured MSD render time

MSD render time can be measured when publishing your map service using the service editor preview tool.

Warning: Make sure to measure a map location that represents the average map complexity or higher within your service area extent.
Measured MXD render time

MXD render time can be measured using the MXDperfstat ArcScript performance measurement tool.

Warning: Make sure to measure a map location that represents the average map complexity or higher within your service area extent.
Measured throughput and platform utilization

If you know your platform configuration, your measured peak workflow throughput, and the associated platform utilization, the CPT can calculate the workflow service times. The Test tab translation tools can be used to input the throughput (transactions per hour), the platform configuration (server platform selection), and the measured platform utilization, and Excel will translate these inputs to equivalent workflow service times.

Best practice: Performance metrics can be collected from benchmark test or live operations.
Warning: Make sure all measurements are collected for the same loads at the same time.
Measured peak concurrent users and platform utilization translator

If you don’t have measured throughput, concurrent users working on the system can be used to estimate throughput loads. This is a valuable tool for using real business activity to validate system capacity (business units identify peak user loads and IT staff identify server utilization observed during these loads). The Test tab can be used to input the throughput (peak concurrent users), the platform configuration (server platform selection), and the measured platform utilization, and Excel will translate these inputs to equivalent workflow service times.

Best practice: Analysis assumes peak users are working at web power user productivity (6 DPM) over a reasonable measurement period (10 minutes).
Warning: Make sure all measurements are collected for the same loads at the same time.
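The translation performed by the Test tab can be approximated with the same simple relationships used throughout this chapter. The sketch below assumes web power user productivity of 6 DPM, matching the best practice above; all numbers are illustrative.

# Translate measured peak concurrent users and platform utilization into an
# equivalent workflow service time (illustrative; assumes 6 DPM per user).

def service_time_from_users(peak_users, utilization, cores, dpm_per_user=6):
    """Throughput = users x productivity; service time = 60 x cores x utilization / throughput."""
    throughput_dpm = peak_users * dpm_per_user
    return 60.0 * cores * utilization / throughput_dpm

# Example: 150 peak users at 35 percent utilization on an 8-core server.
print(round(service_time_from_users(150, 0.35, 8), 3))   # 0.187 sec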
Move Test tab derived workflow service times to project workflows.

The CPT Workflow tab is where the results of your performance validation efforts come together. You can bring all your test results together, along with the original workflow service times, to validate that you are building a system that will perform and scale within your established project performance budget.

Best practice: Performance management, including performance validation throughout development and system delivery, is the key to implementation success. It is important that you identify the right technology and establish reasonable performance goals during your initial system design planning. It is even more important that you monitor progress in meeting these goals throughout final system development and delivery.

Capacity Planning

The models supporting Esri capacity planning today are based on the performance fundamentals introduced in this section. Platform capacity is determined by the software processing time (platform service time) and the number of platform cores, and is expressed in terms of peak displays per minute. Platform capacity (DPM) can be translated to supported concurrent users by dividing by the user productivity (DPM/client).

The performance fundamentals discussed in this chapter are basic concepts that apply to any computer environment, and an understanding of these fundamentals can establish a solid foundation for understanding system performance and scalability. Software and hardware technology will continue to change, and the terms and relationships identified in this section can be used to normalize these changes and help us understand what is required to support our system performance needs.

The next chapter will provide an overview of the Capacity Planning tools introduced throughout the previous chapters. The CPT videos at the end of this chapter focus on system performance validation – showing how the fundamental performance terms and relationships are used by the CPT to connect user requirements with system hardware loads, and how these loads are used to identify appropriate hardware requirements. Performance validation during system design and deployment is also a key topic, sharing how the CPT Test tools can be used to translate real performance measurements to equivalent workflow service times for performance validation.

CPT Video: Performance management

Previous Editions

Performance Management 32nd Edition (Spring 2013)
Performance Management 31st Edition (Fall 2012)
Performance Fundamentals 30th Edition (Fall 2011)
Performance Fundamentals 29th Edition (Spring 2011)
Performance Fundamentals 28th Edition (Fall 2010)
Performance Fundamentals 27th Edition (Spring 2010)

