Performance Management (CPT Demos) 43rd Edition

Capacity Planning Tool TABLE OF CONTENTS
1. System Design Process (CPT Demos) 43rd Edition
2. GIS Software Technology (CPT Demos) 43rd Edition
3. Software Performance (CPT Demos) 43rd Edition
4. Server Software Performance (CPT Demos) 43rd Edition
5. GIS Data Administration (CPT Demos) 43rd Edition
6. Network Communications (CPT Demos) 43rd Edition
7. Platform Performance (CPT Demos) 43rd Edition
9a. GIS Product Architecture (CPT Calculator Demos) 43rd Edition
9b. GIS Product Architecture (CPT Design Demos) 43rd Edition
10. Performance Management (CPT Demos) 43rd Edition
11a. City of Rome Year 1 (CPT Demos) 43rd Edition
11b. City of Rome Year 2 (CPT Demos) 43rd Edition


Arc18CapacityPlanning0901 release

Figure A1-10.1 Performance management involves building a design solution based on appropriate workflow performance targets and managing compliance throughout design and implementation to deliver within those targets.
Esri started developing simple system performance models in the early 1990s to document our understanding of distributed processing systems. These system performance models have been used by Esri system design consultants to support distributed computing hardware solutions since 1992. These same performance models have also been used to identify potential performance problems in existing computing environments.

The Capacity Planning Tool was introduced in 2008, incorporating the best of the traditional client/server and web services capacity planning models to provide an adaptive sizing methodology for future enterprise GIS operations. The Capacity Planning Tool methodology is easy to use and provides metrics for managing performance compliance during development, initial implementation, and system delivery.

Figure A1-10.1 shows how system architecture design models can be used for performance management.

  • System architecture design provides a framework for identifying a balanced system design and establishing reasonable software processing performance budgets.
  • Performance expectations are established based on selected software processing complexity and vendor published hardware processing capacity.
  • System design performance expectations can be represented by established software processing performance targets.
  • These performance targets can be translated into specific software performance milestones which can be validated during system deployment.
  • Software processing complexity and/or hardware processing capacity can be reviewed and adjusted as necessary at each deployment milestone to ensure the system is delivered within the established performance budget, as illustrated in the sketch below.
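As a simple illustration of that milestone compliance check, the following sketch compares measured workflow service times against the planning targets; the workflow names and numbers are hypothetical and are not CPT values.

```python
# Hypothetical milestone compliance check (illustration only, not part of the CPT):
# compare workflow service times measured at a deployment milestone against the
# performance targets established during system design.

targets = {"web_map_viewer": 0.35, "editor_workflow": 0.80}    # planned budget (sec/display)
measured = {"web_map_viewer": 0.42, "editor_workflow": 0.74}   # milestone measurements (sec/display)

for workflow, target in targets.items():
    actual = measured[workflow]
    status = "within budget" if actual <= target else "OVER budget - adjust complexity or capacity"
    print(f"{workflow}: target {target:.2f} s, measured {actual:.2f} s -> {status}")
```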

Multi-core platform performance

Figure A1-10.2 Web mapping performance impacted by multi-core queue times

Figure A1-10.2 shows the impact of multi-core server configurations on display performance.

The CPT Design tab is configured with six separate hardware tier configurations. The virtual server machine configurations for each tier use a different number of cores.

  • Platform Tier 01: Total of twelve 1-core VMs.
  • Platform Tier 02: Total of six 2-core VMs.
  • Platform Tier 03: Total of four 3-core VMs.
  • Platform Tier 04: Total of three 4-core VMs.
  • Platform Tier 05: Total of two 6-core VMs.
  • Platform Tier 06: Total of one 12-core VM.

All virtual machines are supported by a common host server platform tier and support a common published service workflow.

  • AGS REST 2D Hvy 100%Dyn 13x7 PNG24 workflow.
  • Xeon Gold 6132 28 core host platforms.
  • Host server utilization is maintained below 80 percent.
  • Hypervisor processing load is supported by host platform, with sufficient processing resources to avoid virtual machine processing contention.

CPT is configured to generate 80 percent peak throughput transaction rates for each tier configuration.

Workflow performance summary

  • Notice service time (column D) for each tier is the same.
  • Notice the same peak throughput is supported by each configuration.
  • Compare display performance for each configuration (12-core VM response time is less than 30% of 1-core VM response time).
Best practice: Higher capacity multi-core server machines provide better performance at high utilization loads than single-core platforms.
Note: The difference in peak load display response times can be explained by queuing theory.
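The note above can be made concrete with a standard M/M/c (Erlang C) queuing calculation, which is one way to model why a single 12-core machine queues less than a 1-core machine at the same per-core utilization. The service time and utilization values below are assumptions for illustration, not CPT workflow numbers.

```python
def mmc_response_time(service_time, utilization, servers):
    """Average response time for an M/M/c queue, using the Erlang C formula."""
    a = servers * utilization                       # offered load in Erlangs
    erlang_b = 1.0
    for k in range(1, servers + 1):                 # recursive Erlang B calculation
        erlang_b = a * erlang_b / (k + a * erlang_b)
    p_wait = erlang_b / (1 - utilization + utilization * erlang_b)   # Erlang C
    wait = p_wait * service_time / (servers * (1 - utilization))     # mean queue time
    return service_time + wait

# Same assumed service time (0.3 s) and per-core utilization (80%) in both cases:
print(f"1-core VM:  {mmc_response_time(0.3, 0.8, 1):.2f} s")    # about 1.5 s
print(f"12-core VM: {mmc_response_time(0.3, 0.8, 12):.2f} s")   # about 0.35 s
```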


ArcGIS Server Virtual Machine (VM) performance

Server virtualization provides many advantages for managing a data center environment, including server consolidation and system provisioning. The host platform hypervisor manages virtual server core access to available host resources.

The selected host platform supports both GIS Server virtual machine and host hypervisor processing loads.

  • GIS Server VM processing loads with filtered access through virtual server core.
  • Hypervisor processing loads with direct access to available host platform core.

The CPT Design configuration shows an AGS REST 2D V Hvy 100%Dyn 13x7 PNG24 workflow deployed in five separate ArcGIS Server site platform tier configurations.

  • 1x 8-core physical server configuration.
  • 1x 8-core virtual server configuration.
  • 2x 4-core virtual server configuration.
  • 4x 2-core virtual server configuration.
  • 8x 1-core virtual server configuration.


Host machine with Virtual Server machines and Hypervisor sharing same host platform physical core

Figure A1-10.3 CPT Design analysis: ArcGIS Server deployed in virtual server machines competing with hypervisor for access to host platform core

Figure A1-10.3 compares ArcGIS Server performance between an 8-core physical server configuration and four separate 8-core Virtual Server configurations. Each virtual server tier is supported by a dedicated 8-core host platform tier.

Platform tier configurations

  • 1x 8-core physical machine
  • 1x 8-core virtual machine
  • 2x 4-core virtual machines
  • 4x 2-core virtual machines
  • 8x 1-core virtual machines

Workflow performance summary.

  • Measured service time was the same for all five platform tier configurations.
  • Peak throughput (80% rollover) was the same for all four virtual server platform tier configurations.
  • Host platform tier had the same number of cores as the virtual server tier.
  • 8-core, 4-core, and 2-core VMs had roughly the same response time as the 8-core physical machine (queue time was dominated by host platform contention). Host processor cores were shared by the virtual machine and hypervisor processing loads.
  • 1-core display performance was slower due to higher queuing delays.
  • Virtual server machine throughput for all configurations peaks at less than 60 percent utilization due to hypervisor contention on the host platform.


Figure A1-10.4 Peak throughput chart: ArcGIS Server deployed in virtual server machines competing with hypervisor for access to host platform core

Figure A1-10.4 shows virtual server throughput was roughly 60% of the physical server throughput due to host platform hypervisor loads.

Warning: Hypervisor load will restrict virtual server throughput when host platform has limited processing resources.


Host machine with 50 percent additional core available for hypervisor loads

Figure A1-10.5 CPT Design analysis: ArcGIS Server deployed in virtual server machines with dedicated access to host platform core

Figure A1-10.5 compares ArcGIS Server performance between an 8-core physical server configuration and four separate 8-core Virtual Server machine configurations. Each virtual server tier is supported by a dedicated 12-core host platform tier.

Workflow performance summary.

  • Measured service time was the same for all five platform tier configurations.
  • Peak throughput (90% rollover) was the same for all five platform tier configurations.
  • Host platform tier had extra cores to support hypervisor processing loads.
  • 8-core virtual machine had same response time as 8-core physical machine.
  • 1-core, 2-core, and 4-core virtual machine display performance was slower due to queuing delays.


Figure A1-10.6 Peak throughput chart: ArcGIS Server deployed in virtual server machines with dedicated access to host platform core

Figure A1-10.6 shows virtual server throughput was the same as the physical server throughput.

Best practice: Provide host platform with at least 35 percent more processing capacity than required by the virtual server machines.
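A minimal arithmetic sketch of that sizing rule follows; the function and sample values are illustrative only, not a CPT formula.

```python
import math

# Illustration of the host-sizing best practice above: give the host at least
# 35 percent more processing capacity than the virtual machines it carries so
# hypervisor load does not cause contention. Not a CPT formula.

def minimum_host_cores(vm_cores_total, headroom=0.35):
    """Smallest whole number of host cores that preserves the suggested headroom."""
    return math.ceil(vm_cores_total * (1 + headroom))

# Figure A1-10.5 scenario: 8 cores of virtual machines per host tier.
print(minimum_host_cores(8))   # 11 cores; the demo above uses a 12-core host tier
```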


Performance Validation

Planning provides the first opportunity for building successful GIS operations. Getting started right, understanding your business needs, understanding how to translate business needs to network and platform loads, and establishing a system design that will satisfy peak user workflow requirements is the first step on your road to success.

Planning is an important first step, but it is not enough to ensure success. If you want to deliver a project within the initial planning budget, you need to identify opportunities along the way to measure progress toward your implementation goal. Compliance with performance goals should be tracked throughout initial development, integration, and deployment, with performance validation measurements integrated along the way. Project success comes from tracking step-by-step progress and making appropriate adjustments so the final system is delivered within the planned project budget. The earlier you identify a problem, the easier it will be to fix. System performance can be managed like any other project task. We showed how to address software performance in Chapter 3, network performance in Chapter 5, and platform performance in Chapter 7. If you don’t measure your progress as these pieces come together, you will miss the opportunity to identify and make the adjustments needed to ensure success.

There are several opportunities throughout system development and deployment where you can measure progress toward meeting your performance goals. The CPT Test tab includes four tools you can use to translate live performance measurements to workflow service times – the workflow performance targets used to define your initial system design.

Map display render times

In Chapter 3 we shared the important factors that impact software performance. For Web mapping workflows, map complexity is the primary performance driver. Heavy map displays (lots of dynamic map layers and features included in each map extent) contribute to heavy server processing loads and network traffic. Simple maps generate lighter server loads and provide users with much quicker display performance. The first opportunity for building high performance map services is when you are authoring the map display.

There are two map rendering tools available on the CPT Test tab that use measured map rendering time to estimate equivalent workflow service times. One tool translates ArcGIS Desktop map rendering times (MXD) and the other translates ArcGIS Server map service rendering times (Preview). With both tools, measured map rendering time is translated to workflow service times that can be used by the CPT Calculator and Design tabs for generating your platform solution. The idea is to validate that your map service will perform within your planned system budget by comparing the workflow service times generated from your measured rendering times with your initial workflow performance targets. If the service times exceed your planned budget, you should either adjust the map display complexity to perform within the initial planning budget or increase your system performance budget. The best time to make the map display complexity adjustment is during the map authoring process. Impacts on the project budget can be evaluated and proper adjustments made to ensure delivery success.

Measured MSD render time

Figure A1-10.7 The CPT Test validation tool used for translating measured map service Preview render times to workflow service times.

Figure A1-10.7 shows a tool you can use to translate measured map publishing Preview render time to workflow service times. Preview render time can be measured when publishing your map service using the service editor preview tool.

Warning: Make sure to measure a map location that represents the average map complexity or higher within your service area extent and adjust preview to average client display resolution. Use a local FGDB data source to collect proper measurement.
Note: Pan or Zoom of the ArcGIS Server service editor preview window will provide render time for fresh dynamic display

The Measured Performance tool can be used to estimate workflow service times from a measured Preview render time (a simplified sketch of the translation idea follows the steps below).

  • Select Preview in cell B12.
  • Select Test Platform processor configuration in cell A14 (workstation or server platform used to render the map). Selection is from platforms in the CPT Hardware tab.
  • Select Software Technology map service in cell A16.
  • If using a platform with turbo-boost capability, set maximum turbo-boost MHz in cell D13.
  • Enter measured Preview display render time in cell A18.

Baseline workflow service time is provided in range D15:21.

  • Workflow service times are also provided on the CPT Workflow tab under the Test Workflows section.
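The translation this tool performs is, conceptually, a normalization of the measured render time by the relative per-core performance of the test platform and the CPT baseline platform. The sketch below is only a simplified illustration of that idea; the benchmark values are assumptions, and the CPT's internal calculation may differ.

```python
# Simplified illustration (not the CPT's internal calculation) of normalizing a
# render time measured on a test platform to an equivalent baseline-platform
# service time using a per-core performance ratio. Benchmark values are assumed.

def baseline_service_time(measured_render_sec, test_percore_perf, baseline_percore_perf):
    """Scale the measured render time by the test-to-baseline per-core performance ratio."""
    return measured_render_sec * (test_percore_perf / baseline_percore_perf)

test_platform_perf = 58.0      # hypothetical per-core benchmark score of the test machine
baseline_platform_perf = 50.0  # hypothetical per-core benchmark score of the CPT baseline

measured_preview_render = 0.9  # seconds, measured in the service editor preview window
print(f"{baseline_service_time(measured_preview_render, test_platform_perf, baseline_platform_perf):.2f} s")
```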


Measured MXD render time

Figure A1-10.8 The CPT Test validation tool used for translating MXDPerfStat render times to workflow service times.

Figure A1-10.8 shows a tool you can use to translate measured MXDPerfStat render time to workflow service times. MXD render time can be measured using the MXDPerfStat ArcScript performance measurement tool.

Warning: Make sure to measure a map location that represents the average map complexity or higher within your service area extent. Adjust map display to average client display resolution. Use a local FGDB data source to collect proper measurement.
Note: MXDPerfStat tool uses the Windows rendering engine to measure display performance at a selected location and map display extent, identifying render time for each scale included in the selected map document

The CPT Measured Performance tool can be used to generate workflow service times from the measured MXD display render time.

  • Select MXDperfstat in cell B12.
  • Select Test Platform processor configuration in cell A14.
  • Select Software Technology map service in cell A16.
  • If using a platform with turbo-boost capability, set maximum turbo-boost MHz in cell D13.
  • Enter measured MXDperfstat display render time in cell A18.

Baseline workflow service time is provided in range D15:21.

  • Workflow service times are also provided on the CPT Workflow tab under the Test Workflows section.

Measured throughput and platform utilization

Figure A1-10.9 The CPT Test validation tool used for translating measured map service throughput and platform utilization to workflow service times.

If you know your platform configuration, your measured peak workflow throughput, and the associated platform utilization, the CPT can calculate the workflow service times. The Test tab translation tools can be used to input the throughput (transactions per hour), the platform configuration (server platform selection), and the measured platform utilization, and Excel will translate these inputs to equivalent workflow service times (the underlying arithmetic is sketched after the input steps below). Figure A1-10.9 shows the inputs required for completing this translation.

Best practice: Performance metrics can be collected from benchmark test or live operations.
Warning: Make sure all measurements are collected for the same loads at the same time.

The Live Results tool can be used to generate workflow service times from throughput and utilization measurements.

  • Enter throughput in cell A3.
  • Select test platform configuration in range E4:10.
  • Identify number of platform nodes in range D4:10.
  • Enter measured utilization for each platform in range B4:10.

Baseline workflow service time is provided in range G3:10.

  • Workflow service times are also provided on the CPT Workflow tab under the Test Workflows section.

Translate measured traffic to workflow transaction Mbpd.

  • Enter measured traffic in cell C3.
  • Enter test bandwidth in cell E3.
  • Workflow transaction Mbpd is provided in cell F3 and on the Workflow tab.
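The arithmetic behind this translation can be approximated with the utilization law (busy core-seconds equal transactions multiplied by service time); the sketch below uses assumed sample values and is not the CPT's exact calculation.

```python
# Back-of-the-envelope sketch of the translation idea based on the utilization law
# (busy core-seconds = transactions x service time). Sample values are assumptions
# for illustration; the CPT's exact arithmetic may differ.

def service_time_sec(throughput_tph, cores, utilization):
    """Processor service time per transaction for a tier measured at a given load."""
    busy_core_seconds_per_hour = cores * 3600 * utilization
    return busy_core_seconds_per_hour / throughput_tph

def traffic_mbpd(measured_traffic_mbps, throughput_tph):
    """Megabits per display transaction derived from measured network traffic."""
    displays_per_second = throughput_tph / 3600
    return measured_traffic_mbps / displays_per_second

# Example: an 8-core tier at 60 percent utilization serving 36,000 displays per hour,
# with 45 Mbps of traffic measured over the same peak period (assumed values).
print(f"service time: {service_time_sec(36_000, 8, 0.60):.2f} s/display")  # 0.48 s
print(f"traffic:      {traffic_mbpd(45, 36_000):.1f} Mbpd")                # 4.5 Mbpd
```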


Measured peak concurrent users and platform utilization translator

Figure A1-20.26 CPT Test validation tool used for translating map service peak concurrent users and platform utilization to workflow service times.

If you don’t have measured throughput, concurrent users working on the system can be used to estimate throughput loads. This is a valuable tool for using real business activity to validate system capacity (business units identify peak user loads and IT staff identify server utilization observed during these loads). The Test tab can be used to input the throughput (peak concurrent users), the platform configuration (server platform selection), and the measured platform utilization, and Excel will translate these inputs to equivalent workflow service times. Figure A1-20.26 shows the inputs required for completing this translation.

Best practice: Analysis assumes peak users are working at web power user productivity (6 DPM) over a reasonable measurement period (10 minutes).
Warning: Make sure all measurements are collected for the same loads at the same time.
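Based on the productivity assumption stated in the best practice above (6 displays per minute per web power user), peak concurrent users convert to an estimated transaction rate as sketched below; this is an illustration of that assumption, not the CPT's internal calculation.

```python
# Illustration of the stated productivity assumption: peak concurrent users working
# at web power user productivity (6 displays per minute) converted to an hourly
# transaction rate. Not the CPT's internal calculation.

DISPLAYS_PER_MINUTE = 6   # web power user productivity assumed by the analysis

def estimated_throughput_tph(peak_concurrent_users, dpm=DISPLAYS_PER_MINUTE):
    """Estimated peak transactions per hour from a peak concurrent user count."""
    return peak_concurrent_users * dpm * 60

print(estimated_throughput_tph(100))   # 100 users -> 36,000 displays per hour
```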

The Live Results tool can be used to generate workflow service times from peak concurrent users and utilization measurements.

  • Enter peak concurrent users in cell A5.
  • Select test platform configuration in range E4:10.
  • Identify number of platform nodes in range D4:10.
  • Enter measured utilization for each platform in range B4:10.

Baseline workflow service time is provided in range G3:10.

  • Workflow service times are also provided on the CPT Workflow tab under the Test Workflows section.

Translate measured traffic to workflow transaction Mbpd.

  • Enter measured traffic in cell C3.
  • Enter test bandwidth in cell E3.
  • Workflow transaction Mbpd is provided in cell F3 and on the Workflow tab.


Move Test tab derived workflow service times to project workflows.

Figure A1-10.10 Workflows generated on the CPT Test tab can be transferred to your project workflows on the CPT Workflow tab.

The CPT Workflow tab is where the results of your performance validation efforts come together. Figure A1-10.10 shows how each of these test results can be brought together, along with the original workflow service times, to validate that you are building a system that will perform and scale within your established project performance budget.

Moving workflows to your Project Workflow list.

  • Test workflow service times show up in the Test Workflows section on the Workflow Tab.
  • Add an extra workflow in your Project Workflows to use as a template.
  • Copy blue portion of Test Workflow.
  • Select first column cell of template workflow.
  • Paste special/values to your new workflow template in the Project workflows.
  • Complete the Description of new workflow in column AB.
  • Insert nickname in workflow cell in Column A.
Best practice: Performance management, including performance validation throughout development and system delivery, is the key to implementation success. It is important that you identify the right technology and establish reasonable performance goals during your initial system design planning. It is even more important that you monitor progress in meeting these goals throughout final system development and delivery.

CPT Capacity Planning videos

The Chapter 10 Capacity Planning video shows how to use the CPT Design adjust function to identify the performance impact of undersized systems, how to represent a batch process in your design, and how to use the CPT to translate measured system performance to workflow service times to validate that deployed services are performing within the performance budget established in your system design.

