Why Real Content Matters for Workstation App Performance Benchmarking

SPEC performance benchmarks are deeply important to the CAD/CAM and 3D communities, helping enterprises evaluate workstations and professional apps.

COMPANIES SPEND BILLIONS OF DOLLARS on computer graphics application software and billions more on hardware to run the software, so they want to know which platforms are best for balancing performance and cost. This is particularly important for graphics applications, where end-user requirements vary dramatically, from designing a smartphone case to modeling an airplane fuselage. 

SPEC Benchmarks

There was a time when vendors created their own performance benchmarks, but buyers couldn’t do fair comparisons among competing solutions, so the benchmarks were of limited value. The non-profit Standard Performance Evaluation Corporation (SPEC) was formed to create standardized performance benchmarks of value to the entire industry. They help buyers make smarter purchasing decisions, and hardware and software vendors use them to understand and compare performance so they can improve their products.

One of SPEC’s five groups, the Graphics and Workstation Performance Group (GWPG), includes the SPEC Application Performance Characterization (SPECapc) committee, which develops benchmarks spanning popular CAD/CAM, digital content creation, and visualization applications provided by independent software vendors (ISVs), including Solidworks, 3ds Max, Creo, and Maya.

This article explains the value of using “real content” – actual models used by the industry and hobbyists – in SPEC benchmarks, how SPEC obtains this content, and how you may be able to help.

Fair Benchmarks Need Real Models

The goal of SPEC is to create meaningful benchmarks that represent the real world and are relevant to each industry. Because workstation application performance benchmarks measure how real-world applications perform on particular hardware, they must be based on workloads running in those applications.

The SPECapc benchmarks require the actual licensed ISV application to be installed on a test system, so if that system is representative of the hardware a business is considering, the benchmark provides useful data on how the application will perform in the buyer's environment.

SPECapc for Solidworks 2022

The SPECapc for Solidworks 2022 benchmark is a prime example of the valuable role SPEC's GWPG plays for the entire CAD/CAM/3D industry. It includes numerous real-world models and workflows to help Solidworks users test and evaluate workstation hardware.

However, system hardware is not the only consideration. If a benchmark doesn't accurately measure how users actually use an application, its results can be meaningless. For example, if a benchmark measures wireframe graphics modes or non-photorealistic rendering in Solidworks, but users don't rely on those features in their workflows, then that benchmark provides little value and can even misrepresent the value of running the application on a specific hardware configuration.

It's also essential to understand the range of uses for a particular application. For example, some users rely on high-resolution textures, while others use complicated shader networks that procedurally generate materials such as reflective chrome. Some users model everything in extreme detail, while others rely on instancing, as the sketch below illustrates. The combination of the application and the hardware behaves very differently in these scenarios, and without this understanding, a benchmark could fail to measure workloads that matter to many users.
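To make the instancing difference concrete, here is a minimal sketch using Maya's Python API (maya.cmds); the object name and copy count are illustrative and not drawn from any SPECapc workload.

import maya.cmds as cmds

# A dense sphere stands in for a detailed source part.
src = cmds.polySphere(name="detailPart", subdivisionsX=64, subdivisionsY=64)[0]

# Heavy scenario: each duplicate owns a full, independent copy of the mesh.
duplicates = [cmds.duplicate(src)[0] for _ in range(1000)]

# Light scenario: each instance shares the original mesh data.
instances = [cmds.instance(src)[0] for _ in range(1000)]

The duplicated scene stores a thousand independent meshes while the instanced scene stores one, so memory footprint and draw behavior diverge sharply; a benchmark that exercises only one of these patterns misses the other population of users.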

As a result, a truly useful benchmark must incorporate as many representative models as possible to reflect the breadth of the application's user community.

Collaboration: It’s Key

SPEC relies on three groups to help it obtain and incorporate new models into its benchmarks: hardware vendors, ISVs, and end users. All three groups have a huge stake in providing new models. Let’s review each case:

Vendors

AMD, Intel, and Nvidia, hardware vendors that are also SPEC members, have all contributed models in the past. These companies have a wealth of knowledge about graphics performance and digital content, and they constantly work with models in a variety of applications. They benefit from the benchmarks by being able to compare their solutions with the competition, which pushes them to improve their products so they can gain greater market share.

ISVs

SPEC also contacts the ISVs (Independent Software Vendors) behind applications like Solidworks, 3ds Max, Creo, and Maya. ISVs know their products best and have insight into how customers use their applications. They are also important in helping SPEC incorporate new models to ensure the resulting measurements are accurate. ISVs are generally eager to contribute models in support of accurate benchmarks because better benchmarks help them provide better products that meet customers’ performance expectations.

SPEC has released the new SPECapc for Maya 2024 benchmark. (Image: SPECapc for Maya 2024)

SPEC also goes to ISVs for help in implementing a new model in a benchmark because the more specific the model is to the application, the more applicable and useful the benchmark will be to particular market segments. Additionally, ISVs suggest the best methods for incorporating new application features into a benchmark release. For example, a GWPG member company wanted to implement Geometry Caching in the Autodesk Maya benchmark. After discussing the plan with Autodesk, the Maya team suggested and contributed a scene that would take advantage of and demonstrate the new feature.
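As a rough illustration of what exercising such a feature involves, the sketch below uses Maya's cacheFile command from Python to write a geometry cache for a deforming mesh and time the operation; the shape node name, frame range, and output path are hypothetical, not details of the contributed scene.

import time
import maya.cmds as cmds

start = time.perf_counter()
cmds.cacheFile(
    fileName="benchCache",        # hypothetical cache name
    directory="/tmp/mayaCache",   # hypothetical output location
    points="clothMeshShape",      # hypothetical deforming mesh shape node
    startTime=1,
    endTime=240,                  # cache 240 frames of deformation
)
print(f"Geometry cache written in {time.perf_counter() - start:.2f} s")

A scene supplied by the ISV, like the one Autodesk contributed, ensures the cached geometry is heavy enough for a timing like this to separate fast systems from slow ones.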

End Users

Getting complex models from users is one of the best opportunities to improve SPEC benchmarks because these models, by definition, reflect how users are using the applications. Further, the broad user bases of these applications increase the likelihood of discovering new use cases that the ISVs, hardware vendors, and SPEC never imagined. 

The Collaboration Process 

Simply obtaining a model doesn't mean SPEC can easily incorporate it into a benchmark. A collaborative process, with back-and-forth among all the players, is required to ensure the model is measured accurately.

SPEC develops beta builds of a benchmark, and SPEC members test it on various sets of their respective hardware. SPEC then reaches out to the ISVs, and even users if appropriate, to help determine if the results are accurate and reflect real-world experiences. Perhaps the benchmark is crashing, or a result seems particularly high or low. By working iteratively among the three groups, SPEC can be sure that issues are solved expeditiously and that the benchmark is as accurate as possible. 
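The sanity check on results can be as simple as flagging scores that stray far from the pack before the humans dig in. The snippet below is a hypothetical illustration in plain Python; the scores and the 25 percent threshold are invented for the example.

from statistics import median

scores = {"systemA": 102.4, "systemB": 98.7, "systemC": 41.2, "systemD": 100.9}
mid = median(scores.values())

for system, score in scores.items():
    # Flag any result more than 25% away from the median for a closer look.
    if abs(score - mid) / mid > 0.25:
        print(f"{system}: score {score} deviates from median {mid:.1f}")

Here systemC would be flagged, prompting exactly the iteration described above: is a driver misconfigured, is the benchmark buggy, or is the result real?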

A great example of this collaboration is how SPEC incorporated the gigantic and highly complex SpaceShipCrawler (NASA Crawler-Transporter model) into the SPECapc for Solidworks 2020 benchmark. The crawler is the platform NASA used to move the Saturn V (and later the Space Shuttle) to the launch pad. Jay Patterson, a Solidworks reseller, took years to create the model, which contains over 12 million points. A Solidworks representative showed the model to a SPECapc member, who then reached out to Jay for permission to use it in the Solidworks benchmark. The model was so large that SPEC had to use Solidworks' Large Assembly Mode to load it correctly, and Dassault Systèmes assisted in the development and testing process.

With millions of dollars at stake in hardware purchasing decisions—not to mention the success of the use case driving the purchase—enterprises need accurate benchmarks to help them in the decision process. ISVs and vendors also benefit from high-quality benchmarks because they can deliver better products. And the more real-world models that SPECapc can incorporate into its workstation application performance benchmarks, the better those benchmarks will serve application users and the industry. 

You may be able to help. Whether you’re a design engineer, game designer, or hobbyist, if you have an unusual or complex graphics model for one of the applications covered by a SPECapc benchmark, consider sharing it with SPEC at [email protected].

 


About the Author

Until recently, Trey Morton served as the SPEC Application Performance Characterization (SPECapc) Committee Chair for the Standard Performance Evaluation Corporation (SPEC), a non-profit corporation formed to establish, maintain, and endorse standardized benchmarks and tools to evaluate performance and energy efficiency for the newest generation of computing systems. 
