Decades of Leadership in the Software Testing Industry

AscentialTest has been forming in the minds of our founders for several years. After building QA Partner/SilkTest at Segue Software in the 90s, they got the opportunity to use their product extensively in the field. As consultants implementing test automation with various tools for more than 10 years, they’ve formed a thorough assessment of the state of the field. What they found surprised them: automated tests were too expensive to build.

Furthermore, the requirement for programming skills to move beyond the superficial tests that can be recorded left out too many members of the team. They also discovered that a large portion of test development time is spent ‘writing code’ to work around object recognition deficiencies. Some users estimate that time to approach 80%. Clearly this explains why the decision to adopt automation is not always straightforward. With a fresh understanding of the challenges and our heads full of ideas, we’re energized and excited to bring about the next paradigm shift in test automation.

Brian has been working in the field of test automation for more than 30 years. He began as a QA Engineer using the testing tools available in the early 1980s. He joined Segue Software in its infancy and served as the EVP of R&D during that company’s golden years. Brian formed Star Quality, a consulting firm specializing in test automation, in the late 90s. After 12 years of experience in the trenches, he’s excited to be building the next generation testing tool that will increase the productivity and effectiveness of test and development teams.

Dave is a pioneer in the field of automated testing. Dave developed ATF, one of the first automation tools, more than 20 years ago. He was a founder and Chief Architect of Segue Software, Inc., the original creators of QA Partner/SilkTest. Dave believes that no testing tool can be easy to use without a solid foundation. That’s why he is committed to providing AscentialTest’s universal agent with the most powerful object recognition engine in the industry.

Pricing Plans - Solo Testers to Large Enterprise Teams

Not seeing a pricing package that fits your needs? Want to compare us directly to a competitor’s offering? Tell us more and let us take a shot at providing a custom or lower-cost package.

  • Subscription / Solo Testers (“The Selenium Killer”): starting at $200/mo
  • Subscription / Teams: starting at $380/mo
  • Subscription / Unlimited: starting at $6k/mo
  • Perpetual License / Teams: starting at $8.4k
  • Perpetual License / Unlimited

Other Resources

Upcoming Webinar

Building Reusable Verification Tests

Tue, March 25, 2025 10:00 AM - 11:00 AM EDT

The participant will learn:
* How indirection and dynamic instantiation are used to build generic test steps
* How to create universal verification steps and functions
* About built-in functions that provide information about the current state of the target application
* How metadata can be used to set object-related test data
* How to create a test that walks the main menu of an application, opening and closing each dialog

See How We Achieved an 83% Five Star Rating on Gartner

See how AscentialTest compares to the competition, using 18 key features and functions across 14 different development platforms

  • Supported Platforms (14)
  • Integrated Test Planning
  • Object Recognition
  • Object Repository
  • Test Frameworks
  • Foreign Language UX
  • Drag & Drop Generation of Actions
  • Reusable Steps
  • Scripting
  • Manual Testing
  • Parallel Execution
  • Integrated Defect Tracking
  • Test Localization
  • Test Portability
  • Test Management
  • Test Execution Management
  • Integrated Version Control
  • Integrated Test Management

Gartner 5 Star Reviews by vendor: Eggplant, TestComplete, Tosca, Ranorex Studio (percentages are displayed in the interactive comparison).

Features

Object Recognition

AscentialTest recognizes application elements regardless of complexity without coding hacks or reliance on fragile OCR

Drag & Drop

Creating tests with our drag & drop editors is faster and more efficient than recording or scripting

Reusable Elements

Reuse promotes faster building and maintenance of tests

No More Test Frameworks

Stop creating test frameworks and focus on your testing

Test Management Tools

Testing projects can get messy without good management tools

Complex Objects

Let AscentialTest do the heavy lifting when it comes to tables, grids and trees

Integrations

Testing productivity involves meaningful integration with collaboration tools

Powerbuilder Expertise

We are the recognized go-to testing platform for PowerBuilder, covering PB v6.x all the way to the current version.

Platforms

PowerBuilder

Web Apps

.Net

Omnis

Terminal

PDF

API Testing

Zeenyx Academy

A Thousand G2 “Dislikes” (Part 2): Testing Tools Have Speed Issues


Some Typical Complaints – Real Examples

Based on our survey of over a thousand G2 reviews of automated software testing tools, we found that users complain frequently about test execution speed, and with good reason: execution speed can have a significant impact on testing cycles and on an organization’s ability to meet its target deployment dates. The complaints were more or less evenly distributed across the different testing platforms. By our estimate, about 10-15% of all complaints were related to speed issues.

Here are some of the most egregious comments (actual quotes):

  • “quite resource hungry at times and can slow down.”
  • “tends to slow up when it comes to big object repository.”
  • “running time is slow between navigation”
  • “Less stability in the application and slow in execution”
  • “It can be a tad bit slow in object recognition and on some machines it gets stuck when drilling down to certain objects”
  • “The thing that i dislike most is that the performance of the tool is pretty slow as it compacts all the artifacts and also when the size of the objects are increased then the application gets crashed sometimes.”
  • “The workspace is very huge. As we move ahead with it the workspace size keeps increasing and the performance is slow after certain time.”

The Reasons Why Automated Tests Run Slowly

There are many reasons why automated tests run slowly and, while it’s difficult to determine the specific cause for slow playback time for a given automated testing tool without an architectural review, we can make some educated guesses:

  • Inefficient Object Recognition Architecture (possibly because Reflection or Windows Accessibility is used), which results in heavy use of RAM
  • Poor Synchronization (requires the use of waits and sleeps in test scripts)
  • Lack of Distributed Testing Capability (across multiple target machines)

Don’t Bother Calling Support (It’s a Flaw In The Product Design)

While only the testing tool developers can identify the deficiencies in their respective architectures, users can easily tell if there are synchronization issues with their selected tool. If you find that your tests run smoothly when the target application is slow to respond but fail when responsiveness is improved, then the tool lacks the ability to detect whether the target application is busy.

You might wonder why these testing tool companies don’t just fix the problem. But don’t bother calling Tech Support, because the cause is not a ‘bug’ that can be fixed. The problem is much bigger than that: the tool was not designed with performance in mind, and the only true solution is to completely redesign the testing tool.

Pro Tip – Adding Sleeps is Not a Good Solution

If you do decide to call for technical support, the workaround you are likely to be given is to add sleep or wait statements to your tests to get them to run reliably. Unfortunately, those statements add a lot of extra time to the duration of the test run. Every time a sleep is added, time is wasted, because a sleep always waits the specified time whether it is needed or not.

Since it is difficult to predict how much time is needed to maintain synchronization, testers have no choice but to add the maximum amount of time that the application might take to respond. Sleep time can add an additional 25% to the duration of a test.
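To make the difference concrete, here is a minimal sketch using Selenium’s Python bindings (the URL and element ID are placeholders, not from any particular application):

    import time

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://example.com/orders")  # placeholder URL

    # Fixed sleep: always burns the full 10 seconds, even if the grid
    # is ready after 1 second.
    time.sleep(10)
    driver.find_element(By.ID, "orders-grid").click()

    # Condition-based wait: returns the moment the element is ready,
    # waiting the full 10 seconds only in the worst case.
    grid = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "orders-grid"))
    )
    grid.click()

The fixed sleep wastes its full duration on every run; the condition-based wait wastes none, but a tester who only has sleeps available must size them for the worst case.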

If you do the math, you’ll find it adds a significant amount of time to the duration of a testing cycle. Let’s say you have 5000 tests that take an average of 4 minutes each:


5000 tests × 4 minutes = 20,000 minutes ≈ 333 hours

Adding 25% more time for Sleep statements yields:


333 hours + 25% ≈ 416 hours
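The same arithmetic as a quick check in plain Python, using the 25% overhead figure from above:

    TESTS = 5000
    MINUTES_PER_TEST = 4
    SLEEP_OVERHEAD = 0.25   # sleeps pad each test by roughly 25%

    base_hours = round(TESTS * MINUTES_PER_TEST / 60)        # 20,000 min -> 333 hours
    padded_hours = round(base_hours * (1 + SLEEP_OVERHEAD))  # -> 416 hours

    print(f"Base cycle:  {base_hours} hours")
    print(f"With sleeps: {padded_hours} hours")

That is more than 80 extra hours of machine time per cycle, spent doing nothing.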

Distributed Testing Reduces the Duration of Test Cycles

Unrelated to the efficiency of object recognition or synchronization mechanisms, some testing tools provide the ability to distribute automated tests across multiple physical or virtual machines. Regardless of the execution speed of your selected testing tool, the ability to run tests on multiple machines improves throughput significantly: rather than running tests one at a time, this feature can cut execution time roughly in half just by adding a second target machine. Tools that provide this capability include a multi-machine controller that sends tests to physical or virtual target machines equipped with ‘runtime’ components that carry out test execution. The results are sent back to the controller machine or a centralized database, where they are compiled for reporting and metrics.
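Conceptually, a multi-machine controller is just a dispatcher that keeps every runner busy. Here is a minimal Python sketch of the idea; the machine names, test IDs and the dispatch step are hypothetical stand-ins, not any vendor’s actual controller API:

    from concurrent.futures import ThreadPoolExecutor
    from queue import Queue

    # Hypothetical pool of runner machines; a real controller talks to
    # 'runtime' agents installed on each physical or virtual machine.
    MACHINES = ["vm-01", "vm-02", "vm-03", "vm-04"]
    TESTS = [f"test_{n:04d}" for n in range(1, 101)]

    idle = Queue()
    for m in MACHINES:
        idle.put(m)

    def run_distributed(test_id):
        machine = idle.get()   # block until a machine is free
        try:
            # Placeholder: dispatch the test to the remote runtime and
            # collect its result for central reporting.
            return test_id, machine, "pass"
        finally:
            idle.put(machine)  # hand the machine back to the pool

    # With 4 machines, roughly 4 tests run at once, quartering wall time.
    with ThreadPoolExecutor(max_workers=len(MACHINES)) as pool:
        for test_id, machine, result in pool.map(run_distributed, TESTS):
            print(f"{test_id} ran on {machine}: {result}")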

Even without the architectural issues that cause slow execution speed as noted above, testing tools that lack distributed testing capability deliver throughput far below that of competitive solutions that provide it.

How AscentialTest Overcomes Test Execution Speed Issues

We take execution time seriously at Zeenyx Software because we understand how increasing testing productivity and reducing the duration of testing cycles impacts deployment of critical software. Our founders have been in the business of building automated testing tools for decades and have learned how to optimize the speed of object recognition and synchronization.

On web-based applications, including browsers, CEF and MS Webview2, AscentialTest implements a ‘webview’ object that reports on the state of the target application. AscentialTest automatically waits while the web application is busy. As soon as the state is ‘ready’, the next event is executed. Not a second is wasted. Similar mechanisms have been implemented for other target platforms. In situations where it is not possible to determine the ‘ready state’ of an application, AscentialTest automatically waits for objects to appear based on a timeout set in the testing project, which can be overridden locally as needed. Two events, ‘WaitUntilExists’ and ‘WaitWhileExists’, are available for special cases. They have been implemented to optimize responsiveness: they wait only until an object appears or disappears, respectively, without wasting any unnecessary time.
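AscentialTest tests are not written in Python, but conceptually these two events behave like condition polls that return the instant their condition is met. A rough sketch, with object_exists standing in for the tool’s object lookup:

    import time

    def wait_until_exists(object_exists, timeout=30.0, interval=0.1):
        """Return True as soon as the object appears; give up only at the timeout."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if object_exists():
                return True
            time.sleep(interval)
        return False

    def wait_while_exists(object_exists, timeout=30.0, interval=0.1):
        """Return True as soon as the object disappears; give up only at the timeout."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if not object_exists():
                return True
            time.sleep(interval)
        return False

Unlike a fixed sleep, the cost of such a poll is only as long as the condition actually takes to become true.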

AscentialTest also provides the ability to target remote physical or virtual machines, with a multi-machine controller and scheduler that let the user specify how and when tests should be distributed across the available target machines. A customer (hint: office supply giant) shared their runtime statistics with us. Using a farm of virtual machines, they were able to reduce their testing cycle from three staff members over a two-week period to one staff person over two days.

Speedtest: AscentialTest v. Selenium

The same customer reported that AscentialTest runs at least 4 times faster than Selenium. Their testing cycle includes over ten thousand tests. If the average test runs for 4 minutes under AscentialTest, here is the comparison in execution time:

AscentialTest: 667 hours to execute 10,000 tests
Selenium: 2,668 hours to execute 10,000 tests

*Assumes 10,000 tests averaging 4 minutes each under AscentialTest, with Selenium running 4× slower.

Video: AscentialTest in Action

Here is a short video of AscentialTest in action, running automated tests on a PowerBuilder application and a browser-based application.


As you can see, AscentialTest runs as fast as the target application can respond. At Zeenyx, we believe that it’s reasonable to expect optimal synchronization, which is what we have been able to achieve in AscentialTest.

Do Your Own Comparison

Fortunately for the test engineer evaluating and comparing automated testing solutions, the speed of test execution is easy to ascertain as part of a trial. We encourage you to perform your own side-by-side speed trial. Let us know how it goes!

Share the Post:

Related Content

General, Thought Leadership

1000 G2 Reviews Later – Software Testing Vendor Lock-In Drives Bad Customer Support

Instructional Content, Thought Leadership

It’s Finally Here: A Custom Elements Kit for Automated Testing

Instructional Content

PowerBuilder App Testing: What To Know, Which Tools Work

Thought Leadership

Why We Moved to a 6 Month Free Trial

Get Started Today: 6 Month Free Trial

Click here to download the Host ID generator. The download package includes full instructions. Please send us your details below to speed the process along.

Get a Competitive Quote

Our Distribution and Consulting Partners

Appeon

Cigniti

Loop Software

Marlabs

Matryxsoft Tech

Novalys

OCS Consulting


What We Are Up To Next

TFS/Azure Integration

Check out our new Azure extension for executing AscentialTest command lines, available in the Azure Marketplace.

We look forward to offering additional integration with Microsoft’s Team Foundation Server and Azure through defect tracking and version control soon.

Omnis Studio

Automated testing support for Omnis Studio is coming soon. All Omnis elements will be supported, including complex grids, list boxes, tables and tree views.

Custom Objects

Add automated testing support easily for your custom application elements by responding to simple test API messages with JSON strings. This feature will open up the possibility of testing any GUI element and will be offered free of charge.
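As a rough illustration of the idea (the ‘describe’ message, the field names and the CustomChart element below are hypothetical placeholders, not the published protocol), an application might answer a test API message like this:

    import json
    from dataclasses import dataclass

    # Hypothetical custom element; fields are illustrative only.
    @dataclass
    class CustomChart:
        name: str
        series_count: int
        selected_point: int

    def handle_test_api_message(widget, message):
        """Answer a test API message with a JSON string describing the element."""
        if message == "describe":
            return json.dumps({
                "type": "CustomChart",
                "name": widget.name,
                "state": {"series": widget.series_count,
                          "selected": widget.selected_point},
            })
        return json.dumps({"error": f"unknown message: {message}"})

    print(handle_test_api_message(CustomChart("salesChart", 3, 7), "describe"))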

Test Debugger

Set breakpoints in your automated tests, view global and local variables and execute statements to make it easier to debug tests running on any of our supported platforms.

Thank you for submitting your inquiry!

We will get back to you ASAP. We aim to respond to all inquiries within 24 hours.