Decades of Leadership in the Software Testing Industry

AscentialTest has been forming in the minds of our founders for several years. After building QA Partner/SilkTest at Segue Software in the 90s, they got the opportunity to use their product extensively in the field. As consultants implementing test automation with various tools for more than 10 years, they’ve formed a thorough assessment of the state of the field. What they found surprised them: automated tests were too expensive to build.

Furthermore, the requirement for programming skills to move beyond the superficial tests that can be recorded left out too many members of the team. They also discovered that a large portion of test development time is spent ‘writing code’ to work around object recognition deficiencies; some users estimate that time approaches 80%. That alone explains why the decision to adopt automation is not always straightforward. With a fresh understanding of the challenges and our heads full of ideas, we’re energized and excited to bring about the next paradigm shift in test automation.

Brian has been working in the field of test automation for more than 30 years. He began as a QA Engineer using the testing tools available in the early 1980s. He joined Segue Software in its infancy and served as the EVP of R&D during that company’s golden years. Brian formed Star Quality, a consulting firm specializing in test automation, in the late 1990s. After 12 years of experience in the trenches, he’s excited to be building the next-generation testing tool that will increase the productivity and effectiveness of test and development teams.

Dave is a pioneer in the field of automated testing. Dave developed ATF, one of the first automation tools, more than 20 years ago. He was a founder and Chief Architect of Segue Software, Inc., the original creators of QA Partner/SilkTest. Dave believes that no testing tool can be easy to use without a solid foundation. That’s why he is committed to providing AscentialTest’s universal agent with the most powerful object recognition engine in the industry.

Pricing Plans - Solo Testers to Large Enterprise Teams

Not seeing a pricing package that fits your needs? Want to compare us directly to a competitor’s offering? Tell us more and let us take a shot at providing a custom or lower-cost package.

  • Subscription, Solo Testers: starting at $200/mo (“The Selenium Killer”)
  • Subscription, Teams: starting at $380/mo
  • Subscription, Unlimited: starting at $6k/mo
  • Perpetual License, Teams: starting at $8.4k
  • Perpetual License, Unlimited

Other Resources

Upcoming Webinar

Building Reusable Verification Tests

Tue, Jun 25, 2024 10:00 AM - 11:00 AM EDT

The participant will learn:
* How indirection and dynamic instantiation are used to build generic test steps
* How to create universal verification steps and functions
* About built-in functions that provide information about the current state of the target application
* How metadata can be used to set object-related test data
* How to create a test that walks the main menu of an application, opening and closing each dialog

See How We Achieved an 83% Five Star Rating on Gartner

See how AscentialTest compares to the competition across 18 key features and functions and 14 different development platforms:

  • Supported Platforms (14)
  • Integrated Test Planning
  • Object Recognition
  • Object Repository
  • Test Frameworks
  • Foreign Language UX
  • Drag Generation of Actions
  • Reusable Steps
  • Scripting
  • Manual Testing
  • Parallel Execution
  • Integrated Defect Tracking
  • Test Localization
  • Test Portability
  • Test Management
  • Test Execution Management
  • Integrated Version Control
  • Integrated Test Management

Gartner five-star review percentages are compared for Eggplant, TestComplete, Tosca and Ranorex Studio.

Zeenyx Academy

Managing Test Dependencies

While it is good practice to design automated tests to be independent, it is not always expeditious. There are situations where the data state of a test environment is difficult to manage, or where the ‘stateful’ nature of a transaction requires components to be built one upon another. In these cases, a system for managing test dependencies may be required.

The components of the solution

The mechanism for managing test dependencies is based on an .ini file that records the status of each test. The format of the file is simple: each .ini section is labeled with the ‘Test Identifier’ that is built into AscentialTest, and the value of the key ‘Status’ is set to either ‘Pass’ or ‘Fail’, as displayed in the example below.
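As an illustration (the section names below are invented; in practice each section is the ‘TestId’ that AscentialTest assigns), a ‘TestCaseStatus.ini’ might look like this:

    [MyProject.SmokePlan.CreateCustomer]
    Status=Pass

    [MyProject.SmokePlan.EditCustomer]
    Status=Fail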

Project Settings

In order to implement the test dependency system described in this paper, the user must enable the ‘Test Identifier’ feature of AscentialTest:

Project Data

Enabling the ‘Test Identifier’ adds a field to the Project Data dialog. The user must provide a value, typically the name of the AscentialTest project:

Plan Data

Enabling the ‘Test Identifier’ also adds a field to the Plan Data dialog. The user must provide a value, typically the name of the AscentialTest plan:

Example: DependentTestAppState

The ‘TestCaseStatus.ini’ file described above is generated in the ‘OnFinish’ action of the AscentialTest appstate:

Notice that the action makes use of the attribute ‘TestId’, which is automatically assigned to each test in every test plan when the ‘Test Identifier’ feature is enabled. If the error count of the current test is zero, the ‘Status’ written to the ‘TestCaseStatus.ini’ file is ‘Pass’; otherwise, ‘Fail’ is written.
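A rough Python sketch of that logic (this is not AscentialTest code; the function signature, error count and file path are placeholders for illustration):

    import configparser

    def on_finish(test_id: str, error_count: int,
                  status_file: str = "TestCaseStatus.ini") -> None:
        """Record the status of the test that just finished."""
        config = configparser.ConfigParser()
        config.read(status_file)  # keep results written by earlier tests
        config[test_id] = {"Status": "Pass" if error_count == 0 else "Fail"}
        with open(status_file, "w") as f:
            config.write(f)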

ReturnPrerequisiteTestStatus

The function ‘ReturnPrerequisiteTestStatus’ returns the status of a test by reading the ‘TestCaseStatus.ini’ file using the ‘TestID’ that is passed in as a parameter:
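A comparable Python sketch (again, the names and the ‘Not Run’ fallback are assumptions for illustration):

    import configparser

    def return_prerequisite_test_status(test_id: str,
                                        status_file: str = "TestCaseStatus.ini") -> str:
        """Read back the status recorded for a prior test."""
        config = configparser.ConfigParser()
        config.read(status_file)
        if config.has_section(test_id):
            return config.get(test_id, "Status", fallback="Not Run")
        return "Not Run"  # the prerequisite test never ran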

CheckPrerequisiteTestStatus

The step ‘CheckPrerequisiteTestStatus’ is implemented in each test that has a dependency on a prior test. It calls the function ‘ReturnPrerequisiteTestStatus’ with the TestID of the prerequisite test. If ‘true’ is passed in for the parameter ‘RaiseErrorForNotRunTest’, the step raises an error, halts the test and passes execution control back to the ‘OnFinish’ of the appstate. If the value of ‘RaiseErrorForNotRunTest’ is false, the step raises a warning and passes a switch back to the calling test so that the test can halt execution and return to ‘OnFinish’:
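Sketched in Python under the same assumptions, the step’s behavior is roughly:

    def check_prerequisite_test_status(prior_test_id: str,
                                       raise_error_for_not_run_test: bool) -> bool:
        """Return True when the current test may proceed, False when it should stop."""
        # uses return_prerequisite_test_status() from the sketch above
        status = return_prerequisite_test_status(prior_test_id)
        if status == "Pass":
            return True
        if raise_error_for_not_run_test:
            # corresponds to raising an error and returning control to OnFinish
            raise RuntimeError(f"Prerequisite test '{prior_test_id}' did not pass")
        # corresponds to raising a warning and handing a switch back to the test
        print(f"WARNING: prerequisite test '{prior_test_id}' did not pass")
        return False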

Example: TestWithDependency

The image below displays a test that implements the step ‘CheckPrerequisiteTestStatus’. In this example, the user has decided to raise a warning, so a conditional statement is required to return to ‘OnFinish’ if the prerequisite test has failed. If the user had decided to raise an error instead, the conditional statement would not be required:
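In Python terms, the dependent test might look roughly like this (the prerequisite identifier is invented; it reuses the check function sketched above):

    def test_with_dependency():
        # Only a warning is raised, so the test checks the returned switch
        # and exits early (back to OnFinish) when the prerequisite failed.
        if not check_prerequisite_test_status("MyProject.SmokePlan.CreateCustomer",
                                              raise_error_for_not_run_test=False):
            return
        # ... remaining test steps ...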

RemoveTestStatusFile

Removing the ‘TestCaseStatus.ini’ file at the beginning of a test run is a good idea to ensure that the status from a prior execution is cleared. This can be accomplished either by a test or by a step added to the first test in the plan, as displayed below. The first option, adding the test ‘RemoveTestStatusFile’ to the plan, is easier but increases the test count by one. The step, which is dragged into the first test in the plan, does not impact the test count, but it requires that the first test always be executed. The choice is up to the user.
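A minimal Python sketch of that cleanup:

    import os

    def remove_test_status_file(status_file: str = "TestCaseStatus.ini") -> None:
        """Delete any status file left over from a previous run."""
        if os.path.exists(status_file):
            os.remove(status_file)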

Example: Plan with Test Dependencies

The following image displays a plan with a dependent test selected so that the parameters ‘PriorTest’ and ‘RaiseErrorForNotRunTest’ are visible in the ‘Test’ tab below the test plan.

Example: Results file for Plan with Test Dependencies

The image below displays the results file for the sample plan. Notice the difference in the status of the dependent tests, based on whether ‘RaiseErrorForNotRunTest’ is set to true or false. In the first case, the dependent test displays a status of ‘Fail’; in the latter, it displays as ‘Pass’. The choice is up to the user.

Conclusion

At some point in the future, it is likely that AscentialTest will have a built-in mechanism for managing dependent tests so that the status of a dependent test will display a status of ‘Not Run’ in the Test Set. In the meantime, the mechanism described above will save time by skipping the execution of tests that would not run successfully due to a failure in a prerequisite test.

Related Content

Instructional Content, Thought Leadership

It’s Finally Here: A Custom Elements Kit for Automated Testing

Instructional Content

PowerBuilder App Testing: What To Know, Which Tools Work

Instructional Content, Thought Leadership

A Thousand G2 “Dislikes” (Part 2): Testing Tools Have Speed Issues

Instructional Content, Thought Leadership

A Thousand G2 Reviews Later: Software Testing Tools Are Too Expensive

Get Started Today: 6-Month Free Trial

Click here to download the Host ID generator. The download package includes full instructions. Please send us your details below to speed the process along.

Get a Competitive Quote

Our Distribution and Consulting Partners

Appeon

Cigniti

Loop Software

Marlabs

Matryxsoft Tech

Novalys

OCS Consulting

What We Are Up To Next

TFS/Azure Integration

Check out our new Azure extension for executing AscentialTest command lines, available in the Azure Marketplace.

We look forward to offering additional integration with Microsoft’s Team Foundation Server and Azure through defect tracking and version control soon.

Omnis Studio

Automated testing support for Omnis Studio is coming soon. All Omnis elements will be supported, including complex grids, list boxes, tables and tree views.

Custom Objects

Easily add automated testing support for your custom application elements by responding to simple test API messages with JSON strings. This feature will open up the possibility of testing any GUI element and will be offered free of charge.
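Purely as an illustration, and with invented field names, a custom grid control might answer a test API query with a JSON string along these lines:

    {
      "name": "OrderGrid",
      "type": "CustomGrid",
      "rowCount": 24,
      "selectedRow": 3,
      "columns": ["Order", "Customer", "Status"]
    }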

Test Debugger

Set breakpoints in your automated tests, view global and local variables and execute statements to make it easier to debug tests running on any of our supported platforms.

Thank you for submitting your inquiry!

We will get back to you ASAP.  We aim to respond to all inquiries within 24 hours.