Automated Test-Bench for Mobile Applications
Bangalore, India
Abstract:
Mobile applications today are becoming more and more complex, and ever more cutting-edge applications are appearing on mobile phones. The demand for new applications grows with each new handset product, and the winner is the one who delivers a quality product to customers on time. The biggest challenge is not only developing and customizing new features quickly, but also ensuring the stability and robustness of the product.
Maturity requirements remain high with each new product.
This paper describes the design and implementation of an Automated Test-Bench Application for mobile phones that can speed up the complete application development cycle and validate the robustness of the solution.
Introduction
The handheld device industry is growing at a very fast rate, and the complexity of its software is also increasing exponentially with each new application that is added. Since the window to market is so small, time plays the major role. As platforms are enhanced, the capabilities of the devices, and hence of the applications, have also changed. The biggest challenge is developing these solutions quickly, customizing them to customer requirements, and delivering all of this as a stable and robust product.
Applications are the USP that makes a device succeed. Bringing up robust, validated applications is a major challenge for the industry. The maturity of the delivered platform, together with its time to market, is what differentiates the business.
To deliver such a solution within a development cycle, we face a number of challenges. Typical challenges faced today are:
1) Applications need to be developed and tested before the dependent layers are delivered. The same is true for features that need to be developed with no actual hardware available to simulate the behavior.
2) The second challenge is the validation of the software developed. This becomes a huge activity when numerous releases of the software are made on a weekly basis in a development environment.
A typical regression cycle for a mobile platform is a mammoth activity: the same repetitive tests need to be run for each release cycle.
For application development the preferred environment would be a PC unit test setup, but this has its own limitations, as real-time behavior is difficult to simulate.
3) Validation time is expensive. It directly affects the time to market of the product and also its cost.
For a semiconductor company like NXP Semiconductors, which provides not only the chipsets but complete system solutions, with all its platforms fully validated through GCF, PTCRB, field tests and interoperability tests, it becomes important to address all these challenges in order to deliver a quality product to its customers on time. This includes the applications delivered with the platform.
To summarize, we need to improve the quality and reliability of the software we develop, decrease both development and testing cost, and bring our product to market faster.
The automated test bench aims to help us achieve this.
Existing Work
As a reference platform and turnkey solution provider, we spread our development activity across the various layers of the platform. The development teams for each layer have their own local unit test setup, on which the behavior of that layer is tested.
The real problems come up when the system as a whole is integrated and rigorously tested by the integration team, covering many cross-boundary and stress test cases on the real system and network.
For applications we have a PC host test environment setup that stubs the lower platform layers and tests only the application. Here the test scenarios are pre-compiled and run on the host. This helps test just the basic user interface with no real time interaction with the platform.
But now, with more and more features such as hardware accelerators available from the platforms, testing such applications on a PC has become less beneficial and impractical.
A typical integration life cycle is also a long one, where a large number of test cases and cross cases need to be executed per feature. As a result, features get tested on an incremental basis, so some critical defects may only be identified at later stages, causing regressions in already tested features. Running an exhaustive regression cycle for each release also becomes very difficult and time consuming for both developers and the integration team.
Executing stress tests required the host to be connected to a remote desktop to initiate commands. With more and more features developed, this approach proved too time consuming and expensive.
Since the validation cycles were very long, we came up with this solution, which can be used as an automated test bench for such platforms.
The work described in this paper addresses all these shortcomings and aims to speed up the overall development and test life cycle, producing a robust solution with a faster time to market.
Current Work
The current solution, the “Automated Test-Bench for Mobile Applications”, has been designed and developed to address the above-mentioned challenges, to speed up the complete development life cycle, and to validate the robustness of the solution at each phase with minimal human intervention.
Overall System
The basic concept of the Test-Bench Application is to embed the test execution application within the mobile phone, thus replacing the external user who would otherwise perform the testing manually.
To the application under test, the Test-Bench acts as an external agent that triggers and validates it. The trigger commands and the data corresponding to them are stored in a scenario file.
The Test-Bench application and the scenario thus replace the external user performing the mobile testing and completely automate the testing cycle of a mobile phone.
Test Bench Architecture
The Test Bench Application resides as a process in the mobile.
The integration/regression/unit test cases are inputs to this application. These test cases are converted into a set of commands and fed into a scenario file. These commands are read by the application and the tests are executed automatically.
The Test Bench Application comprises the following subsystems:
- A parser, to interpret the test case commands.
- A state machine algorithm, to execute the test commands.
- A logging module, to log the execution flow and the automated test results.
- Scenario files, which hold the test cases.
Figure: Test Bench Architecture (static view)
Figure: Test Bench Application (sub-system view)
The functionality of each block in the diagrams above is explained below.
Application to be tested:
This is the application that needs to be tested. It could be the call application, the messenger application, multimedia applications or any third party application integrated onto the platform.
Platform:
This includes the mobile core platform blocks whose services the Test Bench application utilizes, such as the file system, memory and process management.
Scenario File:
The scenario file contains the automated test scenarios to be executed by the test bench application on the mobile.
The XML based scenario file has its own command list and tag syntax using which the designer or integrator writes the automated test scenarios to test the target application.
Some of the supported commands are:
- Command to simulate key pad and text input.
- Command to wait on a list of asynchronous events.
- Command to link to another scenario file.
- Command to repeat a given scenario “N” number of times.
- Command to control the execution speed.
- Command to log execution flow, profiling information and test results.
- Command to simulate platform events, such as low battery or low network signal strength, so as to test cross cases on a live system.
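To illustrate, a scenario fragment could look like the following. The tag and attribute names are hypothetical, since the paper does not reproduce the exact XML syntax; the short Python snippet simply lists the commands found in the fragment (Python is used here only for readability, the real test bench runs as a native process on the phone).

```python
import xml.etree.ElementTree as ET

# Hypothetical scenario fragment: the tag and attribute names are
# illustrative only, not the actual scenario-file syntax.
SCENARIO = """
<scenario name="play_video">
    <key press="MENU"/>
    <key press="DOWN" repeat="2"/>
    <text input="holiday.mp4"/>
    <wait events="MEDIA_PLAYER_STARTED" timeout_ms="5000"/>
    <include file="low_battery.xml"/>
    <repeat count="10" file="play_pause.xml"/>
    <speed factor="2"/>
    <log type="profiling" message="time to start playback"/>
    <simulate event="BATTERY_LOW"/>
</scenario>
"""

root = ET.fromstring(SCENARIO)
for command in root:
    print(command.tag, command.attrib)   # one automated command per element
```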
Logging Module:
This is responsible for generating the various types of logs requested by the scenario.
These can be profiling information, such as the time taken to capture and save a picture or the time to read or send a message.
They can also be logs that track the execution flow of the scenario, which is particularly helpful when doing stress testing.
Parser:
This is responsible for reading the scenario file, interpreting the command from the scenario file and passing this information to the state machine to execute the test case.
Message Handler:
The scenario file can embed messages that need to be simulated and sent to the tested application, such as key pad messages, text input messages or low-battery events. The message handler takes care of formatting the message and sending it to the target application under test.
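A minimal sketch of what the message handler might do for a key-press command is shown below. The message layout, type identifiers and the send routine are assumptions for illustration; the real platform defines its own inter-process messaging format and service.

```python
import struct

# Hypothetical message type identifier; the real platform defines its own.
MSG_KEY_PRESS = 0x01

def format_key_press(key_code):
    """Pack a key-press command into an illustrative platform message:
    type (1 byte) | payload length (2 bytes) | key code (2 bytes)."""
    return struct.pack("<BHH", MSG_KEY_PRESS, 2, key_code)

def send_to_target(message, target="call_application"):
    """Stand-in for the platform's inter-process messaging service."""
    print("sending %d bytes to %s" % (len(message), target))

send_to_target(format_key_press(0x0042))
```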
State Machine:
This is the engine of the test bench application that controls the execution flow of the test case as described in the scenario file. It utilizes the services of all the blocks mentioned above to achieve the automation.
Execution flowchart
Trigger to activate the application
On mobile startup the Test Bench application is in the “Idle” state. In this state it does not execute any scenarios.
The user can activate the application via a GSM string on the mobile, e.g. by typing #*3434#.
On invocation the Test Bench application moves to the “Active” state. In the “Active” state, it reads the master scenario file stored on the external storage.
The parser then parses the commands from the scenario file and starts executing them one after another. The state machine takes care of sending commands to the tested application, waiting for responses from it, synchronizing key presses, invoking nested scenario files, logging traces, etc.
The final result, success or failure, is logged to the file specified in the scenario file.
Design
The design is covered under the following areas.
State machine
A single-state state machine is used, with sub-states managed internally, keeping the execution simple.
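The following is a minimal sketch of this idea, assuming hypothetical sub-state and command names: one externally visible execution loop, with the sub-states (reading the next command, waiting for an event) handled internally.

```python
from enum import Enum

class SubState(Enum):
    READ_COMMAND = 1
    WAIT_EVENT = 2
    DONE = 3

class ScenarioStateMachine:
    """Single externally visible state; sub-states are managed internally.
    Names and command format are illustrative only."""

    def __init__(self, commands):
        self.commands = iter(commands)
        self.sub_state = SubState.READ_COMMAND
        self.pending_event = None

    def step(self, event=None):
        if self.sub_state == SubState.READ_COMMAND:
            try:
                command = next(self.commands)
            except StopIteration:
                self.sub_state = SubState.DONE
                return
            if command.startswith("wait:"):
                self.pending_event = command.split(":", 1)[1]
                self.sub_state = SubState.WAIT_EVENT
            else:
                print("execute", command)
        elif self.sub_state == SubState.WAIT_EVENT:
            if event == self.pending_event:
                self.sub_state = SubState.READ_COMMAND

# usage
machine = ScenarioStateMachine(["key:MENU", "wait:SCREEN_SHOWN", "key:OK"])
machine.step()                       # execute key:MENU
machine.step()                       # start waiting for SCREEN_SHOWN
machine.step(event="SCREEN_SHOWN")   # event arrives, resume
machine.step()                       # execute key:OK
```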
Parser
A simple, lightweight, event-based parser was developed; it reads through the scenario file line by line and executes each command sequentially. The concept is based on the SAX parser.
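The SAX concept can be sketched as follows using Python's standard xml.sax module: each start tag generates an event, which is handed over in document order for execution. The tag names follow the hypothetical scenario syntax used earlier.

```python
import xml.sax

class ScenarioHandler(xml.sax.ContentHandler):
    """Event-based (SAX-style) handler: every start tag is treated as one
    command and handed over, in document order, for execution."""

    def __init__(self, execute):
        super().__init__()
        self.execute = execute

    def startElement(self, name, attrs):
        if name != "scenario":          # skip the enclosing root element
            self.execute(name, dict(attrs))

SCENARIO = b"""<scenario>
  <key press="MENU"/>
  <wait events="SCREEN_SHOWN" timeout_ms="3000"/>
  <key press="OK"/>
</scenario>"""

xml.sax.parseString(SCENARIO,
                    ScenarioHandler(lambda cmd, args: print(cmd, args)))
```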
Asynchronous Messaging
Any message that interrupts the normal sequential flow of execution of the system is termed an asynchronous message.
While a test case is executing, many asynchronous messages can interrupt it.
E.g. while playing a video, a battery-low condition can occur, interrupting the video playback test case. In this case the test bench should be able to dynamically load and execute the scenario file corresponding to the low-battery test.
The following design was selected to handle such scenarios.
Any asynchronous message or condition that the system needs to handle should have a unique trigger Id. In our case the screen Id (each screen/window shown to the user has a unique Id) was chosen as the trigger Id. Note that a message/event Id could also be chosen to indicate which event has occurred.
This Id is registered as an asynchronous message in the test scenario file, along with a handler (which is another scenario file).
When the asynchronous message occurs in the tested application, the application notifies the test bench application of the occurrence of this event.
On receiving this event, the test bench application checks whether the Id is registered and, if so, starts executing the scenario associated with this Id/event.
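A minimal sketch of this registration and dispatch mechanism, with hypothetical names and Ids, could look like this:

```python
# Registered trigger Ids (here: screen Ids) are mapped to handler scenario
# files; when the tested application reports such an Id, the matching handler
# scenario is loaded and executed.

class AsyncDispatcher:
    def __init__(self, run_scenario):
        self.handlers = {}            # trigger Id -> handler scenario file
        self.run_scenario = run_scenario

    def register(self, trigger_id, scenario_file):
        self.handlers[trigger_id] = scenario_file

    def on_event(self, trigger_id):
        handler = self.handlers.get(trigger_id)
        if handler is not None:
            self.run_scenario(handler)   # e.g. execute low_battery.xml
            return True
        return False                     # unregistered event: nothing to run

# usage: register the (hypothetical) low-battery screen Id and dispatch on it
dispatcher = AsyncDispatcher(run_scenario=lambda f: print("running", f))
dispatcher.register(trigger_id=0x2001, scenario_file="low_battery.xml")
dispatcher.on_event(0x2001)
```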
Synchronization
Consider a typical test scenario for playing a video from the media list screen. The scenario file will have commands to simulate key presses so as to navigate to the media file list screen. Along the way, the scenario file can embed checkpoints to make sure that the navigation is correct. This is done by reading the active screen Id before each key press and confirming that the right screen is being displayed before proceeding. In case there is a mismatch, the error is logged and the test bench checks whether an asynchronous message is registered for the new screen Id and, if so, executes it.
A given scenario or test case execution can thus synchronize itself with the tested application using this concept.
E.g. in the example above, a checkpoint in the scenario file could verify that the video file list screen is shown before proceeding with the key press for play. This makes sure that the scenario stays synchronized with the actual application being tested.
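The checkpoint logic can be sketched as below; the screen Ids and the platform calls are stand-ins for illustration only.

```python
# Before each key press the active screen Id is read and compared with the Id
# the scenario expects; on mismatch the error is logged and any registered
# asynchronous handler scenario for the new screen Id is executed instead.

def checkpoint(expected_screen, get_active_screen, dispatcher, log):
    actual = get_active_screen()
    if actual == expected_screen:
        return True                    # in sync, proceed with the next key press
    log("checkpoint failed: expected 0x%04X, got 0x%04X" % (expected_screen, actual))
    dispatcher.on_event(actual)        # run the handler scenario, if registered
    return False

class NullDispatcher:
    """Stand-in for the asynchronous dispatcher sketched above."""
    def on_event(self, screen_id):
        pass

ok = checkpoint(expected_screen=0x1005,            # hypothetical video list screen Id
                get_active_screen=lambda: 0x1005,  # stand-in for the platform query
                dispatcher=NullDispatcher(),
                log=print)
print("synchronized" if ok else "resynchronizing")
```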
Configurability
The target application to be tested can be configured via the scenario file. In this way the test bench application can send commands to any configured application/process with absolutely no code changes. This is very helpful when testing third party applications integrated onto the platform.
The log file name, type of logs and their details can also be configured via the scenario file.
The scenario file is stored on the file system. It is not required to be compiled with the application. This is a huge advantage as scenarios can be added on the fly.
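For illustration, a configuration header of a scenario file might name the target process and the log settings; the tags, process name and paths below are hypothetical, following the same assumed syntax as the earlier example.

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration header of a scenario file: the target process and
# the log settings are plain data, so changing them requires no code change.
SCENARIO = """
<scenario name="third_party_app_smoke_test">
    <config target="thirdparty.media.player"
            logfile="/storage/testbench/media_player.log"
            loglevel="profiling"/>
    <key press="MENU"/>
</scenario>
"""

config = ET.fromstring(SCENARIO).find("config").attrib
print("sending commands to:", config["target"])
print("logging to:", config["logfile"], "at level:", config["loglevel"])
```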
The entire solution can be compiled out of the final delivery.
Results
Our bi-weekly release has a set of 100 regression test cases to be run by the development team. The cycle takes 2 man-days. Summed up across 3 products, the total effort comes to 6 man-days. With the automated test bench deployed, this complete activity can be done in 2 man-days for all 3 products, with very minimal human intervention for testing.
With the stress tests and cross-over cases automated, we now run these tests before each major release to integration, resulting in far fewer post-release defects during integration. This has helped us reduce the time and cost of the development and test cycle.
The ability of the test bench to track and log the execution flow has also helped development track and fix problems quickly.
The activities currently ongoing on the automated test bench are the automation of stress test cases and of the integration test suite from a product integration point of view. The target is to automate at least 60% of the integration tests where human intervention is not required. The result should be on similar lines to what was achieved for our bi-weekly release cycle.
Conclusion
As stated in the results, automating 100% of our pre-delivery testing using the automated test bench application has greatly reduced our bi-weekly delivery time and effort. Multiple products can now be tested simultaneously with very minimal human intervention. When deployed across teams working on different layers and on product integration, this will result in considerable savings in time and cost, helping us achieve our goal of quickly delivering a stable and robust solution to our customers.