Test Generation from Use Case Specifications for IoT Systems
Introduction
Testing is a cornerstone of software engineering, ensuring the quality, reliability, and user satisfaction of IoT systems before deployment. While traditional software testing is well-established, testing complex and dynamic systems like the Internet of Things (IoT) presents unique challenges. IoT systems are inherently heterogeneous, consisting of multiple interconnected layers: devices, networks, clouds, and applications, each requiring rigorous testing.
To address these challenges, end-to-end (E2E) testing has emerged as a critical approach, validating interactions across the entire IoT ecosystem. This blog post explores an approach to automatically generate tests for IoT systems using use case specifications (UCSs) as input.
Use Case Specification (UCS)
A Use Case Specification is a textual document that defines the functional requirements of a system. It outlines the specific details of a use case, including its actors, preconditions, main actions, and postconditions. A UCS provides a structured way to describe how a system should behave in response to specific user interactions or events, serving as a bridge between user needs and technical implementation.
Restricted Use Case Modeling (RUCM)
Restricted Use Case Modeling (RUCM) is a systematic methodology designed to enhance the precision and clarity of Use Case Specifications [1]. By introducing a predefined template with standardized keywords and restriction rules, RUCM reduces ambiguity, imprecision, and incompleteness in UCSs. This structured approach facilitates the automated analysis and extraction of behavioral information from use case specifications, enabling efficient derivation of test cases. RUCM's ability to standardize and formalize use case documentation makes it particularly valuable for generating accurate and reliable test artifacts. Below is an example of a simple UCS.
1. Use Case: Fitbit Sends Data and Buddy Moves
1.1 Precondition
1.2 Basic Flow
1. The WIMP RECEIVES heartrate data FROM Fitbit Over HTTP
2. The WIMP VALIDATES THAT heartrate is between 60 and 100 bpm
3. The WIMP SENDS MOVE command TO Buddy with distance and speed Over WebSocket
4. Buddy SETS EnableWheels to TRUE for both left and right wheels
5. Buddy MOVES at specified distance and speed
6. Buddy SENDS MOVE execution status TO WIMP Over WebSocket
7. The WIMP RECEIVES MOVE execution status FROM Buddy Over WebSocket
Postcondition
WIMP RECEIVES "WHEEL_MOVE_FINISHED" execution status
1.3 Bounded Alternative Flow
RFS 1-7
1. IF WIMP RECEIVES no heartrate data FROM Fitbit THEN
2. WIMP SENDS ROTATE command TO Buddy with angle and rotation speed Over WebSocket
3. Buddy SETS EnableWheels to TRUE for both left and right wheels
4. Buddy ROTATES at specified angle and speed
5. Buddy SENDS ROTATE execution status TO WIMP Over WebSocket
6. WIMP RECEIVES ROTATE execution status FROM Buddy Over WebSocket
7. ENDIF
Postcondition
WIMP RECEIVES "WHEEL_MOVE_FINISHED" execution status Over WebSocket
1.4 Specific Alternative Flow
RFS 4
1. IF EnableWheels is not set to TRUE THEN
2. Buddy SENDS execution status TO WIMP
3. WIMP RECEIVES execution status FROM Buddy Over WebSocket
4. ENDIF
Postcondition
WIMP RECEIVES "NOK" execution status
Assumptions
We generate tests under the following assumptions:
Assumption 1 (A1): Use case specifications may not initially include detailed execution targets like endpoints, protocols, or addresses. These specifics will be provided later by the development team.
Assumption 2 (A2): The application layer is responsible for managing communication with all IoT components, including devices, cloud databases, and data persistence mechanisms, ensuring smooth test script execution across the system.
Assumption 3 (A3): The application layer serves as the primary entry point for testing, handling interactions with all interconnected nodes to support end-to-end testing effectively.
Elements of Tests
We propose a test structure containing the following fields: an action or operation, inputs, an execution target, and an expected output.
We adapted this structure from the test case structure proposed for complex embedded systems [2], adding the execution target because IoT systems involve many components that exchange data and execute different commands.
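To make this concrete, below is a minimal sketch in Python of how such a test step could be represented. The field names mirror the generated test shown later; the class names and optional defaults are illustrative assumptions, not the exact types used in our implementation.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExecutionTarget:
    """The component a step runs against and, once known, its address."""
    name: str                       # e.g. "WIMP" or "Buddy"
    protocol: Optional[str] = None  # e.g. "WebSocket"; supplied later (A1)
    host: Optional[str] = None
    port: Optional[int] = None
    endpoint: Optional[str] = None

@dataclass
class TestStep:
    """One test step: action, inputs, execution target, expected output."""
    operation: str                              # the action to perform
    target: ExecutionTarget                     # where it executes
    inputs: dict = field(default_factory=dict)  # input data for the action
    expectations: dict = field(default_factory=dict)  # expected output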
Approach
Our approach combines custom implementations for the complex tasks with an LLM for specific tasks. The custom implementation represents the given UCS in JSON format, generates a use case test model (UCTM) from that JSON representation, and extracts test scenarios from the UCS by following the UCTM. We then use the LLM to generate tests based on the extracted scenarios.
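To give a flavor of the scenario-extraction step, here is a minimal, illustrative sketch. The JSON field names (basic_flow, alternative_flows, rfs) are assumptions made for this example rather than our implementation's exact schema, and for simplicity the sketch treats RFS as a single step number even though a bounded alternative flow can reference a range of steps.

def extract_scenarios(ucs):
    """Derive one test scenario per flow of the JSON-encoded UCS.

    The basic flow yields the nominal scenario; each alternative flow
    yields a scenario that replays the basic-flow steps preceding its
    branching point (RFS) and then follows the alternative steps.
    """
    basic = ucs["basic_flow"]["steps"]
    scenarios = [{"name": "basic flow", "steps": basic}]
    for alt in ucs.get("alternative_flows", []):
        branch_at = alt["rfs"]  # step number the flow branches from
        steps = basic[: branch_at - 1] + alt["steps"]
        scenarios.append({"name": alt["name"], "steps": steps})
    return scenarios

Below is an example of a generated test.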
{
  "TC_ID": "TC001",
  "name": "rotate buddy when no data",
  "steps": [
    {
      "operation": "Check_hr_data",
      "target": { "name": "WIMP" },
      "inputs": { "dataAvailable": false },
      "expectations": { "msg": "NO_HEARTRATE_DATA" }
    },
    {
      "operation": "Send_Rotate_Cmd",
      "target": {
        "name": "WIMP",
        "protocol": "WebSocket",
        "method": "Send",
        "host": "192.168.0.x",
        "port": 8000,
        "endpoint": "/rotate"
      },
      "inputs": { "angle": 90, "speed": 1 },
      "expectations": { "msg": "COMMAND_SENT" }
    }
  ]
}
The generated test above may not be directly executable. An engineer or developer must provide additional details to align the test with the actual implementation. This is done through a mapping table that specifies how each operation is executed in the system under test. Using this mapping table, testers can refine the generated test and make it ready for execution.
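For illustration, a mapping table entry could look like the following sketch. The concrete protocol, host, port, and endpoint values are hypothetical placeholders that the development team would supply (Assumption A1), and the refine helper is likewise illustrative.

# Hypothetical mapping table: how each abstract operation is realized
# in the system under test. Values below are illustrative placeholders.
OPERATION_MAPPING = {
    "Send_Rotate_Cmd": {
        "protocol": "WebSocket",
        "method": "Send",
        "host": "192.168.0.x",   # to be provided by the development team
        "port": 8000,
        "endpoint": "/rotate",
    },
    "Check_hr_data": {
        "protocol": "internal",  # resolved inside the WIMP application layer
    },
}

def refine(test, mapping):
    """Fill each step's execution target with implementation details."""
    for step in test["steps"]:
        step["target"].update(mapping.get(step["operation"], {}))
    return test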
Test Execution
We send the generated test to the application layer of the system under test (SUT). Based on Assumption 3, the application layer analyzes the submitted test to identify the action to be executed, the inputs, and the execution target, then forwards each action to the component responsible for its execution. The actual execution status returned is compared with the expected output specified in the UCS to determine whether the SUT exhibits the expected behavior.
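A minimal sketch of this comparison loop, assuming a hypothetical send callable through which the application layer forwards an operation to the responsible component and returns the actual execution status:

def run_test(test, send):
    """Execute each step via the application layer and check expectations.

    `send` is a hypothetical transport function provided by the SUT's
    application layer (Assumptions A2/A3): it forwards the operation and
    inputs to the execution target and returns the actual status as a dict.
    """
    for step in test["steps"]:
        actual = send(step["target"], step["operation"], step.get("inputs", {}))
        expected = step.get("expectations", {})
        if any(actual.get(key) != value for key, value in expected.items()):
            print(f"FAIL {test['TC_ID']} at {step['operation']}:"
                  f" expected {expected}, got {actual}")
            return False
    print(f"PASS {test['TC_ID']}")
    return True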
Limitations
One limitation of this approach is its reliance on a complete and well-written UCS that includes all required keywords, so that the different aspects of the desired tests can be extracted. It also relies on manual intervention, which cannot be fully eliminated.
Conclusion
Achieving end-to-end (E2E) testing in IoT remains a significant challenge. In this study, we proposed an approach to generate tests and define a test structure that can be executed in IoT systems for E2E testing. Although the results are promising, this approach has several limitations that future studies should address.
References
[1] Yue, T., Briand, L. C., & Labiche, Y. (2013). Facilitating the transition from use case models to analysis models: Approach and experiments. ACM Transactions on Software Engineering and Methodology (TOSEM), 22(1), 1–38.
[2] Wang, C., Pastore, F., Goknil, A., & Briand, L. C. (2020). Automatic generation of acceptance test cases from use case specifications: An NLP-based approach. IEEE Transactions on Software Engineering, 48(2), 585–616.