Usage Log Automation at Dashlane
At Dashlane, we strive to deliver a great customer experience for anyone using our app, anywhere in the world, on any device. To deliver on that ambitious mission, we need to understand how our users use our product at both a macro and a micro level.
Like many other popular applications, we collect some anonymous usage logs from users in order to gain that understanding. These logs help us learn things like:
- What features our users use
- How often users use each feature
- Whether our features are operating properly
- Which features crash and why
What are usage logs?
Usage logs are anonymous information (i.e., information that does not identify, and cannot be used to identify, any individual user) sent from the Dashlane application to Dashlane’s secure servers. When specific actions are taken in Dashlane, a log is sent so we know how the software is being used. Whenever a given action occurs, we expect the same set of logs to be generated, in the same way, every time.
How does it work?
The tracking works as follows:
- The user takes an action, or something specific happens in the application.
- The application sends a message to our server, explaining what action took place (a minimal sketch of such an event follows this list).
- The server stores the information anonymously in a database.
- The aggregated anonymized data is analyzed by our analysts in order to gain deeper insights into general user behavior inside the app, or by our QA or technical team to understand any crashes or bugs.
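To make the flow concrete, here is a minimal sketch of what sending such an event could look like. The endpoint, field names, and values are illustrative assumptions rather than Dashlane’s actual API; the point is that each event describes an action, not a person.

```python
import time
import uuid

import requests

# Hypothetical endpoint and event schema, for illustration only.
LOG_ENDPOINT = "https://logs.example.com/usage"

def send_usage_log(action: str, **details) -> None:
    """Send one anonymous usage-log event to the logging server."""
    event = {
        # Random ID for de-duplication; it is not tied to any user identity.
        "event_id": str(uuid.uuid4()),
        "action": action,               # e.g. "password_generated"
        "timestamp": int(time.time()),  # when the action happened
        "platform": "windows",          # which client sent the event
        **details,                      # action-specific, non-identifying fields
    }
    requests.post(LOG_ENDPOINT, json=event, timeout=5)

# Example: the user generated a new password in the app.
send_usage_log("password_generated", length=16, has_symbols=True)
```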
Is it a big deal if we miss logging user actions?
Actually, yes, it is!
If we miss logging something, we may stop developing a feature that our users love to use, we may not become aware of crashes that our users are experiencing, or we may decide to remove a feature because of a false impression that users aren’t using it.
So, it’s very important to test that we’re logging user actions accurately.
Validating usage logs manually is very time-consuming and complex, so we test them via automation at Dashlane.
Automating usage logs testing
The basic logic used for the validation is to compare the generated usage log with an expected standard specification (spec) for the corresponding scenario on the corresponding device. The process has two phases.
Phase 1: Spec generation and saving to Amazon S3
- User actions in the application are simulated via test scripts, which we call automated UI tests.
- The Generate Spec web service converts the generated usage logs into a standard template called a spec file, which defines which fields to expect and what values or patterns are expected for those fields (a sketch of such a spec follows this list).
- The generated specs are saved to Amazon S3 via another web service called Save Spec.
- The QA team validates the spec and the usage log for the first time. Once the spec is approved by QA, it’ll be used as a standard usage log template for the specific set of actions.
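As a rough sketch of what Phase 1 produces, a spec could map each expected log field to either an exact value or a pattern, and the approved spec is then uploaded to S3. The spec format, bucket name, key, and field names below are assumptions for illustration, not Dashlane’s actual format.

```python
import json

import boto3

# Hypothetical spec: for each field of the usage log, either an exact expected
# value or a regex pattern that the value must match.
SPEC = {
    "action":           {"equals": "item_opened"},
    "platform":         {"matches": r"windows|macos|android|ios"},
    "item_type":        {"equals": "credential"},
    # Durations vary between runs, so the spec only constrains the format.
    "load_duration_ms": {"matches": r"\d{1,4}"},
}

def save_spec(bucket: str, key: str, spec: dict) -> None:
    """Persist a QA-approved spec to Amazon S3 so later runs can compare against it."""
    s3 = boto3.client("s3")  # assumes AWS credentials are configured
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(spec, indent=2))

save_spec("usage-log-specs", "windows/item_opened.json", SPEC)
```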
Phase 2: Comparison
- On each new version of Dashlane, we run the test scripts to perform the same actions and compare the generated logs against the standard spec saved earlier in Amazon S3.
- The Compare Spec web service returns the comparison result (a minimal comparison sketch follows this list).
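A minimal version of that comparison could walk the spec and flag any field that is missing, has a different value, or does not match the expected pattern. This is only a sketch of the idea, assuming the spec format from the previous snippet; it is not the actual Compare Spec service.

```python
import re

def compare_with_spec(log: dict, spec: dict) -> list[str]:
    """Return a list of mismatches between a generated usage log and its spec."""
    errors = []
    for field, rule in spec.items():
        if field not in log:
            errors.append(f"missing field: {field}")
            continue
        value = str(log[field])
        if "equals" in rule and value != rule["equals"]:
            errors.append(f"{field}: expected {rule['equals']!r}, got {value!r}")
        elif "matches" in rule and not re.fullmatch(rule["matches"], value):
            errors.append(f"{field}: value {value!r} does not match pattern {rule['matches']!r}")
    # An empty list means the generated log matches the approved spec.
    return errors
```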
Results
- The comparison result indicates whether the usage logs are as expected. If not, it also provides a detailed error report.
- Here is a simplified comparison result in which the generated log differs from the spec in the load duration of an item in the application.
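Since the exact report format is internal, here is a purely illustrative stand-in: everything matches except the load duration, which took longer than the approved spec allows. The field names and structure are assumptions, not the actual output of the Compare Spec service.

```python
# Illustrative comparison result for a single scenario (hypothetical format).
comparison_result = {
    "scenario": "open_credential_item",
    "platform": "windows",
    "status": "FAILED",
    "mismatches": [
        {
            "field": "load_duration_ms",
            "expected_pattern": r"\d{1,4}",  # at most 4 digits, i.e. under ~10 seconds
            "actual_value": "35061",
        }
    ],
}
```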
If you are working on automating tests for a cross-platform application like Dashlane, I strongly recommend using common code for automated tests or comparison wherever possible. It eases maintenance and makes the tests much more stable.
We’re Hiring!
Our team is growing fast. Currently, we’re looking for a talented Test Manager and a few QA Analysts for our Paris office.
Apart from that, we’re always open to talented people. If learning is one of your key criteria for a job, take a look at our open positions and reach out to us!