Testing is easy. Writing testable code is not.
Writing testable code can be hard. Writing testable code that requires asynchronous work is harder.
A large percentage of applications that use Combine adopt it mostly to support concurrent code rather than synchronous code. Adapting a Combine operation to support both async and sync work is possible, but in my honest opinion, using Combine for work that doesn't involve any of the following:
- executing background work,
- combining multiple sources,
- asynchronous initialization,
is a waste of readability.
In this article, I would like to show you how to properly test the `debounce` method, in order to avoid long-running executions and flaky tests.
Let's start with `debounce`. Let me briefly remind you what this method is, beginning with its definition.
According to Apple's documentation:
> Publishes elements only after a specified time interval elapses between events.
In other words, `debounce` will pause for a specified time after each value is received, and at the end of that pause it will publish the most recent value.
When can it be used? Basically, whenever you want to limit the number of emitted events and not every event is important; rather, the last result of a specific sequence of events is what matters. The most common case is limiting the number of API requests while a user types a query.
In a hypothetical situation, let's say a user is looking for a specific product (iPhone) and a request is sent to the API after each keystroke. The diagram below shows how each request is sent to the server. As you can see, in the naive implementation every letter triggers a new API request.
Sending requests that don't really matter is inefficient and can also lead to inconsistency, for example when one of the earlier requests completes after the last one. Obviously, you could cancel every request before executing the next one, but that brings new code into the codebase that has to be tested and maintained. This is where `debounce` comes to our help.
With the right parameters, you can be sure that the creation of a new request is delayed until the scheduled time has elapsed. In the example above, with a delay of about 0.5 s, which is enough time for most users to type the next character, only one request will be created: Request (IPHONE). Of course, if the user types slowly, the request will be created multiple times, and it's the developer's responsibility to cancel the previous one before running the next.
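The difference can be sketched with a few lines of Combine. Here, `performSearch` is a hypothetical stand-in for the real network layer, and the 0.5 s delay is illustrative:

```swift
import Combine
import Foundation

var cancellables = Set<AnyCancellable>()
var sentRequests: [String] = []

// Hypothetical stand-in for the real network layer.
func performSearch(_ query: String) {
    sentRequests.append(query)
}

let keystrokes = PassthroughSubject<String, Never>()

keystrokes
    .debounce(for: .milliseconds(500), scheduler: RunLoop.main)
    .sink { performSearch($0) }
    .store(in: &cancellables)

// Typing "iphone" quickly sends six values into the subject…
for prefix in ["i", "ip", "iph", "ipho", "iphon", "iphone"] {
    keystrokes.send(prefix)
}
// …but after the 0.5 s pause only the final query, "iphone",
// reaches the API.
```

Without `debounce`, the `sink` would fire six times, one request per keystroke.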
Now let's make it right, ensuring testability and a good level of abstraction that makes our code testable, readable, and maintainable.
Let's start with a diagram that describes what I want to create:
Every client that has access to `SearchStream` is able to subscribe to search results and receive the newest data from it. A client can also search for a specific query using the `search` method. Whenever the client (user) runs the search function, the searched string is passed to the `SearchStream`, which exposes a publisher that passes the searched values back to the client.
Let's make two assumptions that will be key to writing this module:
- `SearchStream` has to be testable in synchronous tests; `XCTest.waitForExpectations` will not be needed.
- The client is able to ignore the debounce delay and expect an immediate result.
To begin, let's define the protocol that describes a `SearchStream`:
- `associatedtype ResponseType` — the type that is returned,
- `var searchResult: AnyPublisher<ResponseType, Never>` — a stream that publishes the next search result,
- `func search(_ query: String)` — a method run after every text change.
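Under these assumptions, the protocol might be sketched like this (the doc comments are mine):

```swift
import Combine

// A sketch of the protocol described above.
protocol SearchStream {
    /// The type returned by the stream.
    associatedtype ResponseType

    /// Stream that publishes the next search result.
    var searchResult: AnyPublisher<ResponseType, Never> { get }

    /// Run after every text change.
    func search(_ query: String)
}
```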
Making `SearchStream` a protocol rather than a class gives the code another abstraction layer, so it can be adapted to different requirements. For example, one stream can return values from the backend, while a second one searches a database of previously saved results.
Again, let’s take a look at the debounce method definition:
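In Combine, the method is declared on `Publisher` as follows:

```swift
func debounce<S>(
    for dueTime: S.SchedulerTimeType.Stride,
    scheduler: S,
    options: S.SchedulerOptions? = nil
) -> Publishers.Debounce<Self, S> where S: Scheduler
```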
During testing, two parameters are important:
- `scheduler`: a protocol that defines when and how to execute a closure. We need two of them: `RunLoop.main`, the run loop on the main thread, which will be used in production code to receive values on the main thread; and `ImmediateScheduler`, a special scheduler that performs actions synchronously. Setting the scheduler properly allows us to fulfill the first assumption: `SearchStream` has to be testable in synchronous tests, and `XCTest.waitForExpectations` will not be needed.
- `dueTime`: specifies how much time to wait before publishing an element. Setting this value to 0.0 fulfills the second assumption: the client is able to ignore the debounce and expect a result immediately.
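To see why this combination matters, here is a minimal sketch: with a zero `dueTime` and `ImmediateScheduler`, the whole pipeline runs synchronously, before `send` even returns.

```swift
import Combine

// With a zero dueTime and ImmediateScheduler, debounce delivers each
// value synchronously — no run loop spinning required.
let subject = PassthroughSubject<Int, Never>()
var received: [Int] = []

let cancellable = subject
    .debounce(for: .seconds(0), scheduler: ImmediateScheduler.shared)
    .sink { received.append($0) }

subject.send(1)
subject.send(2)
// received == [1, 2] — no waiting, no expectations.
```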
To handle these parameters, let's wrap them up into a handy struct. It is also a good idea to add an extension providing access to each of the aforementioned schedulers.
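One possible shape for such a wrapper; the name `DebounceConfig`, the 0.5 s value, and the two presets are my assumptions:

```swift
import Combine
import Foundation

// Hypothetical wrapper bundling the two debounce parameters together.
struct DebounceConfig<S: Scheduler> {
    let dueTime: S.SchedulerTimeType.Stride
    let scheduler: S
}

extension DebounceConfig where S == RunLoop {
    /// Production: deliver on the main run loop, wait 0.5 s between events.
    static var production: DebounceConfig<RunLoop> {
        DebounceConfig(dueTime: .milliseconds(500), scheduler: .main)
    }
}

extension DebounceConfig where S == ImmediateScheduler {
    /// Tests: synchronous delivery with no delay.
    static var immediate: DebounceConfig<ImmediateScheduler> {
        DebounceConfig(dueTime: .seconds(0), scheduler: .shared)
    }
}
```

The stream's initializer can then take a single `DebounceConfig` instead of two loose parameters.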
This is an example of what a sample `SearchStream` can look like. I want to bring your attention to how the constructor is designed, and how easy it is to test.
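A sketch of such a concrete stream under the stated assumptions; the name `APISearchStream`, the injected `request` closure, and the generic scheduler are mine (the class satisfies the `SearchStream` requirements listed earlier, with the conformance omitted so the snippet stands alone):

```swift
import Combine
import Foundation

final class APISearchStream<S: Scheduler> {
    var searchResult: AnyPublisher<[String], Never> {
        resultSubject.eraseToAnyPublisher()
    }

    private let querySubject = PassthroughSubject<String, Never>()
    private let resultSubject = PassthroughSubject<[String], Never>()
    private var cancellables = Set<AnyCancellable>()

    init(
        dueTime: S.SchedulerTimeType.Stride,
        scheduler: S,
        request: @escaping (String) -> AnyPublisher<[String], Never>
    ) {
        querySubject
            .debounce(for: dueTime, scheduler: scheduler)
            .map(request)
            .switchToLatest() // cancels an in-flight request when a new one starts
            .sink { [resultSubject] in resultSubject.send($0) }
            .store(in: &cancellables)
    }

    func search(_ query: String) {
        querySubject.send(query)
    }
}
```

Because both the scheduler and the request are injected through the constructor, a test can swap in `ImmediateScheduler.shared` and a stubbed closure without touching the class itself.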
The easiest class to test is one that has no external dependencies. Such classes are rare, mostly ones implementing basic logical operations like string manipulation, mathematical equations, or data transformations. In reality, most classes have external dependencies such as the network, the file system, or other internal logic. This is also true for `SearchStream`.
And here is what the implementation looks like for both production and test targets. `SearchStream` is the same class; only the dependencies are different, which allows you to control how the `SearchStream` should behave and to set expected results.
The test code may look like this:
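Under the same assumptions, a synchronous test might look like this (the class sketch is repeated so the snippet stands alone):

```swift
import Combine
import Foundation
import XCTest

// Same sketch as before, repeated so this snippet compiles on its own.
final class APISearchStream<S: Scheduler> {
    var searchResult: AnyPublisher<[String], Never> {
        resultSubject.eraseToAnyPublisher()
    }
    private let querySubject = PassthroughSubject<String, Never>()
    private let resultSubject = PassthroughSubject<[String], Never>()
    private var cancellables = Set<AnyCancellable>()

    init(dueTime: S.SchedulerTimeType.Stride,
         scheduler: S,
         request: @escaping (String) -> AnyPublisher<[String], Never>) {
        querySubject
            .debounce(for: dueTime, scheduler: scheduler)
            .map(request)
            .switchToLatest()
            .sink { [resultSubject] in resultSubject.send($0) }
            .store(in: &cancellables)
    }

    func search(_ query: String) { querySubject.send(query) }
}

final class APISearchStreamTests: XCTestCase {
    func testSearchPublishesStubbedResult() {
        var requestedQueries: [String] = []
        var received: [[String]] = []

        // ImmediateScheduler + zero dueTime: everything runs synchronously,
        // so no expectations and no waitForExpectations are needed.
        let stream = APISearchStream(
            dueTime: .seconds(0),
            scheduler: ImmediateScheduler.shared,
            request: { query in
                requestedQueries.append(query)
                return Just(["result for \(query)"]).eraseToAnyPublisher()
            }
        )
        let cancellable = stream.searchResult.sink { received.append($0) }

        stream.search("iphone")

        XCTAssertEqual(requestedQueries, ["iphone"])
        XCTAssertEqual(received, [["result for iphone"]])
        _ = cancellable
    }
}
```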
- Thanks to `ImmediateScheduler`, every test case is independent; there is no `XCTestExpectation` or `waitForExpectations` code, which is often added to support concurrency.
- Because `APISearchStream` has injected dependencies, a test case is also able to verify whether the search stream executes methods within the injected code.
Let's sum up the most important points from this article:
- Try to avoid the `waitForExpectations` method; it can slow down your tests. The more waiting in your test code, the longer you have to wait for all the test cases, and you put unnecessary strain on CI.
- Always try to split your code into separate, weakly coupled parts; this makes your code smaller, less complicated, and testable.