Clean Architecture on Android


The art of planning, designing and structuring building concepts

-Joe Birch


Last Thursday and Friday I went to AppDevCon in Amsterdam, where, among others, I saw two very interesting talks about mobile app architecture: first, “Getting Clean, Keeping Lean” by Joe Birch (Buffer), and second, “Hidden mysteries behind big mobile codebases” by Fernando Cejas (Soundcloud). Both of them discussed the architecture applied to their apps, and they got me thinking about app architecture… again. In this post I will share some of my findings with you.

Clean architecture

Both Joe and Fernando refer to something called clean architecture. Clean architecture proposes four layers to decouple application components, improve code quality and simplify maintenance. Strictly decoupling the layers is probably the most important thing to keep in mind when using clean architecture. Let’s take a closer look at those layers.

Frameworks and drivers

The first, outermost layer is “Frameworks and Drivers”, containing everything platform-specific. In the case of Android this includes things like custom views, fragments, activities and the native storage methods like SharedPreferences and SQLite.


Interface adapters

The second layer (or module) contains the presenters, a.k.a. “interface adapters”. This layer builds on the underlying layer and adapts it to the UI (frameworks and drivers) layer. The UI delegates almost everything to this layer to decouple it from the native framework as much as possible.

Business rules

The third layer contains the “business rules”, or “use-cases”. It has, for example, methods to retrieve data for a specific use-case from the underlying layer. Be it from the server or from a local cache: where the data comes from does not matter. This layer only knows about the layer it is wrapping, the domain logic or entity layer.
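As a sketch, a use-case in this layer could look like the following plain java (all names here are hypothetical, not from the talks):

```java
// A hedged sketch of a use-case in the business-rules layer: it only
// knows the repository interface, not whether the data comes from the
// server or a local cache.
public final class GetUserNameUseCase {

    // Implemented by the entity/domain layer; server vs. cache is a
    // detail hidden behind this interface
    public interface UserRepository {
        String fetchUserName(String userId);
    }

    private final UserRepository repository;

    public GetUserNameUseCase(UserRepository repository) {
        this.repository = repository;
    }

    public String execute(String userId) {
        return repository.fetchUserName(userId);
    }
}
```

Because the repository is just an interface, a test can pass in a fake implementation and the use-case never notices the difference.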

Entity layer

The “entity (or domain logic) layer” contains the actual methods to fetch the data from wherever it needs to come from and provides its own representation of this data. It does not know anything about native networking, communication or shared preferences; these are implementation details hidden behind interfaces.

Visually these layers look like an onion, each layer wrapping the next:

Clean Architecture*

As you can see each layer is wrapped by the next, and the dependencies point inwards. So the presenters are only aware of use cases, not entities, and the UI is only aware of presenters, not use cases or entities.

Note that if a layer does not work for you, because for some reason it does not make sense, don’t hesitate to skip it. Or, in case you need an additional layer because that makes total sense for your app, just add it.


Both speakers, Joe and Fernando, explained how this architecture led to better testability of their applications. Which brings me to the next point: because only the Frameworks and Drivers layer is coupled to the Android framework, all other code can (and should) be plain java modules. Joe even takes it a step further and argues that by first having the inner three layers in place, you can work on stability even before you start working on the UI.

Plain Java Modules

I am talking about modules in the application in this case. For Android this means a separate Gradle module for each layer, because that allows you to develop them as normal java code, with normal unit tests. It also forces dependency inversion on you, because you can’t access components upwards. And because it is plain java, you do not need a phone or an emulator to run your tests, which means faster build times and a faster development cycle.
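The dependency inversion these module boundaries force on you can be sketched in plain java. In this hypothetical example the inner module owns the Storage interface, while an outer module provides the implementation (here an in-memory stand-in instead of SharedPreferences):

```java
import java.util.HashMap;
import java.util.Map;

// Inner (plain java) module: owns the Storage abstraction that the
// other layers depend on.
public interface Storage {
    void store(String key, String value);
}

// The outer, Android-specific module would implement Storage with, say,
// SharedPreferences; this in-memory stand-in keeps the sketch plain java.
final class InMemoryStorage implements Storage {
    final Map<String, String> values = new HashMap<>();

    @Override
    public void store(String key, String value) {
        values.put(key, value);
    }
}
```

The inner module compiles without any Android dependency, which is exactly what makes its unit tests fast.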

One thing to keep in mind is that the models from one layer should not be reused in another layer. This may seem like overhead, but simple delegates can minimize the impact. Joe proposes to use mapper objects that convert the model objects to your own types as soon as they cross a layer’s boundaries. You could also choose to utilize Android’s support annotations for this. There are annotations to mark classes and packages as part of a certain group, and you can then restrict access from other groups. For example: @RestrictTo(RestrictTo.Scope.GROUP_ID).

Separation of the layers*
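Such a boundary mapper can be sketched in plain java. A minimal, hypothetical example (ApiUser, User and the mapper are all made-up names):

```java
// A hypothetical mapper at a layer boundary: the outer layer's ApiUser
// is converted into this layer's own User the moment it crosses over,
// so the outer model never leaks inwards.
public final class UserMapper {

    // Model owned by the outer layer
    public static final class ApiUser {
        public final String userName;
        public ApiUser(String userName) { this.userName = userName; }
    }

    // Model owned by this layer
    public static final class User {
        public final String name;
        public User(String name) { this.name = name; }
    }

    // Called exactly once, at the layer boundary
    public static User toDomain(ApiUser apiUser) {
        return new User(apiUser.userName);
    }
}
```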

Mixing with databinding

I like Android’s databinding framework a lot, so I have been thinking about the best way to mix clean architecture with it. It is actually very simple. You need just two components for each of your bindings. The first component is a data model and the second component is a behavioral delegate. The behavioral delegate receives all UI events and forwards them to the presenter, and in turn the presenter can act as needed. The data model is simply the object containing your data. You could also use two-way binding to make it easier to send update events straight into the presenter layer.
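A minimal sketch of these two components, kept free of Android types so the idea stays testable (all names are hypothetical):

```java
// Sketch of a binding pair: a data model the layout binds to, and a
// behavioral delegate that forwards ui-events to the presenter.
public final class LoginBinding {

    // The data model: just the fields the layout binds to
    public static final class LoginModel {
        public String userName = "";
    }

    // The presenter interface the delegate forwards to
    public interface LoginPresenter {
        void onLoginClicked(String userName);
    }

    // The behavioral delegate: receives ui-events and delegates them
    public static final class LoginDelegate {
        private final LoginPresenter presenter;
        private final LoginModel model;

        public LoginDelegate(LoginPresenter presenter, LoginModel model) {
            this.presenter = presenter;
            this.model = model;
        }

        // Would be bound to the button's onClick in the layout
        public void onLoginClick() {
            presenter.onLoginClicked(model.userName);
        }
    }
}
```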

This results in activities and fragments that contain almost no code, which obviously leads to a better separation of concerns. And because the next layer contains most of the complexity, and because it is a plain java module, it is a lot easier to test.


Which brings me to a library Fernando showed. At Soundcloud they developed a library called LightCycle, which allows you to define objects within your activities and fragments that automatically get the required lifecycle events forwarded to them. This allows you to create separate controllers for your activities and fragments. Because these controllers are no longer hard-wired to the activity or fragment, you can instantiate them in your tests and easily simulate the activity’s or fragment’s behavior.

Mixing with Reactive frameworks

When you combine clean architecture with a reactive framework (which seems to be used by almost everybody presenting at the conference), this architecture really starts to shine. It makes it very easy to automatically refresh the UI whenever an update happens. The architectural diagram then looks as follows:

Observable stream of data between each of the Layers*

As you can see the data is backed by observable repositories; these can for example be RX observables or Agera repositories. Registering a listener in the top (UI) layer will cause the observable to be observed and load its data. Mapping one type into the other is very simple using Agera. Let’s take a look at the following code sample, which we could place at a layer boundary:

// Source layer, for example injected using DI
Repository<Result<SourceType>> mSourceRepo;

// My layer, mapped from source value
// could be injected as a singleton
MutableRepository<Result<LayerType>> mDataRepo;

// On update of layer below
void doUpdate() {
    Result<SourceType> source = mSourceRepo.get();
    // Map the source type to this layer's own type and publish it
    // through the local repository (toLayerType is our mapper)
    mDataRepo.accept(source.succeeded()
            ? Result.success(toLayerType(source.get()))
            : Result.<LayerType>failure());
}

As you can see we can observe the source repository. In the doUpdate method, which handles the update, we map the value to the correct type for this layer. Within this layer the local data repository is exposed and can be observed by the next layer. Remember that observing repositories should be driven by the lifecycle of the components to prevent leaks. Usually this means subscribing from the onStart callback and unsubscribing from the onStop callback.
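The lifecycle-driven subscription can be sketched in plain java, without any RX or Agera types (all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of lifecycle-driven observation: subscribe in onStart,
// unsubscribe in onStop, so no observer outlives its component.
public final class LifecycleSketch {

    public interface Updatable { void update(); }

    // Minimal stand-in for an observable repository
    public static final class ObservableRepo {
        private final List<Updatable> observers = new ArrayList<>();

        public void addUpdatable(Updatable u) { observers.add(u); }
        public void removeUpdatable(Updatable u) { observers.remove(u); }
        public int observerCount() { return observers.size(); }
    }

    // Mirrors an Activity: subscribes in onStart, unsubscribes in onStop
    public static final class Screen {
        private final ObservableRepo repo;
        private final Updatable onUpdate = () -> { /* refresh the ui */ };

        public Screen(ObservableRepo repo) {
            this.repo = repo;
        }

        public void onStart() { repo.addUpdatable(onUpdate); }
        public void onStop() { repo.removeUpdatable(onUpdate); }
    }
}
```

As long as onStart and onStop are paired, the repository ends up with no dangling observers once the screen goes away.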

Package structure

Before concluding I would like to take a little side-step. One of the things that struck me most in Joe Birch’s talk is that he argues that by looking at the package structure of an application you should be able to see what the application does. He argues code organization starts with a clear and understandable package structure. Having packages named activities, fragments, views and adapters tells you nothing about the application (besides that it probably is an Android application); it only tells you where to find certain kinds of classes, not what they actually do. So, imagine you are new to a project and you need to change something in the instant messaging component of the app. Wouldn’t it make sense if that code was somewhere in a package named messaging or im?


Obviously every architecture or way of working has its downsides. Joe also points out the following disadvantages to clean architecture:

  • Adds initial overhead
  • Takes time to get used to
  • Can feel overkill for some tasks
  • Difficult to make decisions

However, the advantages easily outweigh the disadvantages:

  • High test coverage
  • Easy to navigate package structure
  • Easier to maintain
  • Allows us to move faster
  • Focussed classes and test classes
  • Separation of concerns
  • Define, test, stabilize before UI
  • Futureproof implementations

I hope you agree that these advantages by far outweigh the disadvantages. To summarize, using this architecture should allow us to develop faster, test better and apply SOLID principles by design. Using the correct tools allows you to move even faster because you need less boilerplate.

I must say I am really looking forward to experiencing this architecture. Perhaps in the next project I’ll be working on there will be an opportunity to try this.

* Images copied from the slides of the referenced talks.

Going wild with WordPress notifications

It is a nice security feature to receive a notification on my phone when someone logs in to my WordPress website.

Do you want to receive a notification on your Android device when someone logs in to your WordPress site? Perhaps as a security measure?

In this post we will build a system that allows you to do so. In the first part of the post I will guide you through the WordPress side of things, and in the second part I will show you how to create a very basic Android app that receives the notifications. At the end we will have two separate projects:

  • A firebase-actions plugin for WordPress
  • A Firebase Actions Android app that can receive the notifications

Note that in case you are more interested in developing an iOS app, this shouldn’t be a problem: the steps for iOS should be almost the same as those for Android.

The application will be secured by using a server token to communicate with Firebase and on the Android side, only apps signed with your certificate will be able to register for cloud messages.

All code for this post is available on github:

  1. WordPress plugin:
  2. Android App:

Part 1: Creating a WordPress plugin

The first step is creating a WordPress plugin that can receive the events as they happen. As you may know, WordPress is written in PHP. Unfortunately PHP is not one of my most fluent languages, but sending a simple HTTP message to Firebase should not be too much of a problem. So, let’s get started!

Connecting to Firebase

First things first: to connect to Firebase we need to set up Firebase Cloud Messaging, which is not that hard. The first step is to create a Firebase account. Next, you need to go to the Firebase console and get your server key and sender id. We will need these later, when the plugin is ready to use.

Creating the WordPress plugin

To create a new plugin we simply need to create a new folder in WordPress’ plugins folder. I named mine firebase-actions. The one thing we need in there is a firebase-actions.php file that acts as the entry point of the plugin and is the file WordPress will load. This php file has the same name as the folder containing it.

The next step is to connect to interesting WordPress hooks and send events to Firebase as they happen. For the first implementation I chose five hooks that could be interesting: login, authenticate, save_post, publish_post and publish_page. So in that file, add the following code:

add_action( 'wp_login', __NAMESPACE__ . '\\fa_init_wp_login', 10, 2 );
add_action( 'wp_authenticate', __NAMESPACE__ . '\\fa_init_wp_authenticate' );
add_action( 'save_post', __NAMESPACE__ . '\\fa_init_save_post' );
add_action( 'publish_post', __NAMESPACE__ . '\\fa_init_publish_post', 10, 2 );
add_action( 'publish_page', __NAMESPACE__ . '\\fa_init_publish_page', 10, 2 );

To clarify: the first line connects the ‘wp_login‘ event to the fa_init_wp_login function. So whenever there is a login event, the fa_init_wp_login function will be called. 10 is the priority of this connection and 2 tells WordPress that we would like to receive two parameters from the login call.

The implementation of the callback looks like this:

function fa_init_wp_login( $user_login, $user ) {
    _do_post( "login", 'User: ' . $user_login . ' logged in', $user_login, null );
}

So the only thing it really does is forward the event to the _do_post method. This method does the actual work:

function _do_post( $refPath, $title, $message, $url ) {
    $options = get_option( 'fa_options' );
    $server_key = $options['server_key'];

    $data = [
        'title' => print_r( $title, true ),
        'body' => print_r( $message, true ),
        'url' => print_r( $url, true ),
        'request_time' => print_r( $_SERVER['REQUEST_TIME'], true ),
        'remote_addr' => print_r( getenv( 'REMOTE_ADDR' ), true ),
        'forwarded_for' => print_r( getenv( 'HTTP_FORWARDED_FOR' ), true )
    ];

    $topic = '/topics/' . $refPath;

    $body = [
        'data' => $data,
        'to' => $topic
    ];

    $headers = array(
        'Authorization: key=' . $server_key,
        'Content-Type: application/json'
    );

    $ch = curl_init();
    curl_setopt( $ch, CURLOPT_URL, 'https://fcm.googleapis.com/fcm/send' );
    curl_setopt( $ch, CURLOPT_POST, true );
    curl_setopt( $ch, CURLOPT_HTTPHEADER, $headers );
    curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
    curl_setopt( $ch, CURLOPT_SSL_VERIFYPEER, false );
    curl_setopt( $ch, CURLOPT_POSTFIELDS, json_encode( $body ) );
    $result = curl_exec( $ch );
    curl_close( $ch );
    error_log( "fcm result: " . $result );
}

As you can see, this method connects to Firebase using the server key. Next it sends the message, adding a few more details about the request. Perhaps the most interesting part is the way the data is composed before it is sent to the server. Firebase requires your data message to contain strings only. For that reason I run print_r( $var, true ) over all of the values, which ensures every value is sent as a string. That is all we need to be able to send an event to Firebase.

As you can see, the $server_key variable gets initialized from the options. But for this to work, we actually need to create an options page, so we can save the configuration options within the WordPress system.

Creating a configuration page

A configuration page makes it easier to dynamically configure the plugin. Most of the details about configuration pages are out of scope for this post. You can find everything related to doing this in the tutorial to create option pages at the WordPress Codex. I will highlight the interesting bits.

To get your admin page in the admin section, you need to register two callbacks. Like this:

add_action( 'admin_menu', array( $this, 'add_plugin_page' ) );
add_action( 'admin_init', array( $this, 'page_init' ) );

This is again just a simple add_action call, just like registering the hooks for the interesting events. The next step is configuring the settings page. The full implementation looks like this:

class FirebaseSettingsPage {
	/**
	 * Holds the values to be used in the fields callbacks
	 */
	private $options;

	/**
	 * Start up
	 */
	public function __construct() {
		add_action( 'admin_menu', array( $this, 'add_plugin_page' ) );
		add_action( 'admin_init', array( $this, 'page_init' ) );
	}

	/**
	 * Add options page
	 */
	public function add_plugin_page() {
		// This page will be under "Settings"
		add_options_page(
			'Firebase Actions Admin',
			'Firebase Actions',
			'manage_options',
			'my-setting-admin',
			array( $this, 'create_admin_page' )
		);
	}

	/**
	 * Options page callback
	 */
	public function create_admin_page() {
		// Set class property
		$this->options = get_option( 'fa_options' );
		?>
		<div class="wrap">
			<h1>Firebase Actions</h1>
			<form method="post" action="options.php">
				<?php
				// This prints out all hidden setting fields
				settings_fields( 'my_option_group' );
				do_settings_sections( 'my-setting-admin' );
				submit_button();
				?>
			</form>
		</div>
		<?php
	}

	/**
	 * Register and add settings
	 */
	public function page_init() {
		register_setting(
			'my_option_group', // Option group
			'fa_options', // Option name
			array( $this, 'sanitize' ) // Sanitize
		);

		add_settings_section(
			'setting_section_id', // ID
			'Firebase Actions Settings', // Title
			array( $this, 'print_section_info' ), // Callback
			'my-setting-admin' // Page
		);

		add_settings_field(
			'server_key', // ID
			'Server key', // Title
			array( $this, 'server_key_callback' ), // Callback
			'my-setting-admin', // Page
			'setting_section_id' // Section
		);

		add_settings_field(
			'sender_id', // ID
			'Sender id', // Title
			array( $this, 'sender_id_callback' ), // Callback
			'my-setting-admin', // Page
			'setting_section_id' // Section
		);
	}

	/**
	 * Sanitize each setting field as needed
	 *
	 * @param array $input Contains all settings fields as array keys
	 */
	public function sanitize( $input ) {
		$new_input = array();
		if ( isset( $input['sender_id'] ) ) {
			$new_input['sender_id'] = sanitize_text_field( $input['sender_id'] );
		}
		if ( isset( $input['server_key'] ) ) {
			$new_input['server_key'] = sanitize_text_field( $input['server_key'] );
		}
		return $new_input;
	}

	/**
	 * Print the Section text
	 */
	public function print_section_info() {
		print 'Configure your settings below:';
		$options = get_option( 'fa_options' );

		if ( ! $options ) {
			print '<b>Warning:</b> No configuration found. You need to set the server key and sender id first';
			return;
		}

		$server_key = $options['server_key'];
		$sender_id  = $options['sender_id'];

		if ( ! $server_key || ! $sender_id ) {
			print '<b>Warning:</b> No configuration found. You need to set the server key and sender id first';
		}
	}

	/**
	 * Get the settings option array and print one of its values
	 */
	public function sender_id_callback() {
		printf(
			'<input type="text" id="sender_id" name="fa_options[sender_id]" value="%s" />',
			isset( $this->options['sender_id'] ) ? esc_attr( $this->options['sender_id'] ) : ''
		);
	}

	/**
	 * Get the settings option array and print one of its values
	 */
	public function server_key_callback() {
		printf(
			'<textarea id="server_key" name="fa_options[server_key]">%s</textarea>',
			isset( $this->options['server_key'] ) ? esc_attr( $this->options['server_key'] ) : ''
		);
	}
}
The interesting bits are: the page_init method and the sanitize method. The first method adds the additional settings fields to the screen. The second method cleans up the user input and returns the cleaned up array, which is saved by WordPress.

The generated page then looks like this:

Configuration page

That is all for the WordPress plugin!

Part 2: The Android App

Now that we have a plugin that can send the data to the Firebase server, the next step is to create a simple Android app that receives the notifications from Firebase.

Connect to Firebase

The first part is to connect the App to Firebase. You can do this from the tools -> Firebase menu in Android Studio. Just open the cloud messaging part and follow the instructions on the screen.

The setup screen should look like this:

Firebase assistant

Once that is done, and the dependencies are set we can add the code to receive the messages. In case you run into problems linking the App, you need to download the google-services.json from the Firebase console and replace the one in your Android project.

Registering for notifications

Update the generated MainActivity with the code to register to the topics.

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Subscribe to the topics the WordPress plugin posts to
        FirebaseMessaging.getInstance().subscribeToTopic("login");
        FirebaseMessaging.getInstance().subscribeToTopic("log");
        FirebaseMessaging.getInstance().subscribeToTopic("new_page");
    }
}

This is all that is needed to register to Firebase. The topics are the same topics we used in the WordPress plugin:

    _do_post( "login", 'User: ' . $user_login . ' logged in', $user_login, null );

That first parameter in the _do_post call, is the topic to post to.

Handling notifications

Notifications are received by a FirebaseMessagingService. If you followed the instructions when you connected to Firebase in Android Studio, you should have created such a class.

My implementation looks like this:

public class MessageService extends FirebaseMessagingService {

    private static final String TAG = "MessageService";

    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {

        String from = remoteMessage.getFrom();

        // Check if message contains a data payload.
        Map<String, String> data = remoteMessage.getData();
        if (data.size() > 0) {
            Log.d(TAG, "Message data payload: " + data);
            switch (from) {
                case "/topics/log":
                    onLogMessage(data);
                    break;
                case "/topics/login":
                    onLoginMessage(data);
                    break;
                case "/topics/new_page":
                    onNewPageMessage(data);
                    break;
            }
        }

        // Check if message contains a notification payload.
        if (remoteMessage.getNotification() != null) {
            Log.d(TAG, "Message Notification Body: " + remoteMessage.getNotification().getBody());
        }
    }

    private void onNewPageMessage(Map<String, String> data) {
        String title = data.get("title");
        if (!TextUtils.isEmpty(title)) {
            String body = data.get("body");
            Notification notification = new NotificationCompat.Builder(this)
                    .setSmallIcon(R.mipmap.ic_launcher)
                    .setContentTitle(title)
                    .setContentText(body)
                    .setGroup("Warmbeer blog")
                    .setStyle(new NotificationCompat.BigTextStyle().bigText(body))
                    .build();
            showNotification(5, notification);
        }
    }

    // More methods handling the other topics, like onLogMessage and
    // onLoginMessage, follow the same pattern

    private void showNotification(int id, Notification notification) {
        NotificationManager nm = (NotificationManager) getSystemService(NOTIFICATION_SERVICE);
        nm.notify(id, notification);
        // Post a summary notification so all messages are bundled together
        Notification group = new NotificationCompat.Builder(this)
                .setSmallIcon(R.mipmap.ic_launcher)
                .setGroup("Warmbeer blog")
                .setGroupSummary(true)
                .build();
        nm.notify(12, group);
    }
}
When a message is received, the onMessageReceived method will be called for you. The first thing we do there is get hold of the topic this message was sent to. With the topic we can determine what data the message contains and extract that data to create a notification.

The code to generate the new_page notification in the WordPress plugin is as follows:

function fa_init_publish_page( $ID, $post ) {
    $author = $post->post_author; /* Post author ID. */
    $name = get_the_author_meta( 'display_name', $author );
    $title = $post->post_title;
    $permalink = get_permalink( $ID );
    $subject = sprintf( 'New page published: %s', $title );
    $message = sprintf( '%s published a new page: “%s”.', $name, $title );
    $message .= sprintf( ' View: %s', $permalink );
    _do_post( 'new_page', $subject, $message, $permalink );
}

As you can see, the more data we add to the notification, the more we can use in the app. After processing the data, the notification looks like this:

What’s next?

Now that we can post notifications from WordPress, it may also be useful to post information from the server itself. Like for example the output of certain cron jobs.

For example, let’s say we have an SSL certificate from Let’s Encrypt on the server, which we try to renew every week. It would be useful to know the output of that command. From a weekly cron job we can call a script like this:

/usr/bin/letsencrypt renew | tee -a /var/log/le_renew.log |

As you can see the output is sent to another script that deals with posting the output to the Firebase server. This script looks like this:


#!/bin/sh
# The renew output is piped into this script
output=$(cat)

jq -n --arg message "$output" \
      --arg topic "ssl" \
   '{to: "/topics/log", data: { topic: $topic, message: $message}}' |
curl -H "Content-Type: application/json" \
   -H 'Authorization: key=SERVER_SECRET_HERE' \
   -X POST \
   -d@- \
   https://fcm.googleapis.com/fcm/send

By using this setup my WordPress server sends me the output of the certificate renewal call every time it tries to renew. This is a very useful way to track the status of my certificate. This could be used for tons of other things as well. For example checking if a reboot is required because of an update, or to send me the output of certain other cron-jobs.

As you can see it was very easy to connect WordPress events to an app, and creating a simple WordPress plugin is really simple as well. I created an Android app this time, but as noted before, Firebase also supports iOS, so creating an iOS app for this should not be too difficult either. I am really interested to hear what else you can build with this!

Cheers, Nick!

Messing with the drawable state

In this post you will learn about some of the details concerning Views and states, and you will learn how you can use your own custom states and manage them in a simple way. For the most part we discuss StateListDrawable. However, everything you will learn here works with ColorStateLists as well. Sample code for this post is available on github.

Using drawables

Sometimes it is very tempting to just set the background of a View manually, depending on the state of your Activity or Fragment. Unfortunately this requires additional code and state transitions won’t work anymore.

In this post I will show you two flavors of another way of doing this. Both of these flavors use the Android framework and will work in any app and with any Drawable.

Our case

We have a TextView somewhere in an app, that displays the current trend of something important for the user. In this case “Solar Radiation” and “Energy Output”. The trend has three possible values: moving up, moving down or stable. And finally, there should just be a single method to set the state and there should be a simple fade between the state changes.

The Android system already has a Drawable that implements all of this behavior, and you probably already know about this Drawable. The Drawable I am talking about is StateListDrawable, more commonly known as a selector. Selectors allow you to respond to state changes, and you can declare them simply using xml.

Creating the drawable

So let’s create that drawable! First we will define the possible states for the trend: state_up, state_equal, and state_down in our attrs.xml file like this:

<resources>
    <attr name="state_up" format="boolean"/>
    <attr name="state_equal" format="boolean"/>
    <attr name="state_down" format="boolean"/>
</resources>

Now that we have the different states, we will create a drawable that uses them:

<selector xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">
   <item android:drawable="@drawable/ic_state_up" app:state_up="true" />
   <item android:drawable="@drawable/ic_state_equal" app:state_equal="true" />
   <item android:drawable="@drawable/ic_state_down" app:state_down="true" />
   <item android:drawable="@drawable/ic_state_down" />
</selector>
That’s all for step one: we now have the selector we will use to transition between our custom states. As I said, there are two flavors we’ll explore. The first flavor is (not entirely coincidentally) exactly how Android implements view-state management in the framework. With view-state management I mean things such as the selected state, checked state, activated state and focussed state. For us this involves creating a subclass of the view we want to use, and integrating the view-state into this class.

The second flavor makes use of a special DrawableWrapper that will manage the state. This is a bit more generic as we no longer need to subclass the view and tightly couple the state to it. Instead we can couple the state into a drawable subclass which we can use within all views.

Flavor 1: View subclass

As noted before, this implementation mirrors the way the framework implements state management. Once you understand the steps involved, it becomes really easy to do this yourself. The steps we need to follow are:

  1. Create a subclass of the view we need, for example TrendView.
  2. Add an int-def that contains the possible states we can set.
  3. Add a method to the view to set our custom states.
  4. Implement refreshDrawableState to calculate the new state.


I usually choose to implement the state as an IntDef with a setter for the state. This has the lowest memory footprint, as discussed in the Android performance documentation. To quote what is written there:

For example, enums often require more than twice as much memory as static constants. You should strictly avoid using enums on Android.

First we add the IntDef and the different states to our class, and a setter setTrend to set the state. This method just sets our internal state, and in order to prevent additional work we also verify that the new state is actually different from the old state. After assigning the variable, we call refreshDrawableState. Calling refreshDrawableState triggers the invalidation process for the drawable state and will result in a call to onCreateDrawableState.

@IntDef({STATE_UP, STATE_DOWN, STATE_EQUAL})
@Retention(RetentionPolicy.SOURCE)
public @interface Trend {}

int mTrend;

public void setTrend(@Trend int trend) {
   if (mTrend != trend) {
      mTrend = trend;
      refreshDrawableState();
   }
}
Next we add a state-set for each of the possible states. These state-sets are merged together when the state is being calculated, and each state-set may consist of multiple identifiers. In our case each of them just contains a single constant for its state. These are the same constants we defined in attrs.xml earlier. To clarify, a single state-set looks like this:

private static final int[] UP_STATE_SET = {
   R.attr.state_up
};
The final part of the puzzle is onCreateDrawableState. In this method we switch on our state and merge our state with the drawable state calculated thus far. First we call super.onCreateDrawableState with extraSpace + 1 (because we want the parent class to reserve one additional slot for our state) and then we merge the states together. Finally, we return the result of the merge.

The full class now looks something like this (without the constructors):

public class TrendView extends AppCompatTextView {

   public static final int STATE_UP = 0;
   public static final int STATE_DOWN = 1;
   public static final int STATE_EQUAL = 2;

   @IntDef({STATE_UP, STATE_DOWN, STATE_EQUAL})
   @Retention(RetentionPolicy.SOURCE)
   public @interface Trend {}

   private static final int[] UP_STATE_SET = {
      R.attr.state_up
   };
   private static final int[] DOWN_STATE_SET = {
      R.attr.state_down
   };
   private static final int[] EQUAL_STATE_SET = {
      R.attr.state_equal
   };

   int mTrend;

   public void setTrend(@Trend int trend) {
      if (mTrend != trend) {
         mTrend = trend;
         refreshDrawableState();
      }
   }

   @Override
   protected int[] onCreateDrawableState(int extraSpace) {
      // Only add 1 because we only have one state active at
      // any time
      final int[] drawableState =
         super.onCreateDrawableState(extraSpace + 1);
      switch (mTrend) {
         case STATE_UP:
            mergeDrawableStates(drawableState, UP_STATE_SET);
            break;
         case STATE_DOWN:
            mergeDrawableStates(drawableState, DOWN_STATE_SET);
            break;
         case STATE_EQUAL:
            mergeDrawableStates(drawableState, EQUAL_STATE_SET);
            break;
      }
      return drawableState;
   }
}
Now we can simply call setTrend on our custom View and the drawable will automatically be updated as well. In fact any Drawable that has the same states will work just fine.

Flavor 2: Using a wrapper drawable

As you might have suspected, the state being part of the view is not ideal. We can’t reuse the states or the drawables in other classes without subclassing those views too. It would be easier if we could decouple the state from the View. Let’s take a look at a different approach!

Why we need a wrapper

This part of the post is called using a wrapper drawable. Let’s first try to understand why we need a wrapper. What is wrong with calling setState directly?

As we have seen in the previous approach, the drawable state is controlled by the view. The state calculated in onCreateDrawableState is applied to each of the drawables by calling the setState method.  This means that in case the view’s state changes, it will synchronize its state to the drawable and call setState. So in case we manually call setState, we risk the view overriding our state with its own state.

Our solution

That is why we are going to create a StateDrawableWrapper. This wrapper will have two tasks, its first task is to prevent the view from propagating its state to the wrapped drawable, and its second task is applying our custom state to the wrapped drawable.

To accomplish the first task, we will override the setState method to prevent the View from updating the state. For now we will just return false from this method. For the second task we will add a new method to the wrapper to set our own state, called setCustomState. This method will simply call setState on the wrapped drawable. This wrapper might look like this:

public class StateDrawableWrapper extends DrawableWrapper {

	int[] mStateSet;

	public StateDrawableWrapper(Drawable drawable) {
		super(drawable);
	}

	@Override
	public boolean setState(int[] stateSet) {
		// Ignore the state the view tries to apply; our custom
		// state is the only state the wrapped drawable will see.
		return false;
	}

	public void setCustomState(int[] stateSet) {
		if (!Arrays.equals(mStateSet, stateSet)) {
			mStateSet = stateSet;
			// Bypass our setState override and apply the custom
			// state directly to the wrapped drawable.
			getDrawable().setState(stateSet);
		}
	}

	public int[] getCustomState() {
		return mStateSet;
	}
}
The drawable wrapper above is all there is to it. It prevents the view from updating the state of the wrapped drawable, and it lets us set our own custom state on that drawable. And it even works fine with transitions.

The one thing that is still missing here is the ability to also handle the states the view sets. For example selection and pressed states. In my next post I will discuss a way to implement this in a different way that also works correctly with the view’s state.

Thank you for reading! Cheers and see you next time.

Packages are like classes

Packages with classes grouped by type are like utility classes. This may not always be what you want.

This post is not specific to Android, or Java for that matter. Some other languages may offer similar features.


Packages are something we usually don’t give much thought. For that reason developers sometimes just use them to group classes together, for example classes that perform similar actions or share a common ancestor. I think packages can offer a lot more than just grouping similar classes and should be thought of as an OOP (Object Oriented Programming) concept. That’s why I say: “Packages are like classes”. Now let’s investigate what other possibilities of organizing code are available with packages, and how that relates to classes.

Packages in app-architecture

Like I said before, I consider packages an OOP concept, just like inheritance and delegates. Using packages as such will help you design better libraries and apps, because it will help you hide the implementation details clients don’t need to know about (clients being any code that uses your code, even in the same app).

Access modifiers

Before diving deeper into this, let’s discuss how we organize code and how we use access modifiers to determine what methods should be visible.

The first building block is code. Your application consists of code. This code is grouped into methods. These methods are grouped into classes, and the classes, in turn, are grouped into packages. And finally we use access modifiers to control the visibility of these methods. By doing so we can make them internal to the class, to their subclasses, or to their package. Package private is like protected, except that the methods and fields are not available to subclasses in other packages. The full access table looks like this:

Modifier       Class   Package   Subclass   World
public         Y       Y         Y          Y
protected      Y       Y         Y          N
no modifier    Y       Y         N          N
private        Y       N        N          N


Now think about this for a moment, think how you usually use these various modifiers to hide certain methods from external clients or subclasses. Think about how private methods contain very specific implementation details you don’t want a client to use.

Think again!

Now imagine, that you can also use these modifiers on your classes, as members of your package. Now think about grouping classes in a package “because they are all fragments or activities” or “because they are all helpers”. Think about them as if they were methods of a class, which ones would you like your clients to use, and which ones would you like to hide.

When you group your classes by type, you lose package private, and thereby the option to minimize the visibility of these classes. You can’t hide your implementation details because the class needs to be accessible from other packages as well, for example to create an instance.
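To make this concrete, here is a minimal plain-Java sketch; the names Tags and TagFormatter are made up for illustration. Both classes live in the same package, but only Tags is visible to code outside it:

```java
// File: Tags.java -- both top-level classes live in the same package.
// Tags is the public API of the package.
public final class Tags {

    // The only entry point clients outside the package can call.
    public static String label(String name) {
        return TagFormatter.format(name);
    }
}

// Package private: invisible outside this package, so it can be renamed,
// changed or removed without breaking any client.
class TagFormatter {
    static String format(String name) {
        return "#" + name.toLowerCase();
    }
}
```

Had TagFormatter been dropped into a separate “helpers” package instead, it would have to become public, and it would instantly be part of your API.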

Grouping by type

So what packaging by type really brings is the same thing static utility classes bring: they allow you to group a set of classes together. You can see this in Java’s collections framework, where most of the implementations live in the java.util package. This is intentional, because they are all supposed to be used by any client that needs them.


So spreading your implementation details over multiple packages leaks them to the client. If you are creating a library this is even worse: these implementation details are now part of your public API. You need to maintain it and make sure it keeps working for all consumers of your library. People will also start using it in ways you never intended, and will start reporting bugs on those use-cases.

So next time you create a class, specifically for a single feature, think whether it is something that should be public or package private. And remember, you can always refactor your code if you are not satisfied with the result.

Making some noise!

Besides developing apps I like creating digital music. And I would like to proudly announce my very first album, the self-titled Noise Monk. This album contains mostly new work, plus two songs I created a long time ago.

Head over to the music page and let me know what you think about my album.

Album cover


First impressions of Agera

Agera makes development way easier, more modular and far more maintainable

I will probably use Agera whenever I can. It simplifies everything. And especially when you combine it with Dagger, it makes your life as an Android developer a lot easier.


Agera was released only a few weeks ago. It is a library by Google that allows you to write reactive Android applications. The best thing, however, is that it has been developed specifically for Android, which most importantly makes it easy to deal with the lifecycles of the different components.

Probably the most important thing you need to remember about reactive programming is that it works through activation. Repositories do nothing until they are observed. So this means, you can create them but they will delay loading data until they are activated. When they are not active they will still take note of changes in the underlying data sets, but they will not reload until they become active again.


Before Agera existed, so up until very recently, I always used loaders to load locally stored data and Retrofit to get data from the network. With Agera I can do both, network data and locally stored data, on any thread of my choosing. And the best part is that I can even post-process remote data with local data, or the other way around.

The case

I chose a case where I can really put Agera to the test: not just loading data from one source, but also composing the data from different sources, just like I do in Appsii’s Apps page. There are two completely different data-sources to load data from. First, the user’s tags, tagged apps and launch history; these are all loaded from an SQL database. The second data-source is Android’s package manager, which is used to query the installed apps.

When all of this data is loaded, it needs some post-processing; the appropriate tags must be set on the apps, and the apps need to be grouped and sorted based on their tag information.

And keep in mind that even though the apps-page needs this information as a whole, other parts of the application do not need the grouped version of the data; they may only need the tags that exist in the application. So I want to build a repository that gives me just the tags, and I want my app-page data repository to use this as an input. Effectively I am creating building blocks that I can compose at will.

Primary types

Let’s first quickly discuss the primary Agera types we’ll be using. The first component is the Supplier: some source of data. Next is the Function, which is used for transformations; it has an input and an output. Result is the type used as the return type for most of the functions in the stream. An Observable is something that can be observed, in other words, listened to by other objects.

A Repository is both a Supplier and an Observable. It is the main type in Agera.
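To get a feel for how these types fit together, here is a deliberately simplified, framework-free sketch. The real interfaces live in com.google.android.agera and differ in details, and compose is just an illustrative helper, not part of Agera:

```java
// Simplified stand-ins for Agera's Supplier and Function shapes.
interface Supplier<T> {
    T get(); // some source of data
}

interface Function<F, T> {
    T apply(F input); // a transformation from F to T
}

final class Pipeline {

    // A supplier composed with a function is again a supplier; this is
    // essentially what a compiled repository does with its data flow.
    static <F, T> Supplier<T> compose(Supplier<F> source, Function<F, T> transform) {
        return () -> transform.apply(source.get());
    }
}
```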

Loading the tags

The tags are all stored in an SQLite database, accessed through a content provider. To load the tags from the database we first need a supplier.

static class AppTagCursorSupplier implements Supplier<Result<Cursor>> {

    final Context mContext;

    public AppTagCursorSupplier(AppsiApplication app) {
        mContext = app;
    }

    @NonNull
    @Override
    public Result<Cursor> get() {
        Uri uri = AppsContract.TagColumns.CONTENT_URI;
        ContentResolver contentResolver = mContext.getContentResolver();
        Cursor cursor = contentResolver.query(
                uri, AppTagQuery.PROJECTION, null, null, AppTagQuery.ORDER);

        if (cursor == null) return Result.failure();
        return Result.success(cursor);
    }
}

This supplier just returns a cursor that can be used to access the data. Next, we need to transform that data using a transformation function. The transformation takes the cursor and turns it into a list of AppTag objects.

private static class AppTagCursorTransform
        implements Function<Cursor, Result<List<AppTag>>> {

    @NonNull
    @Override
    public Result<List<AppTag>> apply(@NonNull Cursor cursor) {
        try {
            int count = cursor.getCount();
            List<AppTag> result = new ArrayList<>(count);

            while (cursor.moveToNext()) {
                long id = cursor.getLong(AppTagQuery._ID);
                boolean defaultExpanded = cursor.getInt(AppTagQuery.DEFAULT_EXPANDED) == 1;
                String name = cursor.getString(AppTagQuery.NAME);
                int position = cursor.getInt(AppTagQuery.POSITION);
                int columnCount = cursor.getInt(AppTagQuery.COLUMN_COUNT);
                int tagType = cursor.getInt(AppTagQuery.TAG_TYPE);
                boolean visible = cursor.getInt(AppTagQuery.VISIBLE) == 1;
                AppTag tag = new AppTag(id, name, position, defaultExpanded,
                        visible, columnCount, tagType);
                result.add(tag);
            }
            return Result.success(result);
        } finally {
            cursor.close();
        }
    }
}

Now that we have a way to access the data and a way to transform it into something useful, let’s define a repository that does this for us. For this we use the complex repository builder. We start with a repository that has an initial, absent value.

This repository observes the content provider containing our tags. Next we tell it to update per Looper loop; different configurations allow you to throttle the throughput. Then we tell it to execute everything from this point on, on a dedicated executor for app-data.

Next, we tell it to get the data from our supplier (skip on error) and then we transform the data with the Function above.

public Repository<Result<List<AppTag>>> provideAppTagsRepository(
        AppsiApplication app,
        @Named(NAME_APPS) Executor appsExecutor) {

    return Repositories.repositoryWithInitialValue(Result.<List<AppTag>>absent())
            // re-query whenever the tags content provider reports a change
            .observe(new ContentProviderObservable(app, AppsContract.TagColumns.CONTENT_URI))
            // throttle updates to at most once per Looper loop
            .onUpdatesPerLoop()
            // everything below runs on the dedicated apps executor
            .goTo(appsExecutor)
            // get the cursor; on failure, skip the rest of the flow
            .attemptGetFrom(new AppTagCursorSupplier(app)).orSkip()
            // turn the cursor into a Result<List<AppTag>>
            .thenTransform(new AppTagCursorTransform())
            .compile();
}
Now we have a compiled repository that will load its data in the background. One thing to remember is that this repository will not do any work until it is observed (activated).

Component lifecycle

Because of its architecture, Agera fits the Android lifecycle perfectly. Just remember to register and unregister the listener (Updatable) in the right lifecycle methods.

Component   Register             Unregister
Activity    onStart / onResume   onStop / onPause
Fragment    onStart / onResume   onStop / onPause
View        onAttachedToWindow   onDetachedFromWindow

For activities and fragments, make sure you register and unregister either in onStart and onStop, or in onResume and onPause, depending on your situation. For views, use the callbacks that inform you that your view has been attached to, or detached from, its window.

If you liked this post, keep an eye on this blog as I will write about some more advanced use-cases in a future post.


Increasing app performance with FlatBuffers

Shouldn’t the fact that there is a better way, be reason enough? Is a thousand times faster reason enough?

When we talk about data serialization we usually mean converting between two data formats. A format we can save to disk, or send over the network, and a format we keep in memory. Now would it not be awesome to have a format that we can use for both? Cross language and no conversions, just flat, direct access. This is exactly what FlatBuffers is.

Optimized for performance

FlatBuffers are fast, very fast, simply because the format on disk is the same as the format in memory. Because the format is fully indexed, you can access just the part of the data you are interested in, instantly. The only overhead is reading the file into memory. And there are some things we can do to improve reading and writing speed as well, as we will see later on.


Another huge pro of FlatBuffers is that they are completely type safe. Because all clients are generated from the same schema, every client knows what lives where in the data file. So it knows there is a float at index x; whenever that value is read it will be a float, and whenever you write it, it will be a float. All generated accessors are typed, so there is no way to accidentally write the wrong type at that position. This type safety is preserved cross-platform and cross-language. Anything written in C can safely be read in Java, C#, Go or even PHP, and vice versa.


FlatBuffers are also a lot smaller than other data formats. There is no need for field names, and there is no need for separators and markup to improve readability; nobody needs to read the raw data anyway. When you are debugging your code you can still access every field using its accessor. Actually it is no different from using JSON, but instead of reading from the JSON data, you read from the FlatBuffer object. The APIs ensure the data is written and read correctly.

Because the data format contains almost no overhead, it is also faster to transmit it over the network and it takes less space on disk. This is also true when applying gzip compression. The format can still be compressed very well.


Something you might usually forget about when serializing data is allocations. Allocations cause delays, and the memory needs to be freed at some point. Platforms that use a garbage collector, and especially Android, benefit greatly when less garbage is generated. When used correctly, FlatBuffers generate almost no garbage at all. You should be aware, though, that object reads, including strings, allocate a new object every time you read them.

So, cache this if possible when you are accessing a nested object repeatedly. You can also choose to wrap your FlatBuffer Object in an Object that caches the allocated objects for you. This allows you to minimize the amount of allocations even further, at the cost of an extra wrapper class. Just remember to measure before you start applying optimizations.


Unfortunately there is one catch when using FlatBuffers. FlatBuffers are immutable. There are two approaches you can take to update them.

Your first choice is to regenerate the FlatBuffer with the updated data. Even though this is still very fast, there may be a better option, though it does not work in all cases.

The second, and most of the time better, option is to generate mutators when you run the schema compiler. This allows you to update all fields that have a static size; in practice this is everything except strings and arrays. And although you can’t change the length of an array, you can still manipulate the objects inside it. You just can’t add or remove items.

Actually, there is a third option. Facebook has been using FlatBuffers for some time now, and they have implemented their own method of storing updates. They store the updates in a secondary FlatBuffer, so they do not have to regenerate the entire buffer every time.

FlatBuffers in practice

All I can say is: “When you use ’em, use ’em right”. There are multiple ways of using FlatBuffers when you write them to disk or to the network. FlatBuffer performance is great and there are a few things you can do to make them perform even better.

Writing to FlatBuffers

But before we talk about persisting them, there are a few things you should be aware of. FlatBuffers are created using the builder. The most important thing to know is that you cannot write objects in a nested fashion. So when you start writing an object, you first write all nested objects to the buffer, and only then the object itself. This may seem a bit odd at first, but in practice it does not really matter.

Saving FlatBuffers

You have two choices here: you can convert the buffer to a byte[] and write it to an OutputStream like most people usually do, or you can use something a bit more powerful: channels. Channels are the way Java exposes lower-level operating system APIs. For example, they allow you to allocate direct buffers, or to map a file or socket directly into memory and read data from or write data to it. These are low-level operating system operations and a lot faster than using streams. In practice this means you flip the ByteBuffer and write it to the channel as long as bytes are remaining.
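A sketch of that flip-and-write loop using plain java.nio, assuming a ByteBuffer you just finished filling; ChannelWriter is a made-up helper name, not part of the FlatBuffers API:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import static java.nio.file.StandardOpenOption.*;

final class ChannelWriter {

    // Drains a filled (not yet flipped) ByteBuffer into the given file.
    static void write(Path file, ByteBuffer buffer) throws IOException {
        try (FileChannel channel = FileChannel.open(file, CREATE, WRITE, TRUNCATE_EXISTING)) {
            // flip() switches the buffer from filling to draining mode
            buffer.flip();
            // a single write() call is not guaranteed to drain the buffer
            while (buffer.hasRemaining()) {
                channel.write(buffer);
            }
        }
    }
}
```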

Reading FlatBuffers

When reading FlatBuffers you have the same choice: either use an InputStream, or an operating-system-optimized FileChannel (or SocketChannel). The cool thing here is that, if the file isn’t too large, you can map it into your memory space and create a FlatBuffer on top of it. This is even faster because the data does not need to be copied into local memory; instead you will be reading from a directly allocated ByteBuffer. Just call getRootAsMyObject on the MyObject class, providing the ByteBuffer, to start reading data.
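And a sketch of the mapping approach, again with a made-up helper name:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

final class ChannelReader {

    // Maps the whole file into memory and returns a buffer backed by it.
    // Nothing is copied onto the Java heap; the OS pages the file in on demand.
    static MappedByteBuffer map(Path file) throws IOException {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }
}
```

If your generated root type is called MyObject, the mapped buffer can be handed straight to MyObject.getRootAsMyObject(buffer).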

Evolving your data format

As your projects evolve, so do your data formats. And FlatBuffers supports evolving data models as well. You can add and remove fields as needed and everything will just keep on working. New fields will be ignored on older clients and old fields will be ignored on newer clients. For example when a field becomes deprecated, just add deprecated to your definition and no accessors will be generated for the field. One thing to note is that new fields must always be added after existing fields. This makes sure they do not conflict with existing field indexes.
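A small sketch of what this looks like in a schema file; the table and field names are made up. A deprecated field keeps its slot reserved, and new fields are appended at the end:

```
// forecast.fbs -- illustrative schema
table Forecast {
  city: string;
  temperature: float;
  humidity: short (deprecated);  // no accessors are generated anymore
  wind_speed: float;             // new field, added after existing ones
}
```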

Use cases

The most obvious use cases for FlatBuffers are sending data across the net and when persisting to disk. But there are some other useful cases for this. For example the Nearby API allows you to send data using byte arrays between devices. Another good example is sending data from a phone to a watch. Both of these cases become a lot simpler when using FlatBuffers. The data format suddenly becomes known and well documented (schemas) and can evolve without compatibility problems. And in case of the watch, you can store the data you received locally and load it instantly when the app is restarted.

The numbers

Google has published several benchmarks comparing FlatBuffers to different serialization technologies, and FlatBuffers are about a thousand times as fast as other solutions like JSON. Facebook reduced the time needed to load a story from 36 ms to 4 ms, and reduced transient memory allocations by 75 percent. See the conclusion of this document.


It all started with a simple question: “Why can’t data on disk have the same structure as the data in memory?” It is funny that something we never give much thought can have such an impact on performance. I will definitely start making use of FlatBuffers in my projects whenever there is a good use case for it. And it turns out, most of the time there is. Apps tend to spend a lot of time dealing with data.

Next we need to get the people that design our REST APIs to embrace FlatBuffers and add support for them as well. Shouldn’t the fact that there is a better way to do things be the biggest motivator to start using it, especially when it is at least a thousand times faster?

This post first appeared on the warm beer blog.

Testing Android Applications with Mockito and Dagger 2

I have been experimenting with Dependency Injection on Android for a few months now. But I never found a satisfying way to inject different dependencies when running them under test, without using a framework that uses reflection. A few days ago I read a very interesting article by Chiu-Ki Chan which revealed a very interesting way to work around this limitation. This fully fixed the problem I was running into and allows me to test my apps in a very clean and simple way.

I will discuss dependency injection with Dagger 2, but I won’t be exploring all of its options, such as scoped dependencies.

Dependency Injection

Dependency injection (DI) is a way to delegate the initialization of your dependencies to code outside of your class. Instead of your class creating them, they are injected into it from the outside. The framework basically provides you with all of your dependencies.

Now let’s think how we call constructors. Usually we call them ourselves and initialize our state and dependencies. In case of DI, we don’t do this. We do create a constructor that receives all of the dependencies as parameters. This constructor is called by the DI framework. The Framework will construct a graph of dependencies and will initialize them as needed (depending on the Framework).
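Here is a framework-free sketch of that idea; Clock and Greeter are made-up names, and the noon cutoff is just an example:

```java
// A framework-free sketch of constructor injection (names are made up).
// The class declares what it needs; whoever constructs it decides which
// implementation to pass in -- in production that is the DI framework.
interface Clock {
    long millisSinceMidnight();
}

final class Greeter {

    private final Clock mClock;

    // The dependency comes in through the constructor; Greeter never
    // creates a Clock itself, so tests can hand it a fake one.
    Greeter(Clock clock) {
        mClock = clock;
    }

    String greeting() {
        // before 12:00 counts as morning
        return mClock.millisSinceMidnight() < 12 * 60 * 60 * 1000
                ? "Good morning" : "Good afternoon";
    }
}
```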


Dagger is a DI framework developed by Square Inc. In this article I’ll be using Dagger 2, which is developed by Google in cooperation with Square. Dagger 2 fixes the shortcomings of Dagger and, contrary to other DI frameworks, does not use reflection. Everything is done at compile time using the Dagger compiler. This makes sure any issues are reported at compile time and greatly reduces the overhead of the framework.

Understanding Dagger 2

Knowing all this, let’s first get the basics of Dagger 2 clear. In Dagger we have two basic concepts, Modules and Components. I will try to explain what each of them is. A Component is used to group modules together to satisfy all dependencies. Modules are used to provide the dependencies. Dagger 2 generates a factory for each of the dependencies that a module can provide. These factories use your provide method from the module.

As a result, you could swap Dagger Components to provide dependencies from different Modules and change the implementations. The concepts described in this paragraph are very important to understand, so please take your time to understand them as it is not easy to wrap your head around this immediately.

Using Dagger 2 in Android

We just talked about Dagger modules, they are not to be confused with Gradle modules. Gradle modules are part of your project. There is an app module, which contains the source-code of your app. You may also have a wear module and library modules in your project. Anyway, in this section module refers to Gradle modules.

To get started with Dagger in Android we need to configure Gradle. We need the android-apt Gradle plugin to add a new dependency scope to the app module in your project. This is used to be able to include the Dagger compiler without exposing its APIs in the project (it is a compile-time dependency). So add android-apt to the top level build.gradle.

classpath 'com.neenbedankt.gradle.plugins:android-apt:1.8'

Next you need to add the Dagger and Dagger-compiler dependencies to the project. In the dependencies section of your app module add the following dependencies:

compile "com.google.dagger:dagger:${DAGGER_VERSION}"
apt "com.google.dagger:dagger-compiler:${DAGGER_VERSION}"
compile 'javax.annotation:jsr250-api:1.0'

Now the project is set up to include the dagger compiler. All we need to do now is apply the apt gradle plugin. At the top of the build.gradle file add:

apply plugin: 'com.neenbedankt.android-apt'

Now, every time we build, the Dagger compiler will generate the required classes for us.

Dagger Components and Modules

Let’s say you created a class ApplicationComponent, then Dagger will generate the class DaggerApplicationComponent for you.

An ApplicationComponent could look like this:

@Component(modules = {ApplicationModule.class})
public interface ApplicationComponent {

  void inject(MainActivity activity);
}


It specifies an inject method for each of the classes into which we need to manually inject our dependencies. These are typically fragments, activities and other classes we do not instantiate ourselves. (Remember that although we instantiate the fragments ourselves, they are also created by the framework, for example when restoring state). Our ApplicationComponent uses the ApplicationModule to provide the dependencies. This class looks like this:

@Module
public class ApplicationModule {

  private final Application mApplication;

  public ApplicationModule(Application app) {
    this.mApplication = app;
  }

  @Provides
  Context provideApplicationContext() {
    return mApplication;
  }

  @Provides
  SharedPreferences provideSharedPrefs() {
    return PreferenceManager.getDefaultSharedPreferences(mApplication);
  }
}

As you can see, this module can provide a Context and it can provide a SharedPreferences instance. Now these can be automatically injected into constructors of other classes. Note that Dagger can also create classes whose constructor is annotated with @Inject; you don’t need to write a @Provides method for those.

Bootstrapping Dagger 2

To allow the entire application to use dagger, we need to initialize it in a common place. To do this we create our own Application subclass, and add it to the Manifest.


Now, in the onCreate method of our own Application class, we call a method that initializes Dagger. We call this method initializeDagger. It looks like this:

@Override
public void onCreate() {
  super.onCreate();
  initializeDagger();
}

protected void initializeDagger() {
  mApplicationComponent = DaggerApplicationComponent.builder()
      .applicationModule(new ApplicationModule(this))
      .build();
}

Because our module’s constructor needs an Application object, we need to set it on the builder. This application will be forwarded to the constructor of our module.

We now have everything set up and we can start using DI in our classes.

Using basic injection

Now that we have this all ready to use, let’s see how we can use it. In my sample project I also have a class called EventUtils which is provided in my module. This one I left out to simplify the example.

Now let’s say we have a SpinnerAdapter that we want to automatically inject into our activities and fragments, and this adapter has a dependency on the EventUtils class. In this case, we annotate the constructor with @Inject to allow Dagger to create the adapter for us. The class looks like this:

public class AccessLevelAdapter extends BaseAdapter {

  final EventUtils mEventUtils;

  @Inject
  public AccessLevelAdapter(EventUtils eventUtils) {
    mEventUtils = eventUtils;
  }

  // getCount, getItem, getItemId and getView omitted for brevity
}

Now in our fragments and activities we can just add a field like this:

@Inject AccessLevelAdapter mAdapter;

And the adapter will automatically be created by Dagger and it will automatically be injected into our class. This way you no longer need to concern yourself with setting it up. It will be done for you.

Using dependencies in Framework controlled classes

Because activities and fragments are constructed by the Android framework, we need something to allow Dagger to inject our dependencies into these classes. We create an Injector class for this.

public class Injector {

  public static void inject(MainActivity activity) {
    ((MyApplication) activity.getApplication())
        .getApplicationComponent()
        .inject(activity);
  }
}

Now in the onCreate method of the MainActivity, we need to call this. We do this right after our call to super.onCreate.

@Override
protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  Injector.inject(this);
  // ... the rest of onCreate, e.g. setContentView
}

In our MainActivity we have the following field defined:

@Inject SharedPreferences mPreferences;

This dependency is now automatically injected, right after we call the injector. As soon as that method returns all dependencies have been injected.


Before we dive into creating the test-cases, let’s talk about Mockito. Mockito is a mocking framework. It simplifies creating mocks and removes the need to write mock implementations most of the time. This leads to cleaner and more readable tests.

For example, it allows us to set the result of a certain method call. This sounds a bit cryptic, so let me show you an example:

Mockito.when(mSharedPreferences.getInt("key", 0)).thenReturn(3);

Now this looks straightforward, doesn’t it? It says: whenever getInt is called with the literal parameters “key” and 0, it should return 3. That’s all there is to it.

For me one of the biggest advantages on Android is that it can mock any class, for example service classes you can’t normally instantiate or mock yourself. So everything I am about to show you can also be applied to system services like NotificationManager, AlarmManager etc.

Mockito can do a lot more, like verifying a method was called on a mock with certain parameters, so be sure to read about it if you don’t know Mockito.

Now let’s test

DI is supposed to be the holy grail of loose coupling, so you can just swap implementations when you want to run your tests. But we just moved everything to compile time and removed all that was dynamic about it.

First, we will need to override the implementation of the Application class in our instrumentation classes. While this may seem impossible there is a way to do so.

Subclassing our Application class

The technique we’ll be using is based on what is described in the article I mentioned in the introduction.

We will take advantage of how instrumentation testing works. All tests are run by a TestRunner, normally AndroidJUnitRunner. This class indirectly extends Instrumentation. The interesting part is the newApplication method in this class. Its Javadoc says: “Perform instantiation of the process’s Application object. The default implementation provides the normal system behavior.”

What we’ll do, is override this method in our own runner, so we can change the Application class used for the App. In this Application class we will override the Dagger Component with a different one for testing.

Our new Runner implementation looks like this:

public class MockJUnitRunner extends AndroidJUnitRunner {

  @Override
  public Application newApplication(ClassLoader cl, String className, Context context)
          throws InstantiationException, IllegalAccessException,
          ClassNotFoundException {
    // Ignore the class name passed in and always instantiate MockApplication
    return newApplication(MockApplication.class, context);
  }
}

It is important to set the runner in the build.gradle so it is actually used when the instrumentation test runs:

android {
  defaultConfig {
    testInstrumentationRunner "com.appsimobile.weekly.MockJUnitRunner"
  }
}

Now when the tests run, it uses our MockApplication which is a subclass of our normal Application implementation.

Injecting our test module

Because we want to be able to use dagger in our test project, we need to enable the dagger compiler for that module as well. All you need to do is enable it in the dependencies of the build.gradle file.

androidTestApt "com.google.dagger:dagger-compiler:${DAGGER_VERSION}"

Next we need to subclass the existing Application class and override the initializeDagger method. This class looks like this:

public class MockApplication extends MyApplication {

  @Override
  protected void initializeDagger() {
    mApplicationComponent = DaggerMockApplicationComponent.builder()
        .mockApplicationModule(new MockApplicationModule(this))
        .build();
  }
}

Looking at this class, we notice that we still need to create the MockApplicationComponent and the MockApplicationModule. The module is where all of the magic happens. Instead of creating instances of the dependencies, we return mocked Mockito instances of the classes.

@Module
public class MockApplicationModule {

  private final Application mApplication;

  public MockApplicationModule(Application app) {
    this.mApplication = app;
  }

  @Provides
  Context provideApplicationContext() {
    return mApplication;
  }

  @Provides
  SharedPreferences provideSharedPrefs() {
    // In a real project you would scope this, so the same mock
    // instance is returned every time it is injected
    return Mockito.mock(SharedPreferences.class);
  }
}

As stated above, the most interesting part is that we return a mocked instance of the class.

The MockApplicationComponent should extend the ApplicationComponent to make it assignable to mApplicationComponent. This class looks like this:

@Component(modules = {MockApplicationModule.class})
public interface MockApplicationComponent extends ApplicationComponent {
  void inject(MainActivityTest app);
}

As you can see, we can inject dependencies into our tests, in this case MainActivityTest. This allows us to do some very cool things. But first, let's look at the test class.

A very important part of this class is the way we initialize everything. Using JUnit’s @Before annotation we can perform the injection. We do this by getting the instrumentation as below:

@Inject
SharedPreferences mSharedPreferences;

@Before
public void setUp() {
  Instrumentation instrumentation =
      InstrumentationRegistry.getInstrumentation();
  MockApplication app = (MockApplication) instrumentation
      .getTargetContext().getApplicationContext();

  MockApplicationComponent component =
      (MockApplicationComponent) app.getApplicationComponent();
  component.inject(this);

  // Clear any stubbing left over from a previous test
  Mockito.reset(mSharedPreferences);
}

As a last step in this method we reset the mSharedPreferences mock, so that stubbings from one test cannot leak into the next.

Now we can start writing tests with the mocks. We use Espresso to write all of the tests, and setting everything up suddenly becomes very easy.

For example, MainActivity checks if it needs to show the on-boarding fragment. It does so by checking for a value in shared prefs. We can set the value to return with Mockito to test this. Our test now looks like this.

@Test
public void testSkipsOnBoarding() {
  // "show_onboarding" is an illustrative key; mActivityRule is assumed to be
  // an ActivityTestRule<MainActivity> created with launchActivity = false
  Mockito.when(mSharedPreferences.getBoolean(
      Mockito.eq("show_onboarding"), Mockito.anyBoolean())).thenReturn(false);
  mActivityRule.launchActivity(null);
  onView(withId(R.id.fab)).check(matches(isDisplayed()));
}

The first thing we do is stub the preferences mock. Next, we launch the activity, and finally we check that the FAB is visible. The FAB is not present in the on-boarding fragment, so in that case the test will fail.

We can also create one for the other case. The case where we actually want to show the welcome flow:

@Test
public void testShowsOnBoarding() {
  // "show_onboarding" is an illustrative key; mActivityRule is assumed to be
  // an ActivityTestRule<MainActivity> created with launchActivity = false
  Mockito.when(mSharedPreferences.getBoolean(
      Mockito.eq("show_onboarding"), Mockito.anyBoolean())).thenReturn(true);
  mActivityRule.launchActivity(null);
  onView(withId(R.id.fab)).check(doesNotExist());
}

With all of this in place, we have a maintainable, well-working structure for testing an app that uses Dagger 2. The technique can of course also be used in other cases where you need to override some behavior of your Application class. But remember: there are not many good reasons to have your own Application class in the first place. Most of the time it just adds overhead that should be avoided.

I applied this technique in my own testing project, Weekly, and Chiu-Ki Chan also posted a minimal example of it on GitHub.

That’s it for today’s post. If this post helped you, or you have any questions, please leave a comment.



This post first appeared on the warm beer blog.