I often find myself in discussions where people ask “what is the role of ‘design / user research’?” or “how can ‘research’ support the product development process?”. In these discussions, it also often happens that ‘design / user research’ is treated as a synonym for ‘usability research’. You can find amazingly well-crafted ‘101 guides on how to conduct usability studies’, and more and more organisations use those techniques as a matter of course.
This tendency to equate ‘design / user research’ with ‘usability research’ made me think. In the past few years, I was lucky to take on various ‘research challenges’ within Skyscanner’s Apps Tribe and its Design Team. As our product grows, we face more and more complex problems, and it strikes me that we need to understand the nature of human beings ever more profoundly.
On this journey, Steve ‘Buzz’ Pearce and Bálint Orosz, two of my professional mentors at Skyscanner, inspired me to try out or develop new methods to answer the fundamental questions our travellers face. It helped me realise how diverse the world of ‘design / user research’ is, and that besides ‘usability research’ multiple other fields of research exist which can also add significant value to product development processes.
Let me share with you a framework I call the ‘Four Layers of Research’. It is really more of a research mindset, and it would be great to hear whether you can relate to it, and what methods you use for the layers described below.
The Four Layers of Research
1. Usability research
When building products, a highly important factor is whether or not people can actually use what we build. To illustrate: if they would like to move forward in our app, will they find how to take the ‘next step’? And if they would like to ‘go back’ one step, will they figure out how to do it? In this sense, usability research is all about making sure that the way we realised our solution is in line with what people expect and what feels natural to them.
Simply put, in usability research studies we are not asking whether people need the ‘Back button’; we assume that they do. The question we focus on is whether, at the moment they would like to go back, they know immediately, intuitively and without further thought how to do it.
2. Valuability research
This area of research is all about understanding whether people actually need a ‘Back button’ or not. Valuability research could help a lot in validating or falsifying a solution we plan to build for our travellers.
‘Validating or falsifying’ and ‘plan to build’ are the key terms here. We all have many nice ideas about what to build for our users, but one of the most important questions is whether people really need that solution. In valuability research studies, we consciously ignore whether our solution is usable and focus instead on whether it adds value and meaning to people’s everyday lives.
In other words: does our solution really resonate with our travellers’ needs, and does it really solve something valuable for them? Valuability research can be a powerful tool in the ‘product discovery phase’, specifically at the stage before we start building anything, when we are just planning to build ‘something’.
Honestly speaking, separating ‘usability’ and ‘valuability’ questions in research studies is extremely hard for me. With prototypes, there are so many things that create ‘noise’ and make it hard to identify why our solution fails in user discussions. The ultimate question is always there: did our solution fail because users don’t need it, or because we created something absolutely unusable for them?
To overcome this, emotional intelligence best practices and a deep analysis of people’s emotions and mental state help me a lot. Can you recall a moment when a user realised the value of a feature you worked on and started talking about it honestly and passionately? Shining eyes can be a good sign that you might have built something lovable (on the other hand, I try to keep in mind Peter Schwartz’s thought: “It is always worth asking yourself: how could I be wrong?”).
3. Contextual research
There are two different types of situations that regularly come up in our product development processes:
- What are the needs our users have not yet realised themselves, but would really love us to figure out for them?
- We come up with new directions, new products or sets of features for a group of people and start believing they would add lots of value – but how can we validate or falsify our assumptions, and how can we learn more about their context and environment in order to fit into those naturally?
In such situations, it can be best to ‘live and breathe’ with the people for whom we would like to build ‘that next big thing’. This is often called ‘ethnographic research’, or we can call it ‘contextual research’ as well.
At this early seed stage of a new product or feature, it can easily derail us if we assume we understand, rather than experience directly, how the people we plan to build for feel, live and behave. In the short run it is of course faster, cheaper, easier and more comfortable to ‘imagine’ how that group of users might feel and behave. But in the mid-run it adds lots of value (and helps decrease risk) if we jump into the context of those users and try to understand every aspect of their lives and emotions. In product development we always refer to the importance of ‘the user’s context’ and of their emotional and mental state. The most meaningful way to honour that is simply to spend time amongst our users, talking with them, living with them and, in this way, obtaining a deeper understanding of them.
Being with them also enables us to understand them consciously. This is one form of research that brings ‘people’s context into the house’ and opens up opportunities for product design to come up with solutions that really resonate with people. To be very pragmatic: contextual research can help you understand how people live and what emotions they have, and you may spot a need that leads you to design a ‘Back button’ (then test its valuability and usability aspects).
4. Conceptual research
Have you ever had a very fundamental question that everybody around you refers to, but you never had the chance to spend enough time with it, go deep enough and understand how it is embedded in human nature? We love fundamental questions such as ‘what is trust?’, ‘what is personalisation?’ or ‘who are our travellers?’. They help us question the status quo by going deeper and deeper, day by day.
To illustrate this with a tangible example: for our trust research, we turned to respected professors and subject matter experts in the fields of social sciences and behavioural psychology. We examined various concepts, tried to embrace as many thoughts as possible about the abstract notion of ‘trust’, and thought about how we could apply our learnings to the world of digital products. Then we distilled our learnings into a practical tool we called the ‘Trust Map’. The Trust Map enabled us to analyse our iOS application through the lens of trust (based on feedback we captured from our travellers). In a workshop, we came up with various ideas on how to move forward. We had tons of ideas, of course, but once they were all on a sheet of paper, we started to see how they connected with each other and could synthesise them into topics. Now we had a ‘set of topics’ on the table, and we think that if we explore them further, they can help us build more meaningful and trustworthy relationships with our travellers in a more ‘human’ way.
So how did conceptual research help us? We translated this abstract substance called ‘trust’ into opportunities in our product. And as a squad or design working group picks up one of these topics, it can start a focused ‘product discovery process’: do some contextual research to gather real-life experiences, then craft and prototype solutions, test whether those solutions are valuable to users, iterate on them and, once confident about the value of the solution, test its usability. At the end of the journey: release and learn. And iterate and learn and iterate.
László Priskin, Design / User Researcher at Skyscanner. László is based in Budapest, Hungary, working as a team member on Skyscanner’s renewed mobile app, available on Android and iOS. He started sharing his thoughts because he passionately believes in the power of discussion. He thinks whatever is written above will be outdated in a few weeks’ time, because building products means that we inspire each other, criticise each other and continuously expand our ways of thinking. László is happy to get in touch with you in the comments below, or on LinkedIn or Twitter. Views are his own.
This blogpost is part of our apps series on Codevoyagers. Check out our previous posts:
Posted by Tamas Chrenoczy-Nagy
Since we’ve just released our 3-in-1 update for the Skyscanner app, it’s a great time to share how we release iOS and Android apps at Skyscanner. In this post we would like to give you a brief overview of our app release process and how we adopted the de facto industry standards and extended them to our needs. Besides this, we would also like to share a couple of ideas about where we plan to improve in the future.
- Separating code drops from feature releases
- Release trains
- Feature flags
- Faster release cycles
- Future ideas and improvements
Decoupling releasing features from releasing the binary
Imagine a company where multiple teams are working on the same app. The teams own different functions, or different screens of the app, and would like to work and deliver new features independently. So if team A has some kind of delay in releasing a new feature, it shouldn’t block team B from releasing their own feature on time. The company should be able to release a new version of the app even if not all new features are ready for it.
It is also very important to be confident that a new feature delivers value to our users. So at first we release it only to a small number of users, and depending on their reaction we continue the rollout to everyone.
To achieve these goals, we had to decouple releasing a feature from releasing the app binary. We put new features behind a feature flag and only turn them on once the feature is ready for production. Meanwhile we release new binaries on a fixed schedule, so teams don’t need to synchronise their feature delivery; the schedule is known in advance and they can plan ahead easily.
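To make the decoupling concrete, here is a minimal sketch of an app-side flag check, written in Python purely as illustration. The `FlagStore` class and the flag names are invented for this example; they are not Skyscanner's actual API.

```python
# Minimal sketch of a feature-flag lookup. FlagStore and the flag
# names are illustrative, not an actual implementation.

class FlagStore:
    """Holds flag states fetched from a remote config service."""

    def __init__(self, remote_config):
        # remote_config: dict of flag name -> bool, refreshed periodically
        self._flags = dict(remote_config)

    def is_enabled(self, flag_name):
        # Unknown flags default to off, so unfinished features stay hidden
        return self._flags.get(flag_name, False)

flags = FlagStore({"new_search_results": True, "price_alerts_v2": False})

if flags.is_enabled("new_search_results"):
    print("showing new search results screen")
else:
    print("showing old search results screen")
```

The point is that the binary can ship with `price_alerts_v2` compiled in but dark; flipping the remote value later releases the feature without shipping a new binary.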
Release train – shipping the new binary on schedule
Releasing the binary follows a fixed schedule; let’s call it a release train. If you finish your new feature on time, you can release it with the upcoming release train. If you are not ready on time and miss the train, you can release the feature with the next one.
On both iOS and Android we have a 2-week release cycle, which means we ship a new binary bi-weekly. The releases on the two platforms are held on alternate weeks, so each week either a new iOS or a new Android release train starts.
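The alternating cadence can be sketched with a toy calculation. The choice that even ISO weeks belong to iOS is an arbitrary assumption made for this example, not an actual calendar:

```python
# Toy sketch: each platform ships bi-weekly on alternating weeks, so
# the platform whose train leaves in a given ISO week follows the week
# number's parity. Which parity maps to which platform is assumed here.

def train_platform(iso_week):
    return "iOS" if iso_week % 2 == 0 else "Android"

print([train_platform(w) for w in range(10, 14)])
# → ['iOS', 'Android', 'iOS', 'Android']
```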
Feature flags – the tool for actually releasing features
By using feature flags, we are able to specify which features the users should see when they run the application. The feature flags can be modified remotely, so we are able to turn on and off features without releasing a new binary.
Feature flags provide a lot of value at four different stages of development:
- Feature is under development: If the feature is still under development and it is not ready to be released to any users, the flag is only turned on for developers – so they can iterate on the feature and commit changes without causing any user disruption.
- Rolling out a new feature: If the feature is ready to be released, we usually turn it on first for only a small percentage of users (for instance 1%), so we can measure the users’ reaction and the quality of the feature (by checking analytics data – key business metrics, errors, crashes). If everything looks OK, we increase the rollout, sometimes in multiple steps, for instance 10%, then 100%.
- A/B testing: This scenario is similar to the previous one, but with variations of what we release. We would like to experiment with a new feature before releasing it to all of our users – so sometimes we ship multiple variants of the feature and measure which one performs best.
- Kill switch: If a feature has already gone live and something goes wrong (e.g. the app starts crashing because of it), we can turn it off remotely.
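A common way to implement such percentage rollouts – a sketch, not necessarily the exact mechanism used at Skyscanner – is to hash a stable user id into one of 100 buckets. A user then stays in the same bucket across sessions, and raising the percentage only ever adds users, never flips enabled users back off:

```python
import hashlib

def in_rollout(user_id, flag_name, percent):
    # Stable bucket in 0..99 derived from the flag and user id; the
    # same user always lands in the same bucket for a given flag.
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Because the bucket is compared against the percentage, every user
# enabled at 1% is still enabled at 10%: the rollout only ever grows.
users = [f"user-{i}" for i in range(1000)]
print(sum(in_rollout(u, "new_feature", 10) for u in users), "of 1000 enabled")
```

For 1000 users and a 10% rollout, the count printed lands near 100 (deterministically, since the hash is fixed for a given user set).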
We also used feature flags to release the 3-in-1 update. Early versions of the feature had been present in the app binary since last December. At the early stages it was turned on only in our development builds, then in our test builds. After we finished development, we released to 1%, 5%, 20%, 50% and finally 100% of our users using the feature flags, while continuously monitoring the feature’s performance. After the 100% rollout we kept the feature flag around as a kill switch for a while, so if something really bad had happened we could still have rolled it back.
Releasing a new binary frequently
Increasing the frequency of releases is key to keeping up the pace of delivery. Frequent releases help us experiment and iterate on our features faster: we get valuable data about user behaviour sooner, so we can react to it sooner. As of now we have a 2-week release cycle on both Android and iOS, and we plan to increase the frequency further.
Each Wednesday morning we have a code freeze on either iOS or Android. After the code freeze we start a ~1 week long stability period. During this period our main goal is to get confident in the stability of our new release, so we are continuously testing the app and working on fixing critical/major bugs.
After we have tested the new release and fixed all critical bugs, we still cannot be confident that the app will work well for all of our users; some issues can only be detected in production. That’s why, on Android at least, we use the staged rollout feature of Google Play. During a staged rollout the new release reaches users gradually (1% first, then 10%, then 100%). We continuously check key business metrics (like search rate and conversion rate), as well as error rate, crash rate, etc. If the metrics look good we increase the rollout; if we detect a major issue, we try to disable the offending feature using feature flags and ship an update with the next train. If that isn’t possible, we release a hotfix. The staged rollout usually takes about one week.
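The go/no-go decision at each stage can be sketched as follows. The stages, metric names and thresholds here are invented for illustration; in practice the decision also involves human judgement, not just a formula:

```python
# Sketch of the go/no-go decision during a staged rollout: compare key
# metrics for the new version against a baseline and either advance to
# the next stage or halt. All numbers below are made up.

STAGES = [1, 10, 100]  # percent of users

def next_stage(current_percent, metrics):
    healthy = (metrics["crash_rate"] <= 0.005
               and metrics["conversion_rate"] >= 0.95 * metrics["baseline_conversion"])
    if not healthy:
        return None  # halt: disable via feature flag or ship a hotfix
    later = [s for s in STAGES if s > current_percent]
    return later[0] if later else current_percent  # already at 100%

print(next_stage(1, {"crash_rate": 0.001,
                     "conversion_rate": 0.031,
                     "baseline_conversion": 0.030}))  # → 10
```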
After we validated the new version in production as well (using staged rollout) we release the app globally. But the story doesn’t end here. We keep monitoring the app metrics, user feedback/reviews, crashes and react if necessary.
What’s next? – There is a lot we can improve
Shipping a new app version to our users every two weeks is a good thing: we can quickly iterate and release new features. But is it good enough? Let’s do some maths on how long it takes for one new feature to get to the users.
Let’s say development of the feature takes two weeks. If you sum up the length of the release process, you can see it takes an extra two weeks to get that feature shipped to our users. Besides this, if the feature isn’t finished on time and misses the release train, that adds an extra two weeks again.
So shipping a new feature can take from 4 to 6 weeks to get to 100% rollout. If we need to iterate multiple times on the feature and make several experiments or adjustments, it is even longer. This is way too much compared to a web environment with a continuous delivery flow, where you can release the feature almost immediately. Can we achieve the same for apps? We believe so, in time.
Besides the areas for improvement in our own processes, we face several constraints coming from the nature of apps. One of these was the App Store review process, which in the past took at least one full week. Fortunately Apple has worked hard to bring this down – to as little as 1–2 days now – so it should no longer act as a blocker.
Another constraint is that apps are shipped to users in big packages, and app updates are still a big deal – even more so if we ever had to roll back something that couldn’t quickly be hidden with feature flags.
However, we can see promising improvements in the industry which might make app releases less heavyweight. Google’s Instant Apps feature is the best example and is worth checking out.
So full continuous delivery is not something that is possible for apps at this moment in time. But we can still apply best practices from the web and use them to increase our release frequency to get that bit closer.
Hopefully the industry will also move in a direction which supports our goals, so if our processes and tooling are sound, we should be able to capitalise on any changes without major lifting.
Tamas Chrenoczy-Nagy is a product engineer working on improving release and development processes and tooling for apps.
Posted by Zoltan Kolozsvari
Have you ever seen a seismometer-application running on your smartphone? It’s incredible. The refinement and responsiveness means that it picks up on even the smallest vibrations. So, what if you could use these vibrations in the way we use touch screens: to control your phone?
This was the question posed by a small team in our Budapest office, who launched ‘Project Woodpecker’ to create a knock (vibration) recognition component embedded within an application. Our aim was to see how viable and useful ‘knocking gestures’ could really be.
Posted by Csaba Toth
As developers of Skyscanner’s iOS app, we’re always thinking about the performance of the application. We know that even if we have a great application with a lot of features, lag can hamper the whole UX. Here are our top three tips to help address app performance.
Posted by Tamas Karai
You might have read our previous blog post about data driven development in apps, although if you haven’t, you can find that blog post here. This post is a more detailed overview showcasing the technical details and motivations behind building our new mobile analytics framework.
Posted by Akos Kapui
Mobile first design often refers to a principle that puts UI/UX design for mobile devices before desktop web. However, the implications go way beyond the UI: a mobile first strategy requires a new approach to planning, development and UX – and a new approach to API design as well. Although using the same RESTful APIs for all platforms would be possible, in reality it creates constraints for the mobile experience. The benefit of designing user interfaces only for the web was a massive simplification, and we engineers can easily get used to that level of comfort. This article discusses design considerations for service APIs that enable teams to deliver a mobile first experience.
Posted by Gergely Orosz
At Skyscanner we have been building apps using Objective C for a long time. After the Swift 2.0 release, we have pragmatically been moving over to this language. For example, all new code in our TravelPro application is written entirely in Swift, while we keep most of the existing 65,000 lines of perfectly good, tested Objective C code – only rewriting parts that actually need changing.
After publishing our journey in the article Transitioning From Objective C to Swift in 4 Steps – Without Rewriting The Existing Code, we received lots of questions on how exactly we did the migration. The Apple documentation gives some details on interop between Swift and Objective C. It doesn’t, however, give guidance for complex, real world projects with unit, integration and end to end tests.
If you have a pretty complex Objective C codebase where you’re looking to introduce Swift code, then this article will show you how we did it, step by step. This is a pretty detailed guide, which includes:
- Objective C from Swift – using existing Objective C code from new Swift code
- Swift from Objective C – using Swift code from the existing Objective C code
- Testing – how to test Swift or Objective C production code with Swift or Objective C test code
- Summary of all of this – a lowdown of the important things to keep in mind for the transition
Posted by Imre Kelényi
Skyscanner is a data-driven company at its core: our decisions are driven by data. This is fundamental for all Skyscanner engineering teams, including mobile app developers. Data is required to validate experiments, it feeds our alerting and monitoring systems, and it is the source of various metrics, including KPIs. In all cases, having complete, timely and accurate data is essential.
What kind of analytics and logging tools do we use to collect this data in our native apps? And how do we collect it in a scalable way? Let’s get started.
If you release an app, you can pick from a myriad of analytics tools to help you learn who uses your product and how they use it. Our apps are no exception: we send events to both third-party and in-house event logging services.
You might wonder why we need so many different tools. The main reason is that each has different strengths, and one tool might fit a team’s needs better than another. Google Analytics is great for following larger trends and KPIs. AppsFlyer can collect all the data (including app installations) needed for in-app ad targeting. However, most of our internal tools work best with our in-house event logging services. We also like to experiment with new tools, so the toolset can change from release to release. Anyhow, this post does not aim to give an overview of the available analytics tools (there are plenty of other articles about that).
When it comes to integrating analytics into apps, the first problem we needed to tackle was logging to multiple services. Naturally, we didn’t want to litter the code with duplicated event logging calls just to send the same event to several services. So we created a centralised analytics mediator framework which acts as a proxy and handles all event logging requests. Requests containing the event details are converted to each analytics provider’s own format and dispatched to the given service. This way a single call can trigger multiple logging services, and the framework can easily be extended with new tools. Event filtering is also needed, since we don’t want to send all events to every service: an App Start event might be relevant to each provider, but we don’t want to log every button tap in all of them.
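A minimal sketch of such a mediator, in Python for illustration only (class names are invented, and the real framework also converts each event into the provider's own wire format before dispatching):

```python
# Sketch of a mediator that fans one logging call out to several
# analytics providers, with per-provider event filtering.

class AnalyticsMediator:
    def __init__(self):
        self._providers = []  # (provider, filter) pairs

    def register(self, provider, event_filter=lambda name: True):
        self._providers.append((provider, event_filter))

    def log(self, name, properties=None):
        # A single call reaches every provider whose filter accepts it
        for provider, event_filter in self._providers:
            if event_filter(name):
                provider.send(name, properties or {})

class PrintProvider:
    def __init__(self, label):
        self.label = label
    def send(self, name, properties):
        print(f"{self.label}: {name}")

mediator = AnalyticsMediator()
mediator.register(PrintProvider("in-house"))              # gets everything
mediator.register(PrintProvider("third-party"),
                  event_filter=lambda n: n == "AppStart") # key events only

mediator.log("AppStart")        # reaches both providers
mediator.log("ListItem Tap")    # reaches only the in-house provider
```

Adding a new analytics tool then means registering one more provider, with no change to any call site.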
Automatic Event Logging
When it comes to our development workflows and tools, speeding up feature development and removing friction caused by tooling is one of the most important factors. For event logging, this means it should happen almost automatically, with minimal effort required from developers. To achieve that, we divide in-app events into two categories:
- UI interaction: events that are triggered by user interaction, such as tapping a button, dragging a slider or scrolling through a list.
- Business logic or data events: everything else, i.e. events not directly connected to the manipulation of UI elements. This category includes app start, the completion of an API call, or the business-critical point when the user initiates booking a flight.
Business/data events are logged explicitly, by manually adding logging calls to the codebase. The good news is that less than 20% of the events we send from apps fall into this category. The rest are UI interactions, for which we came up with an automatic solution.
The main issue with UI interactions is that there are a lot of things that can be logged. Basically, all controls on the screen are potential event sources, often triggering several different types of events. You could argue that we only need to track a few key interactions, but we have realised that high event coverage can be a great asset:
- We can precisely replay complete user journeys. This is not only useful for analysing how the app is being used, but can also be invaluable for debugging.
- Apps have a longer release cycle (especially on iOS), and in many cases we just can’t wait until the next release to add an event. Also, it is sometimes impossible to determine in advance what needs to be tracked to validate a certain assumption, or to gather the data needed to research a new feature.
So how do we achieve this? Let’s take an example. The following figure shows a screen from the Skyscanner Flights iOS app and the event SearchResultPage ListItem Tap. This event is automatically triggered when the user selects an itinerary card on the flight search results screen.
To make this happen, the only task of the developer is to assign an analytics name to each control that should be logged. No other extra code is needed. Of course there is no magic here: we have extended many of the platforms’ built-in UI classes to perform the logging automatically. On iOS, this mostly means categories (e.g. UIButton+Analytics) and some custom classes. On Android, we also use custom view classes and hijack event handlers so that logging is performed in addition to their normal behavior.
The other clever thing here is how the name and properties of the events are generated. We wanted a consistent and clear naming convention for our events that also follows the hierarchy of the UI. In the event name SearchResultPage ListItem Tap, the Tap part is the actual UI action that happened. The other two components are the name of the direct event source (ListItem) and its parent’s name (SearchResultPage). This name is generated by our analytics framework: when an event is triggered on a view, the framework walks up through the view’s ancestors and collects the analytics names in the hierarchy. On iOS, we use the responder chain as the event hierarchy. On Android, a similar chain is formed by connecting the activities, fragments and their view hierarchies.
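The ancestor walk that produces names like SearchResultPage ListItem Tap can be sketched like this; the `View` class here is a stand-in for the real iOS responder chain or Android view hierarchy, not actual framework code:

```python
# Sketch of the name-resolution walk: starting from the view that fired
# the event, collect analytics names while climbing the ancestor chain,
# then append the action name.

class View:
    def __init__(self, analytics_name=None, parent=None):
        self.analytics_name = analytics_name
        self.parent = parent

def event_name(view, action):
    parts = []
    node = view
    while node is not None:
        if node.analytics_name:        # unnamed views are skipped
            parts.append(node.analytics_name)
        node = node.parent
    # ancestors were collected innermost-first; reverse for outer-to-inner
    return " ".join(reversed(parts)) + " " + action

screen = View("SearchResultPage")
card = View("ListItem", parent=screen)
print(event_name(card, "Tap"))  # → SearchResultPage ListItem Tap
```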
Some events displayed in our in-app analytics debugging tool:
In addition to name resolution, this approach also provides an event context (a set of additional event properties) at any level of the chain. The properties of an event are the aggregate of all contexts added at any level. In the following figure, you can see that the tap event also includes the top-level application context, which contains, for instance, the user’s preferences (e.g. their language).
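Context aggregation along the same chain might look like this sketch (the keys shown and the inner-overrides-outer merge order are assumptions for illustration):

```python
# Sketch of context aggregation: each level of the chain can attach a
# context dict, and an event's properties are the merge of every
# context from the root down to the event source.

class Node:
    def __init__(self, context, parent=None):
        self.context = context
        self.parent = parent

def aggregate_context(node):
    chain = []
    while node is not None:
        chain.append(node)
        node = node.parent
    properties = {}
    for level in reversed(chain):         # root first, so inner levels
        properties.update(level.context)  # can override outer values
    return properties

app = Node({"language": "en-GB", "currency": "GBP"})
screen = Node({"screen": "SearchResultPage"}, parent=app)
item = Node({"item_index": 3}, parent=screen)

print(aggregate_context(item))
# → {'language': 'en-GB', 'currency': 'GBP', 'screen': 'SearchResultPage', 'item_index': 3}
```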
Challenges with Automatic Event Logging
Automatic logging and generating a lot of events surely sounds nice but is there a catch? Here are two possible issues and our answer to them:
- Does sending so many events hurt the client side performance or bandwidth? – At the time of writing, a typical five minute session of opening the app and performing three different searches while navigating between screens and changing various search parameters generates about 100 analytics events. The total traffic sent to and received from various analytics services is in the 300 kB range. Although this is not negligible (and we are constantly working on improving it), it is still a relatively small amount of data compared to the cost of a web browsing session. Also, almost all analytics tools support batched updates which means that only a few actual HTTP requests are used.
- Since event names are bound to the UI hierarchy, if the hierarchy changes, event names change as well. How do you deal with this? – It turned out that in most cases, if the UI hierarchy changes, the user experience changes too, and should be analysed in a different way. It is something of a paradigm shift, but accepting that if something moves in the UI hierarchy its analytics events might change as well works well in practice. If an event tied to a UI element is business critical and should never change, we can still use a custom data/business event with a fixed name.
Before adopting the new analytics framework and the automatic UI logging concept, we struggled a lot with inconsistent event names, missing event properties and unlogged interactions. The new system has really helped us avoid these problems and save precious development time. Anyhow, logging is really just the first step in the life of data. After it has arrived in Skyscanner’s data stores, it is transformed, visualised, analysed and fed to many different systems. And with several tens of millions of events logged per day, this is a complex problem our data team is delighted to tackle in innovative ways.
Posted by Gergely Orosz
[Note: as a follow-up post, for more details on how we made this transition, please see: How We Migrated Our Objective C Projects to Swift – Step By Step]
We started developing Skyscanner TravelPro in Objective C in March 2015. A couple of months later, when Swift 2.0 was released, we started slowly introducing Swift. Fast forward 8 months, and 100% of the new code we write is in Swift – all without rewriting any of our existing, working, robust and tested Objective C code; there would have been little point in doing so.
There are many resources on deciding whether to use Swift for a new project, and on best practices for writing Swift. However, if you’re working on a pretty large Objective C codebase, you will probably find this article useful. If not, one day you might bump into a codebase where you want to start using Swift: this article presents some advice on how to get started.
Here’s a visual representation of how our codebase has changed over 10 months. Since November 2015 all new code has been written in Swift, which now makes up about 10% of our 65,000-line codebase – and growing.
So what was our approach when going from Objective C to Swift?