Adopting Backstage as a Developer Portal
UK-based Wise (formerly TransferWise) lets its users send, spend, receive and hold money in over 50 different currencies with its Wise account. It distinguishes itself with complete fee transparency, international account and routing numbers, Apple and Google Pay compatibility, and a promise to always use the mid-market exchange rate – without hidden fees. In other words, the company's mission of facilitating "money without borders" depends on a powerful technology ecosystem and strong engineering talent.
And the secret to getting everything to line up neatly? A highly effective engineering organisation and Internal Developer Platform. As part of this, Wise built a developer portal atop Backstage – and Senior Technical Product Manager Lambros Charissis was right there in the thick of it.
In a recent webinar, Lambros explained the Engineering Experience team's journey toward building a developer portal. He revealed how he and his coworkers used product management techniques to achieve three critical goals: recognizing the need, scoping an alpha version, and validating the results. If you can't watch the full webinar, here's a recap of what he covered.
Part one: recognizing the need
Knowing you have a problem worth solving is crucial to taking the critical next steps. So how did Wise recognize it might need to build a developer portal?
As Lambros tells it, there was no shortage of clues. For starters, Wise has millions of customers and thousands of employees. A dedicated platform engineering organization with technical product managers is responsible for ensuring that Wise's engineers can ship products fast and safely.
The Engineering Experience team at Wise zeroed in on the problem. After sending out biannual surveys on the organization's developer experience, the team spotted a trend.
Wise employees kept giving the company low ratings in three crucial areas: documentation, discoverability, and cognitive load. Engineers also reported a lot of frustration – so the product managers knew they had work to do.
Lambros said the next step involved defining the problem's true boundaries: Explicitly determining what the issue was, who it affected, and why it was worthy of a solution.
Understanding engineering friction through user research
The engineering team started their search for answers with a tried-and-true methodology: Engaging with the people who knew best – the users.
Wise conducted interviews across the engineering organization, soliciting feedback from engineers of different seniority levels and specializations, including both the platform and product sides.
The interviewers asked questions covering subjects like users' personas and job roles. They addressed typical user journeys in depth to explore how people achieved their job goals using the existing tooling. Respondents also shared how they perceived their workflows – what worked poorly and where the inefficiencies lay – adding a layer of context that fleshed out the bigger picture.
This exploratory process had a defined structure. Lambros described how Wise always had its interviewers work in teams of three: The main interviewer kept the conversation flowing, a journey mapper asked user-journey-specific questions, and a note-taker caught any nuances the other two missed.
Conducting interviews in teams had multiple benefits. In addition to ensuring that everyone could focus on their role, the structure kept more stakeholders directly involved. The platform engineers and others who'd ultimately build the portal could join in on the interviews, talk to users directly, and build empathy for their problems.
Paying attention to the user journey enabled Wise to map out distinct value streams. For instance, the team explored which services specific tasks involved, the common paths engineers followed, and which changes might offer the most valuable benefits.
By exploring journeys and dividing them into their stages, Wise classified common tasks according to how much value they added – leading to a natural problem-solving priority.
Lambros said Wise leveraged this methodology to explore multiple user personas and journeys, producing around ten value stream maps in different domains. Doing so reframed the problem in concrete frequency terms, clarifying why it merited a solution.
Wise's existing approach cost developers around two preventable meetings per month and two hours of wasted time per week. The service ecosystem's inefficiencies also slowed down the onboarding process by around two days.
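Those figures can be made concrete with some back-of-the-envelope arithmetic. The sketch below annualizes the reported waste per developer; the meeting length and number of working weeks are assumptions for illustration, not figures from the talk:

```python
# Annualized cost of the old workflow per developer, using the article's
# figures: 2 preventable meetings/month and 2 wasted hours/week.
# Assumptions (NOT from the talk): 1-hour meetings, 46 working weeks/year.

MEETING_HOURS = 1    # assumed average meeting length, in hours
WORKING_WEEKS = 46   # assumed working weeks per year

meeting_hours_per_year = 2 * 12 * MEETING_HOURS  # 2 meetings/month x 12 months
wasted_hours_per_year = 2 * WORKING_WEEKS        # 2 hours/week x working weeks

total_hours = meeting_hours_per_year + wasted_hours_per_year
print(f"~{total_hours} hours lost per developer per year")
```

Under those assumptions, the old approach cost each developer on the order of a hundred-plus working hours a year – before even counting the two extra days of onboarding per new joiner.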
Part two: the first version
Wise had established that the problem merited a solution – but it didn't yet know whether a developer portal would be the best answer. For instance, the company could have simply tried to improve its existing tools.
Wise made the decision simpler by building a value proposition canvas from the value stream maps it had generated in the user research stage, laying out the user journey problems by priority. Then, the team rated each potential solution based on how well it solved each user journey problem – making it easier to rank the options based on their overall suitability.
If you've gotten this far, you shouldn't be surprised to know Wise went with a developer portal. Everything Lambros and his colleagues had uncovered indicated it offered the best value proposition for the journey problems. It was also a fittingly holistic approach to documentation and discovery and addressed the goal of lowering cognitive load – the three main problems identified in the initial user experience surveys.
Building the alpha dev portal
Wise picked Backstage as a base for its developer portal alpha version. It seemed to tick all the important boxes: it complemented the company's existing skills, had a strong community and a healthy ecosystem, was a CNCF incubating project, and was highly customizable.
Next, the team had to define the intended scope of the alpha project. The whole point was to get early feedback and find out whether they were on the right track. This meant the new portal needed to go beyond merely functioning: It also had to inform Wise about its feasibility and cost-effectiveness.
The alpha also had to remain small enough to handle yet big enough to be validated. In other words, the experiment needed to stay manageable but still incorporate a sufficient sample size for analysis and review.
This wasn't an easy task. "It's difficult to make this trade-off," said Lambros, "but I think what helps is defining hypotheses."
Working from their value stream maps, user profiles, and user journeys, the Wise team identified a few hypotheses. For instance, it assumed the portal could:
- Speed up developer onboarding,
- Decrease the time required to find help, and
- Improve the discovery of tools and best practices.
Naturally, the engineers also set a goal. They wanted to validate five of their hypotheses before going all in with the new dev portal.
Defining target hypotheses told the platform team what to include in the alpha – combining these conjectures with the value proposition canvas and its impact-ranked user journeys helped them identify the most useful details to work on. In this case, the engineers focused on three features:
- Enhancing the discoverability of engineering practices, tools, and news,
- Creating a software catalog that clarified how to use services and find their owners, and
- Making it easier to work with documentation using search tools and reader-friendly rendered pages.
Using Backstage simplified the process of scoping out a functional alpha portal with a working GUI that fit the bill. It also helped that the team stuck to a methodical architecture strategy by:
- Avoiding solution bias by evaluating each option equally based on its objective merits,
- Using wireframes to create alignment and get everyone on the same page when they were working on different elements, and
- Deciding whether to build a throwaway alpha the company could ditch after the initial test or an incremental one to build on post-validation. Wise picked an incremental alpha for reasons we'll hit shortly.
Part three: validating the solution
It wasn't enough to assume the portal worked. Wise needed to verify that it solved the problem by confirming the hypotheses.
To discover how they did, Lambros and his team picked another pool of users to investigate. In this case, they selected 20 engineers of different seniority levels and split them into groups: One group worked with the alpha while the other used the existing tooling.
This experimental design made data-gathering straightforward. The team leaders could quantify the alpha's usefulness simply by measuring how long each group needed to complete different tasks.
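The comparison described above boils down to measuring task completion times in each group. The sketch below shows the shape of that analysis with made-up timings – the talk did not publish raw numbers, so every value here is hypothetical:

```python
# Alpha-vs-control comparison sketch with HYPOTHETICAL timings (minutes
# per task). One group used the alpha portal, the other the old tooling.
alpha_times = [12, 9, 15, 11, 10]     # engineers using the alpha portal
control_times = [21, 18, 25, 17, 22]  # engineers using existing tooling

def mean(xs):
    """Arithmetic mean of a non-empty list of numbers."""
    return sum(xs) / len(xs)

# A ratio above 1.0 means tasks completed faster with the alpha portal.
speedup = mean(control_times) / mean(alpha_times)
print(f"alpha mean: {mean(alpha_times):.1f} min, "
      f"control mean: {mean(control_times):.1f} min, "
      f"speedup: {speedup:.2f}x")
```

With real data, a team would typically also check the spread of each group's times (or run a significance test) before drawing conclusions from the means.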
The team also collected follow-up surveys. After letting around 40 people try the alpha for a week, the project leaders asked those testers how well they thought the dev portal succeeded at fulfilling the hypotheses.
This kind of organized evaluation was an insightful way to judge the work honestly, revealing that the alpha only validated four out of the five target hypotheses. The results also highlighted that the initial solution was lacking in some areas, like documentation quality and search user experience.
Lambros' biggest realizations about the validation process were pretty straightforward – but nonetheless informative:
- The hypothesis method made it easier to focus and scope the initial version.
- Preventing user testing bias – by maintaining a degree of separation between the people who built the portal and the people who set the testing goals – kept the process objective.
- Testing was an insufficient validation method by itself: The user surveys were equally illustrative.
Interestingly, falling short of the hypothesis validation target didn't make the alpha portal an outright failure.
Since the team had chosen to go with an incremental approach, it got a running start at assembling the beta version; the product managers knew exactly where to focus their efforts to make the portal more useful. This knowledge inspired Wise's engineers to implement improvements like adding quality scores and documentation rating tools.
Lambros' story proves the value of a well-thought-out, iterative approach to architecting platforms, portals, and other mission-critical engineering tooling. Building these components in an organized way makes it possible to break big problems down and deliver solutions for high-priority concerns. It's a lesson well-learned considering the complexity of modern software development practices and toolchains.
But that's just the beginning – the talk also shared many other insights during an interesting Q&A session, so why not get the full story? Watch the whole session.