In fine dining, omakase represents ultimate trust. The diner gives up the choice - “I’ll leave it up to you” - and the chef curates the experience. That trust only works because the chef has taste: years of training, constant tasting, and a deep empathy for what creates the perfect dining experience.

The omakase philosophy has already found its way into technology. I first heard it from a Head of Engineering at Autopay, and projects like Omakub (curated development environments) and Omarchy (curated architectural decisions) show how powerful it can be.

Platform engineering is having its own omakase moment. Just like in dining, endless self-serve options and customization rarely make the experience better; what matters is curation. The craft is in selecting the best tools and paths, the golden menus that help developers ship faster and with less friction. And being a good omakase platform team isn’t about ego. It’s about taste: knowing what truly matters to developers, and building the systems to act on it.

Here’s what I’ve learned from engineering leaders at companies like Datadog, YouTube, Honeycomb, and Miro on how platform or devex teams can develop that taste - serving their users while creating exceptional environments and experiences.

Developing Taste: Understanding what developers actually experience

The foundation of omakase platform engineering is empathy - a holistic and deep understanding of the people you serve and the experiences they have every day.

What do developers experience?

Software delivery can be seen as a cycle, experience-driven and data-informed, where teams measure, adapt, and improve in a continuous loop, moving seamlessly from user to developer and back again. 

This is how Ben Darfler, Head of Engineering at Honeycomb, framed it when we talked: “Direction and delivery both have to evolve together. If your car is broken down and can't move, you're not going to get there. If you're driving in the wrong direction, you're still not going to get there.”

Delivery quality depends on user experience data, the flag in the distance that shows where the product should head. Delivery productivity depends on developer experience data across many areas, the signals of how it feels, and how well it really works, to specify, build, test, and ship inside your environment: fast, easy, and with high quality. This is where platform engineering earns its place.

What do you optimize? Systems and experiences

That’s a lesson I took from Dmitry Derbenev, Deputy Head of Research and Development at Devexperts, when we discussed CI/CD efficiency. What defines “good” diverges between systems and experiences.

From a systems perspective, you optimize for speed and reliability: shorter build times, fewer failures. But from a developer experience perspective, the real currency is predictability. Why? Because predictability enables flow. A developer would rather have a consistent 30-minute build than a system that sometimes takes five minutes and other times fifty-five. The inconsistency creates uncertainty, wastes time, triggers context switching, and ultimately erodes trust in the environment. 

As Dmitry told me in our conversation: “We decided to measure and monitor the queue size, choosing it as our North Star metric. Why queue size? When there’s a queue, you don’t know how long it will take for your build to complete. It could take five minutes, ten minutes, fifteen, or even fifty-five. But with a standard build, knowing it will take 15 minutes gives you at least some predictability.”
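To make the distinction concrete, here is a minimal sketch of how a team might separate raw speed from predictability in CI data. The record shape, numbers, and alert threshold are all illustrative assumptions, not Devexperts' actual implementation:

```python
from statistics import mean, median, stdev

# Hypothetical build records: (queue_wait_minutes, build_minutes).
# Values and field layout are made up for illustration.
builds = [(0, 14), (2, 16), (0, 15), (12, 15), (0, 55), (1, 14)]

queue_waits = [q for q, _ in builds]
durations = [d for _, d in builds]

# Raw speed: how fast a typical build is.
print(f"median build: {median(durations)} min")

# Predictability: how much the total wait varies run to run.
totals = [q + d for q, d in builds]
cv = stdev(totals) / mean(totals)  # coefficient of variation
print(f"variation of total wait: {cv:.2f}")

# A queue-size style signal: flag when too many builds had to wait at all.
queued_ratio = sum(1 for q in queue_waits if q > 0) / len(builds)
if queued_ratio > 0.25:  # threshold is an arbitrary example
    print("warning: queueing is eroding predictability")
```

Note how the median build time looks healthy while the variation and queueing signals expose exactly the uncertainty Dmitry describes.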

The effect of this focus? Devexperts’ on-prem CI/CD system now runs more than a million builds a year with just two and a half DevOps engineers supporting it. In developer surveys, the build process stands out as one of the highest-rated experience drivers.

By optimizing for what truly matters, in this case, predictability and flow over raw speed, platform teams can be highly effective and earn developers’ trust. Just as diners trust an omakase chef to curate the best experience, developers trust platform teams that focus on the right things to create environments where great software can thrive.

Good taste means great feedback loops

An omakase chef earns trust by sensing the right flavors, balancing them with care, and serving them back in a way that elevates the whole meal. Great platform teams work the same way: they don’t just build tools; they tune into the right signals, interpret them wisely, and serve them back as improvements that lift the entire developer experience.

What do you sense? Input, progress, output signals

DORA metrics (deployment frequency, lead time, change failure rate, and time to restore) capture the output of the pipeline, showing how code gets shipped into production. FLOW metrics (velocity, time, efficiency, load, and distribution) capture the movement of value through the pipeline, from input to progress to output. SPACE metrics (satisfaction, performance, activity, communication, and efficiency) span the full delivery spectrum: capturing inputs, progress, and outputs. DevEx metrics provide the input data across day-to-day developer experience areas, the factors that either drive or block delivery progress and, ultimately, the output. Top tech teams make a clear distinction between input and output data, acting primarily on the inputs while controlling for the outputs. 
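To ground the output side of that spectrum, DORA-style numbers can be derived from plain deployment records. Everything below (the record shape, the observation window, the sample values) is an assumed example, not a standard API or benchmark:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: (merged_at, deployed_at, caused_failure).
deploys = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 17), False),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 3, 10), True),
    (datetime(2024, 5, 3, 8), datetime(2024, 5, 3, 12), False),
    (datetime(2024, 5, 6, 9), datetime(2024, 5, 6, 11), False),
]

days = 7  # assumed observation window

# Deployment frequency: deploys per day over the window.
frequency = len(deploys) / days

# Lead time for changes: merge-to-deploy, averaged.
lead_times = [dep - merged for merged, dep, _ in deploys]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deploys that caused a failure.
cfr = sum(1 for *_, failed in deploys if failed) / len(deploys)

print(f"deploys/day: {frequency:.2f}")
print(f"avg lead time: {avg_lead}")
print(f"change failure rate: {cfr:.0%}")
```

As Simon notes below, numbers like these show direction; they are benchmarks to control for, not targets to act on directly.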

Here’s how Simon Boudrias, Director of Engineering at Datadog, approaches it: “DORA is a solid industry benchmark. It shows direction, but it’s not a great target. We also layer in dev sentiment and track the local dev experience. Things like how long the TypeScript server takes to finish or how quickly the dev server reloads. As an engineering manager, when you judge the success of a team, you focus more on operational metrics, the ones that actually impact the output.”

How do you sense it? Subjective feedback, system logs

System logs validate inputs with objective traces: queue lengths, build durations, test runs, monitoring alerts. But just like in dining, logs only tell you what was cooked, not how it tasted. Developer experience surveys and comments close that gap, capturing the subjective signals that explain why the objective metrics look the way they do.

Top tech teams blend system logs with survey signals, never trusting logs in isolation when the data isn’t fully understood or clean.

As Egor Siniaev, Head of Engineering at Miro, put it: “Why you shouldn’t always believe the data, and why you need to challenge it every time, is something we’ve learned at Miro. I remember once I saw a significant improvement in our median, and I thought, ‘Wow, it’s ten times better!’ But I said to myself: I don’t believe it. And later we found out it was just one single repository we had created with small PRs that were reviewed and merged in less than an hour, all pairs. And yeah, that’s what skewed the statistics.”
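The sanity check Egor describes, breaking a suspicious aggregate down by repository, can be sketched in a few lines. Repository names and review times here are invented for illustration:

```python
from collections import defaultdict
from statistics import median

# Hypothetical PR records: (repository, review_hours).
prs = [
    ("core", 20), ("core", 26), ("web", 18), ("web", 30),
    # One new repo full of tiny PRs merged in under an hour:
    *[("scripts", 0.5) for _ in range(12)],
]

# The global median looks like a dramatic improvement...
print(f"overall median: {median(h for _, h in prs)} h")

# ...but a per-repository breakdown reveals the skew.
by_repo = defaultdict(list)
for repo, hours in prs:
    by_repo[repo].append(hours)

for repo, hours in sorted(by_repo.items()):
    print(f"{repo}: median {median(hours)} h over {len(hours)} PRs")
```

One outlier repository drags the overall median to half an hour while the established repos still sit around a day, which is exactly the kind of skew worth challenging before celebrating.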

Instead of taking logs at face value, leading teams use survey insights to define what great, good enough, and bad actually mean for raw system numbers. That’s the path Miro took for code review, Google for technical debt and collaboration, Devexperts for the CI/CD pipeline, and Pipedrive for focus time. Each case shows how survey insights or other sources of human assessment on raw system log data reframe what system data really tells you.
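One simple way to derive such “great / good enough / bad” bands is to pair survey ratings with the raw system numbers developers were describing. This is a hypothetical sketch; the rating scale, CI wait values, and band boundaries are all assumptions:

```python
# Hypothetical pairs of (survey rating on a 1-5 scale, observed CI wait in minutes).
samples = [(5, 8), (5, 12), (4, 15), (3, 25), (2, 40), (1, 60), (4, 14), (2, 38)]

def band(label, ratings):
    """Return the raw-metric range developers with these ratings reported."""
    waits = [w for r, w in samples if r in ratings]
    return label, min(waits), max(waits)

# Anchor thresholds in human assessment rather than picking them arbitrarily.
for label, lo, hi in [band("great", {5}), band("good enough", {3, 4}), band("bad", {1, 2})]:
    print(f"{label}: {lo}-{hi} min")
```

The point is the direction of inference: the survey defines what the log numbers mean, rather than the logs being graded against a threshold someone guessed.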

Do you live it? Tasting it yourself

These teams also go beyond the numbers. They dive into developer comments from surveys to uncover the root causes of friction, and they interact directly with developers to experience the work themselves. At Datadog, for example, the DevEx team runs embed programs. Embed-outs place platform engineers inside product teams for about two weeks, working side by side with developers to feel the friction firsthand and build personal connections that make feedback faster and more candid. Embed-ins flip the model: developers who raise issues are invited to join the DevEx team temporarily to co-create solutions, an approach also practiced at Miro. Similar direct-touch practices - Slack channels, Gemba walks, cross-team retrospectives, lean coffee sessions, interviews, and workshops - are all ways of turning feedback into lived experience and shared learning.