Metrics That Matter: The Ultimate Guide to Measuring Self-Organising Team Success (Without Driving Everyone Crazy)

Hey there, data-driven dynamos and agile aficionados! πŸ‘‹ Ready to dive into the wild world of measuring team success? Buckle up, because we’re about to turn those vanity metrics upside down and discover what really matters in the land of self-organising teams!

The Metrics Maze: Don’t Get Lost in the Numbers

Picture this: You’re in a maze of mirrors, each one showing a different metric. Story points completed! Sprint velocity! Lines of code! Number of commits! It’s enough to make your head spin faster than a hard drive from 1995. πŸ’ΏπŸ’«

But here’s the million-dollar question: Which of these actually tell you if your team is succeeding?

Spoiler alert: Probably none of them. 😱

The Great Metrics Showdown

Let’s break down some common metrics and see how they stack up:

1. Sprint Completion / Story Points

The Good: Easy to measure, gives a sense of progress.
The Bad: Can be gamed faster than a speedrunner playing Minecraft.
The Ugly: Focuses on output, not outcome.

2. Meeting Deadlines / Completing Projects

The Good: Aligns with business expectations.
The Bad: Can lead to corner-cutting and technical debt.
The Ugly: Doesn’t account for value delivered.

3. DevOps Metrics (Deployment Frequency, Lead Time, etc.)

The Good: Focuses on flow and efficiency.
The Bad: Can be technical overkill for some teams.
The Ugly: Doesn’t directly measure business impact.

4. Business Metrics / KPIs

The Good: Directly ties to business value.
The Bad: Can be hard to attribute to specific team actions.
The Ugly: Might be too long-term for sprint-by-sprint evaluation.

The Secret Sauce: Metrics That Actually Matter

“Not everything that counts can be counted, and not everything that can be counted counts.” – Albert Einstein

Al wasn’t talking about Agile metrics, but he might as well have been. So what should we be measuring? Let’s cook up a recipe for metrics that actually matter:

  1. A Dash of Business Impact: How many users did that new feature attract?
  2. A Sprinkle of Team Health: How’s the team’s morale and collaboration?
  3. A Pinch of Technical Excellence: Is the codebase getting better or turning into spaghetti?
  4. A Dollop of Customer Satisfaction: Are users sending love letters or hate mail?

Mix these together, and you’ve got a metric feast that tells you how your team is really doing!

The Goldilocks Zone of Measurement

Remember Goldilocks? She wanted everything juuuust right. Your metrics should be the same:

  • Not too many: Analysis paralysis is real, folks!
  • Not too few: “Vibes” isn’t a metric (no matter how much we wish it was).
  • Just right: Enough to guide decisions without needing a PhD in statistics.

The Metrics Makeover: Before and After

Let’s give some common metrics a makeover:

Before: Number of Story Points Completed ❌

After: Business Value Delivered per Sprint βœ…

Instead of just counting points, assign business value to stories and track that. It’s like turning your backlog into a stock portfolio!
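As an illustration, here’s a minimal Python sketch of tracking value rather than points. The `business_value` field and the story shape are assumptions, scored however your team assigns value to work:

```python
# Illustrative sketch: sum team-assigned business value of completed
# stories, rather than counting story points.

def business_value_delivered(stories):
    """Sum the business value of completed stories in a sprint."""
    return sum(s["business_value"] for s in stories if s["done"])

sprint = [
    {"id": "STORY-1", "points": 5, "business_value": 8, "done": True},
    {"id": "STORY-2", "points": 3, "business_value": 2, "done": True},
    {"id": "STORY-3", "points": 8, "business_value": 13, "done": False},
]

# 8 + 2 = 10 units of value delivered, even though 8 points are "in progress"
print(business_value_delivered(sprint))  # → 10
```

Note how the unfinished 8-point story contributes nothing: value lands when the story is done, not when effort is spent.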

Before: Code Commit Frequency ❌

After: Feature Usage and User Engagement βœ…

Who cares how often you commit if users aren’t clicking that shiny new button?

Before: Bug Count ❌

After: User-Reported Issues vs. Proactively Fixed Issues βœ…

This shows both quality and how well you’re anticipating user needs. Crystal ball coding, anyone?

Some of your more technical metrics may work better as SLAs. Quality is a good example: we want to deliver business value without reducing quality.

You can usually glean user engagement from some kind of web analytics (Google Analytics, etc.). Whatever tool you use, focus on the core user actions people take on your system; for e-commerce, that’s usually a completed booking or the step conversion in your funnel. These can even be near real time.

The Team Metrics Workshop: A Step-by-Step Guide

Want to revolutionise your team’s metrics? Try this workshop:

  1. Metric Brainstorm: Have everyone write down metrics they think matter.
  2. Business Value Voting: Get stakeholders to vote on which metrics tie closest to business goals.
  3. Feasibility Check: Can you actually measure these things without hiring a team of data scientists?
  4. Trial Run: Pick top 3-5 metrics and try them for a sprint.
  5. Retrospective: Did these metrics help or just add noise?

Repeat until you find your team’s metric sweet spot!

The Metrics Mindset: It’s a Journey, Not a Destination

Here’s the thing about metrics for self-organising teams: They should evolve as your team evolves. What works for a new team might not work for a seasoned one. It’s like updating your wardrobe – what looked good in the 90s probably doesn’t cut it now (unless you’re going for that retro vibe).

The Golden Rules of Team Metrics

  1. Measure what matters, not what’s easy.
  2. If a metric doesn’t drive action, it’s just noise.
  3. Team metrics should be about the team, not individuals.
  4. Metrics should spark conversations, not end them.
  5. When in doubt, ask the team what they think is important.

Wrapping Up: The Metric Mindfulness Movement

Measuring the success of self-organising teams isn’t about finding the perfect metric – it’s about finding the right combination of indicators that help your team improve and deliver value. It’s like being a DJ – you’re mixing different tracks to create the perfect sound for your audience.

Remember, the goal isn’t to hit some arbitrary numbers, it’s to build awesome products, delight users, and have a team that loves coming to work (or logging in) every day. If your metrics are helping with that, you’re on the right track!

So go forth, measure wisely, and may your charts always be up and to the right! πŸ“ˆ

What wild and wacky metrics have you seen in the wild? Got any metric horror stories or success sagas? Share in the comments – let’s start a metric revolution! πŸš€

P.S. If this post helped you see metrics in a new light, share it faster than your CI/CD pipeline! Your fellow tech leads will thank you (maybe with actual thank-you metrics)!

The Art of Hands-Off Management: Coaching Self-Organizing Teams Without Turning into a Micromanager

Hey there, tech leads and engineering managers! πŸ‘‹ Are you ready to level up your leadership game? Today, we’re diving into the delicate art of coaching self-organizing teams without accidentally morphing into the dreaded micromanager. Buckle up, because we’re about to walk the tightrope of hands-off management!

The Micromanager’s Dilemma

Picture this: You’re leading a team of brilliant devs. They’re self-organizing, they’re agile, they’re everything the tech blogs say they should be. But… they’re about to make a decision that makes your eye twitch. Do you:

A) Swoop in like a coding superhero and save the day?
B) Bite your tongue so hard you taste binary?
C) Find a way to guide without grabbing the wheel?

If you chose C, congratulations! You’re ready for the world of coaching self-organizing teams. If you chose A or B, don’t worry – we’ve all been there. Let’s explore how to nail that perfect balance.

The Golden Rule: Ask, Don’t Tell

“The art of leadership is saying no, not saying yes. It is very easy to say yes.” – Tony Blair

Okay, Tony wasn’t talking about tech leadership, but the principle applies. When you’re tempted to give directions, try asking questions instead. It’s like the difference between giving someone a fish and teaching them to fish – except in this case, you’re not even teaching. You’re just asking if they’ve considered using a fishing rod instead of their bare hands.

Example Time!

Let’s say your team is struggling with large, monolithic tasks that are slowing down the sprint. Instead of mandating “No task over 8 hours!”, try this:

You: “Hey team, I noticed our sprint completion rate is lower than usual. Any thoughts on why?”

Team: “Well, we have these huge tasks that only one person can work on…”

You: “Interesting. How might that be affecting our workflow?”

Team: “I guess it leads to a lot of ‘almost done’ stories at the end of the sprint.”

You: “Hmm, what could we do to address that?”

See what you did there? You guided them to the problem and let them find the solution. It’s like inception, but for project management!

The Five Whys: Not Just for Toddlers Anymore

Remember when kids go through that phase of asking “Why?” to everything? Turns out, they might be onto something. The Five Whys technique is a great way to dig into the root of a problem without telling the team what to do.

Here’s how it might go:

  1. Why is our sprint completion rate low?
  2. Why do we have a lot of long-running tasks?
  3. Why are our tasks so big?
  4. Why haven’t we broken them down further?
  5. Why didn’t we realize this was an issue earlier?

By the fifth “why,” you’ve usually hit the root cause. And the best part? The team has discovered it themselves!

When in Doubt, Shu Ha Ri

No, that’s not a new sushi restaurant. Shu Ha Ri is a concept from martial arts that applies beautifully to coaching self-organizing teams:

  • Shu (Follow): The team follows the rules and processes.
  • Ha (Detach): The team starts to break away from rigid adherence.
  • Ri (Fluent): The team creates their own rules and processes.

As a coach, your job is to recognize which stage your team is in and adapt accordingly. New team? Maybe they need more structure (Shu). Experienced team? Let them break some rules (Ha). Rockstar team? Stand back and watch them soar (Ri).

It’s a great way to introduce a process that isn’t overbearing. For example, you can say: “How about we try X my way for a sprint or two, see how you like it, and evolve it from there?”

The KPI Conundrum

“Not everything that can be counted counts, and not everything that counts can be counted.” – Albert Einstein

Al knew what he was talking about. When it comes to measuring the success of self-organizing teams, you need a KPI (Key Performance Indicator) that’s:

  • Instantly measurable (because who has time for complex calculations?)
  • Team-focused (no individual call-outs here)
  • Connected to business value (because that’s why we’re all here, right?)

Avoid vanity metrics like lines of code or number of commits. Instead, focus on things like deployment frequency, lead time for changes, or even better – actual business impact metrics.

Why instantly measurable? It doesn’t necessarily need to be instant, as long as it’s timely: the sooner you know results, the sooner you can change direction. If it’s very timely, you can even get to the point of gamification, but more on that in another post.

A good KPI sets the course for the team, can settle arguments, and helps them course-correct if they choose the wrong direction.

It’s also good to agree on SLAs for technical metrics (quality, etc.) to make sure you don’t unknowingly trade off the long term for the short term.

The Coaching Toolkit: Your Swiss Army Knife of Leadership

Here are some tools to keep in your back pocket:

  1. The Silence Technique: Sometimes, the best thing you can say is nothing at all. Let the team fill the void. This will encourage your team to speak up on their own.
  2. The Mirror: Reflect the team’s ideas back to them. It’s like a verbal rubber duck debugging session.
  3. The Hypothetical: “What would happen if…” questions can open up new avenues of thinking.
  4. The Devil’s Advocate: Challenge assumptions, but make it clear you’re playing a role; if you don’t, you may come across as overly negative and unsupportive.
  5. The Celebration: Recognize and celebrate when the team successfully self-organizes and solves problems.

Wrapping Up: The Zen of Hands-Off Management

Coaching self-organizing teams is a bit like being a gardener. You create the right conditions, you nurture, you occasionally prune, but ultimately, you let the plants do their thing. Sometimes you might get an odd-shaped tomato, but hey – it’s organic!

Remember, your goal is to make yourself progressively less necessary. If you’ve done your job right, the team should be able to function beautifully even when you’re on that beach vacation sipping piΓ±a coladas.

So go forth, ask questions, embrace the awkward silences, and watch your team bloom!

What’s your secret sauce for coaching self-organizing teams? Have you ever accidentally micromanaged and lived to tell the tale? Share your war stories in the comments – we promise not to judge (much)! πŸ˜‰

P.S. If you enjoyed this post, don’t forget to smash that like button, ring the bell, and subscribe to our newsletter for more tech leadership gems! (Just kidding, this isn’t YouTube, but do share if you found it helpful!)

Measuring Product Health: Beyond Code Quality

In the world of software development, we often focus on code quality as the primary measure of a product’s health. While clean, efficient code with passing tests is crucial, it’s not the only factor that determines the success of a product. As a product engineer, it’s essential to look beyond the code and understand how to measure the overall health of your product. In this post, we’ll explore some key metrics and philosophies that can help you gain a more comprehensive view of your product’s performance and impact.

The “You Build It, You Run It” Philosophy

Before diving into specific metrics, it’s important to understand the philosophy that underpins effective product health measurement. We follow the principle of “You Build It, You Run It.” This approach empowers developers to take ownership of their products not just during development, but also in production. It creates a sense of responsibility and encourages a deeper understanding of how the product performs in real-world conditions.

What Can We Monitor?

When it comes to monitoring product health, there are several areas we usually focus on:

  1. Logs: Application, web server, and system logs
  2. Metrics: Performance indicators and user actions
  3. Application Events: State changes within the application

While all these are important, it’s crucial to understand the difference between logs and metrics, and when to use each.

The Top-Down View: What Does Your Application Do?

One of the most important questions to ask when measuring product health is: “What does my application do?” This top-down approach helps you focus on the core purpose of your product and how it delivers value to users. Ultimately, when this value is impacted, you know it’s time to act.

Example: E-commerce Website

Let’s consider an e-commerce website. At its core, the primary function of such a site is to facilitate orders. That’s the ultimate goal – to guide users through the funnel to complete a purchase.

So, how do we use this for monitoring? We ask two key questions:

  1. Is the application successfully processing orders?
  2. How often should it be processing orders, and is it meeting that frequency right now?

How to Apply This?

To monitor this effectively, we generally look at 10-minute windows throughout the day (for example, 8:00 to 8:10 AM). For each window, we calculate the average number of orders for that same time slot on the same day of the week over the past four weeks. If the current number falls below this average, it triggers an alert.

This approach is more nuanced and effective than setting static thresholds. It naturally adapts to the ebb and flow of traffic throughout the day and week, reducing false alarms while still catching significant drops in performance. By using dynamic thresholds based on historical data, you’re less likely to get false positives during normally slow periods, yet you remain sensitive enough to catch meaningful declines in performance.

One of the key advantages of this method is that it avoids the pitfalls of static thresholds. With static thresholds, you often face a dangerous compromise. To avoid constant alerts during off-hours or naturally slow periods, you might set the threshold very low. However, this means you risk missing important issues during busier times. Our dynamic approach solves this problem by adjusting expectations based on historical patterns.

While we typically use 10-minute windows, you can adjust this based on your needs. For systems with lower volume, you might use hourly or even daily windows. This will make you respond to problems more slowly in these cases, but you’ll still catch significant issues. The flexibility allows you to tailor the system to your specific product and business needs.
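The windowed comparison described above can be sketched in a few lines of Python. The data shape (order counts already bucketed into 10-minute windows keyed by weekday and window start) and the 70% tolerance are illustrative assumptions, not a prescription:

```python
# Illustrative sketch of dynamic-threshold alerting: compare the current
# window's order count against the average for the same weekday and
# window over the past four weeks.
from statistics import mean

def expected_orders(history, weekday, window):
    """Average orders for this weekday/window over the past four weeks."""
    return mean(week[(weekday, window)] for week in history)

def should_alert(current, history, weekday, window, tolerance=0.7):
    """Alert when the current window drops below 70% of the 4-week average."""
    return current < expected_orders(history, weekday, window) * tolerance

# Four weeks of history for Monday, 08:00-08:10 (average = 125 orders)
history = [{("Mon", "08:00"): n} for n in (120, 110, 130, 140)]

print(should_alert(100, history, "Mon", "08:00"))  # → False (above 70% of 125)
print(should_alert(60, history, "Mon", "08:00"))   # → True (significant drop)
```

Because the baseline is recomputed per window, a quiet Sunday morning gets a quiet-Sunday-morning threshold, not the same static floor as a busy Friday lunchtime.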

Another Example: Help Desk Chat System

Let’s apply our core question, “What does this system DO?”, to a different type of application: a help desk chat system. This question is crucial because it forces us to step back from the technical details and focus on the fundamental purpose of the system and the value it delivers to the business and, ultimately, the customer.

So, what does a help desk chat system do? At its most basic level, it allows communication between support staff and customers. But let’s break that down further:

  1. It enables sending messages
  2. It displays these messages to the participants
  3. It presents a list of ongoing conversations

Now, you might be tempted to say that sending messages is the primary function, and you’d be partly right. But remember, we’re thinking about what the system DOES, not just how it does it.

With this in mind, how might we monitor the health of such a system? While tracking successful message sends is important, it might not tell the whole story, especially if message volume is low. We should also consider monitoring:

  • Successful page loads for the conversation list (Are users able to see their ongoing chats?)
  • Successful loads of the message window (Can users access the core chat interface?)
  • Successful resolution rate (Are chats leading to solved problems?)

By expanding our monitoring beyond just message sending, we get a more comprehensive view of whether the system is truly doing what it’s meant to do: helping customers solve their problems efficiently.

This example illustrates why it’s so important to always start with the question, “What does this system DO?” It guides us towards monitoring metrics that truly reflect the health and effectiveness of our product, rather than just its technical performance.

A 200 OK response is not always OK.

As you consider your own systems, always begin with this fundamental question. It will lead you to insights about what you should be measuring and how you can ensure your product is truly serving its purpose.

The Bottom-Up View: How Does Your Application Work?

While the top-down view focuses on the end result, the bottom-up approach looks at the internal workings of your application. This includes metrics such as:

  • HTTP requests (response time, response code)
  • Database calls (response time, success rate)

Modern systems often collect these metrics automatically through auto-instrumentation or agent-based telemetry, reducing the need for custom instrumentation.
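In spirit, these bottom-up metrics amount to something like the hand-rolled timing decorator below. In practice an APM agent or auto-instrumentation collects this for you, so treat this purely as an illustration of what gets recorded:

```python
# Illustrative sketch: record duration and status code for each handler
# call, the raw material for response-time and error-rate metrics.
import time
from collections import defaultdict

metrics = defaultdict(list)  # name -> [(duration_seconds, status_code), ...]

def instrumented(name):
    def decorator(handler):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            status, body = handler(*args, **kwargs)
            metrics[name].append((time.perf_counter() - start, status))
            return status, body
        return wrapper
    return decorator

@instrumented("GET /orders")
def list_orders():
    # Stand-in for a real HTTP handler returning (status, body)
    return 200, ["order-1", "order-2"]

list_orders()
duration, status = metrics["GET /orders"][0]
print(status)  # → 200
```

From records like these you can derive percentile response times and success rates per endpoint, which is exactly what off-the-shelf telemetry gives you without the boilerplate.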

Prioritizing Alerts: When to Wake Someone Up at 3 AM

A critical aspect of product health monitoring is knowing when to escalate issues. Ask yourself: Should the Network Operations Center (NOC) call you at 3 AM if a server has 100% CPU usage?

The answer is no – not if there’s no business impact. If your core business functions (like processing orders) are unaffected, it’s better to wait until the next day to address the issue.

Using Loss as a Currency for Prioritization

Once you’ve established a health metric for your system and can compare current performance against your 4-week average, you gain a powerful tool: the ability to quantify “loss” during a production incident. This concept of loss can become a valuable currency in your decision-making process, especially when it comes to prioritizing issues and allocating resources.

Imagine your e-commerce platform typically processes 1000 orders per hour during a specific time window, based on your 4-week average. During an incident, this drops to 600 orders. You can now quantify your loss: 400 orders per hour. If you know your average order value, you can even translate this into a monetary figure. This quantification of loss becomes your currency for making critical decisions.

With this loss quantified, you can now make more informed decisions about which issues to address first. This is where the concept of “loss as a currency” really comes into play. You can compare the impact of multiple ongoing issues, justify allocating more resources to high-impact problems, and make data-driven decisions about when it’s worth waking up engineers in the middle of the night.

Reid Hoffman, co-founder of LinkedIn, once said, “You won’t always know which fire to stamp out first. And if you try to put out every fire at once, you’ll only burn yourself out. That’s why entrepreneurs have to learn to let fires burnβ€”and sometimes even very large fires.” This wisdom applies perfectly to our concept of using loss as a currency. Sometimes, you have to ask not which fire you should put out, but which fires you can afford to let burn. Your loss metric gives you a clear way to make these tough decisions.

This approach extends beyond just immediate incident response. You can use it to prioritize your backlog, make architectural decisions, or even guide your product roadmap. When you propose investments in system improvements or additional resources, you can now back these proposals with clear figures showing the potential loss you’re trying to mitigate, albeit sometimes with a pinch of crystal-ball gazing about how likely these incidents are to occur again.

By always thinking in terms of potential loss (or gain), you ensure that your team’s efforts are always aligned with what truly matters for your business and your users. You create a direct link between your technical decisions and your business outcomes, ensuring that every action you take is driving towards real, measurable impact.

Remember, the goal isn’t just to have systems that run smoothly from a technical perspective. It’s to have products that consistently deliver value to your users and meet your business objectives. Using loss as a currency helps you maintain this focus, even in the heat of incident response or the complexity of long-term planning.

In the end, this approach transforms the abstract concept of system health into a tangible, quantifiable metric that directly ties to your business’s bottom line.

Conclusion: A New Perspective on Product Health

As we’ve explored throughout this post, measuring product health goes far beyond monitoring code quality or individual system metrics. It requires a holistic approach that starts with a fundamental question: “What does our system DO?” This simple yet powerful query guides us toward understanding the true purpose of our products and how they deliver value to users.

By focusing on core business metrics that reflect this purpose, we can create dynamic monitoring systems that adapt to the natural ebbs and flows of our product usage. This approach, looking at performance in time windows compared to 4-week averages, allows us to catch significant issues without being overwhelmed by false alarms during slow periods.

Perhaps most importantly, we’ve introduced the concept of using “loss” as a currency for prioritization. This approach transforms abstract technical issues into tangible business impacts, allowing us to make informed decisions about where to focus our efforts. As Reid Hoffman wisely noted, we can’t put out every fire at once – we must learn which ones we can let burn. By quantifying the loss associated with each issue, we gain a powerful tool for making these crucial decisions.

This loss-as-currency mindset extends beyond incident response. It can guide our product roadmaps, inform our architectural decisions, and help us justify investments in system improvements. It creates a direct link between our technical work and our business outcomes, ensuring that every action we take drives towards real, measurable impact.

Remember, the ultimate goal isn’t just to have systems that run smoothly from a technical perspective. It’s to have products that consistently deliver value to our users and meet our business objectives.

As you apply these principles to your own systems, always start with that core question: “What does this system DO?” Let the answer guide your metrics, your monitoring, and your decision-making. In doing so, you’ll not only improve your product’s health but also ensure that your engineering efforts are always aligned with what truly matters for your business and your users.

Strategies for Successful Monolith Splitting

In our previous post, we explored how to identify business domains in your monolith and create a plan for splitting. Now, let’s dive into the strategies for executing this plan effectively. We’ll cover modularization techniques, handling ongoing development during the transition, and measuring your progress.

If you are in the early stages of the chart, you can probably look into modularization. If, however, you are towards the right-hand side (like we were), you will need to take some more drastic action.

If you are on the right-hand side, your monolith is at the point where you need to stop writing code there NOW.

There are two things to consider:

  • For new domains, or significant new features in existing domains, start them outside straight away
  • For existing domains, build a new system for each of them, and move the code out

Once your code is in a new system, you get all the benefits straight away on that code. You aren’t waiting for an entire system to migrate before you see results in your velocity. This is why we say start with the high-volume change areas and domains first.

How do you stop writing code there “now”? Apply the open/closed principle at the system level:

  1. Open for extension: Extend functionality by consuming events and calling APIs from new systems
  2. Closed for modification: Limit changes to the monolith, aim to get to the point where it’s only crucial bug fixes

This pattern encourages you to move to the new high development velocity systems.

Modularization: The First Step for those on the Left of the chart

Before fully separating your monolith into distinct services, it’s often beneficial to start with modularization within the existing system. This approach, sometimes called the “strangler fig pattern,” can be particularly effective for younger monoliths.

Modularization is a good strategy when:

  • Your monolith is relatively young and manageable
  • You want to gradually improve the system’s architecture without a complete overhaul
  • You need to maintain the existing system while preparing for future splits

However, be wary of common pitfalls in this process:

  • Avoid over-refactoring; focus on creating clear boundaries between modules
  • Ensure your modularization efforts align with your identified business domains

For ancient monoliths with extremely slow velocity, a more drastic “lift and shift” approach into a new system is recommended.

Integrating New Systems with the Monolith, for those to the Right

When new requirements come in, especially for new domains, start implementing them in new systems immediately. This approach helps prevent your monolith from growing further while you’re trying to split it.

Integrating new systems with your monolith requires these considerations:

  1. Add events for everything that happens in your monolith, especially around data or state changes
  2. Listen to these events from new systems
  3. When new systems need to call back to the monolith, use the monolith’s APIs

This event-driven approach allows for loose coupling between your old and new systems, facilitating a smoother transition.
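Here’s a toy in-memory sketch of that coupling direction: the monolith publishes an event on every state change, and a new system subscribes to it. A real setup would use a broker (Kafka, SNS/SQS, and so on), and all the names here are invented:

```python
# Toy event bus illustrating monolith -> new-system integration.
# The monolith publishes; new systems subscribe and react.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)

bus = EventBus()
received = []

# New system listens for monolith events...
bus.subscribe("order.created", lambda payload: received.append(payload))

# ...and the monolith publishes on every data/state change.
bus.publish("order.created", {"order_id": 42, "total": 99.5})

print(received)  # → [{'order_id': 42, 'total': 99.5}]
```

The monolith never calls the new system directly; it only announces what happened, which keeps the coupling loose and lets you add more subscribers without touching legacy code.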

Existing Domains: The Copy-Paste Approach for those to the Right

If your monolith is in particularly bad shape, sometimes the best approach is the simplest: build a new system, copy-paste the code across, point the L7 router at it, and call that step one. Don’t get bogged down trying to improve everything right away. Focus on basic linting and formatting, but avoid major refactoring or upgrades at this stage. The goal is to get the code into the new system first, then improve it incrementally.

However, this approach comes with its own set of challenges. Here are some pitfalls to watch out for:

Resist the urge to upgrade everything: A common mistake is trying to upgrade frameworks or libraries during the split. For example, one team, 20% into their split, decided to upgrade React from version 16 to 18 and move all tests from Enzyme to React Testing Library in the new system. This meant that for the remaining 80% of the code, they not only had to move it but also refactor tests and deal with breaking React changes. They ended up reverting to React 16 and keeping Enzyme until further into the migration.

Remember: the sooner your code gets into the new system, the sooner you get the velocity benefits.

Don’t ignore critical issues: While the “just copy-paste” approach can be efficient, it’s not an excuse to ignore important issues. In one case, a team following this advice submitted a merge request that contained a privilege escalation security bug, which was fortunately caught in code review. When you encounter critical issues like security vulnerabilities, fix them immediately – don’t wait.

Balance speed with improvements: It’s okay to make some improvements as you go. Simple linting fixes that can be auto-applied by your IDE or refactoring blocking calls into proper async/await patterns are worth the effort. It’s fine to spend a few extra hours on a multi-day job to make things a bit nicer, as long as it doesn’t significantly delay your migration.

The key is to find the right balance. Move quickly, but don’t sacrifice the integrity of your system. Make improvements where they’re easy and impactful, but avoid getting sidetracked by major upgrades or refactors until the bulk of the migration is complete.

Measuring Progress and Impact, Part 1: Velocity

Your goal is to have business impact. To start with, impact comes from the velocity game, so that’s where our measurements start.

Number of MRs on new vs old systems: Initially, focus on getting as many engineers onto the new (high-velocity) systems as possible. Compare the number of MRs on old vs new over time and monitor the change to make sure you are having an impact here first.

Overall MR growth: If the total number of MRs across all systems is growing significantly, it might indicate incorrect splitting or dragging incremental work.

Work tracking across repositories: Ask engineers to use the same JIRA ID (or equivalent) for related work across repositories, in the branch name or MR title, so you can track units of work spanning both old and new systems.

Velocity metrics on old vs new: Don’t assume your new systems will always be better; compare old vs new on your velocity metrics and make sure you are actually seeing the difference.
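The cross-repository tracking above can be sketched as a small grouping script. The JIRA-style regex and the MR record shape are assumptions for illustration, not any real API:

```python
# Hypothetical sketch: pull a JIRA-style ticket ID out of branch names or
# MR titles and group merge requests by it, across repositories.
import re
from collections import defaultdict

JIRA_ID = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def group_mrs_by_ticket(mrs):
    """Group merge requests by the first JIRA ID found in branch or title."""
    groups = defaultdict(list)
    for mr in mrs:
        match = JIRA_ID.search(mr["branch"]) or JIRA_ID.search(mr["title"])
        if match:
            groups[match.group(1)].append(mr["repo"])
    return dict(groups)

mrs = [
    {"repo": "monolith", "branch": "PROJ-101-remove-endpoint",
     "title": "Remove old endpoint"},
    {"repo": "orders-service", "branch": "feature/new-endpoint",
     "title": "PROJ-101 new endpoint"},
]

print(group_mrs_by_ticket(mrs))
# → {'PROJ-101': ['monolith', 'orders-service']}
```

Tickets that show up in both the monolith and a new repo tell you which units of work are still straddling the split.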

OK, now when you hit critical mass on the above (for us, we called it at about 80%), you will need to shift. There is less ROI on velocity in the long tail; it becomes a support game, and you need to approach it differently.

Measuring Progress and Impact, Part 2: Traffic

At this point it’s best to look at traffic. Moving high-volume pages/endpoints should, in theory, reduce the impact of any issue with the legacy system, thereby reducing support. This might not be true for your systems; you may have valuable endpoints with low traffic, so work out the best approach for your situation.

Traffic distribution: Look per page or per endpoint at where the biggest piece of the pie is.

Low traffic: Look per page or per endpoint at where there is little traffic; this may lead you to features you can deprecate.

As you move functionality to new services, you may discover features in the monolith that are rarely used. Raise with product and stakeholders, ask “Whats the value this brings vs the effort to migrate and maintain it?”

  1. Deprecating the page or endpoint
  2. Combining functionality into other similar pages/endpoints to reduce codebase size

Remember, every line of code you don’t move is a win for your migration efforts.

Conclusion

Splitting a monolith is a complex process that requires a strategic approach tailored to your system’s current state. Whether you’re dealing with a younger, more manageable monolith or an ancient system with slow velocity, there’s a path forward.

The key is to stop adding to the monolith immediately, start new development in separate systems, and approach existing code pragmatically – sometimes a simple copy-paste is the best first step. As you progress, shift your focus from velocity metrics to traffic distribution and support impact.

Remember, the goal is to improve your system’s overall health and development speed. By thoughtfully planning your split, building new features in separate systems, and closely tracking your progress, you can successfully transition from a monolithic to a microservices architecture.

In our next and final post of this series, we’ll discuss how to finish strong, including strategies for cleaning up your codebase, maintaining momentum, and ensuring you complete the splitting process. Stay tuned!

The Product Engineering Mindset: Bridging Technology and Business

In our previous posts, we explored the evolution of software development and the core principles of product engineering. Today, we’re diving into the product engineering mindset – the set of attitudes and approaches that define successful product engineers. This mindset is what truly sets product engineering apart from traditional software development roles.

The T-Shaped Professional

At the heart of the mindset is the concept of the T-shaped professional. This term, popularized by IDEO CEO Tim Brown, describes individuals who have deep expertise in one area (the vertical bar of the T) coupled with a broad understanding of other related fields (the horizontal bar of the T).

For engineers, the vertical bar typically represents their technical skills – be it front-end development, back-end systems, data engineering, or any other specific domain. The horizontal bar, however, is what truly defines this mindset. It includes:

  1. Understanding of user experience and design principles
  2. Knowledge of business models and metrics
  3. Familiarity with product management concepts
  4. Basic understanding of data analysis and interpretation
  5. Awareness of market trends and competitive landscape

This T-shaped skillset allows these engineers to collaborate effectively across disciplines, make informed decisions, and understand the broader impact of their work.

Customer-Centric Thinking

At the heart of product engineering lies a fundamental principle: an unwavering focus on the customer. Product engineers don’t just build features; they solve real problems for real people. This customer-centric approach permeates every aspect of their work, from initial concept to final implementation and beyond.

Central to this mindset is empathy – the ability to understand and share the feelings of another. This means going beyond surface-level user requirements to truly comprehend the user’s context, needs, and pain points. It’s about putting yourself in the user’s shoes, understanding their frustrations, their goals, and the environment in which they use your product.

Curiosity is another crucial component of customer-centric thinking. Engineers are not content with surface-level understanding; they constantly ask “why?” to get to the root of problems. This curiosity drives them to dig deeper, to question assumptions, and to seek out the underlying causes of user behavior and preferences.

For example, if users aren’t engaging with a particular feature, a curious engineer won’t simply accept this at face value. They’ll ask: Why aren’t users engaging? Is the feature difficult to find? Is it not solving the problem it was intended to solve? Is there a more fundamental issue that we haven’t addressed? This relentless curiosity leads to deeper insights and more effective solutions.

Observation is the third pillar of customer-centric thinking. Engineers pay close attention to how users actually interact with their products, not just how they’re expected to. This often involves going beyond analytics and user feedback to engage in direct observation and user testing.

Consider an engineer working on an e-commerce platform. They might set up user testing sessions where they observe customers navigating the site, making purchases, and encountering obstacles. They might analyze heatmaps and user flows to understand where customers are dropping off or getting confused. They might even use techniques like contextual inquiry, observing users in their natural environments to understand how the product fits into their daily lives.

Amazon’s “working backwards” process exemplifies this customer-centric mindset in action. Before writing a single line of code, product teams at Amazon start by writing a press release from the customer’s perspective. This press release describes the finished product, its features, and most importantly, the value it provides to the customer.

This approach forces teams to think deeply about the customer’s needs and desires from the very beginning of the product development process. It ensures that every feature is grounded in real customer value, not just technical possibilities or internal priorities.

In the end, customer-centric thinking is what transforms a good product engineer into a great one. It’s the difference between building features and creating solutions, between meeting specifications and delighting users.

Balancing Technical Skills with Business Acumen

While deep technical skills form the foundation of a product engineer’s expertise, the modern tech landscape demands a broader perspective. Today’s engineers need to bridge the gap between technology and business, understanding not just how to build products, but why they’re building them and how they fit into the larger business strategy.

This balance begins with a solid understanding of the business model. Engineers need to grasp how their company generates revenue and manages costs. This isn’t about becoming financial experts, but rather about understanding the basic mechanics of the business. For instance, an engineer at a SaaS company should understand the concepts of customer acquisition costs, lifetime value, and churn rate. They should know whether the company operates on a freemium model, enterprise sales, or something in between. This understanding helps engineers make informed decisions about where to invest their time and effort, aligning their technical work with the company’s financial goals.

Equally important is a grasp of key performance indicators (KPIs) and how engineering decisions impact these metrics. Different businesses will have different KPIs, but common examples include user acquisition, retention rates, conversion rates, and average revenue per user. Engineers need to understand which metrics matter most to their business and how their work can move the needle on these KPIs.

At Airbnb, for example, engineers don’t just focus on building a fast and reliable booking system. They understand how factors like booking conversion rate, host retention, and customer lifetime value impact the company’s success. This knowledge informs their technical decisions, ensuring that their work aligns with and supports the company’s broader goals.

Awareness of market dynamics is another crucial aspect of business acumen for engineers. This involves understanding who the competitors are, what they’re doing, and how the market is evolving. Engineers should have a sense of where their product fits in the competitive landscape and what sets it apart.

This market awareness also extends to understanding broader industry trends that might impact the product. For instance, an engineer working on a mobile app needs to be aware of trends in mobile technology, changes in app store policies, and shifts in user behavior. This knowledge helps them anticipate challenges and opportunities, informing both short-term decisions and long-term strategy.

Consider an engineer at a streaming service like Netflix. They need to be aware of not just direct competitors in the streaming space, but also broader trends in entertainment consumption. Understanding the rise of short-form video content on platforms like TikTok, for example, might inform decisions about feature and infrastructure development or content recommendation algorithms.

Balancing technical skills with business acumen doesn’t mean that engineers need to become business experts. Rather, it’s about developing enough understanding to make informed decisions and communicate effectively with business stakeholders.

Developing this business acumen is an ongoing process. It involves curiosity about the broader context of one’s work, a willingness to engage with non-technical stakeholders, and a commitment to understanding the “why” behind product decisions.

Embracing Uncertainty and Learning

The product engineering mindset is characterized by a unique comfort with uncertainty and an unwavering commitment to continuous learning. In the fast-paced world of technology, where change is the only constant, this mindset is not just beneficial – it’s essential for success.

At the heart of this mindset is a willingness to experiment. Engineers understand that innovation often comes from trying new approaches, even when the outcome is uncertain. They view each project not just as a task to be completed, but as an opportunity to explore and learn. This experimental approach extends beyond just trying new technologies; it encompasses new methodologies, team structures, and problem-solving techniques.

Crucially, these engineers see both successes and failures as valuable learning experiences. When an experiment succeeds, they analyze what went right and how to replicate that success. When it fails, they don’t see it as a setback, but as a rich source of information. They ask: What didn’t work? Why? What can we learn from this? This resilience in the face of failure, coupled with a curiosity to understand and learn from it, is a hallmark of the product engineering mindset.

Data-driven decision making is another key aspect of this mindset. Product engineers don’t rely on hunches or assumptions; they seek out data to inform their choices. This might involve A/B testing different features, analyzing user behavior metrics, or conducting performance benchmarks. They’re comfortable with analytics tools and basic statistical concepts, using these to derive insights that guide their work.
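One of those basic statistical concepts is judging whether an A/B result is real or noise. A common tool is the two-proportion z-test; this is a minimal sketch with invented conversion numbers, not a substitute for a proper experimentation platform.

```python
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF (Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Invented example: 120/2400 conversions on A, 156/2400 on B.
p_a, p_b, z, p = ab_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p:.3f}")
```

Here p falls below 0.05, so under conventional thresholds B’s lift would be treated as statistically significant – but as the next paragraph notes, data like this informs judgment rather than replacing it.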

However, they also understand the limitations of data. They know that not everything can be quantified and that sometimes, especially when innovating, there may not be historical data to rely on. In these cases, they balance data with intuition and experience. They’re not paralyzed by a lack of complete information but are willing to make informed judgments when necessary.

Spotify’s “fail fast” culture exemplifies this mindset in action. Engineers are encouraged to experiment with new ideas, measure the results, and quickly iterate or pivot based on what they learn. This approach not only leads to innovative solutions but also creates an environment where learning is valued and uncertainty is seen as an opportunity rather than a threat.

Collaborative Problem-Solving

Product engineers don’t work in silos. The complexity of modern software products demands a collaborative approach, where diverse perspectives and skill sets come together to create solutions. Product engineers collaborate closely with designers, product managers, data scientists, and other stakeholders, each bringing their unique expertise to the table.

Teamwork is another crucial aspect of collaborative problem-solving. Engineers must be willing to share their ideas openly, knowing that exposure to different viewpoints can refine and improve their initial concepts. They need to be open to feedback, seeing it not as criticism but as an opportunity for growth and improvement. At the same time, they should be ready to offer constructive feedback to others, always keeping the common goal in mind. This give-and-take of ideas, when done in a spirit of mutual respect and shared purpose, can lead to breakthroughs that no single individual could have achieved alone.

Often, these engineers find themselves in the role of facilitator, especially when it comes to technical decisions that impact the broader product strategy. They may need to guide discussions, helping the team navigate complex technical tradeoffs while considering business and user experience implications. This requires not just technical knowledge, but also the ability to listen actively, synthesize different viewpoints, and guide the team towards consensus. It’s about finding the delicate balance between driving decisions and ensuring all voices are heard.

At Google, this collaborative mindset is embodied in their design sprint process. In these intensive, time-boxed sessions, cross-functional teams come together to tackle complex problems. Engineers work side-by-side with designers, product managers, and other stakeholders, rapidly prototyping and testing ideas. This process not only leads to innovative solutions but also builds stronger, more cohesive teams.

Conclusion

The product engineering mindset is about much more than coding skills. It’s about understanding the bigger picture, taking ownership of outcomes, focusing relentlessly on user needs, and working collaboratively to solve complex problems.

Developing this mindset is a journey. It requires curiosity, empathy, and a willingness to step outside the comfort zone of pure technical work. But for those who embrace it, this mindset opens up new opportunities to create meaningful impact and drive innovation.

In our next post, we’ll dive into the specific skills that product engineers need to cultivate to be successful in their roles. We’ll explore both technical and non-technical skills that are crucial in the world of product engineering.

What aspects of the product engineering mindset resonate with you? How have you seen this mindset impact product development in your organization? Share your thoughts and experiences in the comments below!

Understanding Product Engineering: A New Paradigm in Software Development

In our previous post, we explored how the software development landscape is rapidly changing and why traditional methods are becoming less effective. Today, we’re diving deep into the concept of product engineering – a paradigm shift that’s reshaping how we approach software development.

What is Product Engineering?

At its core, product engineering is a holistic approach to software development that combines technical expertise with a deep understanding of user needs and business goals. It’s not just about writing code or delivering features; it’s about creating products that solve real problems and provide tangible value to users.

Product engineering teams are cross-functional, typically including software engineers, designers, product managers, and sometimes data scientists or other specialists. These teams work collaboratively, with each member bringing their unique perspective to the table.

The Purpose of Product Engineering

1. Innovating on Behalf of the Customer

The primary purpose of product engineering is to innovate on behalf of the customer. This means going beyond simply fulfilling feature requests or specifications. Instead, product engineers strive to deeply understand the problems customers face and develop innovative solutions – sometimes before customers even realize they need them.

For example, when Amazon introduced 1-Click ordering in 1999, they weren’t responding to a specific customer request. Instead, they identified a pain point in the online shopping experience (the tedious checkout process) and innovated a solution that dramatically improved user experience.

2. Building Uncompromisingly High-Quality Products

Teams are committed to building high-quality products that customers love to use. This goes beyond just ensuring that the code works correctly. It encompasses:

  • Performance: Ensuring the product is fast and responsive
  • Reliability: Building systems that are stable and dependable
  • User Experience: Creating intuitive, enjoyable interfaces
  • Scalability: Designing systems that can grow with user demand

Take Spotify as an example. Their product engineering teams don’t just focus on adding new features. They continually work on improving streaming quality, reducing latency, and enhancing the user interface – all elements that contribute to a high-quality product that keeps users coming back.

3. Driving the Business

While product engineering is customer-centric, it also plays a crucial role in driving business success. Engineers need to understand the business model and how their work contributes to key performance indicators (KPIs).

For instance, at Agoda, a travel booking platform, teams might focus on metrics like “Incremental Bookings per Day” in the booking funnel or “Activations” in the Accommodation Supply side. These metrics directly tie to business success while also reflecting improvements in the customer experience.

Key Principles of Product Engineering

1. Problem-Solving Over Feature Building

Teams focus on solving problems rather than just building features. Instead of working from a list of specifications, they start with a problem statement. For example, rather than “Build feature X to specification Y,” a product engineering team might tackle “We don’t have a good enough conversion rate on our booking funnel.”

This approach allows for more creative solutions and ensures that the team’s efforts are always aligned with real user needs and business goals.

2. Cross-Functional Collaboration

Teams are enabled with all the expertise needed to solve the problem at hand. This might include UX designers, security experts, or even legacy system specialists, depending on the project’s needs.

This cross-functional collaboration ensures that all aspects of the product – from its technical architecture to its user interface – are considered from the start, leading to more cohesive and effective solutions.

3. Ownership of Results

Teams take ownership of the results, not just the delivery of features. If a change doesn’t increase conversion rates or solve the intended problem, it’s up to the team to iterate and improve until they achieve the desired results.

This shift from being judged on feature delivery to business results can be challenging for engineers used to traditional methods. As one engineer put it, “It was easier before when I just had to deliver 22 story points. Now you expect me to deliver business results?” However, this ownership leads to more impactful work and a deeper sense of satisfaction when real improvements are achieved.

The Shift from Feature Factories to Problem-Solving Teams

Traditional software development often operates like a “feature factory.” Requirements come in, code goes out, and success is measured by how many features are delivered to specification. This approach can lead to bloated software with features that aren’t used or don’t provide real value. Remember our 37% unused-software statistic? That’s how companies get to this number.

Product engineering turns this model on its head. Teams are given problems to solve rather than features to build. They have the autonomy to explore different solutions, run experiments, and iterate based on real-world feedback. Success is measured not by features delivered, but by problems solved and value created for users and the business.

Conclusion

Product engineering represents a fundamental shift in how we approach software development. By focusing on customer needs, maintaining a commitment to quality, and aligning closely with business goals, teams are able to create software that truly makes a difference.

In our next post, we’ll explore the mindset required for successful product engineering. We’ll discuss the concept of T-shaped professionals and the balance of technical skills with business acumen that characterizes great product engineers.

What’s your experience with product engineering? Have you seen this approach in action in your organization? Share your thoughts and experiences in the comments below!

The Evolution of Product Engineering: Adapting to a Rapidly Changing World

In today’s fast-paced digital landscape, the way we approach software development is undergoing a significant transformation. As a product engineer with decades of experience in the field, I’ve witnessed firsthand the shift from traditional methodologies to a more dynamic, customer-centric approach. This blog post, the first in our series on Product Engineering, will explore this evolution and why it’s crucial for modern businesses to adapt.

The Changing Landscape of Software Development

Remember the days when software projects followed rigid, long-term plans? When we’d spend months mapping out every detail – stakeholder meetings, weeks of design reviews, architecting a massive new system – before writing a single line of code? Well, it’s becoming increasingly clear that this approach is no longer sufficient in our rapidly evolving digital world.

The reality is that by the time we finish implementing software based on these detailed plans, the world has often moved on. Our assumptions become outdated, and our solutions may no longer fit the problem at hand. As Mike Tyson put it, “Everyone has a plan until they get punched in the mouth.” In software development, that punch often comes in the form of changing market conditions, disruptive technologies, or shifts in user behavior.

The Pitfalls of Traditional Methods

Let’s consider a real-world example. The finance industry has been turned on its head by small, agile fintech startups. Traditional banks, confident in their market position, initially dismissed these newcomers, thinking, “They aren’t stealing our core market.” But before they knew it, these startups were nibbling away at their core business. By the time the banks started planning their response, it was often too late – they were too slow to adapt.

PayPal and Square, for example, revolutionized online and mobile payments. While banks were still relying on traditional credit card systems, these startups made it easy for individuals and small businesses to accept payments digitally. By the time banks caught up, PayPal had become a household name, processing over $936 billion in payments in 2020.

Robinhood, likewise, disrupted the investment world by offering commission-free trades and fractional shares, making investing accessible to a new generation. Established brokerages were forced to eliminate trading fees to compete, significantly impacting their revenue models.

This scenario isn’t unique to finance. Across industries, we’re seeing that the old ways of developing software – with long planning cycles and rigid roadmaps – are becoming less effective. In fact, a staggering statistic reveals that 37% of software in large corporations is rarely or never used. Think about that for a moment. We constantly hear about the scarcity of engineering talent, yet more than a third of the software we produce doesn’t provide value. Clearly, something needs to change.

The Rise of Product Engineering

Enter product engineering – an approach that’s gaining traction among the most innovative companies in the world. But what sets apart companies like Spotify, Amazon, and Airbnb? Why do they consistently build software that we love to use?

The answer lies in their approach to product development. These companies understand a fundamental truth that Steve Jobs articulated so well: “A lot of times, people don’t know what they want until you show it to them.” And as far back as Henry Ford, the sentiment was the same: “If I had asked people what they wanted, they would have said faster horses.”

Product engineering isn’t about blindly following customer requests or building features that someone thinks people want. It’s about deeply understanding customer problems and innovating on their behalf. It’s about creating solutions that customers might not even realize they need – yet come to love.

The Need for a New Approach

In the traditional models many companies have built, engineers are often isolated from the product side of things. They’re told to focus solely on coding: “go code, do what you’re good at.” Protect this precious engineering resource, don’t let it be disturbed by non-engineering concerns – with the assumption that someone else will worry about whether the product actually enhances the customer’s life or gets used at all.

This leads to what I call the “feature factory” – a system where engineers are fed requirements through tools like Jira, expected to churn out code, and measured solely on their ability to deliver features to specification. The dreaded term “pixel perfect” comes to mind. But this approach misses a crucial point: the true measure of our work isn’t in the features we ship, but in the value we create for our customers and our business.

Product engineering flips this model on its head. It brings engineers into the heart of the product development process, encouraging them to think deeply about the problems they’re solving and the impact of their work. It’s about creating cross-functional teams that are empowered to make decisions, experiment, and iterate quickly based on real-world feedback.

Looking Ahead

As we dive deeper into this series on Product Engineering, we’ll explore the specific skills, mindsets, and practices that define this approach. We’ll look at how to build empowered, cross-functional teams, how to make decisions in the face of uncertainty, and how to measure success in ways that truly matter.

The evolution of product engineering isn’t just a trend – it’s a necessary adaptation to the realities of modern software development. By embracing this approach, we can create better products, reduce waste, and ultimately deliver more value to our customers and our businesses.

Stay tuned for our next post, where we’ll dive deeper into what exactly makes a product engineering team tick.

What’s your experience with traditional software development versus more modern, product-focused approaches? Share your thoughts in the comments below!