When Three Tech Leads Met: A Lesson in Team Dynamics

The Customer Token Handler project offers an ironic illustration of Tuckman’s stages of group development – specifically how a highly skilled group can easily fail due to team dynamics. At a major technology company in Southeast Asia, an experiment in temporary team formation produced an unexpected lesson in ineffectiveness.

The setup appeared logical on paper. Three seasoned technical leads from the Mobile, Homepage, and Supply Extranet domains would work together for two sprints in a temporary three-person team. The mission: create a unified token handling library for customer authentication. Experience suggested their combined expertise would accelerate delivery.

Hour one revealed the fundamental flaw. The Homepage lead populated JIRA with exhaustive tickets. The Mobile lead covered whiteboards with sticky note tasks. The Supply Extranet lead, not one for the bureaucracy of planning, just started writing code. Three experts, three methodologies, zero progress. No forming, straight to storming, never reaching norming.

The two-sprint experiment ended without a working solution. The project appeared destined for failure until one technical lead took ownership. Working alone, they completed the token handling library in a further two weeks – accomplishing more solo than the combined team had achieved in double that time.

This outcome challenged conventional wisdom about collaborative development. While the three-person team generated extensive documentation, diagrams, and partial implementations, they never established the shared context necessary for effective collaboration. The eventual solo success demonstrated how reduced coordination overhead can sometimes accelerate delivery. Did we end up with a better product from the solo lead? Probably not, but it is at least a working one that we can iterate on.

The Customer Token Handler story reshaped our approach to temporary team formation. It highlighted that Tuckman’s stages cannot be short-circuited, even with experienced technical leaders. Teams need to be long-lived to be effective.

We must never treat engineering teams as a scheduling problem to solve. They are made up of people, with all the flaws and quirks that people have, and we need to acknowledge them beyond their IC number in order to create winning teams.

Technical North Star Metrics: The Currency of Product Success

There’s a saying in business that “what gets measured, gets managed.” But in the complex world of modern software systems, choosing what to measure can be as crucial as the measurement itself. Enter the concept of Technical North Star metrics – not just another KPI, but a fundamental compass that guides technical decisions and shapes organizational behavior.

The Power of a Single Number

When I first encountered the concept of a Technical North Star metric at a previous organization, I was skeptical. How could one number capture the complexity of our technical systems? But over time, I’ve come to appreciate the elegant simplicity it brings to decision-making and incident management.

The most effective Technical North Star metrics share three key characteristics: they’re ubiquitous throughout the organization, they directly correlate with business success, and perhaps most importantly, they’re actionable at every level of the technical organization.

Consider Netflix’s “Total Watch Time” or Facebook’s “Daily Active Users.” These aren’t just vanity metrics – they’re deeply woven into the technical fabric of these organizations. Every engineer, product manager, and executive speaks this common language, creating a shared understanding of success and failure.

From Metric to Currency

One of the most enlightening perspectives I’ve encountered came from a manager who described our Technical North Star metric as a “currency.” This analogy perfectly captures how these metrics function within an organization.

At Agoda, for instance, “Bookings” serves as this universal currency. While I can’t share specific numbers, what’s fascinating is how this metric has become part of the engineering team’s DNA. Ask any engineer about current booking rates, and they’ll know the number (though they won’t share it!).

This currency analogy extends beautifully to incident management. When an incident occurs, we can literally “count the cost” in terms of lost bookings. It’s not abstract – it’s concrete, measurable, and immediately understood throughout the organization.

The Art of Measurement

But how do we actually measure these metrics in a meaningful way? The approach needs to be both sophisticated enough to be accurate and simple enough to be actionable.

At Agoda, we’ve developed an elegant solution for measuring booking impact during incidents. We look at four-week averages for specific time windows. For instance, if the 10:00-10:10 AM window typically sees 50 bookings (a hypothetical number), any significant deviation from this baseline triggers investigation. When services are restored and the trend returns to normal, we can calculate the “cost” of the incident in terms of lost bookings.

This approach is brilliant in its simplicity. It accounts for natural variations in booking patterns while providing clear signals when something’s amiss. The four-week average smooths out daily fluctuations while remaining responsive enough to reflect recent trends.
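A minimal sketch of this kind of baseline check is below. The window sizes, threshold, and booking counts are all hypothetical illustrations, not Agoda’s actual implementation:

```python
from statistics import mean

def baseline(past_windows: list[int]) -> float:
    """Average bookings for the same time window over the past four weeks."""
    return mean(past_windows)

def is_anomalous(observed: int, expected: float, threshold: float = 0.3) -> bool:
    """Flag a window whose bookings drop more than `threshold` below baseline."""
    return observed < expected * (1 - threshold)

def lost_bookings(expected: list[float], observed: list[int]) -> float:
    """Estimate the 'cost' of an incident: the booking shortfall versus
    baseline, summed over each affected time window."""
    return sum(max(e - o, 0) for e, o in zip(expected, observed))

# Hypothetical 10-minute windows: the same 10:00-10:10 slot over the
# previous four weeks averaged ~50 bookings.
expected_1010 = baseline([48, 52, 50, 50])                # 50.0
print(is_anomalous(observed=12, expected=expected_1010))  # True: investigate

# Once the trend returns to normal, sum the shortfall per window:
print(lost_bookings([50.0, 52.0, 48.0], [50, 12, 30]))    # 58.0 bookings lost
```

A production system would also have to account for seasonality, weekday effects, and statistical noise; the four-week same-window average is simply the most lightweight version of that idea.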

Beyond Incidents: Driving Technical Excellence

The real power of a Technical North Star metric extends far beyond incident management. It shapes architectural decisions, influences feature prioritization, and drives technical innovation.

When every technical decision can be evaluated against its potential impact on the North Star metric, it creates clarity in decision-making. Should we invest in that new caching layer? Well, how will it affect bookings? Is this new feature worth the additional complexity? Let’s A/B test it against bookings.

You can look at the incrementality of these metrics to measure a B variant’s success, which generally translates to direct bottom-line value. For example, if a B variant is up 200 bookings per day, that translates to bottom-line impact that is easy for any engineer to understand. Connecting your day-to-day work to impact is very important for staff motivation.
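As a rough sketch of the arithmetic, here is the naive version of that calculation. The numbers and the even 50/50 traffic split are assumptions for illustration, not real figures:

```python
def incremental_bookings_per_day(a_total: int, b_total: int, days: int) -> float:
    """Naive incrementality: the difference in daily bookings between the
    B variant and the A (control) variant, assuming an even traffic split.
    A real experimentation system would also check statistical significance."""
    return (b_total - a_total) / days

# Hypothetical 14-day experiment: B collected 2,800 more bookings than A.
uplift = incremental_bookings_per_day(a_total=70_000, b_total=72_800, days=14)
print(f"B variant is up {uplift:+.0f} bookings per day")  # +200
```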

The Human Element

Perhaps the most underappreciated aspect of Technical North Star metrics is their impact on organizational behavior. When everyone from junior engineers to senior architects speaks the same language and measures success by the same yardstick, it creates alignment that no amount of process or documentation can achieve.

This shared understanding breaks down silos between teams. When a front-end engineer and a database administrator can discuss the impact of their work in terms of the same metric, it creates a foundation for meaningful collaboration.

Looking Forward

As our systems grow more complex and our organizations more distributed, the importance of having a clear Technical North Star only increases. The metric must evolve as our products and markets evolve. What worked yesterday might not work tomorrow.

The key is to maintain the balance between stability and adaptability. Your Technical North Star should be stable enough to guide long-term decisions but flexible enough to remain relevant as your business evolves.

The next time you’re evaluating your organization’s technical metrics, ask yourself: Do we have a true Technical North Star? Does it drive behavior at all levels of the organization? Is it serving as a currency for technical decision-making? If not, it might be time to look up and reorient your technical compass.

Remember, the best Technical North Star isn’t just a metric – it’s a shared language that aligns technical excellence with business success. And in today’s complex technical landscape, that alignment is more valuable than ever.

The Impact of Paved Paths and Embracing the Future of Development

Throughout this series, we’ve explored the concept of paved paths, from understanding the problems they solve to implementing them with practical tools like .NET templates. In this final post, we’ll examine the broader impact of paved paths on development culture and look towards the future of software development.

The Cultural Shift: Embracing Paved Paths

Implementing paved paths is more than just a technical change—it’s a cultural shift within an organisation. Let’s explore how paved paths influence various aspects of development culture:

1. Balancing Standardization and Innovation

Paved paths provide a standardized approach to development, but they’re not about enforcing rigid conformity. As David Heinemeier Hansson, creator of Ruby on Rails, aptly puts it:

“Structure liberates creativity. The right amount of standardization frees developers to focus on solving unique problems.”

Paved paths offer a foundation of best practices and proven patterns, allowing developers to focus their creative energy on solving business problems rather than reinventing the wheel for every new project.

2. Fostering Collaboration and Knowledge Sharing

With paved paths in place, developers across different teams and projects share a common language and set of tools. This commonality facilitates:

  • Easier code reviews across projects, since everyone follows a similar structure and standards
  • Simplified onboarding for new team members, since you can lean on centralized docs rather than maintaining your own onboarding material
  • Increased ability for developers to contribute to different projects, since the other projects in the company look much like their own

3. Continuous Improvement Culture

Paved paths are not static; they evolve with the organization’s needs and learnings. This aligns well with a culture of continuous improvement. As Jez Humble, co-author of “Continuous Delivery,” states:

“The only constant in software development is change. Your templates should evolve with your understanding.”

Regular reviews and updates to your paved paths can become a focal point for discussing and implementing improvements across your entire development process.

4. Empowering Developers

While paved paths provide a recommended route, they also empower developers to make informed decisions about when to deviate. This balance is crucial, as Gene Kim, author of “The Phoenix Project,” notes:

“The best standardized process is one that enables innovation, not stifles it.”

By providing a solid foundation, paved paths actually give developers more freedom to innovate where it matters most.

As we conclude our series, let’s consider how paved paths align with and support emerging trends in software development:

Microservices and Serverless Architectures: Paved paths can greatly simplify the creation and management of microservices or serverless functions. By providing templates and standards for these architectural patterns, organizations can ensure consistency and best practices across a distributed system.

DevOps and CI/CD: Paved paths naturally complement DevOps practices and CI/CD pipelines. They can include standard configurations for build processes, testing frameworks, and deployment strategies, ensuring that DevOps best practices are baked into every project from the start.

Cloud-Native Development: As more organisations move towards cloud-native development, paved paths can incorporate cloud-specific best practices, security configurations, and scalability patterns, primarily through infrastructure-as-code. This can significantly reduce the learning curve for teams transitioning to cloud environments.

Platform Quality: I see a rise in the use of tools like static code analysis to encourage and educate engineers on internal practices and patterns, which works well with paved paths.

Conclusion: Embracing Paved Paths for Sustainable Development

As we’ve seen throughout this series, paved paths offer a powerful approach to addressing many of the challenges faced in modern software development. From breaking down monoliths to streamlining the creation of new services, paved paths provide a flexible yet standardized foundation for development.

By implementing paved paths, organizations can:

  1. Increase development speed without sacrificing quality
  2. Improve consistency across projects and teams
  3. Facilitate cross-system contribution
  4. Empower developers to focus on innovation
  5. Adapt more quickly to new technologies and architectural patterns

However, it’s crucial to remember that paved paths are not a one-time implementation. They require ongoing maintenance, regular reviews, and a commitment to evolution. As Kelsey Hightower, Principal Developer Advocate at Google, reminds us:

“Best practices are not written in stone, but they are etched in experience.”

Your paved paths should grow and change with your organization’s experience and needs.

As you embark on your journey with paved paths, remember that the goal is not to restrict or control, but to enable and empower. By providing a clear, well-supported path forward, you free your teams to do what they do best: solve problems and create innovative solutions.

The future of software development is collaborative, adaptable, and built on a foundation of shared knowledge and best practices. Paved paths offer a way to embrace this future, creating a development environment that is both efficient and innovative. As you move forward, keep exploring, keep learning, and keep paving the way for better software development.

The Evolution of Scrum: A Cautionary Tale

Two decades ago, the software development world witnessed a significant shift. Scrum, a framework within the Agile methodology, was gaining tremendous popularity. This change coincided with a wave of redundancies among traditional project managers. Faced with evolving industry demands, many of these professionals saw an opportunity to reinvent themselves as Scrum Masters.

The true nature of the problem became clear to me when the Project Management Institute (PMI) added an Agile certification, allowing it to contribute points towards one’s overall project management goals.

This certification still exists today, enabling individuals to become “Certified in Agile” through self-study and an online exam. The concept seems utterly foreign to me, especially when I reflect on my experience with the Certified Scrum Master (CSM) course I took with Scrum Alliance years ago. That intensive three-day course was such an eye-opener, fundamentally shifting my mindset. I simply cannot envision anyone truly grasping the core concepts of Agile without face-to-face communication – a principle that, ironically, is a core value in the Agile Manifesto itself.

This transition wasn’t always smooth or successful though. Many former project managers approached Scrum with a mindset still rooted in traditional methodologies. They viewed it as merely a new set of processes to follow rather than a fundamental shift in philosophy and approach.

This misinterpretation led to a superficial adoption of Scrum practices:

  1. Gantt Charts Transformed: The detailed project timelines of Gantt charts were simply repackaged as product backlogs, missing the dynamic and flexible nature of true Agile planning.
  2. Sprint Reviews Misused: Instead of focusing on demonstrating working software and gathering valuable feedback, sprint reviews often devolved into status update meetings reminiscent of traditional project reporting.
  3. Daily Standups Misinterpreted: The essential daily sync-up became a rote status report, losing its intended purpose of team coordination and obstacle identification.

In essence, while the terminology changed, the underlying project management approach remained largely unchanged. This “Scrum-but” approach – “We’re doing Scrum, but…” – became prevalent in many organizations.

This misapplication of Scrum principles highlights a crucial lesson: true agility isn’t achieved by merely adopting a new set of practices. It requires a fundamental shift in mindset, embracing flexibility, continuous improvement, and most importantly, a focus on delivering value to the customer.

As modern software engineers and managers, it’s crucial to reflect on this history. We must ask ourselves: Are we truly embracing the spirit of Agile and Scrum, or are we simply going through the motions? The power of these methodologies lies not in their ceremonies, but in their ability to foster collaboration, adaptability, and customer-centricity.

The evolution of Scrum serves as a reminder that in our rapidly changing industry, it’s not enough to change our processes. We must also transform our thinking, our culture, and our approach to creating software that truly meets the needs of our users.

The Unintended Consequences of Rigid Scrum Implementation

Scrum was originally designed as a flexible, adaptive framework for product development. Its creators envisioned a methodology that would empower teams to respond quickly to change and deliver value efficiently. However, as Scrum gained popularity, a troubling trend emerged. Many organizations began to treat Scrum as a rigid methodology, leading to several significant issues:

  1. Ritual Over Results: Teams became more focused on following Scrum ceremonies to the letter rather than using them as tools to improve productivity and value delivery.
  2. Inflexible Sprint Lengths: The idea of fixed-length sprints, while useful for creating rhythm, was often applied too rigidly. Teams lost the ability to adapt to work that didn’t neatly fit into arbitrary time boxes.
  3. Product Backlog as a Wish List: Product backlogs grew unwieldy, losing the crucial connection between backlog items and real customer needs. They became dumping grounds for ideas rather than curated lists of customer problems and needs.
  4. One-Size-Fits-All Approach: Organizations often applied Scrum uniformly across different types of projects and teams, ignoring the need for adaptation based on context.
  5. Overemphasis on Velocity: Story points and velocity, meant to be team-specific measures of capacity, became weaponized as performance metrics, leading to all sorts of dysfunctional behaviors.

“Never mistake motion for action.” – Ernest Hemingway

The results of this rigid application were often the opposite of what Scrum intended:

  • Decreased Agility: Ironically, the rigid application of Scrum led to less agile teams. They became bound by their processes rather than empowered by them.
  • Reduced Innovation: Over-planning and strict adherence to sprints left little room for experimentation. Teams became risk-averse, focusing on meeting sprint goals rather than solving customer problems.
  • Misalignment with Business Goals: The focus shifted to sprint completion rather than delivering business value, creating a disconnect between Scrum activities and overall product strategy.

Signs Your Team Might Be Falling into the Scrum Trap

If you’re wondering whether your team has fallen into a rigid Scrum implementation, here are some signs to look out for:

Ceremony Fatigue: Team members view Scrum events as time-wasting meetings rather than valuable collaboration opportunities.

Velocity Obsession: There’s a constant push to increase velocity, often at the expense of quality or sustainable pace.

Inflexible Planning: Your team struggles to accommodate urgent work or valuable opportunities because “it’s not in the sprint.”

Stale Backlog: Your product backlog is enormous, with items at the bottom that haven’t been reviewed in months (or years).

Sprint Goal Apathy: Sprint goals, if they exist at all, are vague or uninspiring, and the team doesn’t use them to guide decisions.

Lack of Experimentation: Your team rarely tries new approaches or technologies because there’s “no room in the sprint” for learning or innovation.

Lack of User Feedback: Stories arrive on the backlog curated from a seemingly invisible place in the sky, with little justification for why the work is being done. After shipping you are “done” – no measurement of impact is done post-release, only “feature shipped”.

Scrum Master as Process Police: The Scrum Master’s primary function has become enforcing rules rather than coaching and facilitating. Has your Scrum Master said lately, “No, you can’t add that story to the sprint; we’ve already started. You’ll need to wait until next sprint”? Is that statement Agile?

One-Size-Fits-All Sprints: All your teams have the same sprint length and use the same processes, regardless of the nature of their work. They all measure themselves in the same way – story points delivered or sprint completion rate might be everyone’s main measure of success.

Conclusion: Rediscovering Agility in Scrum

The evolution of Scrum from a flexible framework to a rigid methodology in many organizations serves as a cautionary tale for the Agile community. It reminds us that the true spirit of agility lies not in strict adherence to practices, but in the principles that underpin them.

To truly benefit from Scrum, teams and organizations need to:

Focus on Outcomes: Shift the emphasis from following processes to delivering value.

Embrace Flexibility: Adapt Scrum practices to fit the team’s context and the nature of their work.

Foster Innovation: Create space for experimentation and learning within the Scrum framework.

Align with Business Goals: Ensure that Scrum activities directly contribute to overarching product and business strategies.

Improve Continuously: Regularly reflect on and adapt not just the product, but the process itself.

Remember, Scrum is a framework, not a prescription. Its power lies in its ability to help teams organize and improve their work, not in rigid rule-following. By rediscovering the flexibility and adaptiveness at the heart of Scrum, teams can avoid the pitfalls of overly rigid implementation and truly harness the benefits of agile methodologies.

As we move forward in the ever-evolving landscape of software development, let’s carry forward the lessons learned from Scrum’s journey. Let’s strive to create processes that truly empower our teams, deliver value to our customers, and drive innovation in our products. That, after all, is the true spirit of agility.

Introducing Paved Paths: A Better Way Forward

In our previous post, we explored the challenges of monolithic architectures and the potential pitfalls of mono repos. We saw how engineers often find themselves trapped in a cycle of adding to existing monoliths, despite the long-term drawbacks. Today, we’re excited to introduce a concept that offers a way out of this dilemma: Paved Paths.

What is a Paved Path?

A paved path is a supported technology stack within an organisation that provides a clear, well-maintained route for developing new features or systems. It’s not about dictating a single way of doing things, but rather about offering a smooth, well-supported path that makes it easier to create new services or applications without sacrificing speed or quality.

Think of it like this: when you’re walking through a park, you’ll often see paved paths alongside open grassy areas. While you’re free to walk anywhere, the paved paths offer a clear, easy-to-follow route that most people naturally gravitate towards. In software development, a paved path serves a similar purpose.

Components of a Paved Path

A well-implemented paved path typically includes:

  1. Shared Libraries: Reusable code components that handle common functionalities like authentication, logging, or database access.
  2. New Project Templates: Pre-configured project structures that set up the basics of a new application or service, complete with best practices baked in.
  3. Infrastructure as Code: Templates for setting up the necessary infrastructure, ensuring consistency across different projects.
  4. CI/CD Pipelines: Pre-configured continuous integration and deployment pipelines that work out of the box with the new project templates.
  5. Monitoring and Observability: Built-in solutions for logging, metrics, and tracing that integrate seamlessly with the organization’s existing tools.
  6. Documentation and Guides: Comprehensive resources that explain how to use the paved path effectively and when it might be appropriate to deviate from it.

Benefits of Paved Paths

Paved paths offer numerous advantages that address the issues we’ve discussed with monoliths and mono repos:

  1. Faster Start-up: Engineers can quickly spin up new services or applications without spending weeks on boilerplate setup.
  2. Consistency: All new projects start with a consistent structure, making it easier for engineers to switch between different services.
  3. Best Practices Built-in: Security, performance, and scalability best practices are incorporated from the start.
  4. Easier Maintenance: With a consistent structure across services, maintenance becomes more straightforward.
  5. Flexibility: While providing a clear default path, paved paths still allow for deviation when necessary, offering the best of both worlds.
  6. Improved Onboarding: New team members can get up to speed quickly by following the paved path.

Striking the Right Balance

It’s important to note that paved paths are not about enforcing a rigid, one-size-fits-all approach. They’re about providing a well-supported default that makes it easy to do the right thing, while still allowing for flexibility when needed.

Paved paths coexist with what we might call “rough paths” – less travelled routes that engineers might choose to explore for various reasons. These rough paths could be new technologies, experimental approaches, or simply different ways of solving problems that don’t quite fit the paved path model.

The beauty of this approach is that it encourages a balance between standardization and innovation:

Engineers are free to venture off the paved path when they believe it’s necessary or beneficial. This openness to exploration prevents the stagnation that can come from overly strict standardization. As engineers explore these rough paths, they gather valuable insights and experiences. Some of these explorations might reveal better ways of doing things or address use cases that the current paved path doesn’t handle well.

The most successful “rough path” explorations often lead to the creation of new paved paths. This evolution ensures that the organization’s supported technology stack remains current and effective.

By allowing and encouraging these explorations, organizations tap into the collective wisdom and creativity of their engineering teams. This bottom-up approach to defining best practices often results in more robust and widely-accepted standards.

As the LinkedIn engineering team learned when they tried to standardize on a single tech stack, too much restriction can stifle innovation and lead to suboptimal solutions. Paved paths strike a balance by offering a smooth road forward without blocking other routes entirely.

This balanced approach creates a dynamic ecosystem where paved paths provide stability and efficiency, while the ability to explore rough paths ensures adaptability and innovation. It’s not about dictating a single way of doing things, but about fostering an environment where best practices can emerge organically and evolve over time.

Conclusion

Paved paths offer a promising solution to the challenges posed by both monolithic architectures and the complexity of mono repos. They provide the speed and ease of development that often draws us to monoliths, while enabling the modularity and scalability that we seek from microservices.

In our next post, we’ll dive deeper into how you can implement paved paths in your organisation, with a special focus on using .NET templates to create a smooth path for your development teams. Stay tuned!

The Art of Crafting KPIs That Actually Work

Welcome back to our series on managing self-managing teams! 👋 We’ve reached the final instalment, where we’ll dive into the crucial skill of crafting Key Performance Indicators (KPIs) that truly work for your team. Let’s turn those dull metrics into powerful tools for success!

When Good Metrics Go Bad

Ever presented what you thought was a perfect set of KPIs, only to be met with blank stares or confused looks? You’re not alone. Many of us have faced the dreaded “Why are we measuring this again?” moment. So, how do we create KPIs that inspire “Aha!” moments instead of “Uh… what?”

The Essential Elements of Effective KPIs

Before we start, let’s review the key properties our KPIs should have:

  1. Easily Measurable: No complex calculations or long-running batch jobs required.
  2. Team-Focused: Avoid singling out individuals.
  3. Business-Aligned: Clearly linked to company goals.
  4. Actionable: Provides clear direction for improvement.
  5. Motivating: Inspires the team to perform better.

KPIs to Avoid

Just as important as knowing what to measure is knowing what not to measure. Here are some KPIs to steer clear of:

  • Lines of Code: Quantity doesn’t equal quality.
  • Number of Bugs Fixed: Could encourage writing buggy code just to fix it.
  • Hours Worked: We’re after results, not time spent.
  • Story Points: Often arbitrary and not indicative of real progress.

Real-World KPI Success: The Booking Completion Saga

Let me share a story from a company I once worked at. We implemented a KPI around booking completion that became a game-changer. Here’s what made it so effective:

  1. Direct Business Impact: We measured “Incremental Bookings per Day.” This directly showed teams how much they were contributing to the company’s bottom line.
  2. Instant Feedback: The real magic was in the immediacy. As soon as an A/B test was turned on, the numbers started ticking. Our experimentation system was linked to a real-time Kafka feed from the booking website.
  3. Visible Results: We had TVs on office walls displaying dashboards of running experiments. This visibility created a buzz of excitement.
  4. Celebration of Wins: When an experiment showed significant improvement, the Product Owner would take the team out for drinks the day the experiment run finished. It wasn’t uncommon to see teams celebrating their wins at the local bar in the evenings with a bottle of something and shots on the table.

The excitement was so palpable that one developer even created a Slack bot in his spare time to check experiment results during dinner! He wasn’t going to wait until the next day in the office to see what users thought of his new feature.

This KPI worked because it connected directly to business impact and provided instant, visible feedback. It almost gamified the process for the engineers, making it thrilling to see in real-time how users responded to new features. The high volume of bookings meant meaningful results appeared quickly, sometimes within minutes.

The result? A highly motivated team, numerous significant wins, and a culture of continuous improvement and celebration.

Aligning Team Metrics with Business Goals

Your KPIs should create a clear line from daily team activities to high-level business objectives. For example:

  • Business Goal: Increase market share
  • Team KPI: “Feature Adoption Rate” (How quickly users embrace new features)
  • Daily Activity: Developing intuitive UI and smooth user on-boarding

Regular KPI Reviews

KPIs aren’t set-and-forget metrics. Schedule regular review sessions with your team to ensure your KPIs remain relevant and effective. Make these sessions collaborative and open to change.

The Ethics of KPIs

Remember these important principles:

  1. Never use KPIs as weapons against your team. Using KPIs punitively creates a culture of fear and discourages risk-taking and innovation.
    Example: If a team’s “Time to Value” KPI is lagging, don’t use it to criticise or penalise the team. Instead, use it as a starting point for a constructive discussion about process improvements or resource needs.
  2. Prioritise learning and improvement over hitting arbitrary numbers. Focusing solely on numbers can lead to short-term thinking and missed opportunities for meaningful growth.
    Example: If your “Feature Adoption Rate” isn’t meeting targets, don’t push features that aren’t ready. Instead, dig into why adoption is low. Are you building the right features? Is user education lacking? This approach leads to better products and sustained improvement.
  3. Celebrate the intent and progress behind the metrics, not just the numbers themselves. This approach encourages a growth mindset and values effort and learning, which are crucial for long-term success.
    Example: Even if a new feature doesn’t immediately boost your “Enthusiastic User Ratio”, celebrate the team’s efforts in user research, innovative design, or technical challenges overcome. This keeps the team motivated and focused on continuous improvement.
  4. Regularly review and adjust KPIs to ensure they remain relevant. As your product and market evolve, yesterday’s crucial metric might become irrelevant or even counterproductive.
    Example: If your product has matured, you might shift focus from a “New User Acquisition Rate” KPI to a “User Retention Rate” KPI, reflecting the changing priorities of your business.

By adhering to these principles, you create an environment where KPIs drive positive behaviour, foster learning, and contribute to both team satisfaction and business success. Remember, the goal of KPIs is to improve performance and guide decision-making, not to create pressure or assign blame.

Wrapping Up: The True Value of KPIs

The real power of KPIs lies not in the numbers, but in the conversations they spark, the behaviours they encourage, and the focus they provide. When done right, KPIs serve as a compass, guiding your team through the complex landscape of product development.

Craft KPIs that inspire, illuminate, and drive your team towards excellence. And remember, in high-performing teams, the best KPIs often become obsolete because the team internalises the principles behind them.

What’s the most effective KPI you’ve used? Or the least useful? Share your experiences in the comments below!

P.S. If this post helped you rethink your approach to KPIs, don’t hesitate to share it with your network. Let’s spread the word about better performance indicators!

Metrics That Matter: The Ultimate Guide to Measuring Self-Organising Team Success (Without Driving Everyone Crazy)

Hey there, data-driven dynamos and agile aficionados! 👋 Ready to dive into the wild world of measuring team success? Buckle up, because we’re about to turn those vanity metrics upside down and discover what really matters in the land of self-organising teams!

The Metrics Maze: Don’t Get Lost in the Numbers

Picture this: You’re in a maze of mirrors, each one showing a different metric. Story points completed! Sprint velocity! Lines of code! Number of commits! It’s enough to make your head spin faster than a hard drive from 1995. 💿💫

But here’s the million-dollar question: Which of these actually tell you if your team is succeeding?

Spoiler alert: Probably none of them. 😱

The Great Metrics Showdown

Let’s break down some common metrics and see how they stack up:

1. Sprint Completion / Story Points

The Good: Easy to measure, gives a sense of progress. 
The Bad: Can be gamed faster than a speedrunner playing Minecraft. 
The Ugly: Focuses on output, not outcome.

2. Meeting Deadlines / Completing Projects

The Good: Aligns with business expectations. 
The Bad: Can lead to corner-cutting and technical debt. 
The Ugly: Doesn’t account for value delivered.

3. DevOps Metrics (Deployment Frequency, Lead Time, etc.)

The Good: Focuses on flow and efficiency. 
The Bad: Can be technical overkill for some teams. 
The Ugly: Doesn’t directly measure business impact.

4. Business Metrics / KPIs

The Good: Directly ties to business value. 
The Bad: Can be hard to attribute to specific team actions. 
The Ugly: Might be too long-term for sprint-by-sprint evaluation.

The Secret Sauce: Metrics That Actually Matter

“Not everything that counts can be counted, and not everything that can be counted counts.” – Albert Einstein

Al wasn’t talking about Agile metrics, but he might as well have been. So what should we be measuring? Let’s cook up a recipe for metrics that actually matter:

  1. A Dash of Business Impact: How many users did that new feature attract?
  2. A Sprinkle of Team Health: How’s the team’s morale and collaboration?
  3. A Pinch of Technical Excellence: Is the codebase getting better or turning into spaghetti?
  4. A Dollop of Customer Satisfaction: Are users sending love letters or hate mail?

Mix these together, and you’ve got a metric feast that tells you how your team is really doing!

The Goldilocks Zone of Measurement

Remember Goldilocks? She wanted everything juuuust right. Your metrics should be the same:

  • Not too many: Analysis paralysis is real, folks!
  • Not too few: “Vibes” isn’t a metric (no matter how much we wish it was).
  • Just right: Enough to guide decisions without needing a PhD in statistics.

The Metrics Makeover: Before and After

Let’s give some common metrics a makeover:

Before: Number of Story Points Completed ❌

After: Business Value Delivered per Sprint ✅

Instead of just counting points, assign business value to stories and track that. It’s like turning your backlog into a stock portfolio!

Before: Code Commit Frequency ❌

After: Feature Usage and User Engagement ✅

Who cares how often you commit if users aren’t clicking that shiny new button?

Before: Bug Count ❌

After: User-Reported Issues vs. Proactively Fixed Issues ✅

This shows both quality and how well you’re anticipating user needs. Crystal ball coding, anyone?

Some of your more technical metrics may be better expressed as SLAs, for example quality: we want to deliver business value without reducing quality.

User engagement you can usually glean from some kind of web analytics (Google Analytics, etc.). Whatever you use for this, focus on the core user actions people take on your system; in e-commerce it’s usually completed bookings or step conversion through your funnel. These can even be near real time.
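
To make the funnel idea concrete, here’s a minimal sketch of computing step conversion from event counts. The event names and numbers are made up for illustration; plug in whatever your analytics tool actually reports:

```python
# Illustrative funnel: each step with its event count for some period.
funnel_events = [
    ("viewed_property", 12000),
    ("started_checkout", 3000),
    ("entered_payment", 1800),
    ("completed_booking", 1500),
]

def step_conversion(funnel):
    """Return the conversion rate between each pair of consecutive steps."""
    rates = []
    for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
        rates.append((f"{prev_name} -> {name}", count / prev_count))
    return rates

for step, rate in step_conversion(funnel_events):
    print(f"{step}: {rate:.1%}")
```

The step with the worst rate is usually where your next experiment should go.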

The Team Metrics Workshop: A Step-by-Step Guide

Want to revolutionise your team’s metrics? Try this workshop:

  1. Metric Brainstorm: Have everyone write down metrics they think matter.
  2. Business Value Voting: Get stakeholders to vote on which metrics tie closest to business goals.
  3. Feasibility Check: Can you actually measure these things without hiring a team of data scientists?
  4. Trial Run: Pick top 3-5 metrics and try them for a sprint.
  5. Retrospective: Did these metrics help or just add noise?

Repeat until you find your team’s metric sweet spot!

The Metrics Mindset: It’s a Journey, Not a Destination

Here’s the thing about metrics for self-organising teams: They should evolve as your team evolves. What works for a new team might not work for a seasoned one. It’s like updating your wardrobe – what looked good in the 90s probably doesn’t cut it now (unless you’re going for that retro vibe).

The Golden Rules of Team Metrics

  1. Measure what matters, not what’s easy.
  2. If a metric doesn’t drive action, it’s just noise.
  3. Team metrics should be about the team, not individuals.
  4. Metrics should spark conversations, not end them.
  5. When in doubt, ask the team what they think is important.

Wrapping Up: The Metric Mindfulness Movement

Measuring the success of self-organising teams isn’t about finding the perfect metric – it’s about finding the right combination of indicators that help your team improve and deliver value. It’s like being a DJ – you’re mixing different tracks to create the perfect sound for your audience.

Remember, the goal isn’t to hit some arbitrary numbers, it’s to build awesome products, delight users, and have a team that loves coming to work (or logging in) every day. If your metrics are helping with that, you’re on the right track!

So go forth, measure wisely, and may your charts always be up and to the right! 📈

What wild and wacky metrics have you seen in the wild? Got any metric horror stories or success sagas? Share in the comments – let’s start a metric revolution! 🚀

P.S. If this post helped you see metrics in a new light, share it faster than your CI/CD pipeline! Your fellow tech leads will thank you (maybe with actual thank-you metrics)!

The Art of Hands-Off Management: Coaching Self-Organizing Teams Without Turning into a Micromanager

Hey there, tech leads and engineering managers! 👋 Are you ready to level up your leadership game? Today, we’re diving into the delicate art of coaching self-organizing teams without accidentally morphing into the dreaded micromanager. Buckle up, because we’re about to walk the tightrope of hands-off management!

The Micromanager’s Dilemma

Picture this: You’re leading a team of brilliant devs. They’re self-organizing, they’re agile, they’re everything the tech blogs say they should be. But… they’re about to make a decision that makes your eye twitch. Do you:

A) Swoop in like a coding superhero and save the day?
B) Bite your tongue so hard you taste binary?
C) Find a way to guide without grabbing the wheel?

If you chose C, congratulations! You’re ready for the world of coaching self-organizing teams. If you chose A or B, don’t worry – we’ve all been there. Let’s explore how to nail that perfect balance.

The Golden Rule: Ask, Don’t Tell

“The art of leadership is saying no, not saying yes. It is very easy to say yes.” – Tony Blair

Okay, Tony wasn’t talking about tech leadership, but the principle applies. When you’re tempted to give directions, try asking questions instead. It’s like the difference between giving someone a fish and teaching them to fish – except in this case, you’re not even teaching. You’re just asking if they’ve considered using a fishing rod instead of their bare hands.

Example Time!

Let’s say your team is struggling with large, monolithic tasks that are slowing down the sprint. Instead of mandating “No task over 8 hours!”, try this:

You: “Hey team, I noticed our sprint completion rate is lower than usual. Any thoughts on why?”

Team: “Well, we have these huge tasks that only one person can work on…”

You: “Interesting. How might that be affecting our workflow?”

Team: “I guess it leads to a lot of ‘almost done’ stories at the end of the sprint.”

You: “Hmm, what could we do to address that?”

See what you did there? You guided them to the problem and let them find the solution. It’s like inception, but for project management!

The Five Whys: Not Just for Toddlers Anymore

Remember when kids go through that phase of asking “Why?” to everything? Turns out, they might be onto something. The Five Whys technique is a great way to dig into the root of a problem without telling the team what to do.

Here’s how it might go:

  1. Why is our sprint completion rate low?
  2. Why do we have a lot of long-running tasks?
  3. Why are our tasks so big?
  4. Why haven’t we broken them down further?
  5. Why didn’t we realize this was an issue earlier?

By the fifth “why,” you’ve usually hit the root cause. And the best part? The team has discovered it themselves!

When in Doubt, Shu Ha Ri

No, that’s not a new sushi restaurant. Shu Ha Ri is a concept from martial arts that applies beautifully to coaching self-organizing teams:

  • Shu (Follow): The team follows the rules and processes.
  • Ha (Detach): The team starts to break away from rigid adherence.
  • Ri (Fluent): The team creates their own rules and processes.

As a coach, your job is to recognize which stage your team is in and adapt accordingly. New team? Maybe they need more structure (Shu). Experienced team? Let them break some rules (Ha). Rockstar team? Stand back and watch them soar (Ri).

It’s a great way to introduce a process to them that isn’t overbearing. For example, you can say: “How about we try X my way for a sprint or two, see how you like it, and evolve it from there?”

The KPI Conundrum

“Not everything that can be counted counts, and not everything that counts can be counted.” – Albert Einstein

Al knew what he was talking about. When it comes to measuring the success of self-organizing teams, you need a KPI (Key Performance Indicator) that’s:

  • Instantly measurable (because who has time for complex calculations?)
  • Team-focused (no individual call-outs here)
  • Connected to business value (because that’s why we’re all here, right?)

Avoid vanity metrics like lines of code or number of commits. Instead, focus on things like deployment frequency, lead time for changes, or even better – actual business impact metrics.

Why instantly measurable? It doesn’t necessarily need to be instant, as long as it’s timely: the sooner you know the results, the sooner you can change direction. And if it’s very timely, you can even get to the point of gamification, but more on that in another post.

A good KPI sets the course for the team, can solve arguments and helps them course correct if they choose the wrong direction.

It’s also good to agree on SLAs for technical metrics (quality, etc.) to make sure you don’t unknowingly trade off the long term for the short term.

The Coaching Toolkit: Your Swiss Army Knife of Leadership

Here are some tools to keep in your back pocket:

  1. The Silence Technique: Sometimes, the best thing you can say is nothing at all. Let the team fill the void. This will encourage your team to speak up on their own.
  2. The Mirror: Reflect the team’s ideas back to them. It’s like a verbal rubber duck debugging session.
  3. The Hypothetical: “What would happen if…” questions can open up new avenues of thinking.
  4. The Devil’s Advocate: Challenge assumptions, but make it clear you’re playing a role; if you don’t, you may come across as overly negative and unsupportive.
  5. The Celebration: Recognize and celebrate when the team successfully self-organizes and solves problems.

Wrapping Up: The Zen of Hands-Off Management

Coaching self-organizing teams is a bit like being a gardener. You create the right conditions, you nurture, you occasionally prune, but ultimately, you let the plants do their thing. Sometimes you might get an odd-shaped tomato, but hey – it’s organic!

Remember, your goal is to make yourself progressively less necessary. If you’ve done your job right, the team should be able to function beautifully even when you’re on that beach vacation sipping piña coladas.

So go forth, ask questions, embrace the awkward silences, and watch your team bloom!

What’s your secret sauce for coaching self-organizing teams? Have you ever accidentally micromanaged and lived to tell the tale? Share your war stories in the comments – we promise not to judge (much)! 😉

P.S. If you enjoyed this post, don’t forget to smash that like button, ring the bell, and subscribe to our newsletter for more tech leadership gems! (Just kidding, this isn’t YouTube, but do share if you found it helpful!)

Measuring Product Health: Beyond Code Quality

In the world of software development, we often focus on code quality as the primary measure of a product’s health. While clean, efficient code with passing tests is crucial, it’s not the only factor that determines the success of a product. As a product engineer, it’s essential to look beyond the code and understand how to measure the overall health of your product. In this post, we’ll explore some key metrics and philosophies that can help you gain a more comprehensive view of your product’s performance and impact.

The “You Build It, You Run It” Philosophy

Before diving into specific metrics, it’s important to understand the philosophy that underpins effective product health measurement. We follow the principle of “You Build It, You Run It.” This approach empowers developers to take ownership of their products not just during development, but also in production. It creates a sense of responsibility and encourages a deeper understanding of how the product performs in real-world conditions.

What Can We Monitor?

When it comes to monitoring product health, there are several areas we usually focus on:

  1. Logs: Application, web server, and system logs
  2. Metrics: Performance indicators and user actions
  3. Application Events: State changes within the application

While all these are important, it’s crucial to understand the difference between logs and metrics, and when to use each.

The Top-Down View: What Does Your Application Do?

One of the most important questions to ask when measuring product health is: “What does my application do?” This top-down approach helps you focus on the core purpose of your product and how it delivers value to users. Ultimately, when that value is impacted, you know it’s time to act.

Example: E-commerce Website

Let’s consider an e-commerce website. At its core, the primary function of such a site is to facilitate orders. That’s the ultimate goal – to guide users through the funnel to complete a purchase.

So, how do we use this for monitoring? We ask two key questions:

  1. Is the application successfully processing orders?
  2. How often should it be processing orders, and is it meeting that frequency right now?

How to Apply This?

To monitor this effectively, we generally look at 10-minute windows throughout the day (for example, 8:00 to 8:10 AM). For each window, we calculate the average number of orders for that same time slot on the same day of the week over the past four weeks. If the current number falls below this average, it triggers an alert.

This approach is more nuanced and effective than setting static thresholds. It naturally adapts to the ebb and flow of traffic throughout the day and week, reducing false alarms while still catching significant drops in performance. By using dynamic thresholds based on historical data, you’re less likely to get false positives during normally slow periods, yet you remain sensitive enough to catch meaningful declines in performance.

One of the key advantages of this method is that it avoids the pitfalls of static thresholds. With static thresholds, you often face a dangerous compromise. To avoid constant alerts during off-hours or naturally slow periods, you might set the threshold very low. However, this means you risk missing important issues during busier times. Our dynamic approach solves this problem by adjusting expectations based on historical patterns.

While we typically use 10-minute windows, you can adjust this based on your needs. For systems with lower volume, you might use hourly or even daily windows. This will make you respond to problems more slowly in these cases, but you’ll still catch significant issues. The flexibility allows you to tailor the system to your specific product and business needs.
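
Here’s a rough sketch of that windowed comparison in Python. The `tolerance` factor is an addition to absorb normal noise (the simplest version, as described above, alerts on any dip below the average); the history data is illustrative:

```python
from datetime import datetime, timedelta
from statistics import mean

def baseline_orders(history, window_start, weeks=4):
    """Average order count for the same window on the same weekday over
    the previous `weeks` weeks. `history` maps each 10-minute window's
    start datetime to the order count observed in that window."""
    samples = []
    for w in range(1, weeks + 1):
        past = window_start - timedelta(weeks=w)
        if past in history:
            samples.append(history[past])
    return mean(samples) if samples else None

def should_alert(history, window_start, current_orders, tolerance=0.8):
    """Alert when current orders drop below `tolerance` of the baseline."""
    baseline = baseline_orders(history, window_start)
    if baseline is None:
        return False  # no history yet: don't alert blindly
    return current_orders < baseline * tolerance

# Illustrative history: Mondays 08:00-08:10 over the last four weeks.
now = datetime(2024, 7, 1, 8, 0)
history = {now - timedelta(weeks=w): c
           for w, c in zip(range(1, 5), [110, 95, 105, 100])}

print(should_alert(history, now, current_orders=60))  # well below average
print(should_alert(history, now, current_orders=98))  # within normal range
```

In production you’d pull the history from your metrics store rather than a dict, but the comparison logic stays this simple.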

Another Example: Help Desk Chat System

Let’s apply our core question – “What does this system DO?” – to a different type of application: a help desk chat system. This question is crucial because it forces us to step back from the technical details and focus on the fundamental purpose of the system and the value it delivers to the business and ultimately the customer.

So, what does a help desk chat system do? At its most basic level, it allows communication between support staff and customers. But let’s break that down further:

  1. It enables sending messages
  2. It displays these messages to the participants
  3. It presents a list of ongoing conversations

Now, you might be tempted to say that sending messages is the primary function, and you’d be partly right. But remember, we’re thinking about what the system DOES, not just how it does it.

With this in mind, how might we monitor the health of such a system? While tracking successful message sends is important, it might not tell the whole story, especially if message volume is low. We should also consider monitoring:

  • Successful page loads for the conversation list (Are users able to see their ongoing chats?)
  • Successful loads of the message window (Can users access the core chat interface?)
  • Successful resolution rate (Are chats leading to solved problems?)

By expanding our monitoring beyond just message sending, we get a more comprehensive view of whether the system is truly doing what it’s meant to do: helping customers solve their problems efficiently.

This example illustrates why it’s so important to always start with the question, “What does this system DO?” It guides us towards monitoring metrics that truly reflect the health and effectiveness of our product, rather than just its technical performance.

A 200 OK response is not always OK.

As you consider your own systems, always begin with this fundamental question. It will lead you to insights about what you should be measuring and how you can ensure your product is truly serving its purpose.

The Bottom-Up View: How Does Your Application Work?

While the top-down view focuses on the end result, the bottom-up approach looks at the internal workings of your application. This includes metrics such as:

  • HTTP requests (response time, response code)
  • Database calls (response time, success rate)

Modern systems often collect these metrics through automatic, agent-based instrumentation, reducing the need for custom code.

Prioritizing Alerts: When to Wake Someone Up at 3 AM

A critical aspect of product health monitoring is knowing when to escalate issues. Ask yourself: Should the Network Operations Center (NOC) call you at 3 AM if a server has 100% CPU usage?

The answer is no – not if there’s no business impact. If your core business functions (like processing orders) are unaffected, it’s better to wait until the next day to address the issue.

Using Loss as a Currency for Prioritization

Once you’ve established a health metric for your system and can compare current performance against your 4-week average, you gain a powerful tool: the ability to quantify “loss” during a production incident. This concept of loss can become a valuable currency in your decision-making process, especially when it comes to prioritizing issues and allocating resources.

Imagine your e-commerce platform typically processes 1000 orders per hour during a specific time window, based on your 4-week average. During an incident, this drops to 600 orders. You can now quantify your loss: 400 orders per hour. If you know your average order value, you can even translate this into a monetary figure. This quantification of loss becomes your currency for making critical decisions.

With this loss quantified, you can now make more informed decisions about which issues to address first. This is where the concept of “loss as a currency” really comes into play. You can compare the impact of multiple ongoing issues, justify allocating more resources to high-impact problems, and make data-driven decisions about when it’s worth waking up engineers in the middle of the night.
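
The arithmetic behind this is simple enough to sketch. The average order value below is an assumed figure for illustration:

```python
def incident_loss(baseline_per_hour, current_per_hour, avg_order_value, hours):
    """Quantify incident impact: lost orders and estimated revenue loss."""
    lost_orders = max(baseline_per_hour - current_per_hour, 0) * hours
    return lost_orders, lost_orders * avg_order_value

# The example above: baseline 1000 orders/hour dropping to 600,
# with an assumed $50 average order value, over a two-hour incident.
lost, revenue = incident_loss(1000, 600, avg_order_value=50, hours=2)
print(lost, revenue)  # 800 lost orders, $40,000 estimated loss
```

Two incidents with the same alert severity can have wildly different numbers here, and that difference is exactly what tells you which one deserves the 3 AM page.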

Reid Hoffman, co-founder of LinkedIn, once said, “You won’t always know which fire to stamp out first. And if you try to put out every fire at once, you’ll only burn yourself out. That’s why entrepreneurs have to learn to let fires burn—and sometimes even very large fires.” This wisdom applies perfectly to our concept of using loss as a currency. Sometimes, you have to ask not which fire you should put out, but which fires you can afford to let burn. Your loss metric gives you a clear way to make these tough decisions.

This approach extends beyond just immediate incident response. You can use it to prioritize your backlog, make architectural decisions, or even guide your product roadmap. When you propose investments in system improvements or additional resources, you can now back these proposals with clear figures showing the potential loss you’re trying to mitigate, albeit sometimes with a pinch of crystal-ball gazing about how likely those incidents are to occur again.

By always thinking in terms of potential loss (or gain), you ensure that your team’s efforts are always aligned with what truly matters for your business and your users. You create a direct link between your technical decisions and your business outcomes, ensuring that every action you take is driving towards real, measurable impact.

Remember, the goal isn’t just to have systems that run smoothly from a technical perspective. It’s to have products that consistently deliver value to your users and meet your business objectives. Using loss as a currency helps you maintain this focus, even in the heat of incident response or the complexity of long-term planning.

In the end, this approach transforms the abstract concept of system health into a tangible, quantifiable metric that directly ties to your business’s bottom line.

Conclusion: A New Perspective on Product Health

As we’ve explored throughout this post, measuring product health goes far beyond monitoring code quality or individual system metrics. It requires a holistic approach that starts with a fundamental question: “What does our system DO?” This simple yet powerful query guides us toward understanding the true purpose of our products and how they deliver value to users.

By focusing on core business metrics that reflect this purpose, we can create dynamic monitoring systems that adapt to the natural ebbs and flows of our product usage. This approach, looking at performance in time windows compared to 4-week averages, allows us to catch significant issues without being overwhelmed by false alarms during slow periods.

Perhaps most importantly, we’ve introduced the concept of using “loss” as a currency for prioritization. This approach transforms abstract technical issues into tangible business impacts, allowing us to make informed decisions about where to focus our efforts. As Reid Hoffman wisely noted, we can’t put out every fire at once – we must learn which ones we can let burn. By quantifying the loss associated with each issue, we gain a powerful tool for making these crucial decisions.

This loss-as-currency mindset extends beyond incident response. It can guide our product roadmaps, inform our architectural decisions, and help us justify investments in system improvements. It creates a direct link between our technical work and our business outcomes, ensuring that every action we take drives towards real, measurable impact.

Remember, the ultimate goal isn’t just to have systems that run smoothly from a technical perspective. It’s to have products that consistently deliver value to our users and meet our business objectives.

As you apply these principles to your own systems, always start with that core question: “What does this system DO?” Let the answer guide your metrics, your monitoring, and your decision-making. In doing so, you’ll not only improve your product’s health but also ensure that your engineering efforts are always aligned with what truly matters for your business and your users.

Identifying and Planning Your Monolith Split

In the world of software development, monolithic architectures often become unwieldy as applications grow in complexity and scale. Splitting a monolith into smaller, more manageable services can improve development velocity, scalability, and maintainability. However, this process requires careful planning and execution. In this post, we’ll explore the crucial first steps in splitting your monolith: identifying business domains and creating a solid plan.

Finding Business Domains in Your Monolith

The first step in splitting a monolith is identifying the business domains within your application. Business domains are typically where “units of work” are isolated, representing distinct areas of functionality or responsibility within your system.

Splitting by business domain allows you to optimize for the majority of your units of work being in the one system. While you may never achieve 100% optimization without significant effort, focusing on business domains usually covers 80-90% of your needs.

How to Identify Business Domains

  1. Analyze Work Units: Look at the different areas of functionality in your application. What are the main features or services you provide?
  2. Examine Data Flow: Consider how data moves through your system. Are there natural boundaries where data is transformed or handed off?
  3. Review Team Structure: Often, team organization reflects business domains. How are your development teams structured?
  4. Consider User Journeys: Map out the different paths users take through your application. These often align with business domains.

For more detail here is a great book on the topic.

When to Keep Domains Together

Sometimes, you’ll find two domains that share a significant amount of code. In these cases, it might be more efficient to keep them in the same system: a “modulith” (a modular monolith), or even a smaller monolith for these tightly coupled domains, might make sense. But this is usually the exception to the rule, so don’t let it become an easy way out.

Analyzing Changes in the Monolith

Once you’ve identified potential business domains, the next step is to analyze how your monolith changes over time. This analysis helps you prioritize which parts of the system to split first, because that is where the value is: velocity. The more daily and weekly merge requests that happen in the new systems, the greater the business impact you create, and business impact, here in the form of engineering velocity, is the goal. Don’t lose sight of it for the sake of some milestone-driven Gantt chart.

There are many elegant tools on the market for analyzing git history and changes over time, and I would encourage you to explore them. We didn’t find any that worked for us, because the domains were scattered throughout the code due to the age and size of our monolith (i.e. it was ancient).

What worked best for us was a hammer. It’s manual, but it worked:

  1. Use MR (Merge Request) Labels: Implement a system where developers label each MR with the relevant business domain. This provides ongoing data about which domains of the system change most frequently.
  2. Add CI Checks: Include a CI step that fails if an MR doesn’t have a domain label. This ensures consistent data collection.
  3. Historical Analysis: Have your teams go through 1-2 quarters of historical MRs and label them retrospectively. This gives you an initial dataset to work with.
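
As a sketch, the CI check itself can be a few lines of Python. The `domain::` label names here are examples, and GitLab exposes an MR’s labels to merge request pipelines via the `CI_MERGE_REQUEST_LABELS` variable:

```python
import os
import sys

# Example domain labels; replace with your own taxonomy.
DOMAIN_LABELS = {"domain::search", "domain::checkout", "domain::payments"}

def has_domain_label(labels_csv):
    """True if any configured domain label appears in the CSV label list."""
    labels = {label.strip() for label in labels_csv.split(",") if label.strip()}
    return bool(labels & DOMAIN_LABELS)

# In a GitLab merge request pipeline, labels arrive as a comma-separated
# string. Outside a pipeline the variable is absent, so we skip the check.
labels = os.environ.get("CI_MERGE_REQUEST_LABELS")
if labels is not None and not has_domain_label(labels):
    print("MR is missing a domain:: label - please add one.")
    sys.exit(1)
```

Wire it in as an early pipeline stage so the failure is cheap and obvious.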

Once you have this data, whether it comes from the hammer approach or a more elegant one, look for patterns in your MRs. Which domains see the most frequent changes? This is how you prioritize your split.

Making a Plan

With your business domains identified and change patterns analyzed, it’s time to create a plan for splitting your monolith. Start with the domains that have the highest impact. These are the ones that change frequently.

Implement L7 Routing for incremental migration

Use Layer 7 (application layer) routing to perform A/B testing between your old monolith and new services. This allows you to:

  • Gradually shift traffic to new services
  • Compare performance and functionality potentially with AB Tests
  • Quickly roll back if issues arise
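
One way to sketch the routing decision (in reality it would live in your load balancer or edge proxy; the pages and percentages here are illustrative):

```python
import hashlib

# Rollout dial per migrated page prefix: percent of users on the new service.
ROLLOUT = {
    "/search": 100,   # fully migrated page
    "/checkout": 25,  # gradual shift in progress
}

def route(path, user_id):
    """Return 'new' or 'monolith' for a request, stable per user."""
    for prefix, percent in ROLLOUT.items():
        if path.startswith(prefix):
            # Deterministic bucket 0-99 from the user id, so a given user
            # always hits the same backend and rollbacks are clean.
            bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
            return "new" if bucket < percent else "monolith"
    return "monolith"

print(route("/search", "user-42"))  # always the new service
print(route("/legacy", "user-42"))  # unmigrated pages stay on the monolith
```

Sticky, deterministic bucketing is what makes the A/B comparison meaningful: the same user sees the same backend for the whole experiment.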

For Web Applications:

  • Consider migrating one page at a time
  • Treat each “page” as a unit of migration

Within pages, we sometimes found that a staged approach, migrating AJAX endpoints individually, helped make the change more incremental. But don’t let a “page” exist in multiple systems for too long: it kills the local dev experience and takes you backwards on what you planned. You are meant to be improving the dev experience, not making it worse, so finish it as soon as possible.

For Backend Services:

  • Migrate one endpoint or a small group of tightly coupled endpoints at a time
  • This allows for a gradual transition without disrupting the entire system

Also, as you incrementally migrate, if your focus is on killing the monolith quickly, don’t bother deleting the old code as you go; let the thing die as a whole. This gives you more time to spend on moving to the new systems. And try not to improve the experience on the old monolith: the harder it is to work on, the more likely a team is to decide to break something out of it, which increases the ROI of the split.

Conclusion

Splitting a monolith is a significant undertaking, but with proper planning and analysis, it can lead to a more maintainable and scalable system. By identifying your business domains, analyzing change patterns, and creating a solid migration plan, you set the foundation for a successful transition from a monolithic to a microservices architecture.

In our next post, we’ll dive deeper into the strategies for executing your monolith split, including modularization techniques and how to handle ongoing development during the transition. Stay tuned!