The Customer Token Handler project offers an ironic illustration of Tuckman’s stages of group development – specifically how a highly skilled group can easily fail due to team dynamics. At a major technology company in Southeast Asia, an experiment in temporary team formation produced an unexpected lesson in ineffectiveness.
The setup appeared logical on paper. Three seasoned technical leads from the Mobile, Homepage, and Supply Extranet domains would combine their expertise for two sprints as a temporary three-person team. The mission: create a unified token handling library for customer authentication. Experience suggested their combined expertise would accelerate delivery.
Hour one revealed the fundamental flaw. The Homepage lead populated JIRA with exhaustive tickets. The Mobile lead covered whiteboards with sticky note tasks. The Supply Extranet lead, not one for the bureaucracy of planning, just started writing code. Three experts, three methodologies, zero progress. No forming, straight to storming, never reaching norming.
The two-sprint experiment dissolved afterwards without a working solution. The project appeared destined for failure until one technical lead took ownership. Working alone, they completed the token handling library in another two weeks – accomplishing more solo than the combined team had achieved together in double that time.
This outcome challenged conventional wisdom about collaborative development. While the three-person team generated extensive documentation, diagrams, and partial implementations, they never established the shared context necessary for effective collaboration. The eventual solo success demonstrated how reduced coordination overhead can sometimes accelerate delivery. Did we end up with a better product from the solo lead? Probably not, but it’s at least a working one that we can iterate on.
The Customer Token Handler story reshaped our approach to temporary team formation. It highlighted that Tuckman’s stages cannot be shortcut, even with experienced technical leaders. Teams should be long-lived to be effective.
We must never treat engineering teams as a scheduling problem to solve. They are people, with all the flaws and quirks that people have, and we need to acknowledge them beyond their IC number in order to create winning teams.
As someone who’s spent countless hours in pair programming sessions, both as a participant and as a coach, I’ve witnessed the good, the bad, and the occasionally ugly sides of co-creation. While pair programming and mob programming can be incredibly powerful tools for knowledge sharing and code quality, they can also become surprisingly counterproductive when certain patterns emerge.
The Silent Drift
Picture this: You’re deep in a pairing session, working through a complex problem, when you notice your partner’s eyes glazing over as they check their phone. We’ve all been there, but this “silent drift” is perhaps the most insidious enemy of effective co-creation.
I once worked with a team where this had become so normalized that people would actually be on-call handling tickets during their pairing sessions. It’s not just about the immediate disruption; it’s about the message it sends: “This collaboration isn’t worth my full attention.”
The solution isn’t draconian rules about device usage. Instead, establish clear communication channels. If you need to check something urgent, simply say so: “Hey, I need two minutes to respond to an important email,” or “I’m on call today and it’s busy, so it’s probably better we do this tomorrow when I can give you my full focus.” This transparency builds trust rather than eroding it.
The Keyboard Dictator
“Type ‘System dot out dot println.’ No, no – use the shortcut! Press Command-Shift-O…”
Sound familiar? Welcome to what I call the “Keyboard Dictator” syndrome. It’s particularly common when pairing involves developers of different experience levels, but it’s toxic regardless of the participants’ seniority.
This micro-management style doesn’t just slow things down; it actively prevents learning. It’s like trying to teach someone to ride a bike by controlling their handlebars remotely – they’ll never develop the intuition they need.
Instead, embrace the “Five-Second Rule”: when you see your partner doing something you think is inefficient or incorrect, wait five seconds before speaking up. You’d be surprised how often they’re already on their way to a solution, just via a different path than the one you would have taken.
That said, if your goal is to teach someone keyboard shortcuts, being a Keyboard Dictator can be a good way to do it; this antipattern can be used for good as well as evil.
The Eternal Marathon
I once encountered a team that prided itself on pairing “all day, every day.” They saw it as a badge of honor – until burnout started toppling the team like dominoes.
Pairing for eight hours straight isn’t just unsustainable; it’s mathematically impossible. Between meetings, emails, documentation, research, and basic human needs, forcing continuous pairing creates more stress than value.
The most effective teams I’ve worked with typically aim for 2-3 hours of pairing per day, with built-in breaks and solo time. This rhythm allows for both intense collaboration and necessary individual processing time.
The Keyboard Hoarder
We all know that developer who, consciously or not, maintains a death grip on the keyboard during pairing sessions. It’s often someone who’s incredibly skilled and efficient – which paradoxically makes the problem worse.
This pattern is particularly dangerous because it creates a passive observer rather than an active participant. The observer’s mind starts to wander, and suddenly you’ve lost all the benefits of having two brains on the problem.
Implement strict rotation patterns. Tools like a mob timer can help, but even a simple agreement to switch roles every 25 minutes can make a huge difference.
If you’re one of these incredibly skilled and efficient developers, try using the Keyboard Dictator antipattern to teach the others on your team to work as effectively as you do. You’ll be less frustrated, your teammates will improve, and everyone is happy.
The One True Way™ Syndrome
Perhaps the most dangerous pattern I’ve observed is the belief that there’s one “correct” way to do pair programming. I’ve seen teams tie themselves in knots trying to follow textbook definitions of driver-navigator patterns when their natural working style was completely different.
The truth is, effective co-creation is more art than science. What works brilliantly for one pair might be completely ineffective for another. The key is to focus on outcomes rather than process: Are both participants engaged? Is knowledge being shared? Is the code quality improving? What is the goal for this session?
The Path Forward
The most successful pairing sessions I’ve witnessed share a common thread: they’re built on a foundation of mutual respect and clear communication. When something isn’t working, participants feel safe to speak up and adjust their approach.
Rather than trying to avoid these patterns through rigid rules, build a culture where team members can openly discuss what’s working and what isn’t. Regular retrospectives focused specifically on pairing practices can be invaluable.
Remember, the goal of co-creation isn’t to follow a perfect process – it’s to build better software through collaboration. Sometimes that means typing together for hours, and sometimes it means giving each other space to think and process.
A Final Thought
The next time you find yourself in a pairing session, pay attention to these patterns. Are you drifting? Dictating? Hoarding the keyboard? The awareness itself is often enough to start shifting toward more effective collaboration.
After all, pair programming isn’t about being perfect – it’s about being better together than we are apart. And sometimes, knowing what not to do is just as important as knowing what to do.
There’s a saying in business that “what gets measured, gets managed.” But in the complex world of modern software systems, choosing what to measure can be as crucial as the measurement itself. Enter the concept of Technical North Star metrics – not just another KPI, but a fundamental compass that guides technical decisions and shapes organizational behavior.
The Power of a Single Number
When I first encountered the concept of a Technical North Star metric at a previous organization, I was skeptical. How could one number capture the complexity of our technical systems? But over time, I’ve come to appreciate the elegant simplicity it brings to decision-making and incident management.
The most effective Technical North Star metrics share three key characteristics: they’re ubiquitous throughout the organization, they directly correlate with business success, and perhaps most importantly, they’re actionable at every level of the technical organization.
Consider Netflix’s “Total Watch Time” or Facebook’s “Daily Active Users.” These aren’t just vanity metrics – they’re deeply woven into the technical fabric of these organizations. Every engineer, product manager, and executive speaks this common language, creating a shared understanding of success and failure.
From Metric to Currency
One of the most enlightening perspectives I’ve encountered came from a manager who described our Technical North Star metric as a “currency.” This analogy perfectly captures how these metrics function within an organization.
At Agoda, for instance, “Bookings” serves as this universal currency. While I can’t share specific numbers, what’s fascinating is how this metric has become part of the engineering team’s DNA. Ask any engineer about current booking rates, and they’ll know the number (though they won’t share it!).
This currency analogy extends beautifully to incident management. When an incident occurs, we can literally “count the cost” in terms of lost bookings. It’s not abstract – it’s concrete, measurable, and immediately understood throughout the organization.
The Art of Measurement
But how do we actually measure these metrics in a meaningful way? The approach needs to be both sophisticated enough to be accurate and simple enough to be actionable.
At Agoda, we’ve developed an elegant solution for measuring booking impact during incidents. We look at four-week averages for specific time windows. For instance, if the 10:00-10:10 AM window typically sees 50 bookings (a hypothetical number), any significant deviation from this baseline triggers investigation. When services are restored and the trend returns to normal, we can calculate the “cost” of the incident in terms of lost bookings.
This approach is brilliant in its simplicity. It accounts for natural variations in booking patterns while providing clear signals when something’s amiss. The four-week average smooths out daily fluctuations while remaining responsive enough to reflect recent trends.
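To make the mechanics concrete, here’s a minimal sketch of that baseline logic in C#. The window history, the 30% deviation threshold, and all the numbers are illustrative assumptions, not Agoda’s actual implementation:
using System;
using System.Linq;
class BookingBaseline
{
    // Baseline: average bookings for the same time window over the last four weeks.
    static double Baseline(int[] lastFourWeeks) => lastFourWeeks.Average();
    // Flag a significant deviation, e.g. more than 30% below baseline.
    static bool LooksLikeIncident(int current, double baseline, double threshold = 0.3)
        => current < baseline * (1 - threshold);
    // Incident "cost" = expected bookings minus observed, summed over affected windows.
    static double LostBookings(int[] observed, double[] expected)
        => observed.Zip(expected, (o, e) => Math.Max(0, e - o)).Sum();
    static void Main()
    {
        var history = new[] { 52, 48, 51, 49 };             // 10:00-10:10 window, last four weeks
        var baseline = Baseline(history);                   // ~50 bookings
        Console.WriteLine(LooksLikeIncident(12, baseline)); // True: investigate
        Console.WriteLine(LostBookings(
            new[] { 12, 20, 45 },                           // observed during the incident
            new[] { 50.0, 50.0, 50.0 }));                   // expected per window => 73 lost bookings
    }
}
In practice the history would come from your metrics store, but the core idea really is this simple: compare the current window to its own recent past and sum up the gap.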
Beyond Incidents: Driving Technical Excellence
The real power of a Technical North Star metric extends far beyond incident management. It shapes architectural decisions, influences feature prioritization, and drives technical innovation.
When every technical decision can be evaluated against its potential impact on the North Star metric, it creates clarity in decision-making. Should we invest in that new caching layer? Well, how will it affect bookings? Is this new feature worth the additional complexity? Let’s A/B test it on bookings.
You can look at the incrementality of these metrics to measure a B variant’s success, which generally translates to direct bottom-line value. For example, if we see that a B variant is up 200 bookings per day, that language translates into bottom-line impact that’s easy for any engineer to understand. Connecting your day-to-day work to impact is very important for staff motivation.
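As a toy example of that language (the numbers are invented): if variant B averages 10,200 bookings per day against variant A’s 10,000, the incrementality is +200 bookings per day – roughly +73,000 bookings per year if the effect holds. Every engineer can do that conversion in their head, which is exactly the point.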
The Human Element
Perhaps the most underappreciated aspect of Technical North Star metrics is their impact on organizational behavior. When everyone from junior engineers to senior architects speaks the same language and measures success by the same yardstick, it creates alignment that no amount of process or documentation can achieve.
This shared understanding breaks down silos between teams. When a front-end engineer and a database administrator can discuss the impact of their work in terms of the same metric, it creates a foundation for meaningful collaboration.
Looking Forward
As our systems grow more complex and our organizations more distributed, the importance of having a clear Technical North Star only increases. The metric must evolve as our products and markets evolve. What worked yesterday might not work tomorrow.
The key is to maintain the balance between stability and adaptability. Your Technical North Star should be stable enough to guide long-term decisions but flexible enough to remain relevant as your business evolves.
The next time you’re evaluating your organization’s technical metrics, ask yourself: Do we have a true Technical North Star? Does it drive behavior at all levels of the organization? Is it serving as a currency for technical decision-making? If not, it might be time to look up and reorient your technical compass.
Remember, the best Technical North Star isn’t just a metric – it’s a shared language that aligns technical excellence with business success. And in today’s complex technical landscape, that alignment is more valuable than ever.
Throughout this series, we’ve explored the concept of paved paths, from understanding the problems they solve to implementing them with practical tools like .NET templates. In this final post, we’ll examine the broader impact of paved paths on development culture and look towards the future of software development.
The Cultural Shift: Embracing Paved Paths
Implementing paved paths is more than just a technical change—it’s a cultural shift within an organisation. Let’s explore how paved paths influence various aspects of development culture:
1. Balancing Standardization and Innovation
Paved paths provide a standardized approach to development, but they’re not about enforcing rigid conformity. As David Heinemeier Hansson, creator of Ruby on Rails, aptly puts it:
“Structure liberates creativity. The right amount of standardization frees developers to focus on solving unique problems.”
Paved paths offer a foundation of best practices and proven patterns, allowing developers to focus their creative energy on solving business problems rather than reinventing the wheel for every new project.
2. Fostering Collaboration and Knowledge Sharing
With paved paths in place, developers across different teams and projects share a common language and set of tools. This commonality facilitates:
Easier code reviews across projects, since everyone follows a similar structure and standards
Simplified onboarding for new team members; you don’t need to maintain as many onboarding docs yourself and can lean on centralized docs
Increased ability for developers to contribute to different projects, because the other projects in the company look a lot like their own
3. Continuous Improvement Culture
Paved paths are not static; they evolve with the organization’s needs and learnings. This aligns well with a culture of continuous improvement. As Jez Humble, co-author of “Continuous Delivery,” states:
“The only constant in software development is change. Your templates should evolve with your understanding.”
Regular reviews and updates to your paved paths can become a focal point for discussing and implementing improvements across your entire development process.
4. Empowering Developers
While paved paths provide a recommended route, they also empower developers to make informed decisions about when to deviate. This balance is crucial, as Gene Kim, author of “The Phoenix Project,” notes:
“The best standardized process is one that enables innovation, not stifles it.”
By providing a solid foundation, paved paths actually give developers more freedom to innovate where it matters most.
Looking to the Future: Paved Paths and Emerging Trends
As we conclude our series, let’s consider how paved paths align with and support emerging trends in software development:
Microservices and Serverless Architectures: Paved paths can greatly simplify the creation and management of microservices or serverless functions. By providing templates and standards for these architectural patterns, organizations can ensure consistency and best practices across a distributed system.
DevOps and CI/CD: Paved paths naturally complement DevOps practices and CI/CD pipelines. They can include standard configurations for build processes, testing frameworks, and deployment strategies, ensuring that DevOps best practices are baked into every project from the start.
Cloud-Native Development: As more organisations move towards cloud-native development, paved paths can incorporate cloud-specific best practices, security configurations, and scalability patterns, primarily from Infrastructure-as-code. This can significantly reduce the learning curve for teams transitioning to cloud environments.
Platform Quality: I see a rise in the use of tools like static code analysis to encourage and educate engineers on internal practices and patterns; these work well with paved paths.
Conclusion: Embracing Paved Paths for Sustainable Development
As we’ve seen throughout this series, paved paths offer a powerful approach to addressing many of the challenges faced in modern software development. From breaking down monoliths to streamlining the creation of new services, paved paths provide a flexible yet standardized foundation for development.
By implementing paved paths, organizations can:
Increase development speed without sacrificing quality
Improve consistency across projects and teams
Facilitate contributions across systems
Empower developers to focus on innovation
Adapt more quickly to new technologies and architectural patterns
However, it’s crucial to remember that paved paths are not a one-time implementation. They require ongoing maintenance, regular reviews, and a commitment to evolution. As Kelsey Hightower, Principal Developer Advocate at Google, reminds us:
“Best practices are not written in stone, but they are etched in experience.”
Your paved paths should grow and change with your organization’s experience and needs.
As you embark on your journey with paved paths, remember that the goal is not to restrict or control, but to enable and empower. By providing a clear, well-supported path forward, you free your teams to do what they do best: solve problems and create innovative solutions.
The future of software development is collaborative, adaptable, and built on a foundation of shared knowledge and best practices. Paved paths offer a way to embrace this future, creating a development environment that is both efficient and innovative. As you move forward, keep exploring, keep learning, and keep paving the way for better software development.
“Good templates are like good habits – they make doing the right thing easy and automatic.” – Scott Hanselman, Principal Program Manager at Microsoft
In our previous post, we introduced the concept of paved paths as a solution to the challenges posed by monolithic architectures and mono repos. Today, we’re going to dive into the technical details of how to implement a key component of paved paths: new project templates. We’ll use .NET as our example, demonstrating how to create custom templates that embody your organization’s best practices and preferred setup.
Why .NET Templates?
.NET templates are an excellent tool for implementing paved paths because they allow you to:
Standardize project structure and initial setup
Embed best practices and common configurations
Quickly bootstrap new services or applications
Ensure consistency across different teams and projects
Getting Started with .NET Templates
The .NET CLI provides a powerful templating engine that we can leverage to create our paved path templates. Let’s walk through the process of creating a custom template.
Step 1: Create a Template Project
First, let’s create a new project that will serve as our template:
dotnet new webapi -n MyCompany.Template.WebApi
This creates a new Web API project that we’ll customize to serve as our template.
Step 2: Customize the Template
Now, let’s make some modifications to this project to align it with our organization’s standards. For example:
Add common NuGet packages
Set up a standard folder structure
Add common middleware or services
Configure logging and monitoring
Here’s an example of how you might modify the Program.cs file:
using MyCompany.Shared.Logging;
using MyCompany.Shared.Monitoring;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
// Add MyCompany standard services
builder.Services.AddMyCompanyLogging();
builder.Services.AddMyCompanyMonitoring();
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
// Use MyCompany standard middleware
app.UseMyCompanyLogging();
app.UseMyCompanyMonitoring();
app.Run();
Step 3: Create Template Configuration
Next, we need to add a special configuration file that tells the .NET CLI how to treat this project as a template. Create a new folder in your project called .template.config, and inside it, create a file called template.json:
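Here’s a minimal example of what that file might look like (the author, identity, and short name are placeholders to adapt to your organization):
{
  "$schema": "http://json.schemastore.org/template",
  "author": "MyCompany Platform Team",
  "classifications": [ "Web", "WebAPI" ],
  "identity": "MyCompany.WebApi.Template",
  "name": "MyCompany Web API",
  "shortName": "mycompany-webapi",
  "sourceName": "MyCompany.Template.WebApi",
  "tags": {
    "language": "C#",
    "type": "project"
  },
  "preferNameDirectory": true
}
The sourceName value is what the templating engine replaces with the name a developer passes via -n, and shortName is what they’ll type after dotnet new.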
This configuration file defines metadata about your template and tells the .NET CLI how to use it.
Step 4: Package the Template
Now that we have our template project set up, we need to package it for distribution. We can do this by creating a NuGet package. Add the following to your .csproj file:
<Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
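<!-- PackageType marks this NuGet package as a template pack for the dotnet new engine. -->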
<PackageType>Template</PackageType>
<PackageVersion>1.0</PackageVersion>
<PackageId>MyCompany.WebApi.Template</PackageId>
<Title>MyCompany Web API Template</Title>
<Authors>Your Name</Authors>
<Description>Web API template for MyCompany projects</Description>
<PackageTags>dotnet-new;templates;mycompany</PackageTags>
<TargetFramework>net6.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<IncludeContentInPack>true</IncludeContentInPack>
<IncludeBuildOutput>false</IncludeBuildOutput>
<ContentTargetFolders>content</ContentTargetFolders>
</PropertyGroup>
<ItemGroup>
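<!-- Pack all project files as template content; skip build artifacts and compile nothing. -->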
<Content Include="**\*" Exclude="**\bin\**;**\obj\**" />
<Compile Remove="**\*" />
</ItemGroup>
</Project>
Step 5: Build and Pack the Template
Now you can build and pack your template:
dotnet pack
This will create a NuGet package in the bin/Debug or bin/Release folder, depending on your build configuration.
Step 6: Install and Use the Template
To use your new template, you first need to publish it to your internal NuGet server, and then you can install it:
dotnet new -i MyCompany.WebApi.Template --nuget-source https://your.internal.nuget
Now you can use your template to create new projects:
dotnet new mycompany-webapi -n MyNewWebApi
Maintaining and Updating Templates
As your organization’s needs evolve, you’ll want to update your templates. Here are some tips for maintaining your templates:
Version your templates and keep a changelog
Regularly review and update dependencies
Collect feedback from developers using the templates
Consider creating multiple templates for different use cases
Conclusion
By creating custom .NET templates, we’ve taken a significant step in implementing paved paths in our organization. These templates encapsulate our best practices, preferred project structure, and common configurations, making it easy for developers to start new projects that align with our standards.
Remember, templates are just one part of a paved path strategy. In future posts, we’ll explore other aspects such as shared libraries, infrastructure as code, and CI/CD pipelines. Stay tuned!
Two decades ago, the software development world witnessed a significant shift. Scrum, a framework within the Agile methodology, was gaining tremendous popularity. This change coincided with a wave of redundancies among traditional project managers. Faced with evolving industry demands, many of these professionals saw an opportunity to reinvent themselves as Scrum Masters.
The true nature of the problem became clear to me when the Project Management Institute (PMI) added an Agile certification, allowing it to contribute points towards one’s overall project management goals.
This certification still exists today, enabling individuals to become “Certified in Agile” through self-study and an online exam. The concept seems utterly foreign to me, especially when I reflect on my experience with the Certified Scrum Master (CSM) course I took with Scrum Alliance years ago. That intensive three-day course was such an eye-opener, fundamentally shifting my mindset. I simply cannot envision anyone truly grasping the core concepts of Agile without face-to-face communication – a principle that, ironically, is a core value in the Agile Manifesto itself.
This transition wasn’t always smooth or successful though. Many former project managers approached Scrum with a mindset still rooted in traditional methodologies. They viewed it as merely a new set of processes to follow rather than a fundamental shift in philosophy and approach.
This misinterpretation led to a superficial adoption of Scrum practices:
Gantt Charts Transformed: The detailed project timelines of Gantt charts were simply repackaged as product backlogs, missing the dynamic and flexible nature of true Agile planning.
Sprint Reviews Misused: Instead of focusing on demonstrating working software and gathering valuable feedback, sprint reviews often devolved into status update meetings reminiscent of traditional project reporting.
Daily Standups Misinterpreted: The essential daily sync-up became a rote status report, losing its intended purpose of team coordination and obstacle identification.
In essence, while the terminology changed, the underlying project management approach remained largely unchanged. This “Scrum-but” approach – “We’re doing Scrum, but…” – became prevalent in many organizations.
This misapplication of Scrum principles highlights a crucial lesson: true agility isn’t achieved by merely adopting a new set of practices. It requires a fundamental shift in mindset, embracing flexibility, continuous improvement, and most importantly, a focus on delivering value to the customer.
As modern software engineers and managers, it’s crucial to reflect on this history. We must ask ourselves: Are we truly embracing the spirit of Agile and Scrum, or are we simply going through the motions? The power of these methodologies lies not in their ceremonies, but in their ability to foster collaboration, adaptability, and customer-centricity.
The evolution of Scrum serves as a reminder that in our rapidly changing industry, it’s not enough to change our processes. We must also transform our thinking, our culture, and our approach to creating software that truly meets the needs of our users.
The Unintended Consequences of Rigid Scrum Implementation
Scrum was originally designed as a flexible, adaptive framework for product development. Its creators envisioned a methodology that would empower teams to respond quickly to change and deliver value efficiently. However, as Scrum gained popularity, a troubling trend emerged. Many organizations began to treat Scrum as a rigid methodology, leading to several significant issues:
Ritual Over Results: Teams became more focused on following Scrum ceremonies to the letter rather than using them as tools to improve productivity and value delivery.
Inflexible Sprint Lengths: The idea of fixed-length sprints, while useful for creating rhythm, was often applied too rigidly. Teams lost the ability to adapt to work that didn’t neatly fit into arbitrary time boxes.
Product Backlog as a Wish List: Product backlogs grew unwieldy, losing the crucial connection between backlog items and real customer needs. They became dumping grounds for ideas rather than curated lists of customer problems and needs.
One-Size-Fits-All Approach: Organizations often applied Scrum uniformly across different types of projects and teams, ignoring the need for adaptation based on context.
Overemphasis on Velocity: Story points and velocity, meant to be team-specific measures of capacity, became weaponized as performance metrics, leading to all sorts of dysfunctional behaviors.
“Never mistake motion for action.” – Ernest Hemingway
The results of this rigid application were often the opposite of what Scrum intended:
Decreased Agility: Ironically, the rigid application of Scrum led to less agile teams. They became bound by their processes rather than empowered by them.
Reduced Innovation: Over-planning and strict adherence to sprints left little room for experimentation. Teams became risk-averse, focusing on meeting sprint goals rather than solving customer problems.
Misalignment with Business Goals: The focus shifted to sprint completion rather than delivering business value, creating a disconnect between Scrum activities and overall product strategy.
Signs Your Team Might Be Falling into the Scrum Trap
If you’re wondering whether your team has fallen into a rigid Scrum implementation, here are some signs to look out for:
Ceremony Fatigue: Team members view Scrum events as time-wasting meetings rather than valuable collaboration opportunities.
Velocity Obsession: There’s a constant push to increase velocity, often at the expense of quality or sustainable pace.
Inflexible Planning: Your team struggles to accommodate urgent work or valuable opportunities because “it’s not in the sprint.”
Stale Backlog: Your product backlog is enormous, with items at the bottom that haven’t been reviewed in months (or years).
Sprint Goal Apathy: Sprint goals, if they exist at all, are vague or uninspiring, and the team doesn’t use them to guide decisions.
Lack of Experimentation: Your team rarely tries new approaches or technologies because there’s “no room in the sprint” for learning or innovation.
Lack of User Feedback: Stories arrive on the backlog curated from some seemingly invisible place in the sky, with little justification for why we’re doing them. After shipping you’re “done”; no post-release measurement of impact happens, only “feature shipped”.
Scrum Master as Process Police: The Scrum Master’s primary function has become enforcing rules rather than coaching and facilitating. Has your Scrum Master said lately, “No, you can’t add that story to the sprint, we’ve already started, you’ll need to wait until next sprint”? Is that statement Agile?
One-Size-Fits-All Sprints: All your teams have the same sprint length and use the same processes, regardless of the nature of their work. They all measure themselves in the same way: story points delivered or sprint completion rate might be everyone’s main measure of success.
Conclusion: Rediscovering Agility in Scrum
The evolution of Scrum from a flexible framework to a rigid methodology in many organizations serves as a cautionary tale for the Agile community. It reminds us that the true spirit of agility lies not in strict adherence to practices, but in the principles that underpin them.
To truly benefit from Scrum, teams and organizations need to:
Focus on Outcomes: Shift the emphasis from following processes to delivering value.
Embrace Flexibility: Adapt Scrum practices to fit the team’s context and the nature of their work.
Foster Innovation: Create space for experimentation and learning within the Scrum framework.
Align with Business Goals: Ensure that Scrum activities directly contribute to overarching product and business strategies.
Continuous Improvement: Regularly reflect on and adapt not just the product, but the process itself.
Remember, Scrum is a framework, not a prescription. Its power lies in its ability to help teams organize and improve their work, not in rigid rule-following. By rediscovering the flexibility and adaptiveness at the heart of Scrum, teams can avoid the pitfalls of overly rigid implementation and truly harness the benefits of agile methodologies.
As we move forward in the ever-evolving landscape of software development, let’s carry forward the lessons learned from Scrum’s journey. Let’s strive to create processes that truly empower our teams, deliver value to our customers, and drive innovation in our products. That, after all, is the true spirit of agility.
In our previous post, we explored the challenges of monolithic architectures and the potential pitfalls of mono repos. We saw how engineers often find themselves trapped in a cycle of adding to existing monoliths, despite the long-term drawbacks. Today, we’re excited to introduce a concept that offers a way out of this dilemma: Paved Paths.
What is a Paved Path?
A paved path is a supported technology stack within an organisation that provides a clear, well-maintained route for developing new features or systems. It’s not about dictating a single way of doing things, but rather about offering a smooth, well-supported path that makes it easier to create new services or applications without sacrificing speed or quality.
Think of it like this: when you’re walking through a park, you’ll often see paved paths alongside open grassy areas. While you’re free to walk anywhere, the paved paths offer a clear, easy-to-follow route that most people naturally gravitate towards. In software development, a paved path serves a similar purpose.
Components of a Paved Path
A well-implemented paved path typically includes:
Shared Libraries: Reusable code components that handle common functionalities like authentication, logging, or database access.
New Project Templates: Pre-configured project structures that set up the basics of a new application or service, complete with best practices baked in.
Infrastructure as Code: Templates for setting up the necessary infrastructure, ensuring consistency across different projects.
CI/CD Pipelines: Pre-configured continuous integration and deployment pipelines that work out of the box with the new project templates.
Monitoring and Observability: Built-in solutions for logging, metrics, and tracing that integrate seamlessly with the organization’s existing tools.
Documentation and Guides: Comprehensive resources that explain how to use the paved path effectively and when it might be appropriate to deviate from it.
Benefits of Paved Paths
Paved paths offer numerous advantages that address the issues we’ve discussed with monoliths and mono repos:
Faster Start-up: Engineers can quickly spin up new services or applications without spending weeks on boilerplate setup.
Consistency: All new projects start with a consistent structure, making it easier for engineers to switch between different services.
Best Practices Built-in: Security, performance, and scalability best practices are incorporated from the start.
Easier Maintenance: With a consistent structure across services, maintenance becomes more straightforward.
Flexibility: While providing a clear default path, paved paths still allow for deviation when necessary, offering the best of both worlds.
Improved Onboarding: New team members can get up to speed quickly by following the paved path.
Striking the Right Balance
It’s important to note that paved paths are not about enforcing a rigid, one-size-fits-all approach. They’re about providing a well-supported default that makes it easy to do the right thing, while still allowing for flexibility when needed.
Paved paths coexist with what we might call “rough paths” – less travelled routes that engineers might choose to explore for various reasons. These rough paths could be new technologies, experimental approaches, or simply different ways of solving problems that don’t quite fit the paved path model.
The beauty of this approach is that it encourages a balance between standardization and innovation:
Engineers are free to venture off the paved path when they believe it’s necessary or beneficial. This openness to exploration prevents the stagnation that can come from overly strict standardization. As engineers explore these rough paths, they gather valuable insights and experiences. Some of these explorations might reveal better ways of doing things or address use cases that the current paved path doesn’t handle well.
The most successful “rough path” explorations often lead to the creation of new paved paths. This evolution ensures that the organization’s supported technology stack remains current and effective.
By allowing and encouraging these explorations, organizations tap into the collective wisdom and creativity of their engineering teams. This bottom-up approach to defining best practices often results in more robust and widely-accepted standards.
As the LinkedIn engineering team learned when they tried to standardize on a single tech stack, too much restriction can stifle innovation and lead to suboptimal solutions. Paved paths strike a balance by offering a smooth road forward without blocking other routes entirely.
This balanced approach creates a dynamic ecosystem where paved paths provide stability and efficiency, while the ability to explore rough paths ensures adaptability and innovation. It’s not about dictating a single way of doing things, but about fostering an environment where best practices can emerge organically and evolve over time.
Conclusion
Paved paths offer a promising solution to the challenges posed by both monolithic architectures and the complexity of mono repos. They provide the speed and ease of development that often draws us to monoliths, while enabling the modularity and scalability that we seek from microservices.
In our next post, we’ll dive deeper into how you can implement paved paths in your organisation, with a special focus on using .NET templates to create a smooth path for your development teams. Stay tuned!
In the ever-evolving landscape of software development, we constantly seek better ways to structure our projects, manage our code, and streamline our development processes. Two approaches that have dominated discussions in recent years are monolithic architectures and mono repositories. In this post, we’ll dive deep into the challenges posed by monoliths and explore why mono repos, despite their initial appeal, may not be the panacea we’re looking for.
The Monolith Dilemma: When Bigger Isn’t Better
Monolithic architectures have been the go-to structure for many projects, especially in their early stages. A monolith is a single, large application where all the code for various features and functionalities resides in one codebase. While this approach can simplify initial development and deployment, it often leads to significant challenges as the project grows.
The Problems with Monoliths
Dev Feedback Slowdown: As the codebase expands, compilation times increase dramatically. What once took seconds can stretch into minutes or even hours, severely impacting developer productivity and morale.
Test Suite Bloat: Large codebases accumulate a vast number of tests. Running the entire test suite becomes a time-consuming process, often delaying deployments and slowing down the development cycle.
Test Flakiness: With a high volume of tests, the likelihood of encountering flaky tests increases. Even if each test has a 99% stability rate, the overall stability of your test suite decreases exponentially with the number of tests: a run is green only if every test passes, so suite stability is 0.99^n. With 179 tests, that’s 0.99^179 ≈ 0.17 – a mere 17%!
Extended Lead Times: The combination of slow compilation, lengthy test runs, and increased deployment complexity leads to extended lead times. This delay between writing code and seeing it in production can be frustrating for developers and stakeholders alike.
Difficult Upgrades: Upgrading components or frameworks in a monolith is a massive undertaking. For instance, upgrading a web framework like React or a backend framework like .NET Core often requires changes across the entire codebase, making it a risky and time-consuming process. “It’s like changing tires on a moving car.” – Jeff Bezos
The Engineer’s Dilemma: To Add or Not to Add?
Picture this: You’re an engineer tasked with implementing a new feature. As you sit at your desk, coffee in hand, you find yourself at a crossroads. The path before you splits into two directions:
Add to the existing monolithic system
Create a new, separate system for the feature
Your mind races through the implications of each choice:
Option 1: Add to the Existing System
“Well,” you think, “the monolith already has everything set up. Authentication? Check. Infrastructure? In place. Deployment pipelines? Running smoothly. CI/CD? Configured and working. Monitoring? All set up.”
You can almost hear the siren call of the monolith: “Just add your feature here. It’ll be quick and easy. You know how everything works already!”
Option 2: Create a New System
As you consider this option, a wave of tasks floods your mind:
“I’ll need to wire up the authentication library.”
“What about infrastructure? That’s going to take time; I’ll need to create new Terraform scripts and think about capacity and resources.”
“CI/CD for a new system? More work.”
“And let’s not forget about monitoring and alerts. Ugh.”
Your product manager’s voice echoes in your head: “Remember, we’ve got to deliver this next sprint. We need to move fast!”
The Decision
As you weigh your options, the choice seems clear. Adding to the existing system will be faster, easier, and will let you meet those tight deadlines. Creating a new system feels like it would slow you down, potentially for weeks.
“I’ll just add it to the monolith,” you decide. “It’s not ideal, but it’s the most practical solution right now.”
And so, another feature joins the monolith. It’s a decision made countless times by countless engineers, each one logical in the moment, each one contributing to the growing complexity and challenges of the monolithic system.
This cycle repeats, sprint after sprint, feature after feature. The monolith grows ever larger, compilation times creep up, test suites expand, and the very problems that tempt us to create new systems become more pronounced.
It’s a vicious cycle, one that leaves many engineering teams wondering: Is there a better way? How can we break free from this pattern and create systems that are both efficient to develop and maintainable in the long run?
The Mono Repo Mirage: A Solution or Another Problem?
In recent years, mono repositories (mono repos) have gained popularity as a potential solution to some of the challenges posed by monoliths. A mono repo is a version control repository that contains multiple projects or applications. The idea is to maintain modularity while keeping all code in one place.
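A typical layout might look something like this (a hypothetical structure):
mono-repo/
  services/
    payments/
    search/
    bookings/
  libraries/
    auth/
    logging/
  tools/
    build/
Each project lives in its own folder, but they all share a single version history, one set of tooling, and often a single CI pipeline.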
The Promise of Mono Repos
“Every solution breeds new problems.” — Arthur Bloch (Murphy’s Law)
Mono repos offer several potential benefits:
Unified codebase: All projects are in one place, making it easier to share code and maintain consistency.
Simplified dependency management: Dependencies can be shared and updated across projects more easily.
Atomic commits: Changes across multiple projects can be committed together, ensuring consistency.
Easier refactoring: Mass updating code between projects becomes simpler when everything is in one repository.
The Reality Check
While mono repos sound promising in theory, the reality can be quite different, especially for companies that aren’t tech giants like Google (which famously uses a massive mono repo).
Let’s consider a real-world perspective from a developer at Uber, a company known for its use of mono repos:
“It is horrible – everyone hates it. It does not work well with IDEs – feels like going back 20 years with IDE support. Dependency management is a nightmare – which is supposed to be the big selling point. Release tooling sucks – I see thousands of commits between my releases.”
There are several key issues with mono repos:
Poor IDE Support: Many modern IDEs struggle with the size and complexity of mono repos, leading to a degraded development experience.
Dependency Management Challenges: Contrary to the promise of simplified dependency management, large mono repos can make the process more complex: big updates carry large amounts of change with them, and in a high-frequency-of-change repo those changes compound against each other.
Release Complications: With thousands of commits across various projects, identifying and managing releases becomes a significant challenge.
Tooling Requirements: Effective use of mono repos often requires substantial investment in custom tooling. As our Uber developer notes, “Monorepo might work for Google who has an army to build tooling – for everyone else, stay far away from it.”
It’s not actually easier to refactor: with the volume of change in the repo, you can’t do big refactors; you hit too many merge conflicts and get kicked out of the merge train/queue.
The Search Continues
While mono repos attempt to address some of the issues posed by monolithic architectures, they introduce their own set of challenges. For most organisations, mono repos may not be the silver bullet they appear to be at first glance.
So, where does this leave us? How can we address the challenges of monoliths without falling into the pit of mono repo complexity? Is there a middle ground that can provide the benefits of modular development without the drawbacks we’ve discussed?
In our next post, we’ll explore these questions and introduce the concept of “paved paths” – a promising approach that aims to combine the best of both worlds while avoiding their pitfalls. Stay tuned as we continue our journey from monoliths to more maintainable and scalable architectures!
In the fast-paced world of software engineering, the quest for code quality is never-ending. As organisations scale and codebases grow, maintaining consistency and preventing bugs becomes increasingly challenging. Enter linting: the seemingly perfect solution to all our code quality woes.
It’s a familiar scene in engineering teams across the globe. A passionate developer, let’s call them our “hero engineer,” identifies the root of all evil: inconsistent, potentially buggy code repeated throughout the codebase. Their solution? Implement a series of good practice linting rules to revolutionize the way people code. With the best of intentions, they charge forth, determined to elevate the entire team’s coding standards.
The promise is enticing: with these new linting rules in place, surely the code will magically improve. After all, if engineers can’t merge without passing these checks, they’ll have to write better code, right?
Wrong.
II. The Problem: When Linting Becomes Policing
In the complex ecosystem of a modern engineering organization, introducing new rules without proper context can lead to unexpected – and often counterproductive – results.
A. Engineers’ Reaction to Unexplained Linting Errors
Picture this: An engineer, deep in the flow of solving a critical problem, suddenly encounters a barrage of linting errors in their IDE or CI pipeline. These errors, appearing out of nowhere, seem to have no relation to the functionality they’re implementing. What’s their instinctive reaction?
More often than not, the goal shifts from “write good code” to “make the errors go away.” This usually involves finding the quickest path to silence these annoying new alerts that have suddenly appeared in their workflow.
Let me share a real-world example I’ve encountered:
In a production repository I once worked on, I witnessed this scenario unfold. Our well-intentioned “hero engineer” had implemented strict linting rules overnight. The next day, pull requests were failing left and right due to linting errors. What happened next was eye-opening.
Instead of embracing these new rules, engineers started adding // eslint-disable-next-line comments liberally throughout the codebase. Others went a step further, adding /* eslint-disable */ at the top of entire files. The very tools meant to improve code quality were being systematically circumvented.
This behaviour isn’t born out of malice or laziness. It’s a natural response to a perceived obstacle in the development process, especially when the benefits of these new rules aren’t clear or immediate.
And this isn’t a totally fictional tale; I’ve witnessed people much like our hero try exactly this on production codebases.
B. The Temptation to Force Rules
Faced with this resistance, our hero engineer might be tempted to double down. “If people won’t follow the rules voluntarily,” they think, “we’ll have to force them.” This usually involves:
Putting codeowners on lint configuration files
Implementing additional scripts to check for inline linting disables
Blocking merges for any code that doesn’t pass linting
“If all you have is a hammer, everything looks like a nail.” — Abraham Maslow
Suddenly, our well-meaning engineer finds themselves in an ongoing battle with their own colleagues. The very team they sought to help now views them as an adversary, the enforcer of arbitrary and frustrating rules.
C. The “Military vs. Police” Analogy
This situation reminds me of a quote from the 2004 series Battlestar Galactica. Admiral Adama says:
“There’s a reason you separate military and the police. One fights the enemies of the state, the other serves and protects the people. When the military becomes both, then the enemies of the state inevitably become the people.”
While we’re not dealing with matters of state security, the principle holds true in software engineering. When we turn our tools of improvement into our “military might” against our own team members, we risk turning them into adversaries rather than collaborators.
In a modern engineering organization, where collaboration and shared ownership of code quality are crucial, this adversarial approach can be toxic. It creates an “us vs. them” mentality, where developers feel policed rather than supported in their efforts to improve.
The result? A team that’s more focused on appeasing the linter than on writing genuinely good, maintainable code. The very tool intended to improve code quality becomes a bureaucratic hurdle to be overcome, rather than a valuable aid in the development process.
So, if forcing linting rules upon the team isn’t the answer, what is? How can we harness the power of linting tools without creating a police state in our codebase?
III. The Alternative: Linting as a Teaching Tool
“You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.” – Buckminster Fuller
So, if enforcing linting rules like a code police force isn’t effective, what’s the alternative? The answer lies in a fundamental shift of perspective: from policing to teaching.
The Importance of Education and Buy-in
In modern engineering organizations, where autonomy and expertise are valued, dictating rules without explanation is rarely effective. Instead, we need to focus on education and securing buy-in from the entire team.
Remember: if your team doesn’t believe in the linting rules, they won’t follow them — at least not in the spirit they were intended.
Starting a Dialogue with Your Team
The key to successful implementation of linting rules is open communication. Here’s how to approach it:
Agree on the reasons for linting: Is it for bug prevention (quality) or increasing readability (velocity)? Make sure everyone understands and agrees with the goals.
Collaborative rule-setting: Involve the team in deciding which rules to implement. This isn’t just about democracy — it’s about leveraging the collective expertise of your engineers.
Use tools like Code Coach: For teams working with external contributors or in code review scenarios, tools like Code Coach can help enforce agreed-upon standards without feeling heavy-handed.
Implementing Linting Effectively
Once you have buy-in, consider these strategies for smooth implementation:
Plan for legacy code: Create a plan to clean up existing code gradually. Automation can be your friend here — look for existing code fixes or write your own if needed.
Fail fast: Implement linting warnings as close to the development process as possible. IDE warnings are far less frustrating than CI pipeline failures. (A concrete example follows this list.)
Document and explain: Ensure that every linting rule has a clear explanation and, if possible, a link to further documentation.
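For .NET teams, one concrete way to fail fast – and to run a trial period for a new rule – is with .editorconfig severities, which both IDEs and the compiler respect. A hypothetical example (the rule ID MC0001 is made up):
[*.cs]
# New rule on trial: surface it as a warning in the IDE and build first...
dotnet_diagnostic.MC0001.severity = warning
# ...then promote it to an error once the team has had time to adjust:
# dotnet_diagnostic.MC0001.severity = error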
IV. Best Practices for Educational Linting
Now that we’ve shifted our mindset from policing to teaching, let’s explore some best practices that embody this educational approach.
Try to Explain the ‘Why’ Behind Rules
Unhelpful errors like “don’t do this” teach obedience, not understanding. We’re dealing with knowledge workers, not factory line operators. Put the ‘why’ in your errors, and if your linter supports it, link out to documentation.
For example, in C#, Roslyn analyzer metadata can embed links to documentation directly in IDE error messages, providing immediate context and explanation.
BYO (Build Your Own) Rules When Necessary
Most of the time, the best way to start is by agreeing on standards within your team. Document these in an easily accessible format — markdown files in your repo can work well.
From this document, look for existing rules in the market that match your standards. If you can’t find any, don’t be afraid to write your own. Most language-specific rules are fairly trivial to write, usually requiring only 10-20 lines of code each.
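As a sketch of how small such a rule can be, here’s a hypothetical Roslyn analyzer that flags direct use of DateTime.Now. The rule ID (the same made-up MC0001 from earlier), messages, and documentation URL are all invented for illustration:
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class NoDateTimeNowAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "MC0001",
        title: "Avoid DateTime.Now",
        messageFormat: "Use the injected clock instead of DateTime.Now so the code stays testable",
        category: "MyCompany.Testability",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true,
        helpLinkUri: "https://docs.mycompany.example/standards/time"); // the 'why', one click away
    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(Rule);
    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSyntaxNodeAction(Analyze, SyntaxKind.SimpleMemberAccessExpression);
    }
    private static void Analyze(SyntaxNodeAnalysisContext context)
    {
        // Naive textual match for brevity; a production rule would bind symbols via the semantic model.
        var access = (MemberAccessExpressionSyntax)context.Node;
        if (access.ToString() == "DateTime.Now" || access.ToString() == "System.DateTime.Now")
            context.ReportDiagnostic(Diagnostic.Create(Rule, access.GetLocation()));
    }
}
The descriptor’s helpLinkUri is what surfaces the ‘why’ directly in the IDE error, as discussed above.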
You can read in this post about how we approached this at Agoda many years ago.
Please, Please Use Auto-formatters
If you’ve established style standards, don’t make people implement them manually. Every major programming language has formatters that work with IDEs. Make use of them!
Here’s a story that illustrates the importance of auto-formatters:
I once had an engineer who was the first external contributor to a particular Scala backend in our company. He sent a PR with about 500 lines of code changed or added. The review came back with 140 comments — about one comment for every three lines. It was escalated to the Director/VP level because it seemed so egregious.
When we dug into it, we realized about 80% of the comments were purely about style: “You need a line feed before this brace,” “This brace needs one more tab in front of it,” and so on.
After this realization, we de-escalated the situation.
But here’s where the story takes a positive turn: my engineers did a follow-up PR to add Scala style configurations to the repo. They went through all 140 comments and reverse-engineered a Scala style config that suited the team’s preferences. They even held a knowledge-sharing session afterward.
That right there is good culture. Instead of assuming the contributor was careless or incompetent, the team recognized a knowledge gap around the tooling and filled it, then shared that knowledge.
Automate Fixing Where Possible
Most linters are based on AST (Abstract Syntax Tree) queries, which means you can often apply mutations to the AST to automatically fix issues. This makes it even easier for developers to comply with standards.
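As a minimal illustration of that idea, here’s a hypothetical snippet that uses Roslyn to query a C# syntax tree and mutate it – in this case, stripping redundant parentheses:
using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
class AutoFixSketch
{
    static void Main()
    {
        // Parse source into a syntax tree: the same AST a linter queries.
        var root = CSharpSyntaxTree.ParseText("class C { int F() { return (1 + 2); } }").GetRoot();
        // Query: find a redundant parenthesized expression.
        var node = root.DescendantNodes().OfType<ParenthesizedExpressionSyntax>().First();
        // Mutate: replace the node with its inner expression – this is the auto-fix.
        var fixedRoot = root.ReplaceNode(node, node.Expression.WithTriviaFrom(node));
        Console.WriteLine(fixedRoot.ToFullString()); // class C { int F() { return 1 + 2; } }
    }
}
The code-fix suggestions you see in IDEs are built on exactly this model: an analyzer finds the offending node, and a fix provider supplies the replacement.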
Here’s another story that illustrates this principle:
Whenever a new version of C# would come out, Microsoft would often include code fixes in Visual Studio to convert old language patterns to new ones. This became my personal way of learning new language features. My IDE would suggest, “This is a code smell. Let’s fix it,” and then I’d apply the auto-formatting to see a new, often more concise or readable way of doing things.
By automating fixes interactively via the IDE while engineers are coding, you’re not just enforcing standards — you’re actively teaching developers new and improved coding patterns.
Remember, the goal isn’t to force developers into rigid compliance. It’s to create an environment where writing high-quality, consistent code is the path of least resistance. By focusing on education, collaboration, and automation, you can transform linting from a policing tool into a teacher that scales, elevating the skills of your entire engineering organization.
V. Creating a Culture of Continuous Improvement
In modern engineering organizations, the way we approach code quality can significantly impact team dynamics, productivity, and overall job satisfaction.
Don’t Assume Malice or Incompetence
When faced with code that doesn’t meet our standards, it’s easy to jump to conclusions about the developer’s skills or intentions. However, this mindset is rarely productive and often inaccurate.
Remember: In almost all cases, developers aren’t writing “bad” code out of laziness or incompetence. They usually haven’t been shown a better way yet. This principle applies not just to linting, but to all aspects of tooling and best practices.
Foster an Environment of Knowledge Sharing. Creating a culture of continuous improvement means making knowledge sharing a core value of your team. Here are some ways to encourage this:
Regular code review workshops: These can be opportunities to discuss common issues found in reviews and share solutions.
Linting rule of the week: Highlight a specific linting rule each week, explaining its purpose and demonstrating good practices.
Pair programming sessions: Encourage developers to work together, especially when implementing new patterns or working with unfamiliar parts of the codebase.
Tech talks or brown bag sessions: Give team members a platform to share their knowledge about tools, techniques, or interesting problems they’ve solved.
Encourage Feedback and Iteration on Linting Rules. Remember, your linting rules shouldn’t be set in stone. As your team grows and your codebase evolves, your needs will change. Create a process for regularly reviewing and updating your linting rules. This might include:
Quarterly linting reviews: Discuss which rules have been helpful, which have been pain points, and what new rules might be beneficial.
An easy process for proposing changes: Make it simple for any team member to suggest modifications to the linting rules.
Trial periods for new rules: When introducing a new rule, consider having a “warning only” period before enforcing it, allowing the team to adjust and provide feedback (a sketch of how this can look follows this list).
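As one concrete, .NET-flavoured illustration of such a trial period, analyzer severities can be set per rule in .editorconfig, so promoting a rule from “warning only” to enforced is a one-line change; the rule id here is the hypothetical one used earlier.

```ini
# .editorconfig - trial period: surface the rule without breaking the build
dotnet_diagnostic.TEAM0001.severity = warning

# After the feedback period, promote it to a build error:
# dotnet_diagnostic.TEAM0001.severity = error
```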
VI. Conclusion: Embracing Linting as a Teaching Tool
A. The Teacher Approach vs. The Police Approach
We started by examining the common pitfall of treating linting rules as a policing tool. We saw how this approach often leads to resistance, workarounds, and an adversarial relationship between developers and the very tools meant to help them.
In contrast, we’ve explored the benefits of treating linting as a teaching tool. This approach focuses on education, collaboration, and continuous improvement. By explaining the ‘why’ behind rules, involving the team in rule-setting, and fostering a culture of knowledge sharing, we can transform linting from a source of frustration into a catalyst for growth.
B. Long-term Benefits of Educational Linting
The benefits of this educational approach extend far beyond just cleaner code:
Improved Developer Skills: By understanding the reasoning behind linting rules, developers become more skilled and conscientious coders.
Increased Team Cohesion: Collaborative rule-setting and knowledge sharing foster a sense of shared ownership and team unity.
Faster Onboarding: Clear, well-explained coding standards make it easier for new team members to get up to speed quickly.
Adaptability: Regular review and iteration of linting rules ensure that your practices evolve with your team and technology.
Positive Engineering Culture: An approach based on teaching and collaboration contributes to a more positive, growth-oriented engineering culture.
C. Call to Action: Evaluate and Improve Your Linting Culture
As we conclude, I encourage you to take a step back and evaluate your team’s current approach to linting:
Are your linting rules serving as a teacher or a police officer?
Do your developers see linting as a helpful tool or a frustrating obstacle?
Is there open dialogue about coding standards and best practices?
If you find that your current approach leans more towards policing than teaching, consider implementing some of the strategies we’ve discussed. Start small – perhaps by initiating a team discussion about one or two linting rules. Remember, the goal is not perfection, but continuous improvement and learning.
By shifting towards an educational approach to linting, you’re not just improving your code – you’re investing in your team’s growth and creating a more positive, collaborative engineering culture.
VII. Additional Resources
To help you on your journey towards more effective, educational linting, here are some additional resources you might find useful:
Further Reading on Effective Code Review and Team Collaboration
“Best Kept Secrets of Peer Code Review” by Jason Cohen
A comprehensive guide to effective code review practices.
“The Art of Readable Code” by Dustin Boswell and Trevor Foucher
Offers insights into writing clear, maintainable code.
“Clean Code: A Handbook of Agile Software Craftsmanship” by Robert C. Martin
A classic book on writing quality code that’s easy to understand and maintain.
“The Pragmatic Programmer: Your Journey to Mastery” by Andrew Hunt and David Thomas
Provides practical advice for improving as a programmer, including tips on code quality and team collaboration.
Remember, the journey to better code quality is ongoing. Stay curious, keep learning, and always be open to new ideas and approaches. Happy coding!
Welcome back to our series on managing self-managing teams! 👋 We’ve reached the final instalment, where we’ll dive into the crucial skill of crafting Key Performance Indicators (KPIs) that truly work for your team. Let’s turn those dull metrics into powerful tools for success!
When Good Metrics Go Bad
Ever presented what you thought was a perfect set of KPIs, only to be met with blank stares or confused looks? You’re not alone. Many of us have faced the dreaded “Why are we measuring this again?” moment. So, how do we create KPIs that inspire “Aha!” moments instead of “Uh… what?”
The Essential Elements of Effective KPIs
Before we start, let’s review the key properties our KPIs should have:
Easily Measurable: No complex calculations or long-running batch jobs required.
Team-Focused: Avoid singling out individuals.
Business-Aligned: Clearly linked to company goals.
Actionable: Provides clear direction for improvement.
Motivating: Inspires the team to perform better.
KPIs to Avoid
Just as important as knowing what to measure is knowing what not to measure. Here are some KPIs to steer clear of:
Lines of Code: Quantity doesn’t equal quality.
Number of Bugs Fixed: Could encourage writing buggy code just to fix it.
Hours Worked: We’re after results, not time spent.
Story Points: Often arbitrary and not indicative of real progress.
Real-World KPI Success: The Booking Completion Saga
Let me share a story from a company I once worked at. We implemented a KPI around booking completion that became a game-changer. Here’s what made it so effective:
Direct Business Impact: We measured “Incremental Bookings per Day.” This directly showed teams how much they were contributing to the company’s bottom line.
Instant Feedback: The real magic was in the immediacy. As soon as an A/B test was turned on, the numbers started ticking. Our experimentation system was linked to a real-time Kafka feed from the booking website.
Visible Results: We had TVs on office walls displaying dashboards of running experiments. This visibility created a buzz of excitement.
Celebration of Wins: When an experiment showed a significant improvement, the Product Owner would take the team out for drinks on the day the experiment run finished. It wasn’t uncommon to see teams celebrating their wins at the local bar area in the evenings with a bottle of something and shots on the table.
The excitement was so palpable that one developer even created a Slack bot in his spare time to check experiment results during dinner! He wasn’t going to wait until the next day in the office to find out what users thought of his new feature.
This KPI worked because it connected directly to business impact and provided instant, visible feedback. It almost gamified the process for the engineers, making it thrilling to see in real time how users responded to new features. The high volume of bookings meant meaningful results appeared quickly, sometimes within minutes.
The result? A highly motivated team, numerous significant wins, and a culture of continuous improvement and celebration.
Aligning Team Metrics with Business Goals
Your KPIs should create a clear line from daily team activities to high-level business objectives. For example:
Business Goal: Increase market share
Team KPI: “Feature Adoption Rate” (How quickly users embrace new features)
Daily Activity: Developing intuitive UI and smooth user on-boarding
Regular KPI Reviews
KPIs aren’t set-and-forget metrics. Schedule regular review sessions with your team to ensure your KPIs remain relevant and effective. Make these sessions collaborative and open to change.
The Ethics of KPIs
Remember these important principles:
Never use KPIs as weapons against your team. Using KPIs punitively creates a culture of fear and discourages risk-taking and innovation. Example: If a team’s “Time to Value” KPI is lagging, don’t use it to criticise or penalise the team. Instead, use it as a starting point for a constructive discussion about process improvements or resource needs.
Prioritise learning and improvement over hitting arbitrary numbers. Focusing solely on numbers can lead to short-term thinking and missed opportunities for meaningful growth. Example: If your “Feature Adoption Rate” isn’t meeting targets, don’t push features that aren’t ready. Instead, dig into why adoption is low. Are you building the right features? Is user education lacking? This approach leads to better products and sustained improvement.
Celebrate the intent and progress behind the metrics, not just the numbers themselves. This approach encourages a growth mindset and values effort and learning, which are crucial for long-term success. Example: Even if a new feature doesn’t immediately boost your “Enthusiastic User Ratio”, celebrate the team’s efforts in user research, innovative design, or technical challenges overcome. This keeps the team motivated and focused on continuous improvement.
Regularly review and adjust KPIs to ensure they remain relevant. As your product and market evolve, yesterday’s crucial metric might become irrelevant or even counterproductive. Example: If your product has matured, you might shift focus from a “New User Acquisition Rate” KPI to a “User Retention Rate” KPI, reflecting the changing priorities of your business.
By adhering to these principles, you create an environment where KPIs drive positive behaviour, foster learning, and contribute to both team satisfaction and business success. Remember, the goal of KPIs is to improve performance and guide decision-making, not to create pressure or assign blame.
Wrapping Up: The True Value of KPIs
The real power of KPIs lies not in the numbers, but in the conversations they spark, the behaviours they encourage, and the focus they provide. When done right, KPIs serve as a compass, guiding your team through the complex landscape of product development.
Craft KPIs that inspire, illuminate, and drive your team towards excellence. And remember, in high-performing teams, the best KPIs often become obsolete because the team internalises the principles behind them.
What’s the most effective KPI you’ve used? Or the least useful? Share your experiences in the comments below!
P.S. If this post helped you rethink your approach to KPIs, don’t hesitate to share it with your network. Let’s spread the word about better performance indicators!