Finishing Strong: Completing Your Monolith Split

In our previous posts, we discussed identifying business domains in your monolith, planning the split, and strategies for execution. Now, let’s focus on the crucial final phase: finishing the split and ensuring long-term success.

Maintaining Momentum

As you progress through your monolith splitting journey, it’s essential to keep the momentum going.

Bi-weekly check-ins, or at least a regular cadence where you track progress to completion, help coordinate the effort, especially if you have many teams working on it. Use these meetings to:

  1. Track progress towards completion
  2. Share wins and learnings across teams
  3. Identify and address blockers quickly
  4. Maintain visibility of the project at a management level

These regular touchpoints help ensure that the split remains a priority and doesn’t get sidelined by other initiatives.

Have a Plan to Finish

One of the most critical aspects of a successful monolith split is having a clear plan to finish; without one, the split tends to stall short of the line.

Keep a timeline in your bi-weekly check-ins and update it as you go, so everyone has their eyes on the finish line.

This timeline should:

  1. Be realistic based on your progress so far
  2. Include major milestones and dependencies
  3. Be visible to all stakeholders

Remember, if you don’t finish, you’ll start another migration before you’ve finished this one and end up with a multi-generation codebase, which will explode cognitive load and lead to escaped bugs, war rooms, and production incidents.

Handling the Long Tail

As you approach the end of your split, you’ll likely encounter a long tail of less-frequently used features and challenging components. Keep on top of them; it’ll be hard, but it’s worth it in the end.

Celebrate the success at the end too, and mark the big milestones along the way; it means a lot to the people who worked tirelessly on the legacy code.

Conclusion

Completing a monolith split is a significant achievement that requires persistence, strategic thinking, and a clear plan. By maintaining momentum through regular check-ins, having a solid plan to finish, and consistently measuring your progress and impact, you can successfully navigate the challenges of this complex process.

Remember, the goal isn’t just to split your monolith—it’s to improve your system’s overall health, development velocity, and maintainability. Keep this end goal in mind as you make decisions throughout the process.

As you finish your split, take time to celebrate your achievement and reflect on the learnings. These insights will be invaluable for future architectural decisions and potential migrations.

Thank you for following this series on monolith splitting. We hope these insights help you navigate your own journey from monolith to microservices. Good luck with your splitting efforts!

Strategies for Successful Monolith Splitting

In our previous post, we explored how to identify business domains in your monolith and create a plan for splitting. Now, let’s dive into the strategies for executing this plan effectively. We’ll cover modularization techniques, handling ongoing development during the transition, and measuring your progress.

If you are in the early stages of the chart, you can probably look into modularization; if you are towards the right-hand side (like we were), you will need to take more drastic action.

If you are on the right-hand side, then your monolith is at the point where you need to stop writing code in it NOW.

There are two things to consider:

  • For new domains, or significant new features in existing domains, start them outside the monolith straight away
  • For existing domains, build a new system for each of them and move the code out

Once your code is in a new system, you get all the benefits on that code straight away. You aren’t waiting for an entire system to migrate before you see results in your velocity. This is why we say to start with the high-volume change areas and domains first.

How do you stop writing code there “now”? Apply the open-closed principle at the system level:

  1. Open for extension: Extend functionality by consuming events and calling APIs from new systems
  2. Closed for modification: Limit changes to the monolith, aim to get to the point where it’s only crucial bug fixes

This pattern encourages you to move work to the new, high-velocity systems.

Modularization: The First Step for Those on the Left of the Chart

Before fully separating your monolith into distinct services, it’s often beneficial to start with modularization within the existing system: drawing clear module boundaries that can later become service boundaries. This can be particularly effective for younger monoliths.

Modularization is a good strategy when:

  • Your monolith is relatively young and manageable
  • You want to gradually improve the system’s architecture without a complete overhaul
  • You need to maintain the existing system while preparing for future splits

However, be wary of common pitfalls in this process:

  • Avoid over-refactoring; focus on creating clear boundaries between modules
  • Ensure your modularization efforts align with your identified business domains

For ancient monoliths with extremely slow velocity, a more drastic “lift and shift” approach into a new system is recommended.

Integrating New Systems with the Monolith, for Those to the Right

When new requirements come in, especially for new domains, start implementing them in new systems immediately. This approach helps prevent your monolith from growing further while you’re trying to split it.

Integrating new systems with your monolith requires these considerations:

  1. Add events for everything that happens in your monolith, especially around data or state changes
  2. Listen to these events from new systems
  3. When new systems need to call back to the monolith, use the monolith’s APIs

This event-driven approach allows for loose coupling between your old and new systems, facilitating a smoother transition.
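
To make this concrete, here is a minimal sketch of the pattern in TypeScript, assuming a Kafka-style event bus via the kafkajs client; the topic name, service names, and URLs are illustrative, not a prescribed setup (the producer side would live in the monolith, in whatever language it’s written in).

```typescript
// Sketch only: the monolith publishes a domain event on every state change;
// a new system consumes it and calls back via the monolith's API when needed.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "shipping-service", brokers: ["kafka:9092"] });

// Monolith side: emit an event whenever an order changes state.
async function publishOrderUpdated(orderId: string, status: string): Promise<void> {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "monolith.order-updated",
    messages: [{ key: orderId, value: JSON.stringify({ orderId, status }) }],
  });
  await producer.disconnect();
}

// New-system side: extend behaviour by consuming the event.
async function main(): Promise<void> {
  const consumer = kafka.consumer({ groupId: "shipping-service" });
  await consumer.connect();
  await consumer.subscribe({ topics: ["monolith.order-updated"] });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const { orderId, status } = JSON.parse(message.value!.toString());
      if (status === "paid") {
        // Data the monolith still owns is fetched via its API, never its DB.
        const res = await fetch(`http://monolith.internal/api/orders/${orderId}`);
        await scheduleShipment(await res.json());
      }
    },
  });
}

declare function scheduleShipment(order: unknown): Promise<void>; // new-system logic, stubbed
main();
```

In a real monolith you would hold one long-lived producer rather than connecting per publish, but the shape is the point: new behaviour lives outside, and the monolith’s database stays private behind its events and APIs.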

Existing Domains: The Copy-Paste Approach for Those to the Right

If your monolith is in particularly bad shape, sometimes the best approach is the simplest: build a new system, copy-paste the code across, and route to it from the L7 router as step one. Don’t get bogged down trying to improve everything right away. Focus on basic linting and formatting, but avoid major refactoring or upgrades at this stage. The goal is to get the code into the new system first, then improve it incrementally.

However, this approach comes with its own set of challenges. Here are some pitfalls to watch out for:

Resist the urge to upgrade everything: A common mistake is trying to upgrade frameworks or libraries during the split. For example, one team, 20% into their split, decided to upgrade React from version 16 to 18 and move all tests from Enzyme to React Testing Library in the new system. This meant that for the remaining 80% of the code, they not only had to move it but also refactor tests and deal with breaking React changes. They ended up reverting to React 16 and keeping Enzyme until further into the migration.

Remember: the sooner your code gets into the new system, the sooner you get the velocity benefits.

Don’t ignore critical issues: While the “just copy-paste” approach can be efficient, it’s not an excuse to ignore important issues. In one case, a team following this advice submitted a merge request that contained a privilege escalation security bug, which was fortunately caught in code review. When you encounter critical issues like security vulnerabilities, fix them immediately – don’t wait.

Balance speed with improvements: It’s okay to make some improvements as you go. Simple linting fixes that can be auto-applied by your IDE or refactoring blocking calls into proper async/await patterns are worth the effort. It’s fine to spend a few extra hours on a multi-day job to make things a bit nicer, as long as it doesn’t significantly delay your migration.
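
As a sketch of the kind of cheap, safe improvement we mean, assuming a Node/TypeScript codebase (the config file is illustrative): swapping a blocking call for its promise-based equivalent while the code is being moved anyway.

```typescript
import { readFileSync } from "fs";
import { readFile } from "fs/promises";

type Config = Record<string, unknown>;

// Before: blocks the event loop for the duration of the disk read.
function loadConfigSync(): Config {
  return JSON.parse(readFileSync("config.json", "utf8"));
}

// After: yields the event loop; callers simply await the result.
async function loadConfig(): Promise<Config> {
  return JSON.parse(await readFile("config.json", "utf8"));
}
```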

The key is to find the right balance. Move quickly, but don’t sacrifice the integrity of your system. Make improvements where they’re easy and impactful, but avoid getting sidetracked by major upgrades or refactors until the bulk of the migration is complete.

Measuring Progress and Impact, Part 1: Velocity

Your goal is to have business impact. To start with, impact comes from the velocity game, so that’s where our measurements start.

Number of MRs on new vs old systems: Initially, focus on getting as many engineers onto the new (high-velocity) systems as possible. Compare the number of MRs on old vs new over time and monitor the change to make sure you are having the intended impact here first.
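
If you happen to be on GitLab, a rough sketch of pulling these counts from its merge requests API might look like this; the project IDs, date, and naive pagination are illustrative assumptions.

```typescript
// Rough sketch: merged-MR counts on the old vs new repos via the GitLab API.
const GITLAB = "https://gitlab.example.com/api/v4";

async function mergedMrCount(projectId: number, since: string): Promise<number> {
  const res = await fetch(
    `${GITLAB}/projects/${projectId}/merge_requests?state=merged&created_after=${since}&per_page=100`,
    { headers: { "PRIVATE-TOKEN": process.env.GITLAB_TOKEN ?? "" } }
  );
  const mrs = (await res.json()) as unknown[];
  return mrs.length; // naive: a real version would follow the pagination headers
}

async function main(): Promise<void> {
  const since = "2024-01-01T00:00:00Z";
  const [oldCount, newCount] = await Promise.all([
    mergedMrCount(42, since), // the monolith
    mergedMrCount(314, since), // a new system
  ]);
  const share = (newCount / (oldCount + newCount)) * 100;
  console.log(`monolith: ${oldCount}, new: ${newCount}, new-system share: ${share.toFixed(0)}%`);
}

main();
```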

Overall MR growth: If the total number of MRs across all systems is growing significantly, it might indicate incorrect splitting or that incremental work is dragging on.

Work tracking across repositories: Ask engineers to use the same JIRA ID (or equivalent) for related work across repositories, in the branch name or MR title, so you can track units of work that span both old and new systems.

Velocity metrics on old vs new: Don’t assume your new systems will always be better; compare old vs new on your velocity metrics and make sure you are actually seeing the difference.

Now, when you hit critical mass on the above (for us, we called it at about 80%), you will need to shift. For the long tail there will be less ROI on velocity; it becomes a support game, and you need to face it differently.

Measuring Progress and Impact, Part 2: Traffic

At this point it’s best to look at traffic. Moving high-volume pages and endpoints should, in theory, reduce the impact when there’s an issue with the legacy system, thereby reducing support load. This might not be true for your systems (you may have valuable endpoints with low traffic), so work out the best measure for you.

Traffic distribution: Look per page or per endpoint to see where the biggest piece of the pie is.

Low traffic: Look per page or per endpoint for low traffic; this may lead you to features you can deprecate.
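
One low-tech way to get both views is to rank endpoints straight from an access log. A sketch, assuming a common-log-format file at an illustrative path:

```typescript
import { readFileSync } from "fs";

// Count requests per path; in common log format the request line is
// quoted as "GET /path HTTP/1.1", so the path is the seventh token.
const counts = new Map<string, number>();
for (const line of readFileSync("access.log", "utf8").split("\n")) {
  const path = line.split(" ")[6]?.split("?")[0];
  if (path) counts.set(path, (counts.get(path) ?? 0) + 1);
}

const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
console.log("highest traffic (migrate these first):", ranked.slice(0, 10));
console.log("long tail (deprecation candidates):", ranked.slice(-10));
```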

As you move functionality to new services, you may discover features in the monolith that are rarely used. Raise these with product and stakeholders and ask: “What’s the value this brings vs the effort to migrate and maintain it?” Depending on the answer, consider:

  1. Deprecating the page or endpoint
  2. Combining the functionality into other similar pages/endpoints to reduce codebase size

Remember, every line of code you don’t move is a win for your migration efforts.

Conclusion

Splitting a monolith is a complex process that requires a strategic approach tailored to your system’s current state. Whether you’re dealing with a younger, more manageable monolith or an ancient system with slow velocity, there’s a path forward.

The key is to stop adding to the monolith immediately, start new development in separate systems, and approach existing code pragmatically – sometimes a simple copy-paste is the best first step. As you progress, shift your focus from velocity metrics to traffic distribution and support impact.

Remember, the goal is to improve your system’s overall health and development speed. By thoughtfully planning your split, building new features in separate systems, and closely tracking your progress, you can successfully transition from a monolithic to a microservices architecture.

In our next and final post of this series, we’ll discuss how to finish strong, including strategies for cleaning up your codebase, maintaining momentum, and ensuring you complete the splitting process. Stay tuned!

Identifying and Planning Your Monolith Split

In the world of software development, monolithic architectures often become unwieldy as applications grow in complexity and scale. Splitting a monolith into smaller, more manageable services can improve development velocity, scalability, and maintainability. However, this process requires careful planning and execution. In this post, we’ll explore the crucial first steps in splitting your monolith: identifying business domains and creating a solid plan.

Finding Business Domains in Your Monolith

The first step in splitting a monolith is identifying the business domains within your application. Business domains are typically where “units of work” are isolated, representing distinct areas of functionality or responsibility within your system.

Splitting by business domain allows you to optimize for the majority of your units of work being in the one system. While you may never achieve 100% optimization without significant effort, focusing on business domains usually covers 80-90% of your needs.

How to Identify Business Domains

  1. Analyze Work Units: Look at the different areas of functionality in your application. What are the main features or services you provide?
  2. Examine Data Flow: Consider how data moves through your system. Are there natural boundaries where data is transformed or handed off?
  3. Review Team Structure: Often, team organization reflects business domains. How are your development teams structured?
  4. Consider User Journeys: Map out the different paths users take through your application. These often align with business domains.

For more detail, here is a great book on the topic.

When to Keep Domains Together

Sometimes, you’ll find two domains that share a significant amount of code. In these cases, it might be more efficient to keep them in the same system: a “modulith” (a modular monolith), or even a smaller monolith for these tightly coupled domains, can make sense. But this is usually the exception to the rule; don’t let it be an easy way out for you.

Analyzing Changes in the Monolith

Once you’ve identified potential business domains, the next step is to analyze how your monolith changes over time. This analysis helps prioritize which parts of the system to split first, because this is where the value is: velocity. The more daily and weekly merge requests that happen in the new systems, the more business impact you cause, and that’s our goal, business impact, in this case in the form of engineering velocity. Don’t lose sight of this goal for the sake of some milestone-driven Gantt chart.

There are many elegant tools on the market for analyzing git history and changes over time, and I would encourage you to explore them. We didn’t find any that worked for us, because the domains were scattered throughout the code due to the age and size of our monolith (i.e. it was ancient).

What we found worked best was a hammer. It’s manual, but it worked:

  1. Use MR (Merge Request) Labels: Implement a system where developers label each MR with the relevant business domain. This provides ongoing data about which domains of the system change most frequently.
  2. Add CI Checks: Include a CI step that fails if an MR doesn’t have a domain label (see the sketch after this list). This ensures consistent data collection.
  3. Historical Analysis: Have your teams go through 1-2 quarters of historical MRs and label them retrospectively. This gives you an initial dataset to work with.
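
As a sketch of the CI check in step 2, here is a small script a pipeline could run. The CI_MERGE_REQUEST_LABELS variable is provided by GitLab on merge request pipelines; the “domain:” label prefix is our own illustrative convention, not a GitLab feature.

```typescript
// check-domain-label.ts: fail the pipeline if the MR carries no domain label.
const labels = (process.env.CI_MERGE_REQUEST_LABELS ?? "")
  .split(",")
  .map((label) => label.trim())
  .filter(Boolean);

const domainLabels = labels.filter((label) => label.startsWith("domain:"));

if (domainLabels.length === 0) {
  console.error("MR is missing a domain label (e.g. domain:billing, domain:search)");
  process.exit(1); // non-zero exit fails the CI step
}
console.log(`domain label(s) found: ${domainLabels.join(", ")}`);
```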

Once you have this data, whether it comes from the hammer approach or something more elegant, look for patterns in your MRs. Which domains see the most frequent changes? This is how you prioritize your split.

Making a Plan

With your business domains identified and change patterns analyzed, it’s time to create a plan for splitting your monolith. Start with the domains that have the highest impact. These are the ones that change frequently.

Implement L7 Routing for Incremental Migration

Use Layer 7 (application layer) routing to perform A/B testing between your old monolith and new services. This allows you to:

  • Gradually shift traffic to new services
  • Compare performance and functionality, potentially with A/B tests
  • Quickly roll back if issues arise
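
Stripped of real-world concerns (retries, header rewriting, sticky sessions), the routing decision itself is simple. A minimal Node/TypeScript sketch, with made-up hostnames and traffic shares:

```typescript
import http from "http";

const MONOLITH = "http://monolith.internal:8080";

// Which path prefixes have migrated, and what share of their traffic goes to
// the new system (values illustrative; raise the share as confidence grows).
const routes = [
  { prefix: "/checkout", target: "http://checkout-svc:8080", share: 1.0 },
  { prefix: "/search", target: "http://search-svc:8080", share: 0.1 },
];

function pickTarget(url: string): string {
  const route = routes.find((r) => url.startsWith(r.prefix));
  return route && Math.random() < route.share ? route.target : MONOLITH;
}

http
  .createServer((req, res) => {
    const target = new URL(pickTarget(req.url ?? "/"));
    const upstream = http.request(
      { host: target.hostname, port: target.port, path: req.url, method: req.method, headers: req.headers },
      (upRes) => {
        res.writeHead(upRes.statusCode ?? 502, upRes.headers);
        upRes.pipe(res); // stream the upstream response back to the client
      }
    );
    upstream.on("error", () => {
      res.writeHead(502);
      res.end("bad gateway");
    });
    req.pipe(upstream);
  })
  .listen(3000);
```

In practice you would express the same rules in whatever L7 router you already run (nginx, Envoy, a cloud load balancer), and bucket users by a cookie rather than Math.random() so each user consistently sees one side during an A/B comparison.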

For Web Applications:

  • Consider migrating one page at a time
  • Treat each “page” as a unit of migration

Within pages, we sometimes found that a staged approach, moving AJAX endpoints individually, helped make the change more incremental. But don’t let a “page” exist in multiple systems for too long: it kills the local dev experience and you go backwards on what you planned. You are meant to be improving the dev experience, not making it worse, so finish each page as soon as possible.

For Backend Services:

  • Migrate one endpoint or a small group of tightly coupled endpoints at a time
  • This allows for a gradual transition without disrupting the entire system

Also, as you incrementally migrate, if your focus is on killing the monolith fast, don’t bother deleting the old code as you go; let the thing die as a whole. This will give you more time to spend on moving to new systems. And try not to improve the experience on the old monolith: the harder it is to work on, the more likely a team is to decide to break something out of it, which increases the ROI of splitting this way.

Conclusion

Splitting a monolith is a significant undertaking, but with proper planning and analysis, it can lead to a more maintainable and scalable system. By identifying your business domains, analyzing change patterns, and creating a solid migration plan, you set the foundation for a successful transition from a monolithic to a microservices architecture.

In our next post, we’ll dive deeper into the strategies for executing your monolith split, including modularization techniques and how to handle ongoing development during the transition. Stay tuned!

The Pitfalls and Potential of Monolithic Architectures

Before we dive into the process of splitting a monolith, it’s crucial to understand why monoliths can become problematic and when they might still be a good choice. In this post, we’ll explore the challenges that often arise with monolithic architectures and discuss scenarios where they might still be appropriate.

What’s So Bad About Monoliths?

Monolithic architectures, where all components of an application are interconnected and interdependent, can present several challenges as systems grow:

1. Development Feedback Loops

One of the most significant issues with large monoliths is the impact on development feedback loops:

  • Compilation Time: Large codebases often take a long time to compile, slowing down the development process.
  • Test Execution Time: With a vast number of tests, running the entire test suite can be time-consuming.
  • Test Flakiness: As the number of tests grows, the overall stability of the test suite can decrease dramatically. For example:
    • If each individual test has a 99% stability rate (which sounds good),
    • In a suite with 179 tests, the actual stability rate becomes 0.99^179 ≈ 17%
    • This means there’s only a 17% chance of all tests passing in a given run!

2. Increased Lead Time

The factors mentioned above contribute to increased lead time for new features or bug fixes:

  • Longer compile and test times slow down the development cycle.
  • Large monoliths often require more server resources, leading to longer deployment times.

3. Framework Upgrades

Upgrading frameworks or libraries in a monolith can be a massive undertaking. Changes often need to be applied across the entire system simultaneously: the more code you have, the more potential breaking changes you need to fix in one go. Then you have a large MR, and with the high volume of change you normally get in big repos, good luck getting it merged through all the merge conflicts 🙂

Are Monoliths Ever Good?

Despite these challenges, monoliths aren’t always bad. In fact, they can be an excellent choice in certain scenarios:

1. Startups and Small Projects

Many large companies started with small monolithic applications. When you’re small and trying to “take on the world,” a monolith can be the fastest way to get a product to market. It allows for rapid development and iteration in the early stages of a product. This approach enables startups to focus on validating their business ideas and gaining market traction without the added complexity of a distributed system.

2. Simple Applications

For applications with straightforward requirements and minimal complexity, a monolith might be the most straightforward and maintainable solution. In such cases, the simplicity of a monolithic architecture can lead to faster development cycles and easier debugging, as all components are in one place.

3. Teams New to Microservices

If your team doesn’t have experience with distributed systems, starting with a well-structured monolith can be a good learning experience before moving to microservices. This approach allows the team to focus on building features and understanding the domain, while gradually introducing concepts like modularity and service boundaries within the monolith. As the team and application grow, this experience can make a future transition to microservices smoother and more informed.

Best Practices for Starting Small

If you’re starting a new project and decide to go with a monolithic architecture, here are some best practices:

  1. Plan for Future Splitting: Design your monolith with clear boundaries between different functionalities, making future splits easier.
  2. Use Modular Design: Even within a monolith, use modular design principles to keep different parts of your application loosely coupled.
  3. Maintain Clean Architecture: Follow clean architecture principles to separate concerns and make your codebase more manageable.
  4. Monitor Growth: Keep an eye on your application’s size and complexity. Be prepared to start splitting when you notice development slowing down or when the benefits of splitting outweigh the costs.

Conclusion

While monoliths can present significant challenges as they grow, they’re not inherently bad. The key is understanding when a monolithic architecture is appropriate and when it’s time to consider splitting. By being aware of the potential pitfalls and planning for future growth, you can make informed decisions about your application’s architecture.

In the next post, we’ll dive into the process of identifying business domains within your monolith, which is the first step in planning a successful split.

Essential Skills for Product Engineers (Part 2): Mastering the Craft

In our previous post, we explored the first set of essential skills for product engineers, focusing on non-technical abilities that bridge the gap between engineering and business. Today, we’ll dive into the second part of our essential skills series, covering more technically-oriented skills that are crucial for success in product engineering.

Data Analysis and Metrics

In the world of product engineering, data reigns supreme. This skill empowers engineers to make informed decisions, measure the impact of their work, and continuously improve product performance.

Metrics Definition is the foundation of effective data analysis. It’s not enough to simply collect data; you need to know which metrics are most relevant to your product and how they align with broader business goals. This requires a deep understanding of both the product and the business model. For instance, a social media application might focus on Daily Active Users (DAU) as a key engagement metric, along with other user interaction metrics like posts per user or time spent in the app. On the other hand, an e-commerce platform might prioritize conversion rates, average order value, and customer lifetime value. By defining the right metrics, engineers ensure that they’re measuring what truly matters for their product’s success.

The next step is Data Collection. This involves implementing systems to gather data accurately and consistently. It’s not just about collecting data, but ensuring its accuracy and integrity. Many engineers work with established analytics tools like Google or Adobe Analytics, which provide a wealth of user behavior data out of the box. However, for more specific or granular data needs, custom tracking solutions are necessary. This could involve instrumenting your code to log specific events or user actions. The key is to create a comprehensive data collection system that captures all the information needed to calculate your defined metrics.

With data in hand, the next skill is Statistical Analysis. While engineers don’t need to be statisticians, a basic understanding of statistical concepts is needed for interpreting data correctly. This includes grasping concepts like statistical significance, which helps determine whether observed differences in metrics are meaningful or just random noise. Understanding the difference between correlation and causation is also vital – just because two metrics move together doesn’t necessarily mean one causes the other. Handling outliers is another important skill, as extreme data points can significantly skew results if not treated properly. These statistical skills allow engineers to draw accurate conclusions from their data and avoid common pitfalls in data interpretation.

Data Visualization is where numbers transform into narratives. The ability to present data in clear, compelling ways is crucial for communicating insights to stakeholders who may not have a deep technical background. Tools like Metabase, Superset, and Grafana offer powerful capabilities for creating interactive visualizations, while even simple Excel charts can be effective. The goal is to make the data tell a story – to highlight trends, comparisons, or anomalies in a way that’s immediately understandable. Good data visualization can turn complex datasets into actionable insights, influencing product decisions and strategy.

A/B Testing is a core technique in the engineer’s toolkit. It involves designing and implementing experiments to test hypotheses and measure the impact of changes. This could be as simple as testing two different button colors (one the A variant, the other the B) to see which drives more clicks, or as complex as rolling out a major feature to a subset of users to evaluate its impact on key metrics. Effective A/B testing requires understanding concepts like control groups (users who don’t receive the change), variable isolation (ensuring you’re testing only one thing at a time), and statistical power (having a large enough sample size to draw meaningful conclusions). Mastering A/B testing allows engineering teams to make data-driven decisions about feature development and optimization.
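
To ground the statistics, here is a sketch of a two-proportion z-test on click-through data; the numbers are made up, and at 95% confidence the critical value is 1.96.

```typescript
// Did variant B's conversion rate differ from A's, or is it noise?
function abTestZScore(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB); // conversion rate under "no difference"
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB)); // standard error
  return (pB - pA) / se;
}

const z = abTestZScore(480, 10_000, 560, 10_000); // A: 4.8% CTR, B: 5.6% CTR
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? "significant at 95%" : "not significant");
// Prints roughly: 2.55 significant at 95%
```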

Performance Optimization

In today’s fast-paced digital world, user expectations for application performance have never been higher. Users demand fast, responsive applications that work seamlessly across devices and network conditions. As a result, performance optimization has become a critical skill for engineers. It’s not just about making things fast; it’s about creating a smooth, responsive user experience that keeps users engaged and satisfied, regardless of the complexity behind the scenes.

Profiling and Benchmarking form the foundation of effective performance optimization. Before you can improve performance, you need to understand where the bottlenecks are. This involves using a variety of tools to analyze your application’s performance characteristics. For front-end performance, browser developer tools provide powerful capabilities for analyzing load times, JavaScript execution, and rendering performance; the Chrome debugger and extensions let you test stats like LCP and CLS and debug locally why they are bad. But don’t forget to measure RUM (Real User Monitoring), getting data from your real users’ interactions. These tools can help identify slow-loading resources, long-running scripts, or inefficient DOM manipulations that might be causing performance issues.
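
As a small illustration of RUM collection, the browser’s standard PerformanceObserver API can report LCP and CLS from real sessions; the /rum endpoint is an assumption, and a production version would batch and send on page hide.

```typescript
// Report the latest LCP candidate from this real user session.
new PerformanceObserver((list) => {
  const lcp = list.getEntries().at(-1);
  navigator.sendBeacon("/rum", JSON.stringify({ metric: "LCP", value: lcp?.startTime }));
}).observe({ type: "largest-contentful-paint", buffered: true });

// Accumulate layout shifts that weren't caused by user input (CLS).
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) { // LayoutShift isn't in the default TS lib
    if (!entry.hadRecentInput) cls += entry.value;
  }
  navigator.sendBeacon("/rum", JSON.stringify({ metric: "CLS", value: cls }));
}).observe({ type: "layout-shift", buffered: true });
```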

On the backend, specialized profiling tools can help identify performance bottlenecks in server-side code or database queries. These tools, such as Pyroscope, Application Insights, and OpenTelemetry tracing, might analyze CPU usage, memory allocation, or database query execution times to pinpoint areas for improvement. The key is to establish baseline performance metrics and then systematically identify the areas that have the biggest impact on overall application performance.

Once you’ve identified performance bottlenecks, the next step is applying Optimization Techniques. This is a topic for another post for sure; it varies greatly based on your environment, so I won’t go into too much detail today.

Google’s Core Web Vitals initiative is a prime example of the industry’s focus on performance and its impact on user experience. These metrics – Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS) – provide a standardized way to measure key aspects of user-centric performance. LCP measures loading performance, FID measures interactivity, and CLS measures visual stability. By focusing on these metrics, engineers can ensure they’re optimizing for the aspects of performance that most directly impact user experience.

For example, optimizing for Largest Contentful Paint might involve prioritizing the loading of above-the-fold content, while improving First Input Delay could involve breaking up long tasks in JavaScript to improve responsiveness to user interactions. Minimizing Cumulative Layout Shift often involves careful management of how content loads and is displayed, ensuring that elements don’t unexpectedly move around as the page loads.

The importance of these metrics extends beyond just providing a better user experience. Search engines like Google now consider these performance metrics as ranking factors, directly tying performance optimization to an application’s visibility and success.

Security and Privacy

Cyber threats are ever-evolving and privacy regulations are becoming increasingly stringent, so security and privacy considerations must be at the forefront of an engineer’s mind. These are not just technical challenges, but fundamental aspects of building user trust and ensuring the long-term success of a product.

Threat Modeling is a proactive approach to security that involves anticipating and modeling potential security threats to your application. This process requires engineers to think like attackers, identifying potential vulnerabilities and attack vectors in their systems. It’s not just about considering obvious threats like unauthorized access, but also more subtle risks like data leakage or denial of service attacks. Effective threat modeling involves mapping out the system architecture, identifying assets that need protection, and systematically analyzing how these assets could be compromised. This process should be an ongoing part of the development lifecycle, revisited as new features are added or the system architecture evolves.

Secure Coding Practices are the foundation of building secure applications. This involves understanding and implementing best practices for writing code that is resistant to common security vulnerabilities. Input validation is a crucial aspect of this, ensuring that all data entering the system is properly sanitized to prevent attacks like SQL injection or cross-site scripting. Proper authentication and authorization mechanisms are essential to ensure that users can only access the resources they’re entitled to. Secure data storage practices, including proper encryption of sensitive data both at rest and in transit, are also critical. Engineers should be familiar with common security vulnerabilities (like those listed in the OWASP Top 10) and know how to mitigate them in their code.
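
To make one of these concrete, here is the classic SQL-injection fix, sketched with a node-postgres-style client; the table, column, and connection handling are illustrative.

```typescript
import { Pool } from "pg";

const db = new Pool(); // connection details come from environment variables

// Unsafe: attacker-controlled input concatenated straight into the SQL text.
// await db.query(`SELECT * FROM users WHERE email = '${email}'`);

// Safe: a parameterized query sends the value separately from the statement,
// so it can never be interpreted as SQL.
async function findUserByEmail(email: string): Promise<unknown> {
  const result = await db.query("SELECT * FROM users WHERE email = $1", [email]);
  return result.rows[0];
}
```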

Compliance Understanding has become increasingly important as privacy regulations have proliferated around the world. Engineers need at least a basic understanding of relevant privacy regulations like the General Data Protection Regulation (GDPR) in Europe or the California Consumer Privacy Act (CCPA) in the United States. This doesn’t mean engineers need to become legal experts, but they should understand how these regulations impact product development. For example, GDPR’s “right to be forgotten” requirement has implications for how user data is stored and managed. Understanding these regulations helps engineers make informed decisions about data handling and storage, and ensures that privacy considerations are factored into product design from the outset.

Security Testing is an important skill for ensuring that the security measures implemented are effective. This involves familiarity with various security testing tools and practices. Penetration testing, or “pen testing,” involves simulating attacks on a system to identify vulnerabilities. This can be done manually by security experts or using automated tools. Code security scanners are another important tool, analyzing code for potential security issues. Static Application Security Testing (SAST) tools can identify vulnerabilities in source code, while Dynamic Application Security Testing (DAST) tools can find issues in running applications. Engineers should be familiar with these tools and be able to interpret and act on their results.

Security and privacy are no longer optional considerations in engineering – they are fundamental requirements. As cyber threats continue to evolve and users become increasingly aware of privacy issues, the ability to build secure, privacy-respecting products will be a key differentiator for engineers.

Scalability and Reliability

As products grow and user bases expand, the ability to scale systems to meet increased demand while maintaining reliability becomes an important skill for engineers. This is not just about handling more users or data; it’s about ensuring that the product continues to perform well and provide a consistent user experience even as it grows exponentially.

Distributed systems involve multiple components working together across different networks or geographic locations to appear as a single, cohesive system to end-users. This approach allows for greater scalability and fault tolerance, but it also introduces complexities in areas like data consistency, network partitions, and system coordination. Engineers need to understand concepts like the CAP theorem and its implications. They should be familiar with patterns like microservices architecture, moduliths, and event sourcing, and how these apply at scale.

Load Balancing and Caching are critical strategies for managing increased demand on systems. Load balancing has changed greatly in recent years: gone are the days of one large “in front of everything” tier, in favour of load-balancing sidecars with tech like Envoy, with in-front load balancing banished to the edges. Engineers should be familiar with different load-balancing algorithms (like round-robin, least connections, etc.), understand when to use each, and know how health checks work in these scenarios.

Caching, on the other hand, could involve in-memory caches like Redis, content delivery networks (CDNs) for static assets, or application-level caching strategies. Effective caching requires careful consideration of cache-invalidation strategies to ensure users always see up-to-date information. Engineers should understand not only pull-through (read-through) caching but other forms such as write-through, and also when to pre-warm and expire entries based on the data and user needs.
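
A minimal read-through cache sketch, assuming an ioredis-style client; the key scheme, TTL, and database helper are illustrative.

```typescript
import Redis from "ioredis";

const redis = new Redis();

type Product = { id: string; name: string; price: number };
declare function fetchProductFromDb(id: string): Promise<Product>; // stubbed source of truth

async function getProduct(id: string): Promise<Product> {
  const cached = await redis.get(`product:${id}`);
  if (cached) return JSON.parse(cached); // cache hit

  const product = await fetchProductFromDb(id); // cache miss: go to the source
  await redis.set(`product:${id}`, JSON.stringify(product), "EX", 300); // expire after 5 minutes
  return product;
}
```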

Database Scaling is often one of the most challenging aspects of growing a system. As data volume and read/write operations increase, a single database instance may no longer be sufficient. Engineers need to be familiar with various database scaling techniques. Vertical scaling (adding more resources to a single machine) can work up to a point, but eventually, horizontal scaling becomes necessary and presents many challenges and options that engineers should be familiar with to be able to make the right choice.

Chaos Engineering is a proactive approach to ensuring system reliability that has gained prominence in recent years. The core idea is to intentionally introduce failures into your system in a controlled manner to test its resilience. This helps identify weaknesses in the system that might not be apparent under normal conditions.

Netflix’s Chaos Monkey is a prime example of this approach. This tool randomly terminates instances in their production environment, forcing engineers to build systems that can withstand these types of failures. By simulating failures in a controlled way, Netflix ensures that their systems can handle unexpected issues in real-world scenarios.

Other forms of chaos engineering might involve simulating network partitions, inducing latency, or exhausting system resources. The key is to start small, build confidence, and gradually increase the scope of these experiments. This approach not only improves system reliability but also builds a culture of resilience within engineering teams.

The importance of scalability and reliability in product engineering cannot be overstated. As users increasingly rely on digital products for critical aspects of their lives and work, the cost of downtime or poor performance can be enormous, both in terms of lost revenue and damaged user trust.

Moreover, the ability to scale efficiently can be a key competitive advantage. Products that can quickly adapt to growing demand can capture market share and outpace competitors. On the flip side, products that struggle with scalability often face user frustration, increased operational costs, and missed opportunities.

Continuous Integration and Deployment (CI/CD)

CI/CD practices enable teams to deliver code changes more frequently and reliably, accelerating the feedback loop and reducing the risk associated with deployments.

Engineers need to be proficient in writing effective, efficient tests and understanding concepts like test coverage, why the test pyramid is flawed, and newer models like the testing honeycomb. They should also be familiar with testing frameworks and tools specific to their technology stack. The goal is to catch bugs early in the development process, reducing the cost and risk of fixing issues in production.

Continuous Integration (CI) means continuously integrating code. It’s not about your Jenkins or GitHub Actions pipeline; it’s about merging changes together fast. Git branches run counter to this principle, but they are necessary to facilitate change in manageable, deployable chunks. Good engineers understand that CI is a principle, not a build system; this helps them focus on its purpose, which is moving fast and efficiently.

Continuous Deployment (CD): key skills here include understanding deployment strategies like blue-green deployments or canary releases, which minimize risk and downtime during updates. Engineers need to be proficient in infrastructure-as-code tools like Helm, Terraform, or CloudFormation to manage their infrastructure alongside their application code. They should also be familiar with containerization technologies like Docker and orchestration platforms like Kubernetes, which can greatly simplify the process of deploying and scaling applications.

Feature Flags have become an essential tool in modern CD practices. They allow teams to decouple code deployment from feature release, giving more control over when and to whom new features are made available. Engineers need to understand how to implement feature flag systems, which can range from simple configuration files to more complex, dynamically controllable systems. This involves not just the technical implementation, but also understanding the strategic use of feature flags for A/B testing, gradual rollouts, and quick rollbacks in case of issues. Proper use of feature flags can significantly reduce the risk associated with deployments and allow for more frequent, smaller releases.
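
A minimal sketch of the idea: a percentage rollout that is sticky per user, because the bucket comes from a hash rather than a random roll. Flag names and percentages are invented.

```typescript
import { createHash } from "crypto";

// Rollout fraction per flag; in a real system this would be fetched from a
// dynamically controllable store, not hard-coded.
const flags: Record<string, number> = {
  "new-checkout": 0.25, // visible to 25% of users
  "dark-mode": 1.0, // fully released
};

function isEnabled(flag: string, userId: string): boolean {
  const rollout = flags[flag] ?? 0;
  // Hash flag+user so every user lands in a stable bucket per flag.
  const hash = createHash("sha256").update(`${flag}:${userId}`).digest();
  const bucket = hash.readUInt32BE(0) / 0xffffffff; // uniform in [0, 1]
  return bucket < rollout;
}

if (isEnabled("new-checkout", "user-123")) {
  // render the new flow; otherwise fall back to the old one
}
```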

The benefits of mastering CI/CD are significant. It allows teams to deliver value to users more quickly, reduce the risk associated with each deployment, and spend less time on manual, error-prone deployment processes. It also improves developer productivity and satisfaction by providing quick feedback on code changes and reducing the stress associated with large, infrequent releases.

Cross-Platform Development

In today’s diverse technological landscape, users access digital products through a multitude of devices and platforms. As a result, the ability to develop cross-platform solutions has become an increasingly valuable skill for product engineers.

Responsive Web Design (RWD) forms the foundation of cross-platform web development. It’s an approach to web design that makes web pages render well on a variety of devices and window or screen sizes. The core principle of RWD is flexibility – layouts, images, and cascading style sheet media queries are used to create a fluid design that adapts to the user’s screen size and orientation. Engineers should also understand the principles of mobile-first design, which advocates for designing for mobile devices first and then progressively enhancing the design for larger screens.

Cross-Platform Frameworks have emerged as a popular solution for building native mobile apps that run on multiple platforms with a single codebase. Tools like React Native, Flutter, and even WebViews allow developers to write code once and deploy it to both iOS and Android, potentially saving significant development time and resources.

Proficiency in cross-platform frameworks requires not just knowledge of the framework itself, but also an understanding of the underlying mobile platforms. Engineers need to know when to use platform-specific code for certain features and how to optimize performance for each platform.

The choice between these different approaches – responsive web, native apps, cross-platform frameworks, or even PWAs – depends on various factors including the target audience, required features, performance needs, and development resources. Engineers need to understand the trade-offs involved in each approach and be able to make informed decisions based on the specific requirements of each project.

Moreover, the field of cross-platform development is rapidly evolving. New tools and frameworks are constantly emerging, and existing ones are regularly updated with new features. For example, Flutter has expanded beyond mobile to support web and desktop platforms as well. React Native is now used in the PS5 UI, expanding its reach to home entertainment.

This constant evolution means that cross-platform development skills require ongoing learning and adaptation. Engineers need to stay updated with the latest developments in this field, continuously evaluating new tools and approaches to determine if they can provide benefits for their projects.

Conclusion

These technical skills – data analysis, performance optimization, security and privacy, scalability and reliability, CI/CD, and cross-platform development – form the backbone of an engineer’s technical toolkit. Combined with the non-technical skills we discussed in our previous post, they enable engineers to build products that are not only technically sound but also user-friendly, scalable, and aligned with business goals.

Remember, the field of product engineering is constantly evolving. The most successful engineers are those who commit to lifelong learning, always staying curious and open to new technologies and methodologies.

What technical skills have you found most valuable in your product engineering journey? How do you stay updated with the latest trends and technologies? Share your experiences and tips in the comments below!

Essential Skills for Product Engineers (Part 1): Beyond the Code

In our previous posts, we’ve explored the evolution of product engineering, its core principles, and the mindset that defines successful product engineers. Now, let’s dive into the specific skills that product engineers need to thrive in their roles. This post, the first of two parts, will focus on four essential non-technical skills: goal setting and value targeting, decision making and risk assessment, understanding business models, and design thinking and empathy.

Goal Setting and Value Targeting

One of the most crucial skills for product engineers is the ability to set clear, meaningful goals and target value creation. This skill goes beyond simply meeting technical specifications or delivering features on time.

One of the hardest things about goal setting is that once you set a goal, you set your conditions for failure, and no one likes to fail. But this is part of the mindset we spoke about before: you need to be OK with failure (it’s a learning experience), and you need to be OK with setting moonshot goals occasionally too.

Effective goal setting involves:

  1. Alignment with business objectives: Goals should directly contribute to the company’s overall strategy and key performance indicators (KPIs).
  2. User-centric focus: Goals should reflect improvements in user experience or solve specific user problems.
  3. Measurability: Goals need to be quantifiable, allowing for clear evaluation of success.
  4. Timebound nature: Setting realistic timelines helps maintain focus and urgency, and also sets increments for fast feedback cycles

For example, instead of setting a goal like “Implement a new recommendation system,” an engineering team might frame it as “Increase user engagement by 20% within three months by implementing a personalized recommendation system.”

Value targeting involves identifying and prioritizing the work that will deliver the most significant impact. This requires a deep understanding of both user needs and business priorities. Engineering Teams must constantly ask themselves: “Is this the most valuable thing I could be working on right now?”

Decision Making and Risk Assessment

Product engineers often find themselves at the intersection of technical possibilities, user needs, and business constraints. In this complex environment, the ability to make effective decisions becomes a critical skill. It’s not just about choosing the best technical solution, but about finding the optimal balance between various competing factors.

One of the key aspects of decision making for engineers is adopting a data-driven approach. This involves utilizing both quantitative and qualitative data to inform decisions. Quantitative data might include metrics from A/B tests, performance benchmarks, or usage statistics. This hard data provides concrete evidence of how different options perform. However, it’s equally important to consider qualitative data, such as user feedback or expert opinions. These insights can provide context and nuance that numbers alone might miss. By combining both types of data, Engineers can make more holistic, well-informed decisions.

Another crucial aspect of decision making is the consideration of trade-offs. In the real world, there’s rarely a perfect solution that optimizes for everything. Instead, Engineers must navigate complex trade-offs. For example, they might need to balance the speed of development against the quality of the end product, or weigh short-term gains against long-term sustainability. The skill lies not just in recognizing these trade-offs, but in being able to evaluate them effectively. This often involves quantifying the potential impacts of different choices and making judgment calls based on the specific context of the project and the company’s overall strategy.

Reid Hoffman, reflecting on his time at startups, once said: “Sometimes it’s not about deciding which fire you put out; it’s about deciding which ones you can let burn.” Making trade-offs can involve hard choices.

Stakeholder management is another key component of effective decision making. Engineers need to consider how their decisions will impact various stakeholders, from end-users to business teams to other engineering teams. This involves not just making the right decision, but also being able to communicate the rationale effectively. Engineers must be able to explain technical concepts to non-technical stakeholders, articulate the business impact of technical decisions, and build consensus around their chosen approach.

In traditional software development companies, BAs are hired to deal with the business and try to “shield” the engineers, and Scrum masters to “keep the wolves at bay”. Product engineering is about removing these layers: the engineers themselves have enough understanding to deal with stakeholders directly, and this makes communication and decision making more effective.

Alongside decision making sits risk assessment. In any project or initiative, there are always potential risks that could derail success. The ability to identify these risks, evaluate their potential impact, and develop mitigation strategies is vital.

Engineers need to be able to look at different technical approaches and understand their potential pitfalls. This might involve considering factors like scalability, maintainability, or compatibility with existing systems. It’s about looking beyond the immediate implementation and considering how a technical choice might play out in the long term.

Engineers also need to be able to assess business risks. This involves evaluating how technical decisions might impact business metrics or user satisfaction. For example, a technically elegant solution might be risky if it requires a steep learning curve for users, potentially impacting adoption rates.

Another important aspect of risk assessment is opportunity cost consideration. In the world of product development, choosing one path often means not pursuing others. Engineers need to recognize this and factor it into their decision making. This might involve considering not just the risks of a chosen approach, but also the potential missed opportunities from alternatives not pursued.

Google’s approach to “Moonshot Thinking” in their X development lab provides a great example of how to balance ambitious goals with thoughtful risk assessment. Engineers in this lab are encouraged to tackle huge problems and propose radical solutions – true “moonshots” that could revolutionize entire industries. However, this ambition is tempered with a pragmatic approach to identifying and mitigating risks. Engineers are expected to critically evaluate their ideas, identifying potential failure points and developing strategies to address them. This approach allows for bold innovation while still maintaining a realistic perspective on the challenges involved.

By developing strong skills in decision making and risk assessment, engineers can make choices that balance technical excellence with business needs and user expectations, while also managing potential risks and trade-offs. These skills are what separate great engineers from merely good ones, enabling them to drive real impact and innovation in their organizations.

Understanding Business Models

While product engineers are primarily focused on technical challenges, a solid understanding of business models has become increasingly important in today’s tech landscape. This knowledge isn’t about turning engineers into business experts, but rather about equipping them with the context they need to make decisions that align with the company’s strategy and contribute to its overall success. By understanding the business side of things, engineers can better prioritize their work and make more informed technical decisions on the spot, without escalating to get direction.

One of the key aspects of understanding business models is grasping how the company generates revenue. Revenue streams can vary widely depending on the nature of the business. Some companies rely on subscription models, where users pay a recurring fee for access to a product or service. Others may generate revenue through advertising, leveraging user attention to sell ad space. Transaction fees are another common revenue stream, particularly for e-commerce or financial technology companies. Some businesses may use a combination of these or have more unique revenue models. For engineers, understanding these revenue streams is crucial because it can inform decisions about feature development, user experience design, and system architecture. Especially for the systems they work on directly, they can easily relate the work they are doing back to its impact on the company’s bottom line.

Equally important is an understanding of cost structures. Every business has costs associated with delivering its product or service, and these can significantly impact the viability of different technical approaches. Common costs might include server infrastructure, data storage, customer support, etc. Product engineers need to be aware of how their technical decisions might impact these costs. For example, choosing a more complex architecture might increase development and maintenance costs, while optimizing for performance could reduce infrastructure costs; conversely, a negative performance impact or bug could lead to an escalation in support calls. By understanding the cost implications of their decisions, engineers can make choices that balance technical excellence with business sustainability.

Another crucial aspect of business models is understanding customer segments. Most products don’t serve a single, homogeneous user base, but rather cater to different groups of users with varying needs and behaviors. Engineers need to be aware of these different segments and how they interact with the product. This understanding can inform decisions about feature prioritization, user interface design, and even technical architecture. For instance, if a significant customer segment primarily uses the product on mobile devices, that might influence decisions about mobile optimization or the development of mobile-specific features.

Perhaps the most important element of a business model is the value proposition – the unique value that the company offers to its customers. This is what sets the company apart from its competitors and drives customer acquisition and retention. Engineers play a crucial role in delivering and enhancing this value proposition through the technical solutions they develop.

Let’s consider a concrete example to illustrate these concepts. Imagine you’re an engineer working at Spotify. Understanding Spotify’s business model would be crucial to your work. You’d need to know that Spotify operates on a freemium model, with both ad-supported free users and subscription-based premium users. This dual revenue stream (advertising and subscriptions) would inform many of your decisions.

For instance, when developing new features, you’d need to consider how they might impact the conversion rate from free to premium users. A feature that significantly enhances the listening experience might be reserved for premium users to drive subscriptions. On the other hand, a feature that increases engagement might be made available to all users to increase ad revenue from free users and make the platform more attractive to advertisers.

You’d also need to understand Spotify’s cost structure, particularly the significant costs associated with royalty payments to music rights holders. This might influence decisions about caching and data delivery to optimize streaming and reduce costs.

Understanding Spotify’s customer segments would be crucial too. You might need to consider the different needs of casual listeners, music enthusiasts, and artists using the platform. Each of these segments might require different features or optimizations.

Finally, you’d need to keep in mind Spotify’s value proposition of providing easy access to a vast library of music, personalized to each user’s tastes. Your technical decisions would need to support this, perhaps by focusing on recommendation algorithms, seamless playback, or features that enhance music discovery.

By understanding these aspects of Spotify’s business model, you as an engineer would be better equipped to make decisions that not only solve technical challenges but also drive the company’s success in a highly competitive market.

While engineers don’t need to become business experts, a solid grasp of business models is an increasingly valuable skill. It provides crucial context for technical decisions, helps in prioritizing work, and enables more effective collaboration with business stakeholders.

Design Thinking and Empathy

In the realm of product engineering, technical expertise alone is no longer sufficient to create truly impactful solutions. Enter design thinking: a problem-solving approach that places user needs and experiences at the center of the development process. For engineers, incorporating design thinking principles can lead to more innovative, user-friendly, and ultimately successful products.

Design thinking is not a linear process, but rather an iterative approach that encourages continuous learning and refinement. It typically involves five key stages, each of which plays a crucial role in developing user-centered solutions:

The first step is to Empathize. This involves deeply understanding the user’s needs, wants, and pain points. It’s about stepping into the user’s shoes, observing their behaviors, and listening to their experiences. For engineers, this might involve conducting user interviews, analyzing user data, or even spending time using the product as a user would. The goal is to uncover insights that may not be immediately apparent from technical specifications or feature requests.

Next comes the Define stage. Here, the insights gathered during the empathy stage are synthesized to clearly articulate the problem that needs to be solved. This is not about jumping to solutions, but about framing the problem in a way that opens up possibilities for innovative approaches. For engineers, this might involve reframing technical challenges in terms of user needs or business objectives.

The third stage is Ideation. This is where creativity comes to the forefront. The goal is to generate a wide range of possible solutions, without judgment or constraint. Techniques like brainstorming, mind mapping, or even role-playing can be used to spark new ideas. For engineers, this stage is an opportunity to think beyond conventional technical solutions and consider novel approaches that might better serve user needs.

Following ideation comes Prototyping. This involves creating quick, low-fidelity versions of potential solutions. The key here is speed and simplicity – the goal is not to build a perfect product, but to create something tangible that can be tested and refined. For engineers, this might involve creating basic wireframes, simple mock-ups, or even paper prototypes. The focus is on making ideas concrete enough to gather meaningful feedback.

The final stage is Testing. This is where prototypes are put in front of real users to gather feedback. It’s a critical stage that often leads back to earlier stages as new insights emerge. For engineers, this might involve conducting user testing sessions, analyzing usage data from beta releases, or going to a coffee shop and running guerrilla testing sessions with patrons in exchange for buying them a coffee. The key is to approach this stage with an open mind, ready to learn and iterate based on user responses.

While all stages of design thinking are important, empathy deserves special attention as it forms the foundation of this approach. For engineers, developing empathy is about more than just understanding user requirements – it’s about truly connecting with the user’s experience.

User perspective is a crucial aspect of empathy. This involves the ability to see the product from the user’s point of view, understanding their context, motivations, and frustrations. It’s about asking questions like: What is the user trying to achieve? What obstacles do they face? How does our product fit into their broader life or work? By adopting the user’s perspective, engineers can make design and technical decisions that truly serve user needs, rather than just meeting specifications.

Diverse user consideration is another key aspect of empathy in product engineering. Users are not a monolithic group – they have diverse needs, abilities, and contexts. Some users might be tech-savvy early adopters, while others might be less comfortable with technology (your aunty, perhaps). Some might be using the product in resource-constrained environments, such as on low-bandwidth internet connections in remote areas. Recognizing and considering this diversity in product development is crucial for creating truly inclusive and accessible products.

IDEO, the design company that popularized design thinking, emphasizes “human-centered design” as a cornerstone of their approach. Their methodology involves immersing themselves in the user’s world to gain deep, empathetic insights that drive innovation. This might involve spending time in users’ homes or workplaces, observing their behaviors and interactions with products in their natural environment. For engineers, adopting a similar approach – even if less intensive – can yield valuable insights that inform technical decisions and lead to more user-friendly solutions.

Design thinking can help engineers navigate the increasing complexity of modern product development. In a world where technical possibilities are vast and user expectations are high, design thinking provides a framework for focusing on what truly matters – creating solutions that make a meaningful difference in users’ lives.

Conclusion

These skills – goal setting and value targeting, decision making and risk assessment, understanding business models, and design thinking and empathy – form the foundation of a product engineer’s non-technical toolkit. They enable engineers to not just build products, but to create solutions that genuinely meet user needs and drive business success.

In our next post, we’ll explore the second set of essential skills for product engineers, including data analysis, A/B testing, and more. Stay tuned!

What’s your experience with these skills in your engineering work? How have you seen them impact product development? Share your thoughts and experiences in the comments below!

Scaling Dependency Injection: How Agoda Solved DI Challenges with Agoda.IoC

Introduction

In the world of modern software development, Dependency Injection (DI) has become an essential technique for building maintainable, testable, and scalable applications. By allowing us to decouple our code and manage object lifecycles effectively, DI has revolutionized how we structure our applications.

However, as projects grow in size and complexity, even the most beneficial practices can become challenging to manage. This is especially true for large-scale applications with hundreds of developers and thousands of components. At Agoda, we faced this exact challenge with our dependency injection setup, and we’d like to share how we overcame it.

In this post, we’ll explore the problems we encountered with traditional DI approaches at scale, and introduce Agoda.IoC, our open-source solution that has transformed how we handle dependency injection across our codebase.

The Problem: DI at Scale

To understand the magnitude of the challenge we faced, let’s first consider the scale at which Agoda operates its customer-facing website:

  • In an average month, we merge over 260 pull requests
  • We add more than 38,000 lines of code
  • We have around 100 active engineers contributing to our codebase

(Stats from 2021)

With such a large and active development environment, our traditional approach to dependency injection began to show its limitations. Like many .NET projects, we were using the built-in DI container, registering our services in the Startup.cs file or through extension methods. It looked something like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IService, Service>();
    services.AddTransient<IRepository, Repository>();
    // ... hundreds more registrations
}

While this approach works well for smaller projects, we encountered several significant issues as our codebase grew:

  1. Merge Conflicts: With numerous developers working on different features, all needing to register new services, our Startup.cs file became a constant source of merge conflicts. This slowed down our development process and created unnecessary friction.
  2. Lack of Visibility into Object Lifecycles: As our registration code grew and was split into multiple methods and even separate files, it became increasingly difficult for developers to understand the lifecycle of a particular service without digging through configuration code. This lack of visibility could lead to subtle bugs, especially when dealing with scoped or singleton services that might inadvertently capture user-specific data (see the sketch after this list).
  3. Maintenance Nightmare: Our main configuration class ballooned to nearly 4,000 lines of code at its peak. This made it incredibly difficult to maintain, understand, and modify our DI setup.
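
To make the lifecycle pitfall in point 2 concrete, here’s a minimal sketch of the classic captive-dependency bug, with hypothetical names (UserContext, AuditLogger) rather than code from our codebase:

public class UserContext { /* per-request user data */ }

public interface IAuditLogger { }

public class AuditLogger : IAuditLogger
{
    private readonly UserContext _context; // captured once, held for the app's lifetime

    public AuditLogger(UserContext context)
    {
        _context = context;
    }
}

// Registered far away, possibly in different files:
// UserContext is scoped (one per request), but AuditLogger is a singleton,
// so it keeps the first request's UserContext forever. The built-in
// container only flags this when scope validation is enabled.
services.AddScoped<UserContext>();
services.AddSingleton<IAuditLogger, AuditLogger>();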

These issues were not just minor inconveniences. They were actively hindering our ability to develop and release products quickly and reliably. We needed a solution that would allow us to scale our dependency injection along with our codebase and team size.

The “Just Break It Up” Approach

When faced with a massive configuration file, the knee-jerk reaction is often to break it up into smaller pieces. This might look something like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddDataServices()
            .AddBusinessServices()
            .AddInfrastructureServices();
    // ... more method calls
}

public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddDataServices(this IServiceCollection services)
    {
        services.AddSingleton<IDatabase, Database>();
        services.AddTransient<IUserRepository, UserRepository>();
        // ... more registrations
        return services;
    }

    // ... more extension methods
}

This pattern can make your Startup.cs look cleaner, but it’s really just hiding the complexity rather than addressing it. The registration logic is still centralized, just in different files. This can actually make it harder to find where a particular service is registered, exacerbating our visibility problem.

Introducing Agoda.IoC

To address the challenges we faced with dependency injection at scale, we developed Agoda.IoC, an open-source C# IoC extension library. Agoda.IoC takes a different approach to service registration, moving away from centralized configuration and towards a more distributed, attribute-based model. In building it, we also extracted a number of complex but handy registration patterns we had found in use across our codebase.

Agoda.IoC uses C# attributes to define how services should be registered with the dependency injection container. This approach brings several benefits:

  1. Decentralized Configuration: Each service is responsible for its own registration, reducing merge conflicts and improving code organization.
  2. Clear Visibility of Lifecycles: The lifetime of a service is immediately apparent when viewing its code.
  3. Simplified Registration Process: No need to manually add services to a configuration file; the library handles this automatically.

Let’s look at some examples of how Agoda.IoC works in practice:

Basic Registration

Consider a logging service that you want to use throughout your application:

public interface ILogger {}

[RegisterSingleton]
public class Logger : ILogger {}

This replaces the traditional services.AddSingleton<ILogger, Logger>(); in your startup code. By using the [RegisterSingleton] attribute, you ensure that only one instance of Logger is created and used throughout the application’s lifetime. This is ideal for stateless services like loggers, configuration managers, or caching services.

By default, the library registers the class against the interface it implements, so Logger above is resolvable as ILogger without any explicit mapping.
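
Consuming a registered service is unchanged from standard .NET DI – you take the interface as a constructor parameter. Here’s a small sketch with a hypothetical CheckoutService:

public class CheckoutService
{
    private readonly ILogger _logger;

    // The container injects the single Logger instance registered above.
    public CheckoutService(ILogger logger)
    {
        _logger = logger;
    }
}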

Factory Registration

Factory registration is useful for services that require complex initialization or depend on runtime parameters. For example, let’s consider a database connection service:

[RegisterSingleton(Factory = typeof(DatabaseConnectionFactory))]
public class DatabaseConnection : IDatabaseConnection
{
    private readonly string _connectionString;
    
    public DatabaseConnection(string connectionString)
    {
        _connectionString = connectionString;
    }
    
    // implementation here
}

public class DatabaseConnectionFactory : IComponentFactory<IDatabaseConnection>
{
    public IDatabaseConnection Build(IComponentResolver resolver)
    {
        var config = resolver.Resolve<IConfiguration>();
        string connectionString = config.GetConnectionString("DefaultConnection");
        return new DatabaseConnection(connectionString);
    }
}

This approach allows you to create a DatabaseConnection with a connection string that’s only known at runtime. The factory can use the IComponentResolver to access other registered services (like IConfiguration) to build the connection.

Explicit Interface Registration

When a class implements multiple interfaces but should only be registered for one, explicit interface registration comes in handy. This is particularly useful in scenarios where you’re adapting third-party libraries or creating adapters:

[RegisterTransient(For = typeof(IExternalServiceAdapter))]
public class ExternalServiceAdapter : IExternalServiceAdapter, IDisposable
{
    private readonly ExternalService _externalService;
    
    public ExternalServiceAdapter(ExternalService externalService)
    {
        _externalService = externalService;
    }
    
    // IExternalServiceAdapter implementation
    
    public void Dispose()
    {
        _externalService.Dispose();
    }
}

In this case, we only want to register ExternalServiceAdapter as IExternalServiceAdapter, not as IDisposable. This prevents other parts of the application from accidentally resolving this class when they ask for an IDisposable.

Collection Registration

Collection registration is powerful when you have multiple implementations of an interface that you want to use together, such as in a pipeline pattern or for plugin-like architectures. Here’s an example with a simplified order processing pipeline:

public interface IOrderProcessor
{
    void Process(Order order);
}

[RegisterSingleton(For = typeof(IOrderProcessor), OfCollection = true, Order = 1)]
public class ValidateOrderProcessor : IOrderProcessor
{
    public void Process(Order order) 
    {
        // Validate the order
    }
}

[RegisterSingleton(For = typeof(IOrderProcessor), OfCollection = true, Order = 2)]
public class InventoryCheckProcessor : IOrderProcessor
{
    public void Process(Order order) 
    {
        // Check inventory
    }
}

[RegisterSingleton(For = typeof(IOrderProcessor), OfCollection = true, Order = 3)]
public class PaymentProcessor : IOrderProcessor
{
    public void Process(Order order) 
    {
        // Process payment
    }
}

With this setup, you can inject IEnumerable<IOrderProcessor> into a service that needs to run all processors in order:

public class OrderService
{
    private readonly IEnumerable<IOrderProcessor> _processors;
    
    public OrderService(IEnumerable<IOrderProcessor> processors)
    {
        _processors = processors;
    }
    
    public void ProcessOrder(Order order)
    {
        foreach (var processor in _processors)
        {
            processor.Process(order);
        }
    }
}

This approach allows you to easily add or remove processing steps without changing the OrderService class.

Keyed Registration

Keyed registration allows you to register multiple implementations of the same interface and retrieve them by a specific key. This is particularly useful when you need different implementations based on runtime conditions.

Example scenario: Multiple payment gateways in an e-commerce application.

public interface IPaymentGateway
{
    bool ProcessPayment(decimal amount);
}

[RegisterSingleton(Key = "Stripe")]
public class StripePaymentGateway : IPaymentGateway
{
    public bool ProcessPayment(decimal amount)
    {
        // Stripe-specific payment processing logic
        return true;
    }
}

[RegisterSingleton(Key = "PayPal")]
public class PayPalPaymentGateway : IPaymentGateway
{
    public bool ProcessPayment(decimal amount)
    {
        // PayPal-specific payment processing logic
        return true;
    }
}

public class PaymentService
{
    private readonly IKeyedComponentFactory<IPaymentGateway> _gatewayFactory;

    public PaymentService(IKeyedComponentFactory<IPaymentGateway> gatewayFactory)
    {
        _gatewayFactory = gatewayFactory;
    }

    public bool ProcessPayment(string gatewayName, decimal amount)
    {
        var gateway = _gatewayFactory.GetByKey(gatewayName);
        return gateway.ProcessPayment(amount);
    }
}

In this example, you can switch between payment gateways at runtime based on user preference or other factors.

Mocked Mode for Testing

Agoda.IoC provides a mocked mode that allows you to easily swap out real implementations with mocks for testing purposes. This is particularly useful for isolating components during unit testing.

Example scenario: Testing a user service that depends on a database repository.

public interface IUserRepository
{
    User GetUserById(int id);
}

[RegisterSingleton(Mock = typeof(MockUserRepository))]
public class UserRepository : IUserRepository
{
    public User GetUserById(int id)
    {
        // Actual database call
        return new User { Id = id, Name = "Real User" };
    }
}

public class MockUserRepository : IUserRepository
{
    public User GetUserById(int id)
    {
        // Return a predefined user for testing
        return new User { Id = id, Name = "Mock User" };
    }
}

public class UserService
{
    private readonly IUserRepository _userRepository;

    public UserService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public string GetUserName(int id)
    {
        var user = _userRepository.GetUserById(id);
        return user.Name;
    }
}

When running in normal mode, the real UserRepository will be used. In mocked mode (typically during testing), the MockUserRepository will be injected instead, allowing for predictable test behavior without actual database calls.
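
Here’s a minimal sketch of what this looks like in a test, using the AutoWireAssembly setup method covered later in this post (the xUnit test framework here is just an illustrative choice):

using Agoda.IoC.NetCore;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

public class UserServiceTests
{
    [Fact]
    public void GetUserById_ReturnsMockUser_InMockMode()
    {
        var services = new ServiceCollection();

        // Scan in mock mode: MockUserRepository is registered for
        // IUserRepository instead of the real UserRepository.
        services.AutoWireAssembly(new[] { typeof(UserRepository).Assembly }, isMockMode: true);

        var provider = services.BuildServiceProvider();
        var repository = provider.GetRequiredService<IUserRepository>();

        Assert.Equal("Mock User", repository.GetUserById(42).Name);
    }
}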

Open Generic Service Registration

Agoda.IoC supports registration of open generic services, which is particularly useful when you have a generic interface with multiple implementations.

Example scenario: A generic repository pattern in a data access layer.

public interface IRepository<T> where T : class
{
    T GetById(int id);
    void Save(T entity);
}

[RegisterTransient(For = typeof(IRepository<>))]
public class GenericRepository<T> : IRepository<T> where T : class
{
    public T GetById(int id)
    {
        // Generic implementation (placeholder so this sketch compiles)
        throw new NotImplementedException();
    }

    public void Save(T entity)
    {
        // Generic implementation
    }
}

// Usage
public class UserService
{
    private readonly IRepository<User> _userRepository;

    public UserService(IRepository<User> userRepository)
    {
        _userRepository = userRepository;
    }

    // Service implementation
}

With this setup, Agoda.IoC will automatically create and inject the appropriate GenericRepository<T> when an IRepository<T> is requested for any type T.

These advanced features of Agoda.IoC provide powerful tools for handling complex dependency injection scenarios, from runtime-determined implementations to easier testing and support for generic patterns. By leveraging these features, you can create more flexible and maintainable application architectures.

Implementing Agoda.IoC in Your Project

Now that we’ve explored the features and benefits of Agoda.IoC, let’s walk through the process of implementing it in your .NET project. This guide will cover installation, basic setup, and the migration process from traditional DI registration.

First, you’ll need to install the Agoda.IoC package. You can do this via the NuGet package manager in Visual Studio or by running the following command in your project directory:

dotnet add package Agoda.IoC.NetCore

Basic Setup

Once you’ve installed the package, you need to set up Agoda.IoC in your application’s startup code. The exact location depends on your project structure, but it’s typically in the Startup.cs file for traditional ASP.NET Core projects or in Program.cs for minimal API projects.

For a minimal API project (Program.cs):

using Agoda.IoC.NetCore;

var builder = WebApplication.CreateBuilder(args);

// Your existing service configurations...

// Add this line to set up Agoda.IoC
builder.Services.AutoWireAssembly(new[] { typeof(Program).Assembly }, isMockMode: false);

var app = builder.Build();
// ... rest of your program

The AutoWireAssembly method takes two parameters:

  1. An array of assemblies to scan for registrations. Typically, you’ll want to include your main application assembly.
  2. A boolean indicating whether to run in mock mode (useful for testing, as we saw in the advanced features section).

Migrating Existing Registrations

Migrating from traditional registration is largely mechanical: remove each registration line from your startup code and add the equivalent attribute to the class itself.

Before:

// In Startup.cs
services.AddSingleton<IEmailService, EmailService>();
services.AddTransient<IUserRepository, UserRepository>();
services.AddScoped<IOrderProcessor, OrderProcessor>();

// In your classes
public class EmailService : IEmailService { /* ... */ }
public class UserRepository : IUserRepository { /* ... */ }
public class OrderProcessor : IOrderProcessor { /* ... */ }

After:

// In Startup.cs
services.AutoWireAssembly(new[] { typeof(Startup).Assembly }, isMockMode: false);

// In your classes
[RegisterSingleton]
public class EmailService : IEmailService { /* ... */ }

[RegisterTransient]
public class UserRepository : IUserRepository { /* ... */ }

[RegisterPerRequest] // This is equivalent to AddScoped
public class OrderProcessor : IOrderProcessor { /* ... */ }

The Reflection Concern

When introducing a new library into a project, especially one that uses reflection, it’s natural to have concerns about performance.

The key point to understand is that Agoda.IoC primarily uses reflection during application startup, not during runtime execution. Here’s how it breaks down:

  1. Startup Time: Agoda.IoC scans the specified assemblies for classes with registration attributes. This process happens once during application startup (sketched below).
  2. Runtime: Once services are registered, resolving dependencies uses the same mechanisms as the built-in .NET Core DI container. There’s no additional reflection overhead during normal application execution.
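
Conceptually, the startup scan boils down to a single reflection pass like the sketch below. This is a simplified illustration of the idea, not the library’s actual implementation, and the attribute type name is assumed from the [RegisterSingleton] usage above:

using System.Linq;
using System.Reflection;
using Microsoft.Extensions.DependencyInjection;

public static class StartupScanSketch
{
    public static void ScanAndRegister(IServiceCollection services, Assembly assembly)
    {
        foreach (var type in assembly.GetTypes())
        {
            // Reflection happens here, once at startup. Resolution later
            // goes through the standard container with no extra cost.
            if (type.GetCustomAttribute<RegisterSingletonAttribute>() != null)
            {
                var serviceType = type.GetInterfaces().FirstOrDefault() ?? type;
                services.AddSingleton(serviceType, type);
            }
        }
    }
}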

Agoda.IoC.Generator: Enhancing Performance with Source Generators

While the reflection-based approach of Agoda.IoC is performant for most scenarios, we understand that some projects, especially those targeting AOT (Ahead-of-Time) compilation, may require alternatives. This is where Agoda.IoC.Generator comes into play.

To use Agoda.IoC.Generator, you simply need to add it to your project alongside Agoda.IoC. The source generator will automatically detect the Agoda.IoC attributes and generate the appropriate registration code.
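
Assuming the package is published under the name used in this post, installation follows the same pattern as the main package:

dotnet add package Agoda.IoC.Generator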

By offering both reflection-based and source generator-based solutions, we ensure that Agoda.IoC can meet the needs of a wide range of projects, from traditional JIT-compiled applications to those requiring AOT compilation.

Conclusion: Embracing Agoda.IoC for Scalable Dependency Injection

As we’ve explored throughout this blog post, dependency injection is a crucial technique for building maintainable and scalable applications. However, as projects grow in size and complexity, traditional DI approaches can become unwieldy. This is where Agoda.IoC steps in.

Let’s recap the key benefits of Agoda.IoC:

  1. Decentralized Configuration: By moving service registration to attributes on the classes themselves, Agoda.IoC eliminates the need for a centralized configuration file. This reduces merge conflicts and makes it easier to understand the lifecycle of each service.
  2. Improved Code Organization: With Agoda.IoC, the registration details are right where they belong – with the service implementations. This improves code readability and maintainability.
  3. Flexibility: From basic registrations to more complex scenarios like keyed services and open generics, Agoda.IoC provides the flexibility to handle a wide range of dependency injection needs.
  4. Testing Support: The mocked mode feature makes it easier to write and run unit tests, allowing you to easily swap out real implementations for mocks.
  5. Performance: Despite using reflection, Agoda.IoC is designed to be performant, with minimal impact on startup time and runtime performance. And for scenarios requiring AOT compilation, Agoda.IoC.Generator provides a source generator-based alternative.
  6. Scalability: As your project grows from a small application to a large, complex system, Agoda.IoC scales with you, maintaining clean and manageable dependency registration.

At Agoda, we’ve successfully used this library to manage dependency injection in our large-scale applications, handling thousands of services across a team of hundreds of developers. It has significantly reduced the friction in our development process and helped us maintain a clean, understandable codebase even as our systems have grown.

Of course, like any tool, Agoda.IoC isn’t a silver bullet. It’s important to understand your project’s specific needs and constraints. For some smaller projects, the built-in DI container in .NET might be sufficient. For others, especially larger, more complex applications, Agoda.IoC can provide substantial benefits.

We encourage you to give Agoda.IoC a try in your projects. Start with the basic features, and as you become more comfortable, explore the advanced capabilities like keyed registration and collection registration. We believe you’ll find, as we have, that it makes managing dependencies in large projects significantly easier and more maintainable.

In the end, the goal of any development tool or practice is to make our lives as developers easier and our code better. We believe Agoda.IoC does just that for dependency injection in .NET applications. We hope you’ll find it as useful in your projects as we have in ours.

The Product Engineering Mindset: Bridging Technology and Business

In our previous posts, we explored the evolution of software development and the core principles of product engineering. Today, we’re diving into the product engineering mindset – the set of attitudes and approaches that define successful product engineers. This mindset is what truly sets product engineering apart from traditional software development roles.

The T-Shaped Professional

At the heart of the mindset is the concept of the T-shaped professional. This term, popularized by IDEO CEO Tim Brown, describes individuals who have deep expertise in one area (the vertical bar of the T) coupled with a broad understanding of other related fields (the horizontal bar of the T).

For engineers, the vertical bar typically represents their technical skills – be it front-end development, back-end systems, data engineering, or any other specific domain. The horizontal bar, however, is what truly defines this mindset. It includes:

  1. Understanding of user experience and design principles
  2. Knowledge of business models and metrics
  3. Familiarity with product management concepts
  4. Basic understanding of data analysis and interpretation
  5. Awareness of market trends and competitive landscape

This T-shaped skillset allows these engineers to collaborate effectively across disciplines, make informed decisions, and understand the broader impact of their work.

Customer-Centric Thinking

At the heart of product engineering lies a fundamental principle: an unwavering focus on the customer. Product engineers don’t just build features; they solve real problems for real people. This customer-centric approach permeates every aspect of their work, from initial concept to final implementation and beyond.

Central to this mindset is empathy – the ability to understand and share the feelings of another. This means going beyond surface-level user requirements to truly comprehend the user’s context, needs, and pain points. It’s about putting yourself in the user’s shoes, understanding their frustrations, their goals, and the environment in which they use your product.

Curiosity is another crucial component of customer-centric thinking. Engineers are not content with surface-level understanding; they constantly ask “why?” to get to the root of problems. This curiosity drives them to dig deeper, to question assumptions, and to seek out the underlying causes of user behavior and preferences.

For example, if users aren’t engaging with a particular feature, a curious engineer won’t simply accept this at face value. They’ll ask: Why aren’t users engaging? Is the feature difficult to find? Is it not solving the problem it was intended to solve? Is there a more fundamental issue that we haven’t addressed? This relentless curiosity leads to deeper insights and more effective solutions.

Observation is the third pillar of customer-centric thinking. Engineers pay close attention to how users actually interact with their products, not just how they’re expected to. This often involves going beyond analytics and user feedback to engage in direct observation and user testing.

Consider an engineer working on an e-commerce platform. They might set up user testing sessions where they observe customers navigating the site, making purchases, and encountering obstacles. They might analyze heatmaps and user flows to understand where customers are dropping off or getting confused. They might even use techniques like contextual inquiry, observing users in their natural environments to understand how the product fits into their daily lives.

Amazon’s “working backwards” process exemplifies this customer-centric mindset in action. Before writing a single line of code, product teams at Amazon start by writing a press release from the customer’s perspective. This press release describes the finished product, its features, and most importantly, the value it provides to the customer.

This approach forces teams to think deeply about the customer’s needs and desires from the very beginning of the product development process. It ensures that every feature is grounded in real customer value, not just technical possibilities or internal priorities.

In the end, customer-centric thinking is what transforms a good product engineer into a great one. It’s the difference between building features and creating solutions, between meeting specifications and delighting users.

Balancing Technical Skills with Business Acumen

While deep technical skills form the foundation of a product engineer’s expertise, the modern tech landscape demands a broader perspective. Today’s engineers need to bridge the gap between technology and business, understanding not just how to build products, but why they’re building them and how they fit into the larger business strategy.

This balance begins with a solid understanding of the business model. Engineers need to grasp how their company generates revenue and manages costs. This isn’t about becoming financial experts, but rather about understanding the basic mechanics of the business. For instance, an engineer at a SaaS company should understand the concepts of customer acquisition costs, lifetime value, and churn rate. They should know whether the company operates on a freemium model, enterprise sales, or something in between. This understanding helps engineers make informed decisions about where to invest their time and effort, aligning their technical work with the company’s financial goals.

Equally important is a grasp of key performance indicators (KPIs) and how engineering decisions impact these metrics. Different businesses will have different KPIs, but common examples include user acquisition, retention rates, conversion rates, and average revenue per user. Engineers need to understand which metrics matter most to their business and how their work can move the needle on these KPIs.

At Airbnb, for example, engineers don’t just focus on building a fast and reliable booking system. They understand how factors like booking conversion rate, host retention, and customer lifetime value impact the company’s success. This knowledge informs their technical decisions, ensuring that their work aligns with and supports the company’s broader goals.

Awareness of market dynamics is another crucial aspect of business acumen for engineers. This involves understanding who the competitors are, what they’re doing, and how the market is evolving. Engineers should have a sense of where their product fits in the competitive landscape and what sets it apart.

This market awareness also extends to understanding broader industry trends that might impact the product. For instance, an engineer working on a mobile app needs to be aware of trends in mobile technology, changes in app store policies, and shifts in user behavior. This knowledge helps them anticipate challenges and opportunities, informing both short-term decisions and long-term strategy.

Consider an engineer at a streaming service like Netflix. They need to be aware of not just direct competitors in the streaming space, but also broader trends in entertainment consumption. Understanding the rise of short-form video content on platforms like TikTok, for example, might inform decisions about feature and infrastructure development or content recommendation algorithms.

Balancing technical skills with business acumen doesn’t mean that engineers need to become business experts. Rather, it’s about developing enough understanding to make informed decisions and communicate effectively with business stakeholders.

Developing this business acumen is an ongoing process. It involves curiosity about the broader context of one’s work, a willingness to engage with non-technical stakeholders, and a commitment to understanding the “why” behind product decisions.

Embracing Uncertainty and Learning

The product engineering mindset is characterized by a unique comfort with uncertainty and an unwavering commitment to continuous learning. In the fast-paced world of technology, where change is the only constant, this mindset is not just beneficial—it’s essential for success.

At the heart of this mindset is a willingness to experiment. Engineers understand that innovation often comes from trying new approaches, even when the outcome is uncertain. They view each project not just as a task to be completed, but as an opportunity to explore and learn. This experimental approach extends beyond just trying new technologies; it encompasses new methodologies, team structures, and problem-solving techniques.

Crucially, these engineers see both successes and failures as valuable learning experiences. When an experiment succeeds, they analyze what went right and how to replicate that success. When it fails, they don’t see it as a setback, but as a rich source of information. They ask: What didn’t work? Why? What can we learn from this? This resilience in the face of failure, coupled with a curiosity to understand and learn from it, is a hallmark of the product engineering mindset.

Data-driven decision making is another key aspect of this mindset. Product engineers don’t rely on hunches or assumptions; they seek out data to inform their choices. This might involve A/B testing different features, analyzing user behavior metrics, or conducting performance benchmarks. They’re comfortable with analytics tools and basic statistical concepts, using these to derive insights that guide their work.

However, they also understand the limitations of data. They know that not everything can be quantified and that sometimes, especially when innovating, there may not be historical data to rely on. In these cases, they balance data with intuition and experience. They’re not paralyzed by a lack of complete information but are willing to make informed judgments when necessary.

Spotify’s “fail fast” culture exemplifies this mindset in action. Engineers are encouraged to experiment with new ideas, measure the results, and quickly iterate or pivot based on what they learn. This approach not only leads to innovative solutions but also creates an environment where learning is valued and uncertainty is seen as an opportunity rather than a threat.

Collaborative Problem-Solving

Product engineers don’t work in silos. The complexity of modern software products demands a collaborative approach, where diverse perspectives and skill sets come together to create solutions. Product engineers collaborate closely with designers, product managers, data scientists, and other stakeholders, each bringing their unique expertise to the table.

Teamwork is another crucial aspect of collaborative problem-solving. Engineers must be willing to share their ideas openly, knowing that exposure to different viewpoints can refine and improve their initial concepts. They need to be open to feedback, seeing it not as criticism but as an opportunity for growth and improvement. At the same time, they should be ready to offer constructive feedback to others, always keeping the common goal in mind. This give-and-take of ideas, when done in a spirit of mutual respect and shared purpose, can lead to breakthroughs that no single individual could have achieved alone.

Often, these engineers find themselves in the role of facilitator, especially when it comes to technical decisions that impact the broader product strategy. They may need to guide discussions, helping the team navigate complex technical tradeoffs while considering business and user experience implications. This requires not just technical knowledge, but also the ability to listen actively, synthesize different viewpoints, and guide the team towards consensus. It’s about finding the delicate balance between driving decisions and ensuring all voices are heard.

At Google, this collaborative mindset is embodied in their design sprint process. In these intensive, time-boxed sessions, cross-functional teams come together to tackle complex problems. Engineers work side-by-side with designers, product managers, and other stakeholders, rapidly prototyping and testing ideas. This process not only leads to innovative solutions but also builds stronger, more cohesive teams.

Conclusion

The product engineering mindset is about much more than coding skills. It’s about understanding the bigger picture, taking ownership of outcomes, focusing relentlessly on user needs, and working collaboratively to solve complex problems.

Developing this mindset is a journey. It requires curiosity, empathy, and a willingness to step outside the comfort zone of pure technical work. But for those who embrace it, this mindset opens up new opportunities to create meaningful impact and drive innovation.

In our next post, we’ll dive into the specific skills that product engineers need to cultivate to be successful in their roles. We’ll explore both technical and non-technical skills that are crucial in the world of product engineering.

What aspects of the product engineering mindset resonate with you? How have you seen this mindset impact product development in your organization? Share your thoughts and experiences in the comments below!

Understanding Product Engineering: A New Paradigm in Software Development

In our previous post, we explored how the software development landscape is rapidly changing and why traditional methods are becoming less effective. Today, we’re diving deep into the concept of product engineering – a paradigm shift that’s reshaping how we approach software development.

What is Product Engineering?

At its core, product engineering is a holistic approach to software development that combines technical expertise with a deep understanding of user needs and business goals. It’s not just about writing code or delivering features; it’s about creating products that solve real problems and provide tangible value to users.

Product engineering teams are cross-functional, typically including software engineers, designers, product managers, and sometimes data scientists or other specialists. These teams work collaboratively, with each member bringing their unique perspective to the table.

The Purpose of Product Engineering

1. Innovating on Behalf of the Customer

The primary purpose of product engineering is to innovate on behalf of the customer. This means going beyond simply fulfilling feature requests or specifications. Instead, product engineers strive to deeply understand the problems customers face and develop innovative solutions – sometimes before customers even realize they need them.

For example, when Amazon introduced 1-Click ordering in the late 1990s, they weren’t responding to a specific customer request. Instead, they identified a pain point in the online shopping experience (the tedious checkout process) and innovated a solution that dramatically improved user experience.

2. Building Uncompromisingly High-Quality Products

Teams are committed to building high-quality products that customers love to use. This goes beyond just ensuring that the code works correctly. It encompasses:

  • Performance: Ensuring the product is fast and responsive
  • Reliability: Building systems that are stable and dependable
  • User Experience: Creating intuitive, enjoyable interfaces
  • Scalability: Designing systems that can grow with user demand

Take Spotify as an example. Their product engineering teams don’t just focus on adding new features. They continually work on improving streaming quality, reducing latency, and enhancing the user interface – all elements that contribute to a high-quality product that keeps users coming back.

3. Driving the Business

While product engineering is customer-centric, it also plays a crucial role in driving business success. Engineers need to understand the business model and how their work contributes to key performance indicators (KPIs).

For instance, at Agoda, a travel booking platform, teams might focus on metrics like “Incremental Bookings per Day” in the booking funnel or “Activations” in the Accommodation Supply side. These metrics directly tie to business success while also reflecting improvements in the customer experience.

Key Principles of Product Engineering

1. Problem-Solving Over Feature Building

Teams focus on solving problems rather than just building features. Instead of working from a list of specifications, they start with a problem statement. For example, rather than “Build feature X to specification Y,” a product engineering team might tackle “We don’t have a good enough conversion rate on our booking funnel.”

This approach allows for more creative solutions and ensures that the team’s efforts are always aligned with real user needs and business goals.

2. Cross-Functional Collaboration

Teams are enabled with all the expertise needed to solve the problem at hand. This might include UX designers, security experts, or even legacy system specialists, depending on the project’s needs.

This cross-functional collaboration ensures that all aspects of the product – from its technical architecture to its user interface – are considered from the start, leading to more cohesive and effective solutions.

3. Ownership of Results

Teams take ownership of the results, not just the delivery of features. If a change doesn’t increase conversion rates or solve the intended problem, it’s up to the team to iterate and improve until they achieve the desired results.

This shift from being judged on feature delivery to business results can be challenging for engineers used to traditional methods. As one engineer put it, “It was easier before when I just had to deliver 22 story points. Now you expect me to deliver business results?” However, this ownership leads to more impactful work and a deeper sense of satisfaction when real improvements are achieved.

The Shift from Feature Factories to Problem-Solving Teams

Traditional software development often operates like a “feature factory.” Requirements come in, code goes out, and success is measured by how many features are delivered to specification. This approach can lead to bloated software with features that aren’t used or don’t provide real value. Remember the statistic that 37% of software is rarely or never used? That’s how companies get to that number.

Product engineering turns this model on its head. Teams are given problems to solve rather than features to build. They have the autonomy to explore different solutions, run experiments, and iterate based on real-world feedback. Success is measured not by features delivered, but by problems solved and value created for users and the business.

Conclusion

Product engineering represents a fundamental shift in how we approach software development. By focusing on customer needs, maintaining a commitment to quality, and aligning closely with business goals, teams are able to create software that truly makes a difference.

In our next post, we’ll explore the mindset required for successful product engineering. We’ll discuss the concept of T-shaped professionals and the balance of technical skills with business acumen that characterizes great product engineers.

What’s your experience with product engineering? Have you seen this approach in action in your organization? Share your thoughts and experiences in the comments below!

The Evolution of Product Engineering: Adapting to a Rapidly Changing World

In today’s fast-paced digital landscape, the way we approach software development is undergoing a significant transformation. As a product engineer with decades of experience in the field, I’ve witnessed firsthand the shift from traditional methodologies to a more dynamic, customer-centric approach. This blog post, the first in our series on Product Engineering, will explore this evolution and why it’s crucial for modern businesses to adapt.

The Changing Landscape of Software Development

Remember the days when software projects followed rigid, long-term plans? When we’d spend months mapping out every detail, holding stakeholder meetings and weeks of design reviews to architect a massive new system, all before writing a single line of code? Well, it’s becoming increasingly clear that this approach is no longer sufficient in our rapidly evolving digital world.

The reality is that by the time we finish implementing software based on these detailed plans, the world has often moved on. Our assumptions become outdated, and our solutions may no longer fit the problem at hand. As Mike Tyson put it, “Everyone has a plan until they get punched in the mouth.” In software development, that punch often comes in the form of changing market conditions, disruptive technologies, or shifts in user behavior.

The Pitfalls of Traditional Methods

Let’s consider a real-world example. The finance industry has been turned on its head by small, agile fintech startups. Traditional banks, confident in their market position, initially dismissed these newcomers, thinking, “They aren’t stealing our core market.” But before they knew it, these startups were nibbling away at their core business. By the time the banks started planning their response, it was often too late – they were too slow to adapt.

PayPal and Square, for example, revolutionized online and mobile payments. While banks were still relying on traditional credit card systems, these startups made it easy for individuals and small businesses to accept payments digitally. By the time banks caught up, PayPal had become a household name, processing over $936 billion in payments in 2020.

Robinhood, likewise, disrupted the investment world by offering commission-free trades and fractional shares, making investing accessible to a new generation. Established brokerages were forced to eliminate trading fees to compete, significantly impacting their revenue models.

This scenario isn’t unique to finance. Across industries, we’re seeing that the old ways of developing software – with long planning cycles and rigid roadmaps – are becoming less effective. In fact, a staggering statistic reveals that 37% of software in large corporations is rarely or never used. Think about that for a moment. We constantly hear about the scarcity of engineering talent, yet more than a third of the software we produce doesn’t provide value. Clearly, something needs to change.

The Rise of Product Engineering

Enter product engineering – an approach that’s gaining traction among the most innovative companies in the world. But what sets companies like Spotify, Amazon, and Airbnb apart? Why do they consistently build software that we love to use?

The answer lies in their approach to product development. These companies understand a fundamental truth that Steve Jobs articulated so well: “A lot of times, people don’t know what they want until you show it to them.” Henry Ford made the same point long before: “If I had asked people what they wanted, they would have said faster horses.”

Product engineering isn’t about blindly following customer requests or building features that someone thinks people want. It’s about deeply understanding customer problems and innovating on their behalf. It’s about creating solutions that customers might not even realize they need – yet come to love.

The Need for a New Approach

In the traditional models many companies have built, engineers are often isolated from the product side of things. They’re told to focus solely on coding – “go code, do what you are good at” – while the organization protects this precious engineering resource from being disturbed by non-engineering concerns, on the assumption that someone else will worry about whether the product actually enhances the customer’s life or gets used at all.

This leads to what I call the “feature factory” – a system where engineers are fed requirements through tools like Jira, expected to churn out code, and measured solely on their ability to deliver features to specification. The dreaded term “pixel perfect” comes to mind. But this approach misses a crucial point: the true measure of our work isn’t in the features we ship, but in the value we create for our customers and our business.

Product engineering flips this model on its head. It brings engineers into the heart of the product development process, encouraging them to think deeply about the problems they’re solving and the impact of their work. It’s about creating cross-functional teams that are empowered to make decisions, experiment, and iterate quickly based on real-world feedback.

Looking Ahead

As we dive deeper into this series on Product Engineering, we’ll explore the specific skills, mindsets, and practices that define this approach. We’ll look at how to build empowered, cross-functional teams, how to make decisions in the face of uncertainty, and how to measure success in ways that truly matter.

The evolution of product engineering isn’t just a trend – it’s a necessary adaptation to the realities of modern software development. By embracing this approach, we can create better products, reduce waste, and ultimately deliver more value to our customers and our businesses.

Stay tuned for our next post, where we’ll dive deeper into what exactly makes a product engineering team tick.

What’s your experience with traditional software development versus more modern, product-focused approaches? Share your thoughts in the comments below!