Building Awesome Teams: An Engineering Manager’s Guide

Welcome to our comprehensive series on the art and science of building exceptional engineering teams. As we embark on this journey together, let’s start with a fundamental question that lies at the heart of engineering leadership:

“What is the job of an Engineering Manager?”

If you were to ask me this question, my answer would be simple yet profound: “Building Awesome Teams.”

This straightforward statement encapsulates the essence of engineering management, a role that extends beyond technical oversight or project management. Building awesome teams is a continuous process that begins the moment you start hiring and continues through every stage of an engineer’s journey with your team, right up to and including the point when they move on to new challenges.

Tech: A People Problem in Disguise

One of the most crucial insights that any engineering leader can gain is this: Tech is, first and foremost, a people problem. The sooner you realize this truth, the sooner you’ll start winning at tech.

Yes, we work with complex systems, intricate code, and cutting-edge technologies. But at the end of the day, it’s people who write the code, design the systems, and push the boundaries of what’s possible. It’s people who collaborate, innovate, and turn ideas into reality. And it’s people who can make or break a project, a product, or even an entire company.

Exploring the Art of Team Building

In this series, we’ll dive deep into all aspects of building awesome teams. We’ll cover topics such as:

  1. Hiring: How to attract, identify, and onboard the right talent for your team.
  2. Performance Management: Strategies for nurturing growth, providing feedback, and helping your team members excel.
  3. Execution: Techniques for forming effective teams, collaborating across departments, and delivering results.
  4. Managing Exits: How to handle both voluntary and involuntary departures in a way that respects individuals and maintains team morale.

Each of these topics is crucial in its own right, but they also interlink and influence each other. By mastering these areas, you’ll be well on your way to building and maintaining truly awesome teams.

Why This Matters

In the fast-paced world of technology, having a high-performing team isn’t just a nice-to-have—it’s a necessity. High-performing teams are better equipped to solve complex problems, adapt to changing circumstances, and deliver value to your organization and its customers.

Moreover, awesome teams create a positive feedback loop. They attract more great talent, inspire each other to greater heights, and create an environment where everyone can do their best work. As an engineering manager, there’s no greater satisfaction than seeing your team thrive and achieve things they never thought possible.

Join Us on This Journey

Whether you’re a seasoned engineering leader or just starting your management journey, this series has something for you. We’ll blend theoretical insights with practical advice, drawing on real-world experiences and best practices from the field.

So, are you ready to dive in and start building awesome teams? Let’s begin this exciting journey together!

Stay tuned for our first installment in the series!

7 Golden Rules for Library Development: Ensuring Stability and Reliability

As software engineers, we often rely on libraries to streamline our development process and enhance our applications. However, creating and maintaining a library comes with great responsibility. In this post, we’ll explore seven essential practices that every library developer should follow to ensure their code remains stable, reliable, and user-friendly.

Before we dive in, let’s consider this famous quote from Linus Torvalds, the creator of Linux:

“WE DO NOT BREAK USERSPACE!”

This statement encapsulates a core principle of software development, especially relevant to library creators. It underscores the importance of maintaining compatibility and stability for the end-users of our code.

1. Preserve Contract Integrity: No Breaking Changes

The cardinal rule of library development is to never introduce breaking changes to your public contracts. This means:

  • Use method overloads instead of modifying existing signatures
  • Add new properties rather than altering existing ones
  • Think critically about your public interfaces before implementation

Remember, the urge to “make the code cleaner” is rarely a sufficient reason to break existing contracts. Put more effort into designing robust public interfaces from the start.

Code Examples

Let’s look at some examples in C# to illustrate how to preserve contract integrity:

using System;

// Original version
public class UserService
{
    public void CreateUser(string name, string email)
    {
        // Implementation
    }
}

// Good: Adding an overload instead of modifying the existing method
public class UserService
{
    public void CreateUser(string name, string email)
    {
        // Original implementation
    }
    
    public void CreateUser(string name, string email, int age)
    {
        // New implementation that includes age
    }
}

// Bad: Changing the signature of an existing method
public class UserService
{
    // This would break existing code
    public void CreateUser(string name, string email, int age)
    {
        // Modified implementation
    }
}

// Good: Adding a new property instead of modifying an existing one
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public DateTime CreatedAt { get; set; } = DateTime.UtcNow; // New property with a default value
}

// Better: Avoid using primitives in parameters
public class UserService
{
    public void CreateUser(User user)
    {
        // Modified implementation
    }
}

2. Maintain Functional Consistency

Contract changes are the obvious kind that most people watch for. Functional changes are subtler: they change what a user can expect from the library under a given condition. These are harder to spot, but a simple practice keeps them in check.

Functional consistency is crucial for maintaining trust with your users. To achieve this:

  • Have good test coverage
  • Only add new tests; never modify existing tests

This approach ensures that you don’t inadvertently introduce functional changes that could disrupt your users’ applications.
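One lightweight way to enforce this is to treat your existing test suite as a frozen contract. A minimal sketch in C# using xUnit (UserService, GetUser, and Age are the illustrative names from the earlier examples, not a real API):

```csharp
using Xunit;

public class UserServiceContractTests
{
    // Existing test: frozen. If a change forces you to edit this test,
    // you have changed observable behavior, not just internals.
    [Fact]
    public void CreateUser_WithNameAndEmail_CreatesUser()
    {
        var service = new UserService();
        service.CreateUser("Ada", "ada@example.com");

        Assert.NotNull(service.GetUser(1));
    }

    // New behavior gets a NEW test; the existing one above stays untouched.
    [Fact]
    public void CreateUser_WithAge_StoresAge()
    {
        var service = new UserService();
        service.CreateUser("Ada", "ada@example.com", 36);

        Assert.Equal(36, service.GetUser(1).Age);
    }
}
```

A simple code-review convention (“diffs under the test project may only add files or test methods, never modify them”) is often enough to keep this honest.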

3. Embrace the “Bug” as a Feature

Counter-intuitive as it may seem, fixing certain bugs can sometimes do more harm than good. Here’s why:

  • Users often build their code around existing behavior, including bugs
  • Changing this behavior, even if it’s “incorrect,” can break dependent systems, and cause more problems than you fix

Unless you can fix a bug without modifying existing tests, it’s often safer to leave it be and document the behavior thoroughly.

4. Default to Non-Public Access: Minimize Your Public API Surface

When developing libraries, it’s crucial to be intentional about what you expose to users. A good rule of thumb is to default to non-public access for all elements of your library. This approach offers several significant benefits for both library maintainers and users.

Firstly, minimizing your public API surface provides you with greater flexibility for future changes. The less you expose publicly, the more room you have to make internal modifications without breaking compatibility for your users. This flexibility is invaluable as your library evolves over time.

Secondly, a smaller public API reduces your long-term maintenance burden. Every public API element represents a commitment to long-term support. By keeping this surface area minimal, you effectively decrease your future workload and the potential for introducing breaking changes.

Lastly, a more focused public API often results in a clearer, more understandable interface for your users. When users aren’t overwhelmed with unnecessary public methods or properties, they can more easily grasp the core functionality of your library and use it effectively.

To implement this principle effectively, consider separating your public interfaces and contracts into a distinct area of your codebase, or even into a separate project or library. This separation makes it easier to manage and maintain your public API over time.

Once an element becomes part of your public API, treat it as a long-term commitment. Any changes to public elements should be thoroughly considered and, ideally, avoided if they would break existing user code. This careful approach helps maintain trust with your users and ensures the stability of projects that depend on your library.

In languages that support various access modifiers, use them judiciously. Employ ‘internal’, ‘protected’, or ‘private’ modifiers liberally, reserving ‘public’ only for those elements that are explicitly part of your library’s interface. This practice helps enforce the principle of information hiding and gives you more control over your library’s evolution.

For the elements you do make public, provide comprehensive documentation. Thorough documentation helps users understand the intended use of your API and can prevent misuse that might lead to dependency on unintended behavior.

Consider the following C# example:

// Public API - in a separate file or project
public interface IUserService
{
    User CreateUser(string name, string email);
    User GetUser(int id);
}

// Implementation - in the main library project
internal class UserService : IUserService
{
    public User CreateUser(string name, string email)
    {
        // Implementation
    }

    public User GetUser(int id)
    {
        // Implementation
    }

    // Internal helper method - can be changed without affecting public API
    internal void ValidateUserData(string name, string email)
    {
        // Implementation
    }
}

In this example, only the IUserService interface is public. The actual implementation (UserService) and its helper methods are internal, providing you with the freedom to modify them as needed without breaking user code.

Remember, anything you make public becomes part of your contract with users. By keeping your public API surface as small as possible, you maintain the maximum flexibility to evolve your library over time while ensuring stability for your users. This approach embodies the spirit of Linus Torvalds’ mandate: “WE DO NOT BREAK USERSPACE!” It allows you to respect your users’ time and effort by providing a stable, reliable foundation for their projects.

5. Avoid Transitive Dependencies: Empower Users with Flexibility

An often overlooked aspect of library design is the management of dependencies. While it’s tempting to include powerful third-party libraries to enhance your functionality, doing so can lead to unforeseen complications for your users. Instead, strive to minimize the transitive dependencies you impose on consumers and provide mechanisms for users to wire in their own implementations. This approach not only reduces potential conflicts but also increases the flexibility and longevity of your library.

Consider a scenario where your library includes functions for pretty-printing output. Rather than hardcoding a dependency on a specific logging or formatting library, design your interface to accept generic logging or formatting functions. This allows users to integrate your library seamlessly with their existing tools and preferences.

Here’s an example of how you might implement this in C#:

// Instead of this:
public class PrettyPrinter
{
    private readonly ILogger _logger;

    public PrettyPrinter()
    {
        _logger = new SpecificLogger(); // Forcing a specific implementation
    }

    public void Print(string message)
    {
        var formattedMessage = FormatMessage(message);
        _logger.Log(formattedMessage);
    }
}

// Do this:
public class PrettyPrinter
{
    private readonly Action<string> _logAction;

    public PrettyPrinter(Action<string> logAction)
    {
        _logAction = logAction ?? throw new ArgumentNullException(nameof(logAction));
    }

    public void Print(string message)
    {
        var formattedMessage = FormatMessage(message);
        _logAction(formattedMessage);
    }
}
// Or this: (When the framework has support for generic implementations like logging and DI)
public class PrettyPrinter
{
    private readonly ILogger _logger;

    public PrettyPrinter(ILogger<PrettyPrinter> logger)
    {
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    }

    public void Print(string message)
    {
        var formattedMessage = FormatMessage(message);
        _logger.LogInformation(formattedMessage);
    }
}

In the improved version, users can provide their own logging function, which could be from any logging framework they prefer or even a custom implementation. This approach offers several benefits:

  1. Flexibility: Users aren’t forced to adopt a logging framework they may not want or need.
  2. Reduced Conflicts: By not including a specific logging library, you avoid potential version conflicts with other libraries or the user’s own code.
  3. Testability: It becomes easier to unit test your library without needing to mock specific third-party dependencies.
  4. Future-proofing: Your library remains compatible even if the user decides to change their logging implementation in the future.

This principle extends beyond just logging. Apply it to any functionality where users might reasonably want to use their own implementations. Database access, HTTP clients, serialization libraries – all of these are candidates for this pattern.

By allowing users to wire in their own dependencies, you’re not just creating a library; you’re providing a flexible tool that can adapt to a wide variety of use cases and environments. This approach aligns perfectly with our overall goal of creating stable, user-friendly libraries that stand the test of time.

You can also consider writing extension libraries that add default implementations.

For example, your base library MyLibrary doesn’t include a serializer, just the interface, and you create MyLibrary.Newtonsoft, which contains the Newtonsoft.Json serializer implementation for that interface and wires it up for the user. This gives the consumer the convenience of an optional default, with the flexibility to change it.
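Sketching that split (ISerializer and the package layout here are illustrative, not real packages):

```csharp
// In MyLibrary: the core package ships only the abstraction.
public interface ISerializer
{
    string Serialize<T>(T value);
    T Deserialize<T>(string json);
}

// In MyLibrary.Newtonsoft: a separate, optional package that takes the
// Newtonsoft.Json dependency so the core library doesn't have to.
public class NewtonsoftSerializer : ISerializer
{
    public string Serialize<T>(T value) =>
        Newtonsoft.Json.JsonConvert.SerializeObject(value);

    public T Deserialize<T>(string json) =>
        Newtonsoft.Json.JsonConvert.DeserializeObject<T>(json);
}
```

Users who want a different serializer simply skip the add-on package and implement ISerializer themselves; users who don’t care get a working default with one extra package reference.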

6. Target Minimal Required Versions: Maximize Compatibility

When developing a library, it’s tempting to use the latest features of a programming language or framework. However, this approach can significantly limit your library’s usability. A crucial principle in library development is to target the minimal required version of your language or framework that supports the features you need.

By targeting older, stable versions, you ensure that your library can be used by a wider range of projects. Many development teams, especially in enterprise environments, cannot always upgrade to the latest versions due to various constraints. By supporting older versions, you make your library accessible to these teams as well.

Here are some key considerations:

  1. Assess Your Requirements: Carefully evaluate which language or framework features are truly necessary for your library. Often, you can achieve your goals without the newest features.
  2. Research Adoption Rates: Look into the adoption rates of different versions of your target language or framework. This can help you make an informed decision about which version to target.
  3. Use Conditional Compilation: If you do need to use newer features, consider using conditional compilation to provide alternative implementations for older versions.
  4. Document Minimum Requirements: Clearly state the minimum required versions in your documentation. This helps users quickly determine if your library is compatible with their project.
  5. Consider Long-Term Support (LTS) Versions: If applicable, consider targeting LTS versions of frameworks, as these are often used in enterprise environments for extended periods.

Here’s an example in C# demonstrating how you might use conditional compilation to support multiple framework versions:

public class MyLibraryClass
{
    public string ProcessData(string input)
    {
#if NETSTANDARD2_0
        // Implementation for .NET Standard 2.0
        return input.Trim().ToUpper();
#elif NETSTANDARD2_1
        // Implementation using a feature available in .NET Standard 2.1
        return input.Trim().ToUpperInvariant();
#else
        // Implementation for newer versions
        return input.Trim().ToUpperInvariant();
#endif
    }
}

In this example, we provide different implementations based on the target framework version. This allows the library to work with older versions while still taking advantage of newer features when available.

Remember, the goal is to make your library as widely usable as possible. By targeting minimal required versions, you’re ensuring that your library can be integrated into a diverse range of projects, increasing its potential user base and overall value to the developer community.

7. Internal Libraries: Freedom with Responsibility

Yes, there’s a seventh rule, but it only applies to internal libraries.

While internal libraries offer more flexibility, it’s crucial not to abuse this freedom:

  • Use tools like Sourcegraph to track usage of internal methods
  • Don’t let this capability become an excuse to ignore best practices
  • Strive to maintain the same level of stability as you would for public libraries

Remember, avoiding breaking changes altogether eliminates the need for extensive usage tracking, saving you time and effort in the long run.

Tips

  1. Set GenerateDocumentationFile to true in your csproj files; this enables a static analysis rule that errors when public methods lack XML documentation comments. Being forced to write documentation for every public method makes you ask “should this be public?”, and if the answer is yes, to think carefully about the contract.
  2. Use Analyzers: Implement custom Roslyn analyzers, ESLint rules, etc., to enforce your library’s usage patterns and catch potential misuse at compile time.
  3. Performance Matters: Include benchmarks in your test suite to catch performance regressions early. Document the performance characteristics of key operations.
  4. Version Thoughtfully: Use semantic versioning (SemVer) to communicate the nature of changes in your library: major versions are reserved for breaking changes (which you should avoid), minor versions for new features, and patches for bug fixes.
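For the first tip, the csproj change is small. CS1591 is the compiler warning for a missing XML comment on a publicly visible member; escalating it to an error makes the rule enforceable (a sketch; adapt it to your existing analyzer configuration):

```xml
<PropertyGroup>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
  <!-- Escalate "missing XML comment on publicly visible member" to an error -->
  <WarningsAsErrors>$(WarningsAsErrors);CS1591</WarningsAsErrors>
</PropertyGroup>
```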

Conclusion

Developing a library is more than just writing code; it’s about creating a tool that empowers other developers and stands the test of time. By adhering to the golden rules we’ve discussed – from preserving contract integrity to targeting minimal required versions – you’re not just building a library, you’re crafting a reliable foundation for countless projects.

Remember, every public API you expose is a promise to your users. By defaulting to non-public access, avoiding transitive dependencies, and embracing stability even in the face of “bugs,” you’re honoring that promise. You’re telling your users, “You can build on this with confidence.”

The words of Linus Torvalds, “WE DO NOT BREAK USERSPACE!”, serve as a powerful reminder of our responsibility as library developers. We’re not just writing code for ourselves; we’re creating ecosystems that others will inhabit and build upon.

As you develop your libraries, keep these principles in mind. Strive for clarity in your public interfaces, be thoughtful about dependencies, and always consider the long-term implications of your design decisions. By doing so, you’ll create libraries that are not just useful, but respected and relied upon.

In the end, the mark of a truly great library isn’t just in its functionality, but in its reliability, its adaptability, and the trust it builds with its users. By following these best practices, you’re well on your way to creating such a library. Happy coding, and may your libraries stand the test of time!

The F5 Experience (Local Setup)

In our journey to achieve the perfect F5 Experience, one of the most critical aspects is local setup. The ability to clone a repository and immediately start working, without any additional configuration, is the cornerstone of a smooth development process. In this post, we’ll explore various techniques and best practices that contribute to a zero-setup local environment.

1. .gitattributes: Ensuring Cross-Platform Compatibility

One often overlooked but crucial file is .gitattributes. This file can prevent frustrating issues when working across different operating systems, particularly between Unix-based systems and Windows.

# Set default behavior to automatically normalize line endings
* text=auto

# Explicitly declare text files you want to always be normalized and converted
# to native line endings on checkout
*.sh text eol=lf

By specifying eol=lf for shell scripts, we ensure that they maintain Unix-style line endings even when cloned on Windows. This prevents the dreaded “bad interpreter” errors that can occur when Windows-style CRLF line endings sneak into shell scripts.

2. .editorconfig: Consistent Coding Styles Across IDEs

An .editorconfig file helps maintain consistent coding styles across different editors and IDEs. This is particularly useful in teams where developers have personal preferences for their development environment.

# Top-most EditorConfig file
root = true

# Unix-style newlines with a newline ending every file
[*]
end_of_line = lf
insert_final_newline = true
charset = utf-8
indent_style = space
indent_size = 2

# 4 space indentation for Python files
[*.py]
indent_size = 4

3. IDE Run Configurations: Streamlining Scala and Kotlin Setups

For Scala and Kotlin applications, storing IntelliJ IDEA run configurations in XML format and pushing them to the repository can save significant setup time. Create a .idea/runConfigurations directory and add XML files for each run configuration:

<component name="ProjectRunConfigurationManager">
  <configuration default="false" name="MyApp" type="Application" factoryName="Application">
    <option name="MAIN_CLASS_NAME" value="com.example.MyApp" />
    <module name="myapp" />
    <method v="2">
      <option name="Make" enabled="true" />
    </method>
  </configuration>
</component>

This avoids manual configuration after cloning the repo.

4. Package Manager Configurations: Handling Private Repositories

For .NET projects, a nuget.config file can specify private NuGet servers:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
    <add key="MyPrivateRepo" value="https://nuget.example.com/v3/index.json" />
  </packageSources>
</configuration>

NuGet searches all parent folders for a nuget.config file, so place a single one high up in the repository.

Similarly, for Node.js projects, you can use .npmrc or .yarnrc files to configure private npm registries:

registry=https://registry.npmjs.org/
@myorg:registry=https://npm.example.com/

5. launchSettings.json: Configuring .NET Core Apps

While launchSettings.json is often in .gitignore, including it can provide a consistent run configuration for .NET Core applications:

{
  "profiles": {
    "MyApp": {
      "commandName": "Project",
      "launchBrowser": true,
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

6. Stick to Native Commands

While it’s tempting to create custom scripts to automate setup, this can lead to a proliferation of project-specific commands that new team members must learn. It can also introduce dependencies not just on tools being installed, but on platform-specific scripts (bash on Windows vs. Linux vs. macOS, the version of Python, etc.). Instead, stick to well-known, native commands when possible:

  • For .NET: dotnet build, dotnet run, dotnet test
  • For Node.js: npm run dev, npm test
  • For Scala: sbt run, sbt test

This approach reduces the learning curve for new contributors and maintains consistency across projects.

7. Gitpod for Casual Contributors

While experienced developers often prefer local setups, Gitpod can be an excellent option for casual contributors. It provides a cloud-based development environment that can be spun up with a single click. Consider adding a Gitpod configuration to your repository for quick contributions and code reviews.
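A minimal .gitpod.yml sketch for a .NET project (the commands and port here are illustrative; adjust them to your stack):

```yaml
# .gitpod.yml
tasks:
  - init: dotnet restore    # runs once when the workspace is first created
    command: dotnet run     # runs on every workspace start
ports:
  - port: 5000
    onOpen: open-preview    # show the running app automatically
```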

8. README.md: The Canary in the Coal Mine

Your README.md should be concise and focused on getting started. If your README contains more than a few simple steps to get the project running, it’s a sign that your setup process needs optimization.

An ideal README might look like this:

# MyAwesomeProject

## Getting Started

1. Clone the repository
2. Open in your preferred IDE

That's it! If you need to do anything more than this, please open an issue so we can improve our setup process.

Conclusion

Achieving a seamless F5 Experience for local setup is an ongoing process. By implementing these practices, you can significantly reduce the friction for new contributors and ensure a consistent development experience across your team.

Remember, the goal is to make the setup process as close to zero as possible. Every step you can eliminate or automate is a win for developer productivity and satisfaction.

In our next post, we’ll dive into optimizing the build and compile process to further enhance the F5 Experience. Stay tuned!

The F5 Experience (Speed)

“The F5 Experience” is a term I’ve been using for years; I originally learned it from a consultant I worked with at Readify years ago.

Back then we were working a lot on .NET, and Visual Studio was the go-to IDE. In Visual Studio, the button you press to debug was “F5”. So we used to ask the question:

“Can we git clone, and then just press F5 (Debug), and the application works locally?”

And also:

“What happens after this? Is it fast?”

So there are 2 parts to the F5 Experience really:

  1. Setup (is it Zero)
  2. Debug (is it fast)

Let’s start with the second part of the problem statement and what work we’ve done there.

Is it fast to build?

This is the first question we asked, so let’s measure compile time locally.

We’ve had devs report that things are slow, but it’s hard to know anecdotally because you don’t know in practice how often people need to clean build vs. incremental build vs. hot reload, and this can make a big difference.

For example, if you measure the three, and just for example’s sake they measure:

  • Clean: 25 minutes
  • Incremental: 30 seconds
  • Hot reload: 1 second

You might think, this is fine because it’s highly unlikely people need to clean build, right?

Wrong. The first step in troubleshooting any compilation error is “clean build it”, then try something else. Also, dependency updates can invalidate caches and trigger recalculation and re-downloading of some dependencies; with some package managers, this can take a long time. On top of this, you have your IDE reindexing, which can also take a long time in some languages. I still have bad memories of seeing IntelliJ with a 2 hr+ in-progress counter on some of our larger Scala projects years ago.

So you need to measure these to understand what the experience is actually like; otherwise, it’s just subjective opinions and guesswork. And if it is serious, solving this can have big impacts on velocity, especially if you have a large number of engineers working on a project.

How do we do this?

Most compilers have the ability to add plugins or something of the sort to enable this. We created a series of libraries for this. Here are the open-source ones for .NET and webpack/vite:

I’ll use the .NET one as an example because it’s our most mature one for backend, then go into what differences we have on client-side systems like webpack and vite later in another post.

So after adding this, we now have data in Hadoop for local compilation time for our projects.

And it was “amazing”; even for our legacy projects, it was showing 20-30 seconds, which I couldn’t believe. So I went to talk to one of our engineers and sat down and asked him:

“From when you push debug in your IDE to when the browser pops up and you can check your changes, does it take 20-30 seconds?”

He laughed.

He said it’s at least 4-5 minutes.

So we dug in a bit more. .NET has really good compilation times if you have a well-laid-out, small project structure, and that’s what we were reporting. After it’s finished compiling the code, though, it has to start the web server, and sometimes this takes time, especially for a large monolithic application that is optimized for production. In production, we do things like prewarming caches with large amounts of data. In his case, there wasn’t any mocking or optimization done for local; it just connects to a QA server that, while having less data than production, still has enough to have a huge impact. On top of this, add remote/hybrid work, where you’re downloading all of this over a VPN, and boom! Your startup time goes through the roof.

So what can we do? Measure this too, of course.

Let’s look a little bit at the web server lifecycle in .NET though (it’s pretty similar in other platforms):

The thread will hang on app.Run() until the web server stops; however, the web server itself has lifecycle hooks we can use. In .NET’s case, HostApplicationLifetime has an OnStarted event. So we can handle this.
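As a sketch, hooking that event in a minimal ASP.NET Core app might look like this (the timing and console reporting are illustrative; a real implementation would ship the measurement to your telemetry pipeline):

```csharp
using System;
using System.Diagnostics;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var startupTimer = Stopwatch.StartNew();

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

var lifetime = app.Services.GetRequiredService<IHostApplicationLifetime>();
lifetime.ApplicationStarted.Register(() =>
{
    // Fires once the web server is listening - but note the DI graph
    // and caches may not be warm until the first request completes.
    Console.WriteLine($"Server started after {startupTimer.Elapsed}");
});

app.Run(); // blocks here until the host shuts down
```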

However, the web browser may have “popped up” while the page is still loading. This is because if you don’t initialize the DI dependencies of an HTTP controller before app.Run(), they will be initialized the first time the page is accessed.

So we need another measurement to complete the loop, which is:

“The time of first HTTP Request completion after startup”

This will give us the full loop of “Press F5 (Debug)” to “ready to check” on my local.

To do this, we need some middleware, which is in the .NET library mentioned above as well.
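The idea behind such middleware can be sketched like this (illustrative, not the actual library code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class FirstRequestTimingMiddleware
{
    private static readonly DateTime StartedAtUtc = DateTime.UtcNow;
    private static int _firstRequestSeen; // 0 until the first request completes

    private readonly RequestDelegate _next;

    public FirstRequestTimingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        await _next(context);

        // Record only the first completed request after startup.
        if (Interlocked.Exchange(ref _firstRequestSeen, 1) == 0)
        {
            var elapsed = DateTime.UtcNow - StartedAtUtc;
            Console.WriteLine($"First request completed {elapsed} after startup");
        }
    }
}

// Registered early in the pipeline, e.g.:
// app.UseMiddleware<FirstRequestTimingMiddleware>();
```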

So now we have the full loop; let’s look at some data we collected:

Here’s one of our systems that takes 2-3 minutes to start up on an average day. We saw an even higher number, 3 min+, for the first request, for a total waiting time of about 5 minutes. So we started to dig into why.

Earlier I mentioned the web browser “popping up”; that’s Visual Studio’s behavior. Most of our engineers use Rider (or other JetBrains IDEs, depending on their platform). When we looked into it, we found it wasn’t a huge load time for the first request; that was only taking about 20 seconds. What we found is that because JetBrains IDEs depend on the user opening the browser themselves, developers were opening the browser minutes after the app was ready. But why weren’t they opening it straight away? What was this other delay?

We were actually capturing another data point that proved valuable: the time engineers spend context switching. Because they know the build will take a few minutes, they go off and do something else.

The longer the compile and startup time, the longer they context switch (and the bigger the tasks they take on while waiting). It starts with checking email and Slack and escalates to going to get a coffee.

On some repos, we saw extreme examples of 15 to 20 min average for developers opening browsers on some days when the compile and startup time gets high. Probably a busy coffee machine on this day! 🙂

We had a look at some of our other repos that were faster:

In this one, we see that the startup is about 20-30 seconds (including compile time). The first request does take some time (we measured 5-10 seconds), but we are seeing about 30 seconds for the devs, so it’s unlikely they are context switching a lot.

We dug into this number some more though. We found most of the system owners weren’t context switching; they were waiting.

The people that were context switching were the contributors from other areas. We contacted a few of them to understand why. And they told us:

“I honestly didn’t expect it to be that fast, so after pressing debug, I would go make a coffee or do something else.”

To curb this behavior, we found that you can configure Rider to pop up the browser automatically. Doing this interrupts the devs’ context switch, so they learn it’s fast and hopefully change their behavior.

Conclusion

The F5 Experience highlights a critical aspect of developer productivity that often goes unmeasured and unoptimized. Through our investigation and data collection, we’ve uncovered several key insights:

  1. Compilation time alone doesn’t tell the whole story. The full cycle from pressing F5 to having a workable application can be significantly longer than expected.
  2. Developer behavior adapts to system performance. Slower systems lead to more context switching, which can further reduce productivity.
  3. Different IDEs and workflows can have unexpected impacts on the overall development experience.
  4. Even small changes, like automatically opening the browser in Rider, can have a positive impact on developer workflow.

By focusing on the F5 Experience, we can identify bottlenecks in the development process that might otherwise go unnoticed. This holistic approach to measuring and improving the development environment can lead to substantial gains in productivity and developer satisfaction.

Moving forward, teams should consider:

  • Regularly measuring and monitoring their F5 Experience metrics
  • Optimizing local development environments, including mocking or lightweight alternatives to production services
  • Continuously seeking feedback from developers about their workflow and pain points

Remember, the goal is not just to have fast compile times, but to create a seamless, efficient development experience that allows developers to stay in their flow and deliver high-quality code more quickly.

By prioritizing the F5 Experience, we can create development environments that not only compile quickly but also support developers in doing their best work with minimal frustration and waiting. This investment in developer experience will pay dividends in increased productivity, better code quality, and happier development teams.

Anecdote

Another thing we were capturing with this data was information like machine architecture. We noticed 3 out of about 150 Engineers working on one of our larger repos had a compile time that was 3x the others, 3-4 minutes compare to a minute or so. We also noticed they had 7th gen vs the 9th gen intel’s that most fo the engineers had at the time, so we immediately connected out IT support to get them new laptops 🙂