You (probably) don't need dependency injection
Dependency injection (DI) frameworks promise to make developers' lives easier by layering applications into testable, decoupled components. However, they can quickly result in a mess of autowired classes that exist for no better reason than that the framework requires them. Developers often bend to the framework's will, creating ever more classes with single methods or, at the other extreme, classes with 20 dependencies handling various cross-cutting concerns.
But does it have to be that way? Can you fight off the complexity trying to turn your codebase into a plate of Enterprise® spaghetti? More importantly: does your application even need a DI container to be clean and testable?
What is dependency injection?
But first: what is dependency injection, why is it a popular concept, and what benefits does it provide?
Example content is written in TypeScript, but the principles apply to any language.
Let's take a simple Next.js page as provided in the docs. The initial page component imports db and posts from @/lib/db and then uses them in the Page() method:
```tsx
import { db, posts } from '@/lib/db'

export default async function Page() {
  const allPosts = await db.select().from(posts)
  return (
    <ul>
      {allPosts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  )
}
```
Already this comes with some challenges. Each page has to know about your db and posts, and when it comes time to test you either need to create a stub for the DB that can respond to select() (which in turn has to return a stub that can respond to from(posts)) OR you need to back the Page with a real database (e.g. via a Testcontainer). Either way, your Page component loses focus on its job of rendering a React component.
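To see the first option concretely, here's roughly what that nested stub looks like (a sketch; the stub's shape simply mirrors the db.select().from(posts) call chain above):

```ts
// Every link in the db.select().from(posts) chain needs its own stand-in
const dbStub = {
  select: () => ({
    from: async (_table: unknown) => [
      // canned rows returned to the Page under test
      { id: 1, title: 'Stubbed post' },
    ],
  }),
}
```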
Additionally, any other page that needs to fetch a list of posts will need to follow the same pattern, remembering to pass in any additional params (if, say, you want to hide archived posts by default). Again, your Page knows too much about how to fetch the posts instead of simply calling an interface.
Let's move the data fetching logic to @/lib/data/posts as the Next.js docs do a little further down:
```ts
import { cache } from 'react'
import { db, posts, eq } from '@/lib/db'

export const listPosts = async (limit?: number, skip?: number) => {
  let query = db.select().from(posts)
  if (typeof limit === 'number') {
    query = query.limit(limit)
  }
  if (typeof skip === 'number') {
    query = query.offset(skip)
  }
  const results = await query;
  return results;
}

export const getPost = cache(async (id: string) => {
  const post = await db.query.posts.findFirst({
    where: eq(posts.id, parseInt(id)),
  });
  return post;
})
```
That's better since there's now a programmable interface that helps the Page component get back to its original job:
```tsx
import { listPosts } from '@/lib/data/posts'

export default async function Page() {
  const allPosts = await listPosts();
  return (
    <ul>
      {allPosts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  )
}
```
The Page component now only knows about its nearest neighbor (@/lib/data/posts). In essence your exported methods are acting as the module interface.
We are also now able to reuse that same @/lib/data/posts module in other pages or components, such as an RSS feed, a recent posts widget, etc. We've separated the "what" (fetching a list of posts) from the "how", meaning we could now simply swap out the implementation of listPosts in the Page tests. This is called a facade pattern.
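The obvious way to swap that implementation in a Page test is to mock the module import. As a sketch (using Jest's module mocking, which the next paragraph calls out as a hack):

```ts
// Replace the real data module with a canned implementation.
// jest.mock is hoisted, so this applies to every import of the module.
jest.mock('@/lib/data/posts', () => ({
  listPosts: jest.fn().mockResolvedValue([{ id: 1, title: 'Hello world' }]),
}))
```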
But once you move logic into separate files, the question becomes: how do these pieces talk to each other without becoming a tangled web of import statements? In the example above we see db, posts, and eq being imported from @/lib/db, but how do those get constructed? And how do you test them without resorting to hacks like mocking imports a la Jest import mocks?
Dependency injection frameworks
Before we continue, a distinction I use for "frameworks" versus "libraries": I treat frameworks as systems you slot your code into whereas libraries are code that your system consumes. The SQS client from the AWS SDK is a client library, as is Knex or any other DB ORM, while Ruby on Rails, NestJS, and Spring Boot are frameworks that do the heavy lifting so long as the code you write fits into their patterns.
NestJS, Spring Boot, Ruby on Rails, Laravel etc. provide a framework that handles bootstrapping your application, reading configuration, and constructing your graph of dependencies before serving traffic for your HTTP service, your JobWorker, etc. It's powerful, but sometimes very magical.
The good: AppContext
The main feature of most dependency injection (DI) systems is that they will auto-wire what is called an "application context" or AppContext: a single, managed object containing all of your instantiated dependencies. Each component in your application somehow registers its dependency on one or more components which may declare their dependency on yet more components. In NestJS, for example, this is typically done via constructor parameters that the DI container scans (via reflection) and finds matching @Injectable components.
```ts
// posts-controller.ts
@Controller('posts')
class PostsController {
  constructor(private postsService: PostsService) {}

  @Post()
  async create(@Body() createPostDTO: CreatePostDTO) {
    const post = await this.postsService.createPost(...)
  }
}

// posts-service.ts
@Injectable()
class PostsService {
  constructor(private postsRepository: PostsRepository) {}

  async listPosts(limit?: number, skip?: number): Promise<ListResult<Post>> {
    return this.postsRepository.findAll({ limit, skip })
  }
}

// posts-repository.ts
class PostsRepository { ... }
```
Look mah, we're Enterprise®!
The result is that developers are "freed" from the burden of wiring up all of those dependencies by hand while still being able to define different layers of the application focused on single responsibilities, create stand-ins/mocks, and feel confident about their unit tests.
The bad: debugging your runtime
Until we start running into issues, that is. The DI container hides the complexity of constructing, resolving, and injecting all of those dependencies into our application, but now you have to test that the DI container itself is constructed correctly on top of testing your application's actual functionality.
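In NestJS, for example, that often means a smoke test whose only job is to compile the module graph. A minimal sketch using @nestjs/testing (AppModule stands in for your root module):

```ts
import { Test } from '@nestjs/testing'
import { AppModule } from '../src/app.module' // hypothetical root module

it('constructs the DI container without errors', async () => {
  // compile() resolves every provider in the graph, so a missing or
  // circular dependency fails here instead of at deploy time
  const moduleRef = await Test.createTestingModule({
    imports: [AppModule],
  }).compile()

  expect(moduleRef).toBeDefined()
})
```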
And then there's local testing versus your deployment environments. That promise of being able to use stand-in components starts collapsing when you realise that your real dependencies don't work the same as your stand-in components or that one of those seemingly-benign environment variables resulted in a completely different graph of dependencies being resolved by the DI container (or, even better, a failing constructor or bean conflict).
The ugly: components without a purpose
"But that won't happen to my codebase, I know better!"
-Engineers
"Wait who the fsck structured it like this? Oh. I mean...who wrote this beautiful spaghetti"
-Also engineers
We've all been there. We start off building beautiful, structured MVCS or DDD code and inevitably a deadline approaches or our caffeine-fuelled inspiration gets the best of us and we either spaghetti together some code in a class OR we go overboard and create dedicated, unit-testable classes that are really an injected function in disguise.
Soon your constructors have 20 arguments: a mixture of single-method classes with no stand-ins and a swath of dependencies that really should be abstracted into a separate component.
It's not a question of "if" your code will become a legacy horror story but "when". Your job is to stave off that inevitable future.
The best way to mitigate this is to establish a clear set of component stereotypes for your project, define their purpose, and enforce those standards, revisiting them as new component stereotypes look potentially useful.
You (probably) don't need a DI framework
But let's get back to the title of this post. We've established why DI containers exist (construct a graph of dependencies, create them in the correct order, and inject them into their consuming classes) and their great promise (freedom from constructing your AppContext by hand), but let's look at the reality of how difficult constructing an AppContext by hand really is.
Dependency tiers
In most applications there tend to be 3-4 tiers of dependencies*:
- configuration: the configuration used by all other layers
  - usually constructed with a combination of environment variables and constants
- clients: "bare metal" clients for interacting with things outside of your runtime (think databases, APIs, etc)
  - Ideally these are as close to the generated SDK / client library as possible
  - For REST APIs or other bring-your-own-client scenarios this would be one client per service, with methods for each API operation required by your codebase ("write to the SDK you wish existed"; see the sketch after this list)
  - May only depend on `configuration`
- stores (aka repositories): handle mapping domain objects to / from storage, such as reading from / writing to databases / APIs or enqueuing a message
  - Focused on data transformation and persistence
  - Validation primarily covers data correctness
  - Includes minimal business logic
  - Examples include `UserStore` and `PaymentTransactionStore`
  - May only depend on `configuration` and `clients`
- services / use cases: business-level operations that accomplish a single transactional task, optionally wrapped into resource-oriented Service classes
  - Focused on accomplishing a task, such as `CreateFriendshipConnection` or `FinalizePaymentTransaction`
  - May depend on any of the above or on other `use cases` / `services`
- transport: the outermost layer that listens for incoming HTTP or gRPC requests, polls for messages via SQS, or parses your CLI arguments. Regardless of the transport, this layer typically maps inputs to your internal runtime's shape, fetches data from your `stores` or invokes a `service` / `use case` operation to complete the task, and then maps the result to the correct response (or error) for that transport.
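To make the tiers concrete, here's a minimal sketch of one slice through the stack (all class and method names are hypothetical; error handling elided):

```ts
// configuration: plain data consumed by every other tier
type AppConfig = { queueUrl: string }

// client: "write to the SDK you wish existed" -- one method per operation you use
class NotificationsClient {
  constructor(private config: AppConfig) {}

  async publish(message: string): Promise<void> {
    // wrap the raw SDK call (SQS, fetch, etc.) here using this.config.queueUrl
  }
}

// store: maps domain objects to / from storage; may enqueue messages
class UserStore {
  constructor(private clients: { notifications: NotificationsClient }) {}

  async markActivated(userId: string): Promise<void> {
    // persist the change, then enqueue an event for downstream consumers
    await this.clients.notifications.publish(JSON.stringify({ userId, activated: true }))
  }
}

// service / use case: a single business-level task, built on the tiers above
class ActivateUserService {
  constructor(private stores: { users: UserStore }) {}

  async execute(userId: string): Promise<void> {
    await this.stores.users.markActivated(userId)
  }
}
```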
For simple applications the use case / service layer may be overkill and may be merged into the transport layer. No need to over-architect for the sake of over-architecting.
Notice that each of these tiers has a well-defined purpose and can only depend on the tiers above it, the one exception being the services / use case tier. No magic DI container required.
Creating an AppContext
With the above philosophical discussion out of the way, let's get into what this looks like. The AppContext is a single object that includes all of your dependencies, bootstrapped and ready to be used. It's created via a single createAppContext method that accepts a config and then wires up all of your clients, stores, and services based on that configuration.
```ts
// src/app/context.ts
import { knex, type Knex } from 'knex'
import { SQSClient } from '@aws-sdk/client-sqs'
import type { AppConfig } from './config.ts'
// store and service imports omitted for brevity

export type AppContext = {
  config: AppConfig
  clients: {
    db: Knex
    sqs: SQSClient
  }
  stores: {
    authors: AuthorStore
    comments: CommentStore
    posts: PostStore
    tags: TagStore
    // stores user profiles
    // users could be authors or commenters (or both)
    users: UserStore
  }
  services: {
    authors: AuthorService
    // manage comments on posts
    comments: CommentService
    posts: PostService
    // user profile management
    users: UserService
  }
}

export type AppClients = AppContext['clients'];
export type AppStores = AppContext['stores'];
export type AppServices = AppContext['services'];

export function createAppContext(config: AppConfig): AppContext {
  const clients = {
    db: knex(config.db),
    sqs: new SQSClient(config.aws),
  }

  const stores = {
    // NOTE: do not destructure the dependencies
    // This will be important when it comes to testing
    authors: new AuthorStore(clients),
    comments: new CommentStore(clients),
    tags: new TagStore(clients),
    posts: new PostStore(clients),
    users: new UserStore(clients),
  }

  // user service is needed by the other services
  // construct it outside and inject it
  const userService = new UserService(config.services.users, clients, stores);

  const services = {
    authors: new AuthorService(stores, { userService }),
    comments: new CommentService(stores, { userService }),
    posts: new PostService(stores, { userService }),
    users: userService,
  }

  return { config, clients, stores, services };
}
```
See? That wasn't so bad. We can even split out the construction of each layer into separate methods if we want or need to.
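As a sketch, here's the same wiring split into one factory per tier (types and constructors as above):

```ts
function createClients(config: AppConfig): AppClients {
  return {
    db: knex(config.db),
    sqs: new SQSClient(config.aws),
  }
}

function createStores(clients: AppClients): AppStores {
  return {
    authors: new AuthorStore(clients),
    comments: new CommentStore(clients),
    tags: new TagStore(clients),
    posts: new PostStore(clients),
    users: new UserStore(clients),
  }
}

function createServices(config: AppConfig, clients: AppClients, stores: AppStores): AppServices {
  const userService = new UserService(config.services.users, clients, stores)
  return {
    authors: new AuthorService(stores, { userService }),
    comments: new CommentService(stores, { userService }),
    posts: new PostService(stores, { userService }),
    users: userService,
  }
}

export function createAppContext(config: AppConfig): AppContext {
  const clients = createClients(config)
  const stores = createStores(clients)
  const services = createServices(config, clients, stores)
  return { config, clients, stores, services }
}
```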
Once you have your AppContext constructed in your equivalent of an outer main.js, you can pass it into your HTTP server, your SQS listener, or whatever framework or transport layer needs it.
In fact, you can even use the same AppContext in all of your transport layer functions if you're okay with some extra dependencies being constructed and never used.
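As a sketch of that composition root (Express is used purely as an example transport; loadAppConfig, config.port, and the posts service's listPosts method are assumptions for illustration):

```ts
// src/main.ts
import express from 'express'
import { loadAppConfig } from './app/config' // hypothetical config loader
import { createAppContext } from './app/context'

async function main() {
  const config = loadAppConfig(process.env)
  const ctx = createAppContext(config)

  const app = express()
  app.get('/posts', async (_req, res) => {
    // the transport layer only talks to the context's services
    res.json(await ctx.services.posts.listPosts())
  })

  app.listen(config.port)
}

main().catch((err) => {
  console.error(err)
  process.exit(1)
})
```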
Declaring your dependencies
Passing in entire dependency graphs to your constructors sounds ripe for abuse and impossibly hard to document, right? Thankfully in TypeScript we have the solution by way of Pick<>. Pick is a "utility type" that allows you to create derivative types by picking specific fields from the source type.
You've probably seen or used Pick in React components to declare props:
```tsx
// A hypothetical User type
type User = {
  id: string
  avatarUrl: string
  displayName: string
  // all of these extra attributes that we don't care about in this component
  memberSince: Date
  roles: Role[]
  followers: UserId[]
  following: UserId[]
  description: string
}

// Declare which fields you need from the User
type UserAvatarProps = Pick<User, 'avatarUrl' | 'displayName'>

export function UserAvatarImage({ avatarUrl, displayName }: UserAvatarProps) {
  return (
    <ItemImage
      url={avatarUrl}
      alt={displayName}
      data-test-id="UserAvatar"
    />
  )
}
```
Using these derivative types provides a few benefits:
- Better documentation: declare what exact fields you need
- Easier testing: inputs only need to implement the declared fields
- Compile time safety: fail compilation if an undeclared field is used
Remember that types only provide compile-time safety: type information does not restrict or validate the data that appears at runtime. Always check and sanitize your inputs at application edges and never rely purely on types to enforce what data is or is not available.
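For instance, a hand-rolled type guard at the edge (a sketch; schema validation libraries do the same job with less boilerplate):

```ts
// narrow an unknown payload before trusting it anywhere in the app
function isCreateUserInput(value: unknown): value is { email: string; displayName: string } {
  if (typeof value !== 'object' || value === null) return false
  const record = value as Record<string, unknown>
  return typeof record.email === 'string' && typeof record.displayName === 'string'
}
```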
So back to declaring dependencies in your app. Let's say you need the DB client for your UserStore. In the store's file you will create a non-exported ClientDeps type that uses Pick<AppClients, ...> and then use it in your store's constructor:
```ts
// src/users/user-store.ts
import type { Knex } from 'knex'
import type { AppClients } from '../context'
import type { User, CreateUserInput } from './types'

// declare your client dependencies
// NOTE: no need to export this type
type ClientDeps = Pick<AppClients, 'db'>

// options common to store methods, e.g. joining an existing transaction
type StoreOpts = {
  transaction?: Knex.Transaction
}

export class UserStore {
  constructor(private clients: ClientDeps) {}

  async create(input: CreateUserInput, opts: StoreOpts = {}): Promise<User> {
    const trx = opts.transaction ?? (await this.clients.db.transaction())
    const encoded = this.encodeRecord(input)
    const result = await this.clients.db('users')
      .insert(encoded)
      .transacting(trx)
    return this.decodeRecord(result)
  }

  // ... more implementation here
}
```
TypeScript will complain if you try to access this.clients.sqs since it's not declared in ClientDeps, giving you compile-time safety.
```ts
export class UserStore {
  constructor(private clients: ClientDeps) {}

  async create(input: CreateUserInput, opts: StoreOpts = {}): Promise<User> {
    const trx = opts.transaction ?? (await this.clients.db.transaction())
    const encoded = this.encodeRecord(input)
    const result = await this.clients.db('users')
      .insert(encoded)
      .transacting(trx)

    // ERROR: "Property 'sqs' does not exist on type 'ClientDeps'"
    await this.clients.sqs.sendMessage({ ... })

    return this.decodeRecord(result)
  }

  // ... more implementation here
}
```
One more reminder: at runtime the clients object will still have sqs present, and that's okay. The goal is not to block other dependencies from being available at runtime, only to explicitly declare, at compile time, the ones we actually use.
Testing
Now for testing. In JavaScript, object-valued properties are held by reference, so the pattern above lets you swap out or mock a dependency in one layer and have the change take effect in every consuming layer. That means you can introduce a DB fault in clients.db and test that the HTTP layer returns the correct HTTP response, or spy on your SQS client to verify that a sendMessage call was made from your use case.
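For example, a fault injected at the client layer surfaces all the way up through the HTTP layer. A sketch assuming Jest plus supertest, where createServer is a hypothetical factory that builds the HTTP transport from an AppContext and the /posts route queries via db.select():

```ts
import request from 'supertest'

it('returns a 500 when the database is down', async () => {
  const ctx = createAppContext(testConfig) // testConfig assumed defined elsewhere

  // inject the fault on the shared client reference;
  // every store holding `clients` sees the same failing db
  jest.spyOn(ctx.clients.db, 'select').mockImplementation(() => {
    throw new Error('connection refused')
  })

  const app = createServer(ctx) // hypothetical transport factory
  await request(app).get('/posts').expect(500)
})
```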
To make testing easier I generally create a separate createTestContext that returns a TestContext that includes all of the AppContext along with any test containers or mocked web servers that are only available in the test suite:
```ts
// test/context.ts
import {
  LocalstackContainer,
  StartedLocalStackContainer,
} from '@testcontainers/localstack'
import {
  PostgreSqlContainer,
  StartedPostgreSqlContainer,
} from '@testcontainers/postgresql'

// Extend AppContext with mocks and teardown
export type TestContext = AppContext & {
  mocks: {
    localStack: StartedLocalStackContainer
    postgres: StartedPostgreSqlContainer
  }
  teardown(): Promise<void>
}

export async function createTestContext(): Promise<TestContext> {
  const localStack = await new LocalstackContainer(LOCALSTACK_IMAGE).start()
  const postgres = await new PostgreSqlContainer(PG_IMAGE).start()

  const mocks = {
    localStack,
    postgres,
  }

  const config = {
    ...appConfig,
    aws: {
      ...appConfig.aws,
      endpoint: localStack.getConnectionUri(),
    },
    db: {
      client: 'pg',
      connection: postgres.getConnectionUri(),
    },
  }

  // Stops test containers to free up the ports, etc
  const teardown = async () => {
    await Promise.all([localStack.stop(), postgres.stop()])
  }

  return { ...createAppContext(config), mocks, teardown }
}
```
Then in your tests you have access to all layers of your TestContext:
```ts
describe('UserService', () => {
  let testContext: TestContext;

  beforeEach(async () => {
    // you could also do setup / teardown in a beforeAll/afterAll instead
    // just remember to reset any stubs or spies!
    testContext = await createTestContext();
  });

  afterEach(async () => {
    await testContext.teardown()
  })

  it('activateUser() sends an SQS message', async () => {
    const sendSpy = jest.spyOn(testContext.clients.sqs, 'sendMessage')

    // do the thing

    expect(sendSpy).toHaveBeenCalled()
  });
})
```
I want to go a bit more in depth on testing strategies that have worked well for me in the past, but I'll save that for a different post.
Scaling your ~~legacy~~ legendary masterpiece
"But will it scale?" you may ask. Well, yes, actually. I've used this pattern successfully in NodeJS projects ranging from a microservice exposing a single domain's HTTP traffic (along with its RabbitMQ worker using the same codebase booted into a different mode) all the way to a production monolith that internally held over 26 service facades (with one or more backing stores each) along with server-side rendered React components that needed internal data fetching. In fact this website uses the same pattern.
Again, monoliths don't have to be legacy horror stories, they can be legendary masterpieces. It does take discipline, though.
When does DI make sense?
If your application is already running within a DI framework, use it! Again, dependency injection in its own right is not evil and serves a purpose. But use it where it adds value; don't create @Injectable components everywhere just because the framework seems to require it.
Regardless of whether you use a DI container, the point still stands: a list of component stereotypes with singular, focused responsibilities will keep your application maintainable for longer and make testing significantly easier and more focused.
Alongside clients, stores, and use cases / services I typically find these other component stereotypes to be helpful in more complex situations:
- Codecs: `encode()` and `decode()` your domain / business objects to and from external shapes (e.g. database records, legacy API shapes) with no additional data fetches
  - Used by stores (e.g. to / from a document or SQL record) and the transport layer (e.g. to / from HTTP or Protobuf)
  - Easy to unit test: domain object in, transformed object out (see the sketch after this list)
- Resolvers: given a set of domain object inputs, calculate a resultant set of outputs
  - Used by use cases (e.g. given these hardcoded defaults, these A/B treatment variants, and this customer's preferences, generate a fully resolved preferences object)
  - Easy to unit test: create a set of inputs to trigger different states, no stubbed API calls required
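A codec might look like this (a sketch; the User and UserRecord shapes are hypothetical):

```ts
// a codec pairs encode() / decode() and performs no I/O of its own
type UserRecord = { id: string; display_name: string; created_at: string }
type User = { id: string; displayName: string; createdAt: Date }

export const userCodec = {
  // domain object -> database record
  encode(user: User): UserRecord {
    return {
      id: user.id,
      display_name: user.displayName,
      created_at: user.createdAt.toISOString(),
    }
  },
  // database record -> domain object
  decode(record: UserRecord): User {
    return {
      id: record.id,
      displayName: record.display_name,
      createdAt: new Date(record.created_at),
    }
  },
}
```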
Alternative design patterns
As with all things, there are other design patterns that may fit your project layout or mental model better than clients/stores/services.
A few examples:
- Model-View-Controller (MVC/MVCS/MVVM)
- Domain driven design
In the end, the main thing is that your components have specific stereotypes / roles to play and stay in their lane. Yes, sometimes you may need to violate the design pattern for better readability or to co-locate some gnarly code and hide its interface, and that's okay. But patterns are there to help with predictability, maintainability, and separation of concerns. They also speed up onboarding others into your codebase, since newcomers can see the pattern and replicate it in new contributions.