Lambda handler

The handler isn't sacred; it's just infrastructure. Doing the handler right affords us portability and decoupling from our implementation.
Handlers (a type of controller) reside in the Adapter layer.
As I wrote in one of the introductory chapters, a relatively common "misimplementation" is to treat the Lambda handler as the full extent of the function, putting all logic directly in it. This is all fine in trivial contexts, but we gain a significant improvement by separating the pure setup and boilerplate from the business side of things.
The semantic concept of a "handler" is somewhat particular to how we talk about function handlers or event handlers. In more generic software architecture terms, this layer often translates to what the MVC school calls the "controller". Earlier in my career I used the "controller" term and set up a dedicated folder for it in the structure, but I now refrain from that and go with "adapters" instead, simply because it's a wider concept and it opens us up to any type of driver of our functions.
Enough introduction, let's go ahead and look at a handler:
code/Reservation/Reservation/src/infrastructure/adapters/web/ReserveSlot.ts

```typescript
import {
  APIGatewayProxyEvent,
  Context,
  APIGatewayProxyResult,
} from "aws-lambda";
import { MikroLog } from "mikrolog";

import { ReserveSlotUseCase } from "../../../application/usecases/ReserveSlotUseCase";

import { MissingRequestBodyError } from "../../../application/errors/MissingRequestBodyError";
import { UnsupportedVersionError } from "../../../application/errors/UnsupportedVersionError";

import { setupDependencies } from "../../utils/setupDependencies";
import { getVersion } from "../../utils/getVersion";
import { setCorrelationId } from "../../utils/userMetadata";

import { metadataConfig } from "../../../config/metadata";

/**
 * @description Reserve a slot.
 */
export async function handler(
  event: APIGatewayProxyEvent,
  context: Context
): Promise<APIGatewayProxyResult> {
  try {
    MikroLog.start({
      metadataConfig: { ...metadataConfig, service: "ReserveSlot" },
      event,
      context,
    });
    if (getVersion(event) !== 1) throw new UnsupportedVersionError();

    const body: Record<string, string | number> =
      typeof event.body === "string" ? JSON.parse(event.body) : event.body;
    if (!body || JSON.stringify(body) === "{}")
      throw new MissingRequestBodyError();
    const slotId = body.id as string;
    const hostName = body.host as string;

    setCorrelationId(event, context);

    const dependencies = setupDependencies(metadataConfig("ReserveSlot"));

    const response = await ReserveSlotUseCase(dependencies, {
      slotId,
      hostName,
    });

    return {
      statusCode: 200,
      body: JSON.stringify(response),
    };
  } catch (error: any) {
    return {
      statusCode: 400,
      body: JSON.stringify(error.message),
    };
  }
}
```
At the top we have the imports (nothing much to add there), and we see that the handler is exported as an async function, per Lambda convention.

Handling the API/event input

I've been somewhat loose with the parameters: the event is just any old Record (object), while the context is an actual typed AWS context object. This is up for opinion, sure, but I find the event easier to deal with when it is untyped, because its structure may change significantly based on which integration mechanism is used; in our case, API Gateway or EventBridge. To ensure this doesn't blow up or bloat all of our functions, in this service we've made a small getDTO() utility function to accurately piece together a fully formed Data Transfer Object from the input. Because it's a utility and not business-oriented, we want to avoid any deep considerations or logic in that function, as seen below:
code/Analytics/SlotAnalytics/src/infrastructure/utils/getDTO.ts

```typescript
import { AnalyticalRecord } from "../../interfaces/AnalyticalRecord";

/**
 * @description Utility function to get data transfer object from either event or request payload.
 */
export function getDTO(event: Record<string, any>): AnalyticalRecord | void {
  if (!event) return;

  // Match for EventBridge case
  if (event?.detail) return createEventBridgeDto(event);

  // Match for typical API GW input
  const body =
    event.body && typeof event.body === "string"
      ? JSON.parse(event.body)
      : event.body;
  if (body) return createApiGatewayDto(body);
  else return;
}

function createEventBridgeDto(event: any) {
  return {
    id: event?.detail?.metadata?.id || "",
    correlationId: event?.detail?.metadata?.correlationId || "",
    event: event?.detail?.data?.event || "",
    slotId: event?.detail?.data?.slotId || "",
    startsAt: event?.detail?.data?.startTime || "",
    hostName: event?.detail?.data?.hostName || "",
  };
}

function createApiGatewayDto(body: any) {
  return {
    id: body.id || "",
    correlationId: body.correlationId || "",
    event: body.event || "",
    slotId: body.slotId || "",
    startsAt: body.startTime || "",
    hostName: body.hostName || "",
  };
}
```
We use the Data Transfer Object, or DTO, simply to carry around a representation of data. We could call this object Input or something similar if we wanted, but I'll keep calling it data here.
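The AnalyticalRecord interface itself isn't shown in this chapter, but based on the fields getDTO() assembles, a sketch might look like the following. This is an inference, not the book's actual definition, which lives in src/interfaces/AnalyticalRecord.ts:

```typescript
// Hypothetical sketch of the AnalyticalRecord DTO, inferred from the
// fields that getDTO() assembles; the real definition may differ.
interface AnalyticalRecord {
  id: string;
  correlationId: string;
  event: string;
  slotId: string;
  startsAt: string;
  hostName: string;
}

// A DTO is just a dumb carrier of data: no behavior, no validation logic.
const record: AnalyticalRecord = {
  id: "abc123",
  correlationId: "xyz789",
  event: "RESERVED",
  slotId: "slot-1",
  startsAt: "2024-01-01T10:00:00Z",
  hostName: "Ada Lovelace",
};
```

Keeping the DTO a plain, flat shape like this is what makes it safe to pass across layer boundaries.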
Back in the handler, you'll see that we start a logger (MikroLog) as the very first thing, so that it's available for the complete duration of the function: we never know when and if something breaks, so let's do that setup first! See this as the right place to set up any other similar components, if you have any.
Note also how we wrap the outer perimeter of the handler (being the first thing that is run, after all) in a try/catch block. This ensures that we can respond to the main cases: "all is well" or "it's a dumpster fire". More complex examples could absolutely be dynamic and set things like the error code depending on the error. Once again, we are keeping to the fundamentals here.
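As a sketch of what such dynamic error handling could look like, we might map error names to HTTP status codes. The error class below mirrors one used in this chapter, but the mapping itself is an illustrative assumption, not the book's code:

```typescript
// Mirrors the chapter's error style; the actual class may differ.
class UnsupportedVersionError extends Error {
  constructor() {
    super("The version of the request is not supported!");
    this.name = "UnsupportedVersionError";
  }
}

// Hypothetical lookup table from error "identity" to HTTP status code.
const statusCodes: Record<string, number> = {
  MissingRequestBodyError: 400,
  UnsupportedVersionError: 400,
  SlotNotFoundError: 404, // hypothetical example of a non-400 case
};

// Unknown errors fall back to a generic 500.
function errorToResponse(error: Error) {
  return {
    statusCode: statusCodes[error.name] || 500,
    body: JSON.stringify(error.message),
  };
}
```

With something like this in place, the catch block shrinks to a single `return errorToResponse(error);`, and adding a new failure mode is a matter of adding one entry to the map.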

Using unique errors/exceptions

In the corresponding handler for the SlotAnalytics service we have:

```typescript
if (!data) throw new MissingDataFieldsError();
```
We throw a unique exception (or error) based on the lack of data. Unique errors/exceptions are a good thing to start using, as they let us put "identities" on all the failure modes of our application.
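Such a unique error could be implemented along these lines. This is a sketch, since the actual error file isn't shown in this chapter and may differ in message and details:

```typescript
// Sketch of a unique, identifiable error class.
class MissingDataFieldsError extends Error {
  constructor() {
    super();
    // The name acts as the error's unique "identity", which we can
    // later use for logging, metrics, or mapping to status codes.
    this.name = "MissingDataFieldsError";
    this.message = "Missing one or more required data fields!";
  }
}

const err = new MissingDataFieldsError();
```

Because each failure mode gets its own class and name, logs and alarms can distinguish between them without parsing message strings.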

Dependency inversion and injection

A bit further down, the magic starts happening:

```typescript
const dependencies = setupDependencies();

await AddRecordUseCase(dependencies, data);
```
Notice that there's a dedicated utility function setupDependencies() to create various required dependencies. For this particular service, we need only a database.
code/Analytics/SlotAnalytics/src/infrastructure/utils/setupDependencies.ts

```typescript
import { Dependencies } from "../../interfaces/Dependencies";

import { createNewDynamoDbRepository } from "../repositories/DynamoDbRepository";
import { makeNewLocalRepository } from "../repositories/LocalRepository";

/**
 * @description Utility that returns a complete dependencies object
 * based on either "real" infrastructure implementations or mocked ones.
 */
export function setupDependencies(localUse = false): Dependencies {
  const repository = localUse
    ? makeNewLocalRepository()
    : createNewDynamoDbRepository();

  return {
    repository,
  };
}
```
In the other services we use the same pattern, but sometimes return more objects, depending on the exact needs. In this case, we receive either the mock database (for testing and development) or an instance of DynamoDB. This means we encapsulate the test-time logic in one place, rather than spreading it across everything. Note that there are still places where we need to do some setup prior to running tests, but this is the most important bit.
Why bother with this at all? Well, it's pretty simple. If we want to follow Uncle Bob's Clean Architecture, as well as the D in SOLID, we have to bring lower-level components (more concrete, more volatile, less business-oriented) into those that are more business-oriented, never the other way around. The magic disconnection we want to create between the infrastructural components (like the database or repository) and the actual use case is now in place.
Note how we just run the use case, injecting it with a set of dependencies, which makes it very easy to replicate and test. We call this pattern dependency injection (DI); more specifically, the approach used here has been called "poor man's DI" or "pure DI". In my opinion, it's just the way that makes the most sense: it adds no dependencies, it's easy to use, and it is completely non-magical. You'll find this opinion echoed by people like Khalil Stemmler as well.
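To see why this makes testing easy, consider a deliberately simplified, synchronous sketch. The interface shapes and the in-memory fake below are hypothetical; the book's actual Dependencies and repository interfaces are richer (and asynchronous):

```typescript
// Hypothetical, simplified shapes for illustration only.
interface Repository {
  add(record: Record<string, unknown>): void;
}

interface Dependencies {
  repository: Repository;
}

// A trivial use case in the same spirit as AddRecordUseCase:
// it only knows about the Repository interface, never about DynamoDB.
function AddRecordUseCase(
  dependencies: Dependencies,
  data: Record<string, unknown>
): void {
  dependencies.repository.add(data);
}

// An in-memory fake we can inspect in tests: no database, no mocking library.
function makeFakeRepository() {
  const records: Record<string, unknown>[] = [];
  return {
    add: (record: Record<string, unknown>) => {
      records.push(record);
    },
    records,
  };
}

// "Poor man's DI": we compose the object graph by hand and inject it.
const repository = makeFakeRepository();
AddRecordUseCase({ repository }, { slotId: "slot-1", hostName: "Ada" });
```

Swapping the fake for the real DynamoDB repository requires no change at all to the use case, which is exactly the disconnection we were after.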
Finally, the correct place to set up this "object graph" of dependencies is in what is called the "composition root", which in our case is the handler function, just as we see it being used here.

In closing

So if all these smart patterns are already happening in the handler, are there any bells and whistles left? There sure are! What's happening in the handler is, no matter how you slice it, purely infrastructural boilerplate. While the getDTO() function might need to, well, know what you actually want, there just isn't that much "business logic" going on here.
Wiring up your handlers this way allows you to be very nimble and to fully divorce the use case that orchestrates business logic from the boilerplate needed to ensure basic conformity with the handler, its API, and all of that. Using DI, we also make future testing a lot easier, as we can drive the use case with any repository or other dependencies we want.
All in all, for some this might have been obvious, and for others it might be eye-opening, but if nothing else, I definitely saw my own code improve a lot when I started using these patterns.