@geekmidas/constructs
A comprehensive framework for building type-safe HTTP endpoints, cloud functions, scheduled tasks, and event subscribers, with adapters for AWS Lambda and Hono.
Installation
pnpm add @geekmidas/constructs
Features
- ✅ Fluent endpoint builder pattern
- ✅ Full TypeScript type inference
- ✅ StandardSchema validation (Zod, Valibot, etc.)
- ✅ AWS Lambda adapter support (API Gateway v1/v2)
- ✅ Hono framework adapter
- ✅ Service dependency injection
- ✅ Built-in error handling
- ✅ Session and authorization management
- ✅ Structured logging
- ✅ Rate limiting
- ✅ Audit logging
- ✅ Row Level Security (RLS) with PostgreSQL
- ✅ Response handling (cookies, headers, status codes)
- ✅ Event publishing
- ✅ Testing utilities
Package Exports
| Export | Description |
|---|---|
| / | Core types and utilities |
| /endpoints | HTTP endpoint builder (e, EndpointFactory) and types |
| /functions | Cloud function builder (f) |
| /crons | Scheduled task builder (c) |
| /subscribers | Event subscriber builder (s) |
| /types | Type definitions |
| /hono | Hono framework adapter (HonoEndpoint) |
| /aws | AWS Lambda adaptors (API Gateway v1/v2) |
| /testing | Testing utilities (TestEndpointAdaptor, TestFunctionAdaptor) |
Basic Usage
Creating an Endpoint
import { e } from '@geekmidas/constructs/endpoints';
import { z } from 'zod';
export const getUserEndpoint = e
.get('/users/:id')
.params(z.object({
id: z.string().uuid(),
}))
.output(z.object({
id: z.string(),
name: z.string(),
email: z.string(),
}))
.handle(async ({ params }) => {
const user = await db.users.findById(params.id);
if (!user) {
throw createError.notFound('User not found');
}
return user;
});
Input Validation
Endpoints support three types of input validation:
const createUserEndpoint = e
.post('/users')
.body(z.object({
name: z.string(),
email: z.string().email(),
}))
.query(z.object({
sendEmail: z.boolean().optional(),
}))
.handle(async ({ body, query }) => {
const user = await createUser(body);
if (query.sendEmail) {
await sendWelcomeEmail(user.email);
}
return user;
});
Error Handling
import { createError } from '@geekmidas/errors';
// Built-in HTTP errors
throw createError.badRequest('Invalid input');
throw createError.unauthorized('Not authenticated');
throw createError.forbidden('Not authorized');
throw createError.notFound('Resource not found');
throw createError.conflict('Resource already exists');
throw createError.internalServerError('Something went wrong');
Service Pattern
Define services as object literals with dependency injection:
import type { Service } from '@geekmidas/constructs';
import type { EnvironmentParser } from '@geekmidas/envkit';
const databaseService = {
serviceName: 'database' as const,
async register(envParser: EnvironmentParser<{}>) {
const config = envParser.create((get) => ({
url: get('DATABASE_URL').string()
})).parse();
const db = await createConnection(config.url);
return db;
}
} satisfies Service<'database', Database>;
// Use in endpoint
const endpoint = e
.get('/data')
.services([databaseService])
.handle(async ({ services }) => {
const db = services.database;
return await db.query('...');
});
Advanced Usage
Request Context
The endpoint handler receives a context object containing the validated request data, injected services, a logger, header and cookie accessors, and session information.
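The overall shape of that context can be sketched roughly as follows. This is an illustrative sketch based on the examples in this README, not the package's exact type definitions:

```typescript
// Illustrative sketch of the handler context shape (not the exact
// types exported by @geekmidas/constructs).
interface HandlerContext {
  params: Record<string, string>;            // validated path parameters
  query: Record<string, unknown>;            // validated query string
  body: unknown;                             // validated request body
  services: Record<string, unknown>;         // injected services
  logger: { info: (data: object, msg: string) => void };
  header: (name: string) => string | undefined;
  cookie: (name: string) => string | undefined;
  session: unknown;                          // result of getSession()
}

// A handler can destructure only what it needs:
const handler = async ({ params, cookie }: HandlerContext) => {
  return { id: params.id, theme: cookie('theme') ?? 'light' };
};

// Quick demonstration with a stubbed context:
const demo = await handler({
  params: { id: 'u1' },
  query: {},
  body: undefined,
  services: {},
  logger: { info: () => {} },
  header: () => undefined,
  cookie: (name) => (name === 'theme' ? 'dark' : undefined),
  session: null,
});
console.log(demo); // { id: 'u1', theme: 'dark' }
```

Because handlers destructure the context, each endpoint only declares the pieces it actually uses.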
Reading Cookies
Access incoming request cookies using the cookie function:
const endpoint = e
.get('/dashboard')
.handle(async ({ cookie }) => {
const sessionId = cookie('session');
const theme = cookie('theme') || 'light';
if (!sessionId) {
throw createError.unauthorized('Session cookie missing');
}
return {
sessionId,
theme,
};
});
Using Cookies for Authentication
Combine cookie reading with session management:
const protectedEndpoint = e
.get('/profile')
.getSession(async ({ cookie, services }) => {
const sessionToken = cookie('session');
if (!sessionToken) {
return null;
}
return await services.auth.verifySession(sessionToken);
})
.authorize(({ session }) => {
return session !== null;
})
.handle(async ({ session }) => {
return {
userId: session.userId,
email: session.email,
};
});
Response Handling
The endpoint handler receives two parameters: the context object and a response builder. The response builder allows you to set cookies, custom headers, and override status codes.
Setting Cookies
import { e } from '@geekmidas/constructs/endpoints';
import { z } from 'zod';
const loginEndpoint = e
.post('/auth/login')
.body(z.object({
email: z.string().email(),
password: z.string(),
}))
.output(z.object({
id: z.string(),
email: z.string(),
}))
.handle(async ({ body }, response) => {
const user = await authenticateUser(body);
// Set authentication cookie
return response
.cookie('session', user.sessionToken, {
httpOnly: true,
secure: true,
sameSite: 'strict',
maxAge: 60 * 60 * 24 * 7, // 7 days in seconds
path: '/',
})
.send(user);
});
Cookie Options:
- domain: Cookie domain
- path: Cookie path (default: '/')
- expires: Expiration date
- maxAge: Maximum age in seconds
- httpOnly: Prevent JavaScript access
- secure: Only send over HTTPS
- sameSite: CSRF protection ('strict' | 'lax' | 'none')
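These options map directly onto attributes of the Set-Cookie response header. As a rough illustration of that mapping (not the framework's internal serializer):

```typescript
interface CookieOptions {
  domain?: string;
  path?: string;
  expires?: Date;
  maxAge?: number;
  httpOnly?: boolean;
  secure?: boolean;
  sameSite?: 'strict' | 'lax' | 'none';
}

// Build a Set-Cookie header value from a name, value, and options.
function serializeCookie(name: string, value: string, opts: CookieOptions = {}): string {
  const parts = [`${name}=${encodeURIComponent(value)}`];
  if (opts.domain) parts.push(`Domain=${opts.domain}`);
  if (opts.path) parts.push(`Path=${opts.path}`);
  if (opts.expires) parts.push(`Expires=${opts.expires.toUTCString()}`);
  if (opts.maxAge !== undefined) parts.push(`Max-Age=${opts.maxAge}`);
  if (opts.httpOnly) parts.push('HttpOnly');
  if (opts.secure) parts.push('Secure');
  if (opts.sameSite) parts.push(`SameSite=${opts.sameSite[0].toUpperCase()}${opts.sameSite.slice(1)}`);
  return parts.join('; ');
}

console.log(serializeCookie('session', 'abc123', {
  httpOnly: true,
  secure: true,
  sameSite: 'strict',
  maxAge: 604800,
  path: '/',
}));
// session=abc123; Path=/; Max-Age=604800; HttpOnly; Secure; SameSite=Strict
```

The login example above therefore produces a header equivalent to the string printed here.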
Deleting Cookies
const logoutEndpoint = e
.post('/auth/logout')
.output(z.object({ success: z.boolean() }))
.handle(async (ctx, response) => {
// Delete the session cookie
return response
.deleteCookie('session', { path: '/' })
.send({ success: true });
});
Custom Headers
Set custom response headers for cache control, content disposition, or custom metadata:
const downloadEndpoint = e
.get('/files/:id/download')
.params(z.object({ id: z.string() }))
.handle(async ({ params }, response) => {
const file = await getFile(params.id);
return response
.header('Content-Disposition', `attachment; filename="${file.name}"`)
.header('X-File-Size', file.size.toString())
.header('Cache-Control', 'private, max-age=3600')
.send(file.data);
});
Dynamic Status Codes
Override the default status code (200) or the status set in the builder:
import { SuccessStatus } from '@geekmidas/constructs/endpoints';
const createEndpoint = e
.post('/users')
.body(z.object({
name: z.string(),
email: z.string().email(),
}))
.output(z.object({
id: z.string(),
name: z.string(),
email: z.string(),
}))
.handle(async ({ body }, response) => {
const user = await createUser(body);
// Return 201 Created with Location header
return response
.status(SuccessStatus.Created)
.header('Location', `/users/${user.id}`)
.send(user);
});
Available Success Status Codes:
- SuccessStatus.OK - 200
- SuccessStatus.Created - 201
- SuccessStatus.Accepted - 202
- SuccessStatus.NoContent - 204
- SuccessStatus.ResetContent - 205
- SuccessStatus.PartialContent - 206
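The names follow the standard HTTP success codes, so a local mirror for illustration (the real enum is exported from @geekmidas/constructs/endpoints) looks like:

```typescript
// Local mirror of the success status values listed above.
// These are the standard HTTP codes the enum names refer to;
// use the real SuccessStatus export in application code.
const SuccessStatus = {
  OK: 200,
  Created: 201,
  Accepted: 202,
  NoContent: 204,
  ResetContent: 205,
  PartialContent: 206,
} as const;

console.log(SuccessStatus.Created); // 201
```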
Default Headers
Set headers that apply to all responses from an endpoint:
const apiEndpoint = e
.get('/api/data')
.header('X-API-Version', '1.0')
.headers({
'Cache-Control': 'no-cache',
'X-Custom-Header': 'value',
})
.output(dataSchema)
.handle(async () => {
return await getData();
});
Simple Responses (No Modifications)
If you don't need to modify the response, simply return the data directly:
const getEndpoint = e
.get('/users/:id')
.params(z.object({ id: z.string() }))
.output(userSchema)
.handle(async ({ params }) => {
// No response parameter needed if not using it
return await getUser(params.id);
});
Complete Example
Combining multiple request and response features:
const uploadEndpoint = e
.post('/files/upload')
.body(z.object({
file: z.string(),
name: z.string(),
}))
.output(z.object({
id: z.string(),
url: z.string(),
}))
.handle(async ({ body, cookie, logger }, response) => {
// Read existing preference cookie
const preferredFormat = cookie('format') || 'standard';
const file = await uploadFile(body, preferredFormat);
logger.info({ fileId: file.id }, 'File uploaded successfully');
// Set multiple cookies and headers in response
return response
.status(SuccessStatus.Created)
.header('Location', `/files/${file.id}`)
.header('X-File-Id', file.id)
.cookie('last-upload', file.id, {
maxAge: 60 * 60, // 1 hour
httpOnly: true,
})
.cookie('upload-count', String(getUploadCount() + 1), {
maxAge: 60 * 60 * 24 * 365, // 1 year
})
.send({
id: file.id,
url: file.url,
});
});
Authorization and Sessions
const protectedEndpoint = e
.get('/protected')
.getSession(async ({ header, services }) => {
const token = header('authorization')?.replace('Bearer ', '');
if (!token) return null;
return await services.auth.verifyToken(token);
})
.authorize(({ session }) => {
return session !== null;
})
.handle(async ({ session }) => {
return { userId: session.id };
});
Rate Limiting
import { InMemoryCache } from '@geekmidas/cache/memory';
const rateLimitedEndpoint = e
.post('/api/messages')
.rateLimit({
limit: 10,
windowMs: 60000, // 1 minute
cache: new InMemoryCache(),
})
.body(messageSchema)
.handle(async ({ body }) => {
return await sendMessage(body);
});
Audit Logging
Endpoints support declarative and manual audit logging via integration with @geekmidas/audit. Audits can be recorded automatically after a handler completes or manually inside the handler, and they are flushed atomically within the same database transaction when possible.
Setting Up Audit Storage
Define an audit storage service and attach it to an endpoint with .auditor():
import { e } from '@geekmidas/constructs/endpoints';
import { KyselyAuditStorage } from '@geekmidas/audit/kysely';
import type { Service } from '@geekmidas/services';
import type { AuditableAction } from '@geekmidas/audit';
// Define type-safe audit actions
type AppAuditAction =
| AuditableAction<'user.created', { userId: string; email: string }>
| AuditableAction<'user.updated', { userId: string; changes: string[] }>
| AuditableAction<'user.deleted', { userId: string }>;
// Create audit storage service
const auditStorageService = {
serviceName: 'auditStorage' as const,
async register(envParser) {
return new KyselyAuditStorage<Database>({
db: kyselyDb,
tableName: 'audit_logs',
});
},
} satisfies Service<'auditStorage', KyselyAuditStorage<Database>>;
const endpoint = e
.post('/users')
.auditor(auditStorageService)
.handle(async ({ auditor }) => {
// auditor is now available in the handler context
return { id: '123' };
});
Actor Extraction
Use .actor() to identify who performed the action. The extractor receives the request context and returns an AuditActor:
const endpoint = e
.post('/users')
.auditor(auditStorageService)
.actor(({ session }) => ({
id: session.sub,
type: 'user',
data: { email: session.email },
}))
.handle(async ({ auditor }) => {
// auditor.actor is { id: session.sub, type: 'user', ... }
return { id: '123' };
});
The actor extractor can also be async and has access to services, session, header, cookie, and logger.
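An async extractor typically enriches the actor with data loaded from a service. A standalone sketch with stubbed types (the services.users.findById helper here is hypothetical, not part of the package):

```typescript
// Stubbed context types for illustration only.
interface ActorContext {
  session: { sub: string; email: string };
  services: { users: { findById: (id: string) => Promise<{ role: string }> } };
}

interface AuditActor {
  id: string;
  type: string;
  data?: Record<string, unknown>;
}

// Async actor extractor: look up extra details before building the actor.
const extractActor = async ({ session, services }: ActorContext): Promise<AuditActor> => {
  const user = await services.users.findById(session.sub);
  return {
    id: session.sub,
    type: 'user',
    data: { email: session.email, role: user.role },
  };
};

// Demonstration with a stubbed service:
const actor = await extractActor({
  session: { sub: 'u1', email: 'a@b.co' },
  services: { users: { findById: async () => ({ role: 'admin' }) } },
});
console.log(actor.id, actor.type);
```

The same function body can be passed to .actor() on an endpoint or factory, since the extractor receives the request context.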
Declarative Audits
Use .audit() to define audits that fire automatically after the handler returns successfully. Each audit receives the handler's response to extract the payload:
import { z } from 'zod';
const createUserEndpoint = e
.post('/users')
.auditor(auditStorageService)
.actor(({ session }) => ({ id: session.sub, type: 'user' }))
.body(z.object({ name: z.string(), email: z.string() }))
.output(z.object({ id: z.string(), email: z.string() }))
.audit([
{
type: 'user.created',
payload: (response) => ({
userId: response.id,
email: response.email,
}),
entityId: (response) => response.id,
table: 'users',
},
])
.handle(async ({ body }) => {
const user = await createUser(body);
return user;
});
Audit definition fields:
| Field | Type | Description |
|---|---|---|
| type | string | The audit action type (must match a defined AuditableAction) |
| payload | (response) => object | Extracts the payload from the handler response |
| when | (response) => boolean | Optional condition — skips the audit if it returns false |
| entityId | (response) => string | Optional entity identifier for querying |
| table | string | Optional table name for querying |
Conditional Audits
Use the when clause to only record audits under certain conditions:
const updateUserEndpoint = e
.patch('/users/:id')
.auditor(auditStorageService)
.actor(({ session }) => ({ id: session.sub, type: 'user' }))
.output(userResponseSchema)
.audit([
{
type: 'user.updated',
payload: (response) => ({
userId: response.id,
changes: response.changedFields,
}),
when: (response) => response.changedFields.length > 0,
},
])
.handle(async ({ body, params }) => {
return await updateUser(params.id, body);
});
Manual Audits in Handlers
When you need more control, call ctx.auditor directly inside the handler. This is useful for auditing intermediate steps or conditional logic:
const transferEndpoint = e
.post('/transfers')
.auditor(auditStorageService)
.actor(({ session }) => ({ id: session.sub, type: 'user' }))
.handle(async ({ body, auditor }) => {
const result = await processTransfer(body);
auditor.audit('transfer.completed', {
transferId: result.id,
amount: result.amount,
});
if (result.flagged) {
auditor.audit('transfer.flagged', {
transferId: result.id,
reason: result.flagReason,
});
}
return result;
});
Manual audits are buffered in memory and flushed together with any declarative audits when the handler completes.
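Conceptually, the auditor behaves like a small in-memory buffer that is written out once at the end of the request. An illustrative sketch of that behavior, not the actual @geekmidas/audit implementation:

```typescript
interface AuditRecord {
  type: string;
  payload: Record<string, unknown>;
}

// Minimal buffering auditor: audit() only records in memory; flush()
// writes everything in one batch (in the real framework, inside the
// request's database transaction when possible).
class BufferingAuditor {
  private buffer: AuditRecord[] = [];

  audit(type: string, payload: Record<string, unknown>): void {
    this.buffer.push({ type, payload });
  }

  async flush(write: (records: AuditRecord[]) => Promise<void>): Promise<void> {
    if (this.buffer.length === 0) return;
    await write(this.buffer);
    this.buffer = [];
  }
}

const auditor = new BufferingAuditor();
auditor.audit('transfer.completed', { transferId: 't1' });
auditor.audit('transfer.flagged', { transferId: 't1', reason: 'amount' });

const written: AuditRecord[] = [];
await auditor.flush(async (records) => { written.push(...records); });
console.log(written.length); // 2
```

This buffering is why a thrown error in the handler means no audits are persisted: the flush never runs.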
Factory-Level Defaults
Set .auditor() and .actor() on an EndpointFactory so all endpoints inherit the configuration:
import { EndpointFactory } from '@geekmidas/constructs/endpoints';
const api = new EndpointFactory()
.services([databaseService, auditStorageService])
.session(extractSession)
.authorizer('jwt')
.auditor(auditStorageService)
.actor(({ session }) => ({
id: session.sub,
type: 'user',
}));
// All endpoints inherit auditor and actor
const createUserEndpoint = api
.post('/users')
.audit([{
type: 'user.created',
payload: (response) => ({
userId: response.id,
email: response.email,
}),
}])
.handle(async ({ body }) => {
return await createUser(body);
});
const deleteUser = api
.delete('/users/:id')
.handle(async ({ params, auditor }) => {
await removeUser(params.id);
auditor.audit('user.deleted', { userId: params.id });
return { success: true };
});
Transaction Coordination
When the audit storage uses the same database as the endpoint (e.g., both use the same Kysely instance), audits are flushed inside the same database transaction. This guarantees atomicity — if the handler fails, both the data changes and the audit records are rolled back.
const api = new EndpointFactory()
.services([databaseService, auditStorageService])
.database(databaseService)
.auditor(auditStorageService)
.actor(({ session }) => ({ id: session.sub, type: 'user' }));
const endpoint = api
.post('/users')
.output(userSchema)
.audit([{
type: 'user.created',
payload: (response) => ({
userId: response.id,
email: response.email,
}),
}])
.handle(async ({ body, db }) => {
// db is a transaction — both the insert and audit write
// happen atomically in the same transaction
const user = await db
.insertInto('users')
.values(body)
.returningAll()
.executeTakeFirstOrThrow();
return user;
});
TIP
When .database() is configured and the audit storage's databaseServiceName matches the endpoint's database service, the framework automatically wraps the handler and audit flush in a single transaction. No extra configuration is needed.
Row Level Security (RLS)
Endpoints support PostgreSQL Row Level Security via the .rls() method. When configured, the handler receives a db parameter — a transaction with PostgreSQL session variables set — so RLS policies can filter rows automatically.
Setting Up RLS on the Factory
Configure RLS once on the factory so all endpoints inherit it:
import { EndpointFactory } from '@geekmidas/constructs/endpoints';
const api = new EndpointFactory()
.services([databaseService])
.database(databaseService)
.session(extractSession)
.authorizer('jwt')
.rls({
extractor: ({ session }) => ({
user_id: session.sub,
tenant_id: session.tenantId,
}),
prefix: 'app', // optional, default: 'app'
});
The .database() call tells the factory which service provides the database connection. The .rls() extractor receives the request context (session, services, headers, cookies, logger) and returns key-value pairs that become PostgreSQL session variables (e.g. app.user_id).
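Under the hood, each key-value pair corresponds to a transaction-scoped PostgreSQL set_config call. A rough illustration of that mapping (not the framework's actual SQL generation):

```typescript
// Build the transaction-scoped set_config statements that correspond to
// an RLS extractor result. Illustrative only; the framework issues the
// equivalent statements internally. The third argument `true` makes the
// setting local to the current transaction.
function rlsStatements(vars: Record<string, string>, prefix = 'app'): string[] {
  return Object.entries(vars).map(
    ([key, value]) =>
      `SELECT set_config('${prefix}.${key}', '${value.replace(/'/g, "''")}', true)`,
  );
}

console.log(rlsStatements({ user_id: 'u1', tenant_id: 't9' }));
// First statement: SELECT set_config('app.user_id', 'u1', true)
```

Because the settings are transaction-local, they disappear when the request's transaction ends and cannot leak between requests sharing a connection pool.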
Using db in Handlers
When RLS is configured, handlers receive a db parameter — a transaction with the RLS context applied:
const listOrders = api
.get('/orders')
.handle(async ({ db }) => {
// db is a transaction with app.user_id and app.tenant_id set
// PostgreSQL policies using current_setting('app.user_id') filter rows automatically
return db
.selectFrom('orders')
.selectAll()
.execute();
});
Important
Always use db from the handler context when RLS is configured. Using services.database directly bypasses the RLS transaction and PostgreSQL session variables won't be set.
// Wrong - bypasses RLS
.handle(async ({ services }) => {
return services.database.selectFrom('orders').execute();
});
// Correct - uses RLS transaction
.handle(async ({ db }) => {
return db.selectFrom('orders').execute();
});
Bypassing RLS for Specific Endpoints
For admin endpoints that need unrestricted access, bypass the factory-level RLS:
// Using .rls(false)
const adminOrders = api
.get('/admin/orders')
.rls(false)
.handle(async ({ services }) => {
return services.database
.selectFrom('orders')
.selectAll()
.execute();
});
// Or using .rlsBypass()
const adminUsers = api
.get('/admin/users')
.rlsBypass()
.handle(async ({ services }) => {
return services.database
.selectFrom('users')
.selectAll()
.execute();
});
Per-Endpoint RLS
You can also configure RLS on individual endpoints instead of (or in addition to) the factory:
import { e } from '@geekmidas/constructs/endpoints';
const endpoint = e
.get('/orders')
.services([databaseService])
.database(databaseService)
.rls({
extractor: ({ session, header }) => ({
user_id: session.userId,
ip_address: header('x-forwarded-for'),
}),
})
.handle(async ({ db }) => {
return db
.selectFrom('orders')
.selectAll()
.execute();
});
PostgreSQL Policy Example
The session variables set by the RLS extractor are consumed by PostgreSQL policies:
-- Enable RLS on the table
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
-- Create policies using session variables
CREATE POLICY tenant_isolation ON orders
USING (tenant_id = current_setting('app.tenant_id', true));
CREATE POLICY user_access ON orders
USING (user_id = current_setting('app.user_id', true));
Combining RLS with Other Handler Context
The db parameter coexists with services, logger, session, and other context:
const endpoint = api
.get('/orders')
.handle(async ({ db, services, logger, session }) => {
const orders = await db
.selectFrom('orders')
.selectAll()
.execute();
logger.info({ count: orders.length }, 'Fetched orders');
return orders;
});
Event Publishing
Endpoints can declare events that are published automatically after the handler returns:
import { e } from '@geekmidas/constructs/endpoints';
const createOrder = e
.post('/orders')
.services([databaseService])
.publisher(eventPublisherService)
.body(orderSchema)
.output(orderResponseSchema)
.events([
{
type: 'order.created',
payload: (response) => ({
orderId: response.id,
total: response.total,
}),
},
])
.handle(async ({ body, services }) => {
return await services.database
.insertInto('orders')
.values(body)
.returningAll()
.executeTakeFirstOrThrow();
});
You can also publish events manually inside the handler using the publish function:
const endpoint = e
.post('/transfers')
.publisher(eventPublisherService)
.handle(async ({ body, publish }) => {
const result = await processTransfer(body);
await publish('transfer.completed', {
transferId: result.id,
amount: result.amount,
});
return result;
});
The .publisher() method accepts an event publisher service that implements the @geekmidas/events publisher interface. See Defining a Publisher Service below for how to create one.
Defining a Publisher Service
The publisher service follows the standard service pattern. It wraps a Publisher from @geekmidas/events and uses EVENT_PUBLISHER_CONNECTION_STRING to connect to the correct backend:
import type { Service } from '@geekmidas/services';
import type { EventPublisher, PublishableMessage } from '@geekmidas/events';
import { Publisher } from '@geekmidas/events';
// 1. Define your event types
type AppEvents =
| PublishableMessage<'user.created', { userId: string; email: string }>
| PublishableMessage<'order.placed', { orderId: string; total: number }>
| PublishableMessage<'notification.sent', { userId: string; type: string }>;
// 2. Create the publisher service
const eventPublisherService = {
serviceName: 'eventPublisher' as const,
async register(envParser) {
const config = envParser.create((get) => ({
connectionString: get('EVENT_PUBLISHER_CONNECTION_STRING').string(),
})).parse();
return Publisher.fromConnectionString<AppEvents>(config.connectionString);
},
} satisfies Service<'eventPublisher', EventPublisher<AppEvents>>;
// 3. Use with endpoints
const createUserEndpoint = e
.post('/users')
.publisher(eventPublisherService)
.output(userSchema)
.event({
type: 'user.created',
payload: (response) => ({ userId: response.id, email: response.email }),
})
.handle(async ({ body }) => {
return await createUser(body);
});
The connection string protocol determines which backend is used (pgboss://, rabbitmq://, sns://, sqs://, basic://). When using services.events in your workspace config, the CLI auto-generates this env var.
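Switching backends is then purely a configuration change. The values below are placeholders: only the protocol prefixes come from this package's documentation, and the exact format of the remainder depends on the backend:

```shell
# Placeholder values - only the protocol prefix selects the backend;
# consult @geekmidas/events for the exact format each backend expects.
EVENT_PUBLISHER_CONNECTION_STRING="pgboss://user:pass@localhost:5432/app"
# EVENT_PUBLISHER_CONNECTION_STRING="rabbitmq://localhost:5672"
# EVENT_PUBLISHER_CONNECTION_STRING="basic://localhost:3000"
```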
Factory-Level Publisher
Set a default publisher on the factory so all endpoints inherit it:
const api = new EndpointFactory()
.services([databaseService])
.publisher(eventPublisherService);
// All endpoints can use .event() without specifying .publisher()
const createOrder = api
.post('/orders')
.event({
type: 'order.placed',
payload: (response) => ({ orderId: response.id, total: response.total }),
})
.handle(async ({ body }) => { /* ... */ });
Event Subscribers
The s builder creates event subscribers that react to published events. Subscribers receive batches of events and can use services, logging, and even publish follow-up events.
Basic Subscriber
import { s } from '@geekmidas/constructs/subscribers';
export const onUserCreated = s
.subscribe('user.created')
.handle(async ({ events, logger }) => {
for (const event of events) {
logger.info({ userId: event.payload.userId }, 'Processing new user');
await sendWelcomeEmail(event.payload.email);
}
});
With Services
export const onOrderPlaced = s
.services([databaseService, emailService])
.subscribe('order.placed')
.handle(async ({ events, services, logger }) => {
for (const event of events) {
const order = await services.database
.selectFrom('orders')
.where('id', '=', event.payload.orderId)
.selectAll()
.executeTakeFirstOrThrow();
await services.email.send('order-confirmation', {
to: order.customerEmail,
props: { orderId: order.id, total: order.total },
});
logger.info({ orderId: order.id }, 'Order confirmation sent');
}
});
Subscribing to Multiple Events
export const onUserEvents = s
.subscribe(['user.created', 'user.updated'])
.handle(async ({ events, logger }) => {
for (const event of events) {
if (event.type === 'user.created') {
// TypeScript narrows payload to { userId: string; email: string }
await indexNewUser(event.payload.userId);
} else {
// event.type === 'user.updated'
await reindexUser(event.payload.userId);
}
}
});
With Publisher (Chaining Events)
Subscribers can publish follow-up events using a publisher service:
export const onUserCreated = s
.publisher(eventPublisherService)
.subscribe('user.created')
.handle(async ({ events, publish }) => {
for (const event of events) {
await createUserProfile(event.payload.userId);
// Publish a follow-up event
await publish('notification.sent', {
userId: event.payload.userId,
type: 'welcome',
});
}
});
Configuration Options
| Method | Description |
|---|---|
| .subscribe(events) | Event type(s) to listen for (string or string array) |
| .services(services) | Inject services into the handler context |
| .publisher(service) | Set publisher service (provides publish in context) |
| .output(schema) | Validate the return value with a StandardSchema |
| .logger(logger) | Set a custom logger instance |
| .timeout(ms) | Set execution timeout in milliseconds |
| .handle(fn) | Define the handler function and build the Subscriber instance |
Handler Context
The handler receives:
{
events: Array<{ type: string; payload: T }>; // Batch of events (type-safe)
services: ServiceRecord; // Registered services
logger: Logger; // Logger instance
publish: (type, payload) => Promise<void>; // If .publisher() is set
}
How Subscribers Run
Development (gkm dev): The CLI scans your routes for exported Subscriber instances, generates polling code, and starts listening on server startup. It uses EVENT_SUBSCRIBER_CONNECTION_STRING to connect. See Events: Dev Server for details.
Production (AWS Lambda): Each subscriber is compiled into a Lambda handler via AWSLambdaSubscriber. The handler parses SQS/SNS events, filters to subscribed types, and invokes the handler. Configure event source mappings in your IaC tool.
import { AWSLambdaSubscriber } from '@geekmidas/constructs/aws';
import { EnvironmentParser } from '@geekmidas/envkit';
import { onUserCreated } from './subscribers/userEvents';
const envParser = new EnvironmentParser(process.env);
const adaptor = new AWSLambdaSubscriber(envParser, onUserCreated);
export const handler = adaptor.handler;
Production (Server): When building with gkm build --provider server, subscribers are included in the generated server and poll using the configured connection string.
Testing Subscribers
import { TestSubscriberAdaptor } from '@geekmidas/constructs/testing';
import { describe, it, expect } from 'vitest';
describe('onUserCreated subscriber', () => {
it('should send welcome email', async () => {
const adaptor = new TestSubscriberAdaptor(onUserCreated);
const result = await adaptor.invoke({
events: [
{
type: 'user.created',
payload: { userId: '123', email: 'test@example.com' },
},
],
});
// Assert side effects (email sent, etc.)
});
});
Project Configuration
Register subscriber files in gkm.config.ts so the CLI can discover them:
import { defineConfig } from '@geekmidas/cli/config';
export default defineConfig({
routes: './src/endpoints/**/*.ts',
subscribers: './src/subscribers/**/*.ts',
envParser: './src/config/env',
logger: './src/logger',
});
TIP
Subscribers can also be co-located with endpoints in the same route files. The CLI discovers all exported Subscriber instances regardless of file location.
Cron Jobs
The c builder creates scheduled tasks that run on a cron or rate schedule. Cron jobs extend the same function builder as cloud functions, so they support services, input/output schemas, logging, event publishing, and database access.
Basic Cron
import { c } from '@geekmidas/constructs/crons';
export const dailyCleanup = c
.schedule('rate(1 day)')
.handle(async ({ logger }) => {
logger.info('Running daily cleanup');
await cleanupExpiredSessions();
return { cleaned: true };
});
Schedule Expressions
Crons accept two schedule formats:
Rate expressions — run at a fixed interval:
c.schedule('rate(5 minutes)')
c.schedule('rate(1 hour)')
c.schedule('rate(7 days)')
Cron expressions — run on a specific schedule (minute, hour, day, month, weekday):
// Every day at midnight
c.schedule('cron(0 0 * * *)')
// Every Monday at 9am
c.schedule('cron(0 9 * * MON)')
// Every 15 minutes during business hours on weekdays
c.schedule('cron(*/15 9-17 * * MON-FRI)')
// First day of every month at noon
c.schedule('cron(0 12 1 * *)')
With Services
import { c } from '@geekmidas/constructs/crons';
export const syncCron = c
.services([databaseService, cacheService])
.schedule('rate(30 minutes)')
.handle(async ({ services, logger }) => {
const db = services.database;
const cache = services.cache;
const staleRecords = await db
.selectFrom('records')
.where('updated_at', '<', new Date(Date.now() - 3600000))
.selectAll()
.execute();
for (const record of staleRecords) {
await cache.delete(`record:${record.id}`);
}
logger.info({ count: staleRecords.length }, 'Cache invalidated');
return { invalidated: staleRecords.length };
});
With Input and Output Schemas
import { c } from '@geekmidas/constructs/crons';
import { z } from 'zod';
export const reportCron = c
.input(z.object({
reportType: z.enum(['daily', 'weekly', 'monthly']),
}))
.output(z.object({
generatedAt: z.string(),
rowCount: z.number(),
}))
.schedule('cron(0 6 * * MON)')
.handle(async ({ input }) => {
const report = await generateReport(input.reportType);
return {
generatedAt: new Date().toISOString(),
rowCount: report.rows.length,
};
});
With Database Access
import { c } from '@geekmidas/constructs/crons';
export const archiveCron = c
.services([databaseService])
.database(databaseService)
.schedule('cron(0 2 * * *)')
.handle(async ({ db, logger }) => {
const cutoff = new Date(Date.now() - 90 * 24 * 3600000); // 90 days
const result = await db
.deleteFrom('logs')
.where('created_at', '<', cutoff)
.executeTakeFirst();
logger.info({ deleted: result.numDeletedRows }, 'Archived old logs');
return { deleted: Number(result.numDeletedRows) };
});
With Event Publishing
import { c } from '@geekmidas/constructs/crons';
export const reminderCron = c
.services([databaseService])
.publisher(eventPublisherService)
.schedule('rate(1 hour)')
.handle(async ({ services, publish }) => {
const users = await services.database
.selectFrom('users')
.where('reminder_due', '<', new Date())
.selectAll()
.execute();
for (const user of users) {
await publish('reminder.due', { userId: user.id });
}
return { notified: users.length };
});
Configuration Options
| Method | Description |
|---|---|
| .schedule(expression) | Set the cron or rate schedule expression |
| .input(schema) | Validate the input payload with a StandardSchema |
| .output(schema) | Validate the return value with a StandardSchema |
| .services(services) | Inject services into the handler context |
| .database(service) | Set the database service (provides db in context) |
| .publisher(service) | Set the event publisher service (provides publish in context) |
| .logger(logger) | Set a custom logger instance |
| .timeout(ms) | Set the execution timeout in milliseconds (default: 30000) |
| .memorySize(mb) | Set the memory allocation in MB (AWS Lambda) |
| .handle(fn) | Define the handler function and build the Cron instance |
Project Configuration
Register your cron files in gkm.config.ts so the CLI can discover and build them:
import { defineConfig } from '@geekmidas/cli/config';
export default defineConfig({
routes: './src/endpoints/**/*.ts',
crons: './src/crons/**/*.ts', // glob pattern for cron files
envParser: './src/config/env',
logger: './src/logger',
});
AWS Lambda Deployment
When building with gkm build --provider aws-lambda, each cron is compiled into a separate Lambda handler with its schedule expression included in the build manifest. Use the manifest to configure EventBridge rules in your IaC tool (SST, CDK, Terraform, etc.).
Deployment
Hono Integration
import { HonoEndpoint } from '@geekmidas/constructs/hono';
import { Hono } from 'hono';
import { EnvironmentParser } from '@geekmidas/envkit';
import { ConsoleLogger } from '@geekmidas/logger/console';
const app = new Hono();
const logger = new ConsoleLogger();
const envParser = new EnvironmentParser(process.env);
await HonoEndpoint.fromRoutes(
['./src/endpoints/**/*.ts'],
envParser,
app,
logger,
process.cwd(),
{
docsPath: '/__docs',
openApiOptions: {
title: 'My API',
version: '1.0.0',
},
}
);
export default app;
AWS Lambda (API Gateway v2)
import { AmazonApiGatewayV2Endpoint } from '@geekmidas/constructs/aws';
import { EnvironmentParser } from '@geekmidas/envkit';
import { getUserEndpoint } from './endpoints/users';
const envParser = new EnvironmentParser(process.env);
const adaptor = new AmazonApiGatewayV2Endpoint(envParser, getUserEndpoint);
export const handler = adaptor.handler;
AWS Lambda (API Gateway v1)
import { AmazonApiGatewayV1Endpoint } from '@geekmidas/constructs/aws';
import { EnvironmentParser } from '@geekmidas/envkit';
import { getUserEndpoint } from './endpoints/users';
const envParser = new EnvironmentParser(process.env);
const adaptor = new AmazonApiGatewayV1Endpoint(envParser, getUserEndpoint);
export const handler = adaptor.handler;
Testing
Testing Endpoints
import { TestEndpointAdaptor } from '@geekmidas/constructs/testing';
import { describe, it, expect } from 'vitest';
describe('Login Endpoint', () => {
it('should set session cookie on successful login', async () => {
const adaptor = new TestEndpointAdaptor(loginEndpoint);
const result = await adaptor.request({
body: {
email: 'user@example.com',
password: 'password123',
},
headers: {
'content-type': 'application/json',
},
services: {},
});
// Check response data
expect(result.data).toMatchObject({
id: expect.any(String),
email: 'user@example.com',
});
// Check response metadata
expect(result.metadata.cookies?.has('session')).toBe(true);
const sessionCookie = result.metadata.cookies?.get('session');
expect(sessionCookie?.options).toMatchObject({
httpOnly: true,
secure: true,
sameSite: 'strict',
});
});
it('should read and use request cookies', async () => {
const adaptor = new TestEndpointAdaptor(profileEndpoint);
const result = await adaptor.request({
headers: {
cookie: 'session=abc123; theme=dark',
},
services: {},
});
expect(result.data).toMatchObject({
userId: expect.any(String),
preferences: {
theme: 'dark',
},
});
});
it('should delete session cookie on logout', async () => {
const adaptor = new TestEndpointAdaptor(logoutEndpoint);
const result = await adaptor.request({
headers: {},
services: {},
});
expect(result.data.success).toBe(true);
const sessionCookie = result.metadata.cookies?.get('session');
expect(sessionCookie?.value).toBe('');
expect(sessionCookie?.options?.maxAge).toBe(0);
});
it('should return 201 status code for resource creation', async () => {
const adaptor = new TestEndpointAdaptor(createUserEndpoint);
const result = await adaptor.request({
body: {
name: 'John Doe',
email: 'john@example.com',
},
headers: {},
services: {},
});
expect(result.metadata.status).toBe(201);
expect(result.metadata.headers?.['Location']).toBe(`/users/${result.data.id}`);
});
it('should set custom headers', async () => {
const adaptor = new TestEndpointAdaptor(downloadEndpoint);
const result = await adaptor.request({
params: { id: 'file-123' },
headers: {},
services: {},
});
expect(result.metadata.headers).toMatchObject({
'Content-Disposition': expect.stringContaining('attachment'),
'Cache-Control': 'private, max-age=3600',
});
});
});
Test Result Structure:
When using response.send(), the test adaptor returns:
{
data: T, // The actual response data
metadata: {
headers?: Record<string, string>,
cookies?: Map<string, { value: string, options?: CookieOptions }>,
status?: SuccessStatus,
}
}
Testing Simple Responses (without metadata):
For endpoints that don't use the response builder, the test adaptor returns just the data:
describe('Simple Endpoint', () => {
it('should return user data', async () => {
const adaptor = new TestEndpointAdaptor(getUserEndpoint);
const result = await adaptor.request({
params: { id: 'user-123' },
headers: {},
services: {},
});
// Direct data access (no metadata wrapper)
expect(result).toMatchObject({
id: 'user-123',
name: expect.any(String),
});
});
});
OpenAPI Documentation
Endpoints automatically generate OpenAPI 3.1 documentation:
import { Endpoint } from '@geekmidas/constructs/endpoints';
const schema = await Endpoint.buildOpenApiSchema(
[getUserEndpoint, createUserEndpoint],
{
title: 'User API',
version: '1.0.0',
description: 'API for managing users',
}
);
The Hono adapter automatically serves OpenAPI docs at /docs (configurable).