.proto files directly. Instead, you define it by creating a collection of Trusted Documents - named GraphQL queries and mutations that represent your desired API surface.
These operations are then compiled into Protocol Buffer definitions that serve as the stable interface for your consumers.
1. Creating Trusted Documents
Trusted Documents are simply standard GraphQL operations saved in `.graphql` files. Each file should contain exactly one named operation.
Create a directory for your service operations (e.g. services/) and add your GraphQL files there.
Rules for Mapping Operations to RPC Methods:
- One operation per file: Each `.graphql` file must contain only one operation.
- PascalCase naming: The operation name must use PascalCase (e.g. `GetEmployeeById`). This name becomes the RPC method name.
- No root-level aliases: Aliases are not allowed at the root of the query, but nested aliases are permitted.
Example operations:
- Query example: `services/GetEmployeeById.graphql`
- Mutation example: `services/UpdateEmployeeMood.graphql`
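As a hedged sketch of what such a file might contain (the `employee` field and its selections are hypothetical; substitute fields from your own schema):

```graphql
# services/GetEmployeeById.graphql
# Exactly one named, PascalCase operation per file.
# The operation name becomes the RPC method name: GetEmployeeById.
query GetEmployeeById($id: ID!) {
  employee(id: $id) {
    id
    name
    mood
  }
}
```

The variables (`$id`) become the request message, and the selection set becomes the response message, as described in the mapping table below.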
2. Generating Proto Definitions
Once your operations are defined, use the `wgc` CLI to generate the corresponding Protocol Buffer service definition.
This process compiles your operations and schema into a .proto file that defines the services, methods and message types.
Run the `wgc grpc-service generate` command from your project root; see the CLI reference for the available options.
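A minimal invocation might look like the following (the exact flags for inputs and output paths are not shown here; consult the `wgc grpc-service generate` CLI reference for them):

```shell
# Compile the operations in services/ against your schema into
# a Protocol Buffer service definition.
npx wgc grpc-service generate
```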
This command will generate two files in your output directory: service.proto and service.proto.lock.json.
How mapping works
At a high level, each GraphQL operation is translated into a single RPC method with strongly typed request and response messages. The generator automatically maps GraphQL concepts to protobuf:

| GraphQL | Protobuf |
|---|---|
| Operation name | RPC method |
| Variables | Request message |
| Selection set | Response message |
| Scalar types | Protobuf scalar equivalents |
- Query operations are marked with the `idempotency_level = NO_SIDE_EFFECTS` option, enabling support for HTTP GET requests.
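To make the mapping concrete, here is a sketch of what the generated output for a `GetEmployeeById` query operation could look like. The package and service names, message fields, and field numbers are illustrative assumptions; the actual contents of `service.proto` depend on your schema and operations:

```protobuf
syntax = "proto3";

package service.v1; // illustrative package name

service EmployeeService { // illustrative service name
  // Generated from services/GetEmployeeById.graphql.
  // Queries are marked NO_SIDE_EFFECTS, enabling HTTP GET support.
  rpc GetEmployeeById(GetEmployeeByIdRequest) returns (GetEmployeeByIdResponse) {
    option idempotency_level = NO_SIDE_EFFECTS;
  }
}

message GetEmployeeByIdRequest {
  // Derived from the operation's variables.
  string id = 1;
}

message GetEmployeeByIdResponse {
  // Derived from the operation's selection set.
  string id = 1;
  string name = 2;
  string mood = 3;
}
```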
3. Organizing Multiple Services
The Cosmo Router supports serving multiple gRPC services and packages simultaneously. It achieves this by recursively walking the directory specified in your router configuration to discover `.proto` files and their associated `.graphql` operations.
Because discovery is recursive and based on the package declaration within the generated .proto files, you have flexibility in how you organize your directories.
Standard Package Organization
A common pattern is to organize services by their package name, for example with a single service per package.

Flexible Organization
Since the router relies on the package declaration in the proto file, not the directory name, you can organize directories however is convenient.

Important: Nested Discovery Rules
While the router searches recursively, it has a specific rule regarding nested proto files: nested proto files are not discovered if a parent directory already contains a proto file. Once the router finds a `.proto` file in a directory, it stops searching deeper in that specific branch.
- ✅ Discovered and used: `employee.proto`, `op1.graphql`, `op2.graphql`
- ❌ Not discovered: `other.proto` (its parent directory already contains a proto file)
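A hypothetical directory layout illustrating these rules (all file and directory names are invented for the example):

```text
services/
├── employees/
│   ├── employee.proto          # ✅ discovered and used
│   ├── op1.graphql             # ✅ associated with employee.proto
│   ├── op2.graphql             # ✅ associated with employee.proto
│   └── internal/
│       └── other.proto         # ❌ skipped: a parent directory already has a proto file
└── products/
    └── product.proto           # ✅ discovered (separate branch of the tree)
```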
4. Versioning & Stability
A critical part of maintaining a gRPC API, or any API, is ensuring forward compatibility for your clients: as you evolve your GraphQL schema and operations, existing clients must continue to work. The `wgc grpc-service generate` command manages this automatically using a lock file, so you can safely evolve your API without manually managing protobuf field numbers. You usually don't need to think about this.
The service.proto.lock.json file
When you generate your proto definitions for the first time, a service.proto.lock.json file is created alongside the .proto file.
This file records the unique field numbers assigned to every field in your protobuf messages.
You should commit this file to version control.
On subsequent runs, the generator reads this lock file to ensure that:
- existing fields retain their assigned numbers.
- new fields are assigned new, unused numbers.
- removed fields have their numbers marked as “reserved” so that they are not reused.
How it works
The lock file tracks field numbers using the full, dot-notation path of nested messages. This ensures that fields in different messages with the same name (e.g. a `Details` message nested under `User` vs. one nested under `Product`) are tracked independently.
Example 1: Stable Field Numbers
Adding fields is always safe. New fields get new numbers; existing fields keep theirs.

Example 2: Handling Removed Fields
Removing fields is safe: removed field numbers are reserved and never reused.

Deeply Nested Messages (advanced)
You don't need to manage this manually. The locking mechanism works regardless of nesting depth. It uses full paths like `GetDeepResponse.GetDeep.Level2.Level3.Level4.Level5` to uniquely identify every message scope, ensuring precise control over field numbering throughout your entire API surface.
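As an illustration of the reservation behavior described above, suppose a `mood` field is removed from an operation's selection set. Using standard proto3 semantics, the regenerated message might then look like this (the message and field names are hypothetical):

```protobuf
// After removing `mood` from the operation, the lock file ensures its
// old field number is reserved rather than reassigned, so stale clients
// never misinterpret a new field as the removed one.
message GetEmployeeByIdResponse {
  reserved 3;            // previously: string mood = 3;
  string id = 1;         // existing fields keep their numbers
  string name = 2;
  string department = 4; // a newly added field gets a fresh, unused number
}
```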
Best Practices
- Commit the lock file: Always commit service.proto.lock.json to your version control system along with your .graphql and .proto files.
- Do not edit manually: Never manually modify or delete the lock file. Let the wgc CLI manage it.
- Generate on CI: Run the generation step as part of your CI/CD pipeline to ensure the lock file is always up-to-date with your operations.