Configuration
Learn how to configure your router.
The router provides three different ways of customization:
Configure the router runtime: You can specify a config.yaml for convenience or pass environment variables. Either way, you configure the global behavior of the router. For a full reference of all available options, see below or use your IDE of choice.
Configure how your graph is served: This file can be provided as a config option or is pulled automatically from the CDN. It contains information on how to resolve your federated schema. The engine uses this information to build a highly optimized query planner. For more information, see wgc router compose to build the file locally for development or wgc router fetch to download the latest production version.
Customize the router programmatically through Go modules: It is unlikely that we will provide every possible feature as built-in functionality. For advanced use cases or more control, you can build Go modules and compile the Router in a few commands. If you are uncertain whether your use case should be implemented as a custom module, don't hesitate to open an issue. We might already have a plan for it or can assist you with the implementation.
Recommendation: Create a config file and use environment variable expansion to avoid storing secrets on the file system.
Config file
For convenience, you can create a config.yaml
to specify all router options. Start the router in the same directory or pass the path to the file as a CONFIG_PATH
environment variable.
Values specified in the config file take precedence over environment variables. This also includes empty values, so only specify values that should be overwritten. That way, you can treat the config file as a single source of truth.
Expand Environment Variables
You can expand environment variables in the file like this:
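For example, a minimal sketch (assuming the router's ${VAR} placeholder syntax for expansion):

```yaml
# The ${LOG_LEVEL} placeholder is replaced at startup with the value of
# the LOG_LEVEL environment variable.
log_level: "${LOG_LEVEL}"
```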
This expands the placeholder with the value of the environment variable LOG_LEVEL and assigns it to the key log_level in your config file. For numeric values, ensure quotes are omitted.
Config Validation & Auto-completion
We know configuration is hard, especially for a software component like the router that can be customized entirely to your needs. In order to simplify this, we use JSON schema to validate the router configuration. This comes with huge benefits, all right at your fingertips:
Auto-completion
Documentation (Usage, Examples)
Detect deprecated fields
Detect typos or invalid values.
Some options can only be validated by the router itself, so full validation requires starting the router. Once your router has started successfully, you can be sure that your configuration is valid.
IDE Configuration
VS Code: Install the YAML extension in your IDE.
JetBrains: Supported out of the box, but in some circumstances it conflicts with other default mappings. Go to Languages & Frameworks -> Schemas and DTDs -> JSON Schema Mappings and configure the mapping yourself.
As the next step, add the following line to the head of your config.yaml file. This line instructs your IDE to download the correct JSON schema file to validate the config file.
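A sketch of what that line can look like; the schema URL below is a placeholder, not the published URL:

```yaml
# yaml-language-server: $schema=<router-config-json-schema-url>
```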
If you want to pin to a specific router version, use the following URL:
Now, you should get auto-completion 🌟.
Environment Variables
Many configuration options can be set as environment variables. For a complete list of options, please look at the Router config tables.
Router
The following sections describe each configuration in detail with all available options and their defaults.
Intervals, timeouts, and delays are specified in Go duration syntax, e.g. 1s, 5m, or 1h.
Sizes can be specified in units such as 2MB or 1MiB.
LISTEN_ADDR
listen_addr
The server listener address.
localhost:3002
CONTROLPLANE_URL
controlplane_url
The control plane URL. Not required when a static execution config is provided.
PLAYGROUND_ENABLED
playground_enabled
Enables the GraphQL playground on ($LISTEN_ADDR/)
true
PLAYGROUND_PATH
playground_path
The path where the playground is served
"/"
INTROSPECTION_ENABLED
introspection_enabled
Enables GraphQL schema introspection.
true
QUERY_PLANS_ENABLED
query_plans_enabled
The Router can return Query plans as part of the response, which might be useful to understand the execution.
true
LOG_LEVEL
log_level
debug / info / warning / error / fatal / panic
info
JSON_LOG
json_log
Render the log output in JSON format (true) or human readable (false)
true
SHUTDOWN_DELAY
shutdown_delay
Maximum time in seconds the server has to shut down gracefully. Should be higher than GRACE_PERIOD
60s
GRACE_PERIOD
grace_period
Maximum time in seconds the server has between schema updates to gracefully clean up all resources. Should be smaller than SHUTDOWN_DELAY
30s
POLL_INTERVAL
poll_interval
The interval of how often the router should check for new schema updates
10s
POLL_JITTER
poll_jitter
The maximum delay added to the poll interval to mitigate thundering herd issues in router fleet scenarios.
5s
HEALTH_CHECK_PATH
health_check_path
Health check path. Returns 200
when the router is alive
/health
READINESS_CHECK_PATH
readiness_check_path
Readiness check path. Returns 200
when the router is ready to accept traffic, otherwise 503
/health/ready
LIVENESS_CHECK_PATH
liveness_check_path
Liveness check path. Returns 200 when the router is alive
/health/live
GRAPHQL_PATH
graphql_path
The path where the GraphQL Handler is served
/graphql
PLAYGROUND_PATH
playground_path
The path where the playground is served
/
LOCALHOST_FALLBACK_INSIDE_DOCKER
localhost_fallback_inside_docker
Enable fallback for requests that fail to connect to localhost while running in Docker
true
DEV_MODE
dev_mode
false
INSTANCE_ID
If not specified, a new ID will be generated with each router start. A stable ID ensures that metrics with the same ID are grouped together and the same server can be identified on the platform.
Example configuration:
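A minimal sketch using a few of the top-level options documented above (values shown are the documented defaults; adjust as needed):

```yaml
listen_addr: "localhost:3002"
log_level: "info"
json_log: true
playground_enabled: true
graphql_path: "/graphql"
shutdown_delay: 60s
grace_period: 30s
poll_interval: 10s
```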
Access Logs
For a detailed example, please refer to the Access Logs section.
access_logs
Enable the access logs. The access logs are used to log the incoming requests. By default, the access logs are enabled and logged to the standard output.
ACCESS_LOGS_ENABLED
access_logs.enabled
Enable the access logs. The access logs are used to log the incoming requests. By default, the access logs are enabled and logged to the standard output.
true
access_logs.buffer
The buffer is used to buffer the logs before writing them to the output.
ACCESS_LOGS_BUFFER_ENABLED
access_logs.buffer.enabled
Enable the buffer.
false
ACCESS_LOGS_BUFFER_SIZE
access_logs.buffer.size
The size of the buffer. The default value is 256KB.
ACCESS_LOGS_FLUSH_INTERVAL
access_logs.buffer.flush_interval
The interval at which the buffer is flushed. The period is specified as a string with a number and a unit, e.g. 10ms, 1s, 1m, 1h. The supported units are 'ms', 's', 'm', 'h'.
access_logs.output
The log destination. The supported destinations are stdout and file. Only one option can be enabled. The default destination is stdout.
ACCESS_LOGS_OUTPUT_STDOUT_ENABLED
access_logs.output.stdout.enabled
true
ACCESS_LOGS_OUTPUT_FILE_ENABLED
access_logs.output.file.enabled
false
ACCESS_LOGS_FILE_OUTPUT_PATH
access_logs.output.file.path
The path to the log file.
access_logs.router
The configuration for access logs for the router.
access_logs.router.fields
The fields to add to the access logs for router. The fields are added to the logs as key-value pairs.
[]
access_logs.router.fields.key
The key of the field to add to the logs.
access_logs.router.fields.default
The default value of the field. If the value is not set, value_from is used. If both value and value_from are set, value_from has precedence and in case of a missing value_from, the default value is used.
access_logs.router.fields.value_from
Defines a source for the field value e.g. from a request header. If both default and value_from are set, value_from has precedence.
access_logs.router.fields.value_from.request_header
The name of the request header from which to extract the value. The value is only extracted when a request context is available otherwise the default value is used.
access_logs.router.fields.value_from.context_field
The field name of the context from which to extract the value. The value is only extracted when a context is available otherwise the default value is used. One of: [ "operation_name", "operation_type", "operation_service_names", "operation_hash", "persisted_operation_sha256", "operation_sha256", "response_error_message", "graphql_error_codes", "graphql_error_service_names", "operation_parsing_time", "operation_validation_time", "operation_planning_time", "operation_normalization_time" ]
access_logs.subgraphs
The subgraph access logs configuration
access_logs.subgraphs.enabled
Enable the subgraphs access logs.
false
access_logs.subgraphs.fields
The fields to add to the logs when printing subgraph access logs. The fields are added to the logs as key-value pairs.
access_logs.subgraphs.fields.key
The key of the field to add to the logs.
access_logs.subgraphs.fields.default
The default value of the field. If the value is not set, value_from is used. If both value and value_from are set, value_from has precedence and in case of a missing value_from, the default value is used.
access_logs.subgraphs.fields.value_from
Defines a source for the field value e.g. from a request header. If both default and value_from are set, value_from has precedence.
access_logs.subgraphs.fields.value_from.request_header
The name of the request header from which to extract the value. The value is only extracted when a request context is available otherwise the default value is used.
access_logs.subgraphs.fields.value_from.response_header
The name of the response header from which to extract the value. The value is only extracted when a request context is available otherwise the default value is used.
access_logs.subgraphs.fields.value_from.context_field
The field name of the context from which to extract the value. The value is only extracted when a context is available otherwise the default value is used. One of: [ "operation_name", "operation_type", "operation_service_names", "operation_hash", "persisted_operation_sha256", "operation_sha256", "operation_parsing_time", "operation_validation_time", "operation_planning_time", "operation_normalization_time" ]
Example YAML config:
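A minimal sketch assembled from the keys documented above; the custom field keys and the header name are illustrative assumptions:

```yaml
access_logs:
  enabled: true
  buffer:
    enabled: true
    size: 256KB
    flush_interval: 1s
  router:
    fields:
      - key: "operation_name"
        value_from:
          context_field: operation_name
      - key: "client"
        default: "unknown"
        value_from:
          request_header: "X-Client-Name"   # illustrative header name
  subgraphs:
    enabled: true
```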
Graph
Overall configuration for the Graph that's configured for this Router.
GRAPH_API_TOKEN
token
Example YAML config:
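A minimal sketch, reading the token from the environment as recommended above:

```yaml
graph:
  # Expanded from the GRAPH_API_TOKEN environment variable at startup.
  token: "${GRAPH_API_TOKEN}"
```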
TLS
The Router supports TLS and mTLS for secure communication with your clients and infrastructure components like load balancers.
Server TLS
TLS_SERVER_ENABLED
enabled
Enables server TLS support.
false
TLS_SERVER_CERT_FILE
cert_file
The path to the server certificate file.
TLS_SERVER_KEY_FILE
key_file
The path to the server private key file.
Example YAML config:
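A minimal sketch assuming the server options are nested under tls.server; the certificate paths are placeholders:

```yaml
tls:
  server:
    enabled: true
    cert_file: "/etc/router/tls/server.crt"   # placeholder path
    key_file: "/etc/router/tls/server.key"    # placeholder path
```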
Client Authentication
TLS_CLIENT_AUTH_CERT_FILE
cert_file
Enables client authentication support. The path to the certificate file used to authenticate clients.
""
TLS_CLIENT_AUTH_REQUIRED
required
Enforces a valid client certificate to establish a connection.
false
Example YAML config:
Compliance
The compliance configuration. It includes, for example, settings for anonymizing IP addresses.
IP Anonymization
SECURITY_ANONYMIZE_IP_ENABLED
enabled
Enables IP anonymization in traces and logs.
true
SECURITY_ANONYMIZE_IP_METHOD
method
The method used to anonymize IP addresses. Can be "hash" or "redact".
"redact"
Example YAML config:
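A minimal sketch assuming the IP anonymization options are nested under compliance.anonymize_ip:

```yaml
compliance:
  anonymize_ip:
    enabled: true
    method: "redact"   # or "hash"
```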
Cluster
CLUSTER_NAME
name
The logical name of the router cluster. The name is used for analytics purposes.
Example YAML config:
Telemetry
TELEMETRY_SERVICE_NAME
service_name
cosmo-router
resource_attributes
The resource attributes to add to OTEL metrics and traces. The resource attributes identify the entity producing the traces and metrics.
resource_attributes.key
The key of the attribute.
resource_attributes.value
The value of the attribute.
attributes
The attributes to add to OTEL metrics and traces. Because Prometheus metrics rely on the OpenTelemetry metrics, the attributes are also added to the Prometheus metrics.
[]
attributes.key
The key of the attribute.
attributes.default
The value of the attribute.
attributes.value_from
Defines a source for the attribute value e.g. from a request header. If both default and value_from are set, value_from has precedence.
attributes.value_from.request_header
The name of the request header from which to extract the value. The value is only extracted when a request context is available otherwise the default value is used. Don't forget to add the header to your CORS settings.
Example YAML config:
Tracing
TRACING_ENABLED
enabled
true
TRACING_SAMPLING_RATE
sampling_rate
The sampling rate for the traces. The value must be between 0 and 1. If the value is 0, no traces will be sampled. If the value is 1, all traces will be sampled.
1
TRACING_PARENT_BASED_SAMPLER
parent_based_sampler
Enable the parent-based sampler. The parent-based sampler is used to sample the traces based on the parent trace.
true
TRACING_BATCH_TIMEOUT
The maximum delay allowed before spans are exported.
10s
TRACING_EXPORT_GRAPHQL_VARIABLES
export_graphql_variables
Export GraphQL variables as span attribute. Variables may contain sensitive data.
false
with_new_root
Starts the root span always at the router.
false
Example YAML config:
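A minimal sketch assuming tracing is nested under telemetry.tracing, using the options documented above:

```yaml
telemetry:
  tracing:
    enabled: true
    sampling_rate: 1
    parent_based_sampler: true
    export_graphql_variables: false
    with_new_root: false
```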
Exporters
disabled
bool
exporter
one of: http,grpc
endpoint
path
headers
Example YAML config:
Propagation
trace_context
true
jaeger
b3
baggage
datadog
Enable Datadog trace propagation
false
Example YAML config:
Metrics
OTLP
METRICS_OTLP_ENABLED
enabled
Enables OTEL metrics instrumentation
true
METRICS_OTLP_ROUTER_RUNTIME
router_runtime
Enable the collection of metrics for the router runtime.
true
METRICS_OTLP_GRAPHQL_CACHE
graphql_cache
Enable the collection of metrics for the GraphQL operation router caches.
false
METRICS_OTLP_EXCLUDE_METRICS
exclude_metrics
The metrics to exclude from the OTEL metrics. Accepts a list of Go regular expressions. Use https://regex101.com/ to test your regular expressions.
[]
METRICS_OTLP_EXCLUDE_METRIC_LABELS
exclude_metric_labels
The metric labels to exclude from the OTEL metrics. Accepts a list of Go regular expressions. Use https://regex101.com/ to test your regular expressions.
[]
Attributes
attributes
The attributes to add to OTLP Metrics and Prometheus.
[]
attributes.key
The key of the field.
attributes.default
The default value of the field. If the value is not set, value_from is used. If both value and value_from are set, value_from has precedence and in case of a missing value_from, the default value is used.
attributes.value_from
Defines a source for the field value e.g. from a request header or request context. If both default and value_from are set, value_from has precedence.
attributes.value_from
Defines a source for the field value e.g. from a request header or request context. If both default and value_from are set, value_from has precedence.
attributes.value_from.request_header
The name of the request header from which to extract the value. The value is only extracted when a request context is available otherwise the default value is used.
attributes.value_from.context_field
The field name of the context from which to extract the value. The value is only extracted when a context is available otherwise the default value is used.
One of: ["operation_service_names", "graphql_error_codes", "graphql_error_service_names", "operation_sha256"]
Example YAML config:
Prometheus
PROMETHEUS_ENABLED
enabled
Enables prometheus metrics support
true
PROMETHEUS_HTTP_PATH
path
The HTTP path where metrics are exposed.
"/metrics"
PROMETHEUS_LISTEN_ADDR
listen_addr
The prometheus listener address
"127.0.0.1:8088"
PROMETHEUS_GRAPHQL_CACHE
graphql_cache
Enable the collection of metrics for the GraphQL operation router caches.
false
PROMETHEUS_EXCLUDE_METRICS
exclude_metrics
PROMETHEUS_EXCLUDE_METRIC_LABELS
exclude_metric_labels
PROMETHEUS_EXCLUDE_SCOPE_INFO
exclude_scope_info
Exclude scope info from Prometheus metrics.
false
Example YAML config:
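A minimal sketch assuming the Prometheus options are nested under telemetry.metrics.prometheus:

```yaml
telemetry:
  metrics:
    prometheus:
      enabled: true
      listen_addr: "127.0.0.1:8088"
      path: "/metrics"
      exclude_metrics: []
      exclude_metric_labels: []
```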
Exporter
disabled
exporter
one of: http,grpc
endpoint
path
The path to which the metrics are exported. This is ignored when using 'grpc' as exporter and can be omitted.
headers
temporality
Temporality defines the window that an aggregation is calculated over. one of: delta, cumulative
Example YAML config:
GraphQL Metrics
GRAPHQL_METRICS_ENABLED
enabled
true
GRAPHQL_METRICS_COLLECTOR_ENDPOINT
collector_endpoint
Default endpoint
Example YAML config:
CORS
CORS_ENABLED
enabled
Set this to enable/disable the CORS middleware. It is enabled by default. When disabled, the rest of the properties for CORS have no effect.
true
CORS_ALLOW_ORIGINS
allow_origins
This is a list of origins which are allowed. You can provide origins with wildcards
*
CORS_ALLOW_METHODS
allow_methods
HEAD,GET,POST
CORS_ALLOW_HEADERS
allow_headers
Origin, Content-Length, Content-Type
CORS_ALLOW_CREDENTIALS
allow_credentials
true
CORS_MAX_AGE
max_age
5m
Example YAML config:
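A minimal sketch based on the CORS options documented above; the wildcard origin is only an illustration:

```yaml
cors:
  enabled: true
  allow_origins:
    - "https://*.example.com"   # illustrative origin
  allow_methods:
    - HEAD
    - GET
    - POST
  allow_headers:
    - Origin
    - Content-Length
    - Content-Type
  allow_credentials: true
  max_age: 5m
```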
Cache Control Policy
CACHE_CONTROL_POLICY_ENABLED
enabled
Set this to enable/disable the strict cache control policy. It is false by default
false
CACHE_CONTROL_POLICY_VALUE
value
The default value for the cache control policy. It will be applied to all requests, unless a subgraph has a more strict one
Example YAML Config:
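A minimal sketch based on the two options above; the Cache-Control value is only an illustration:

```yaml
cache_control_policy:
  enabled: true
  value: "max-age=180, public"   # illustrative default policy
```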
Custom Modules
Configure your custom Modules. More information on this feature can be found here: Custom Modules
Example YAML config:
Headers
Configure Header propagation rules for all Subgraphs or individual Subgraphs by name.
Cookie Whitelist
When Cookie is a propagated header, you may want to filter the keys that are forwarded to the subgraph from the client. You can do this via the cookie_whitelist option, which is a list of cookie keys that will not be discarded. An empty value means allow all. If you'd like to block all cookies, disable the header propagation entirely.
The cookie whitelist can also affect custom modules that read request cookies, even if propagation is disabled for the Cookie
header. This is because the whitelisting happens very early in the request lifecycle, before it reaches subgraphs or custom modules.
Example YAML config:
Global Header Rules
Apply to requests/responses to/from "all" Subgraphs. These will be applied globally in the graph
request
List of Request Header Rules
response
List of Response Header Rules
Example YAML config:
Request Header Rule
Apply to requests to specific Subgraphs.
op
oneof=propagate, set
matching
matching is the regex to match the header name against
named
named is the exact header name to match
rename
renames the header's key to the provided value
default
default is the default value to set if the header is not present
name
If op is set, name is the name of the desired header to set
value
If op is set, value is the value of the desired header to set
Example YAML config:
Response Header Rule
These rules can be applied to all responses, as well as just to specific subgraphs, and used to manipulate and propagate response headers from subgraphs to the client. By configuring the rule, users can define how headers should be handled when multiple subgraphs provide conflicting values for a specific header.
op
oneof=propagate
algorithm
oneof=first_write, last_write, append
matching
matching is the regex to match the header name against
named
named is the exact header name to match
default
default is the default value to set if the header is not present
rename
renames the header's key to the provided value
Example YAML config:
Storage Providers
The configuration for the storage providers. Storage providers can be used to store the persisted operations and the execution config.
Example YAML config:
Users can supply a list of URLs for their redis storage provider.
If cluster_enabled: false, then the first URL is used as the connection URL.
If cluster_enabled: true, then all of the URLs are used for the Redis Cluster connection.
URLs can be supplied with redis configuration options embedded, such as:
redis://myUser:myPass@localhost:6379?ssl=true&db=1&connectTimeout=2
Prior to router@v0.169.0, the redis configuration looks like:
Storage Provider Yaml Options
The following options configure the available storage providers.
cdn
CDN storage provider.
cdn.id
Unique ID of the provider. It is used as reference in persisted_operations
and execution_config
sections.
cdn.url
"https://cosmo-cdn.wundergraph.com"
redis
Redis storage provider
STORAGE_PROVIDER_REDIS_ID
redis.id
Unique ID of the provider. It is used as a reference in the automatic_persisted_queries
section
STORAGE_PROVIDER_REDIS_CLUSTER_ENABLED
redis.cluster_enabled
If the Redis instance is a Redis cluster
STORAGE_PROVIDER_REDIS_URLS
redis.urls
List of Redis urls, containing port and auth information if necessary. Must contain at least one element
s3
S3 storage provider
s3.id
Unique ID of the provider. It is used as reference in persisted_operations
and execution_config
sections.
s3.endpoint
The endpoint of the S3 bucket.
s3.bucket
The name of the S3 bucket. The S3 bucket is used to store the execution config.
s3.access_key
The access key of the S3 bucket. The access key ID is used to authenticate with the S3 bucket.
s3.secret_key
The secret key of the S3 bucket. The secret access key is used to authenticate with the S3 bucket.
s3.region
The region of the S3 bucket.
s3.secure
Enables HTTPS for the provided endpoint. Must be set to false
when accessing HTTP endpoints
true
Persisted Operations
The configuration for the persisted operations allows you to maintain a fixed set of GraphQL operations that can be queried against the router without exposing your entire graph to the public. This approach enhances security and performance.
Example YAML config:
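A minimal sketch based on the options documented below; the provider ID and prefix are illustrative and must match a provider defined in storage_providers:

```yaml
persisted_operations:
  log_unknown: true
  safelist:
    enabled: true
  cache:
    size: "100MB"
  storage:
    provider_id: "s3"        # must match a storage_providers entry
    object_prefix: "prod"    # illustrative prefix
```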
Persisted Operations Configuration Options
The following options configure persisted operations.
persisted_operations
The configuration for the persisted operations.
persisted_operations.cache
LRU cache for persisted operations.
PERSISTED_OPERATIONS_CACHE_SIZE
persisted_operations.cache.size
The size of the cache in SI unit.
"100MB"
persisted_operations.storage
The storage provider for persisted operations. Only one provider can be active. When no provider is specified, the router will fall back to the Cosmo CDN provider to download the persisted operations.
PERSISTED_OPERATIONS_STORAGE_PROVIDER_ID
persisted_operations.storage.provider_id
The ID of the storage provider. The ID must match the ID of the storage provider in the storage_providers
section.
PERSISTED_OPERATIONS_STORAGE_OBJECT_PREFIX
persisted_operations.storage.object_prefix
The prefix of the object in the storage provider location. The prefix is put in front of the operation SHA256 hash. $prefix/SHA256.json
PERSISTED_OPERATIONS_LOG_UNKNOWN
persisted_operations.log_unknown
Log operations (sent with the operation body) which haven't yet been persisted. If the value is true, all operations not yet persisted are logged to the router logs.
false
PERSISTED_OPERATIONS_SAFELIST_ENABLED
persisted_operations.safelist.enabled
Only allows persisted operations (sent with operation body). If the value is true, all operations not explicitly added to the safelist are blocked.
false
Automatic Persisted Queries
The configuration for automatic persisted queries allows you to enable automated caching of select GraphQL operations that can be queried against the router, using both POST and GET requests. This approach enhances performance.
It defaults to using a local cache (with the size defined in cache.size), but users can optionally use Redis storage.
Example YAML config:
Configuration Options
The following options configure automatic persisted queries.
automatic_persisted_queries
The configuration for automatic persisted queries.
automatic_persisted_queries.enabled
Whether automatic persisted queries are enabled
true
automatic_persisted_queries.cache
LRU cache for persisted operations.
automatic_persisted_queries.cache.size
The size of the cache in SI unit.
"100MB"
automatic_persisted_queries.cache.ttl
The TTL of the cache, in seconds. Set to 0 to set no TTL
automatic_persisted_queries.storage
The external storage provider (Redis) for automatic persisted queries. Only one provider can be active. When no provider is specified, the router will fall back to using a local in-memory cache (configured in the automatic_persisted_queries.cache
options)
automatic_persisted_queries.storage.provider_id
The ID of the Redis storage provider. The ID must match the ID of the storage provider in the storage_providers.redis
section.
automatic_persisted_queries.storage.object_prefix
The prefix of the object in the storage provider location. The prefix is put in front of the operation SHA256 hash. $prefix/SHA256
Execution Config
The configuration for the execution setup contains instructions for the router to plan and execute your GraphQL operations. You can specify the storage provider from which the configuration should be fetched.
Example YAML config:
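A minimal sketch based on the options documented below, loading the execution config from a local file; the path is illustrative:

```yaml
execution_config:
  file:
    path: "router.json"   # illustrative path to a locally composed config
    watch: true
```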
Execution Config Options
The following options configure the execution config and where the router fetches it from.
execution_config
The configuration for the execution config.
file
The configuration for the execution config file. The config file is used to load the execution config from the local file system. The file has precedence over the storage provider.
EXECUTION_CONFIG_FILE_PATH
file.path
The path to the execution config file. The path is used to load the execution config from the local file system.
EXECUTION_CONFIG_FILE_WATCH
file.watch
Enable the watch mode. The watch mode is used to watch the execution config file for changes. If the file changes, the router will reload the execution config without downtime.
"true"
execution_config.storage
The storage provider for the execution config. Only one provider can be active. When no provider is specified, the router will fall back to the Cosmo CDN provider to download the execution config.
EXECUTION_CONFIG_STORAGE_PROVIDER_ID
execution_config.storage.provider_id
The ID of the storage provider. The ID must match the ID of the storage provider in the storage_providers
section.
EXECUTION_CONFIG_STORAGE_OBJECT_PATH
execution_config.storage.object_path
The path to the execution config in the storage provider. The path is used to download the execution config from the S3 bucket.
EXECUTION_CONFIG_FALLBACK_STORAGE_ENABLED
execution_config.fallback_storage.enabled
Enable a fallback storage to fetch the execution config in case the above primary source fails.
EXECUTION_CONFIG_FALLBACK_STORAGE_PROVIDER_ID
execution_config.fallback_storage.provider_id
The ID of the storage provider. The ID must match the ID of the storage provider in the storage_providers
section.
EXECUTION_CONFIG_FALLBACK_STORAGE_OBJECT_PATH
execution_config.fallback_storage.object_path
The path to the execution config in the storage provider. The path is used to download the execution config from the S3 bucket.
Traffic Shaping
Configure rules for traffic shaping like maximum request body size, timeouts, retry behavior, etc. For more info, check this section in the docs: Traffic shaping
Example YAML config:
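A minimal sketch based on the options below, assuming the documented grouping of router-level, all-subgraph, and subgraph-specific rules; the subgraph name and retry values are illustrative:

```yaml
traffic_shaping:
  router:
    max_request_body_size: 5MB
  all:
    request_timeout: 60s
    dial_timeout: 30s
    retry:
      enabled: true
      algorithm: "backoff_jitter"
      max_attempts: 5     # illustrative value
      interval: 3s        # illustrative value
      max_duration: 10s   # illustrative value
  subgraphs:
    products:             # illustrative subgraph name
      request_timeout: 30s
```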
Subgraph Request Rules
These rules apply to requests being made from the Router to all Subgraphs.
retry
request_timeout
60s
dial_timeout
30s
response_header_timeout
0s
expect_continue_timeout
0s
tls_handshake_timeout
10s
keep_alive_idle_timeout
0s
keep_alive_probe_interval
30s
max_idle_conns
1024
max_conns_per_host
100
max_idle_conns_per_host
20
Subgraph specific request rules
In addition to the general traffic shaping rules, we also allow users to set subgraph specific timeout options, overriding the default traffic rules defined in all
(if present)
request_timeout
60s
dial_timeout
30s
response_header_timeout
0s
expect_continue_timeout
0s
tls_handshake_timeout
10s
keep_alive_idle_timeout
0s
keep_alive_probe_interval
30s
max_idle_conns
1024
max_conns_per_host
100
max_idle_conns_per_host
20
Jitter Retry
RETRY_ENABLED
enabled
true
algorithm
backoff_jitter
backoff_jitter
max_attempts
max_duration
interval
Client Request Rules
These rules apply to requests being made from clients to the Router.
max_request_body_size
5mb
MAX_HEADER_BYTES
max_header_bytes
The maximum size of the request headers. Setting this to 0 uses the default value from the http standard lib, which is 1MiB.
1mib
decompression_enabled
When enabled, the router will check incoming requests for a 'Content-Encoding' header and decompress the body accordingly.
Note: Currently only "gzip" is supported
true
WebSocket
Configure WebSocket handlers, protocols, and more.
WebSocket Configuration
WEBSOCKETS_ENABLED
enabled
true
absinthe_protocol
forward_upgrade_headers
Forward all useful Headers from the Upgrade Request, like User-Agent or Authorization in the extensions field when subscribing on a Subgraph
forward_upgrade_query_params
Forward all query parameters from the Upgrade Request in the extensions field when subscribing on a Subgraph
WEBSOCKETS_FORWARD_INITIAL_PAYLOAD
forward_initial_payload
Forward the initial payload from a client subscription in the extensions field when subscribing on a Subgraph
true
Absinthe Protocol Configuration
Legacy WebSocket clients that use the Absinthe protocol might not be able to send a Subprotocol Header. For such clients, you can use the Absinthe Endpoint which automatically chooses the Subprotocol for them so that no Subprotocol Header needs to be set.
WEBSOCKETS_ABSINTHE_ENABLED
enabled
true
WEBSOCKETS_ABSINTHE_HANDLER_PATH
handler_path
The path to mount the Absinthe handler on
/absinthe/socket
WebSocket Authentication
Authentication for a WebSocket connection might not be possible at the HTTP layer. In such a case, you can enable Authentication "from_initial_payload". This will extract a value from the "initial_payload" field in the first WebSocket message, which is responsible for negotiating the protocol between client and server.
In addition, it's possible to export the extracted value into a Request Header, which allows the Router to propagate it using Header Propagation Rules in subsequent Subgraph Requests.
Example WebSocket YAML config:
Authentication
Configure different authentication providers.
New Authentication Config (Router Version ≥ 0.169.0)
JWKS
url
The URL of the JWKs. The JWKs are used to verify the JWT (JSON Web Token). The URL is specified as a string with the format 'scheme://host:port'.
refresh_interval
The interval at which the JWKs are refreshed. The period is specified as a string with a number and a unit, e.g. 10ms, 1s, 1m, 1h. The supported units are 'ms', 's', 'm', 'h'.
1m
algorithms
The allowed algorithms for the keys that are retrieved from the JWKs. An empty list means that all algorithms are allowed. The following algorithms are supported "HS256", "HS384", "HS512", "RS256", "RS384", "RS512", "ES256", "ES384", "ES512", "PS256", "PS384", "PS512", "EdDSA"
[] (all allowed)
JWT
header_name
The name of the header. The header is used to extract the token from the request. The default value is 'Authorization'.
Authorization
header_value_prefix
The prefix of the header value. The prefix is used to extract the token from the header value. The default value is 'Bearer'.
Bearer
Header Sources
type
The type of the source. The only currently supported type is 'header'.
name
The name of the header. The header is used to extract the token from the request.
value_prefixes
The prefixes of the header value. The prefixes are used to extract the token from the header value.
Example YAML config V2:
Old Authentication Config (Router Version < 0.169.0)
Provider
name
Name of the provider
jwks
JWK Provider
JWK Provider
url
header_names
header_value_prefixes
refresh_interval
1m
Example YAML config:
Authorization
REQUIRE_AUTHENTICATION
require_authentication
Set to true to disallow unauthenticated requests
false
REJECT_OPERATION_IF_UNAUTHORIZED
reject_operation_if_unauthorized
false
Example YAML config:
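A minimal sketch based on the two options above:

```yaml
authorization:
  require_authentication: true
  reject_operation_if_unauthorized: true
```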
CDN
CDN_URL
url
The URL of the CDN where the Router will fetch its Config. Not required when a static execution config is provided.
CDN_CACHE_SIZE
cache_size
Cosmo Router caches responses from the CDN in memory; this defines the cache size.
100MB
Example YAML config:
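A minimal sketch based on the options above:

```yaml
cdn:
  url: "https://cosmo-cdn.wundergraph.com"
  cache_size: "100MB"
```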
Events
The Events section lets you define Event Sources for Event-Driven Federated Subscriptions (EDFS).
We support NATS and Kafka as event bus providers.
Provider
provider
one of: nats, kafka
NATS Provider
id
The ID of the provider. This has to match the ID specified in the subgraph schema.
url
NATS Connection string
authentication
Authentication configuration for the NATS provider.
authentication.token
Token based authentication.
authentication.user_info
User-Info based authentication.
authentication.user_info.username
Username.
authentication.user_info.password
Password.
Kafka Provider
id
The ID of the provider. This has to match the ID specified in the subgraph schema.
brokers
A list of broker URLs.
[]
authentication
Authentication settings
authentication.sasl_plain
SASL/Plain Authentication method
authentication.sasl_plain.username
SASL/Plain Username
authentication.sasl_plain.password
SASL/Plain Password
tls
TLS configuration for the Kafka provider. If enabled, it uses SystemCertPool for RootCAs by default.
tls.enabled
Enable TLS.
Nats Provider
Router Engine Configuration
Configure the GraphQL Execution Engine of the Router.
ENGINE_ENABLE_SINGLE_FLIGHT
enable_single_flight
Deduplicate exactly the same in-flight origin request
true
ENGINE_ENABLE_REQUEST_TRACING
enable_request_tracing
true
ENGINE_ENABLE_EXECUTION_PLAN_CACHE_RESPONSE_HEADER
enable_execution_plan_cache_response_header
Usually only required for testing. When enabled, the Router sets the response Header "X-WG-Execution-Plan-Cache" to "HIT" or "MISS"
false
ENGINE_MAX_CONCURRENT_RESOLVERS
max_concurrent_resolvers
Use this to limit the concurrency in the GraphQL Engine. A high number will lead to more memory usage. A number too low will slow down your Router.
32
ENGINE_ENABLE_NET_POLL
enable_net_poll
Enables the more efficient poll implementation for all WebSocket implementations (client, server) of the router. This is only available on Linux and MacOS. On Windows or when the host system is limited, the default synchronous implementation is used.
true
ENGINE_WEBSOCKET_CLIENT_POLL_TIMEOUT
websocket_client_poll_timeout
The timeout for the poll loop of the WebSocket client implementation. The period is specified as a string with a number and a unit
1s
ENGINE_WEBSOCKET_CLIENT_CONN_BUFFER_SIZE
websocket_client_conn_buffer_size
The buffer size for the poll buffer of the WebSocket client implementation. The buffer size determines how many connections can be handled in one loop.
128
ENGINE_WEBSOCKET_CLIENT_READ_TIMEOUT
websocket_client_read_timeout
The timeout for the websocket read of the WebSocket client implementation.
5s
ENGINE_EXECUTION_PLAN_CACHE_SIZE
execution_plan_cache_size
Define how many GraphQL Operations should be stored in the execution plan cache. A low number will lead to more frequent cache misses, which will lead to increased latency.
1024
ENGINE_MINIFY_SUBGRAPH_OPERATIONS
minify_subgraph_operations
Minify the subgraph operations. If the value is true, GraphQL Operations get minified after planning. This reduces the amount of GraphQL AST nodes the Subgraph has to parse, which ultimately saves CPU time and memory, resulting in faster response times.
false
ENGINE_ENABLE_PERSISTED_OPERATIONS_CACHE
enable_persisted_operations_cache
Enable the persisted operations cache. The persisted operations cache is used to cache normalized persisted operations to improve performance.
true
ENGINE_ENABLE_NORMALIZATION_CACHE
enable_normalization_cache
Enable the normalization cache. The normalization cache is used to cache normalized operations to improve performance.
true
ENGINE_NORMALIZATION_CACHE_SIZE
normalization_cache_size
The size of the normalization cache.
1024
ENGINE_PARSEKIT_POOL_SIZE
parsekit_pool_size
The size of the ParseKit pool. The ParseKit pool provides re-usable Resources for parsing, normalizing, validating and planning GraphQL Operations. Setting the pool size to a value much higher than the number of CPU Threads available will not improve performance, but only increase memory usage.
8
ENGINE_RESOLVER_MAX_RECYCLABLE_PARSER_SIZE
resolver_max_recyclable_parser_size
Limits the size of the Parser that can be recycled back into the Pool. If set to 0, no limit is applied. This helps keep the Heap size more maintainable if you regularly perform large queries.
32768
ENGINE_ENABLE_VALIDATION_CACHE
enable_validation_cache
Enable the validation cache. The validation cache is used to cache results of validating GraphQL Operations.
true
ENGINE_VALIDATION_CACHE_SIZE
validation_cache_size
The size of the validation cache.
1024
ENGINE_ENABLE_SUBGRAPH_FETCH_OPERATION_NAME
enable_subgraph_fetch_operation_name
Enable appending the operation name to subgraph fetches. This will ensure that the operation name will be included in the corresponding subgraph requests using the following format: $operationName__$subgraphName__$sequenceID.
true
ENGINE_SUBSCRIPTION_FETCH_TIMEOUT
subscription_fetch_timeout
The maximum time a subscription fetch can take before it is considered timed out.
30s
Example YAML config:
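A minimal sketch assuming the options above are nested under engine, using the documented defaults:

```yaml
engine:
  enable_request_tracing: true
  max_concurrent_resolvers: 32
  execution_plan_cache_size: 1024
  enable_normalization_cache: true
  normalization_cache_size: 1024
  enable_validation_cache: true
  validation_cache_size: 1024
```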
Debug Configuration
ENGINE_DEBUG_PRINT_OPERATION_TRANSFORMATIONS
print_operation_transformations
Print the operation transformations.
false
ENGINE_DEBUG_PRINT_OPERATION_ENABLE_AST_REFS
print_operation_enable_ast_refs
Print the operation enable AST refs.
false
ENGINE_DEBUG_PRINT_PLANNING_PATHS
print_planning_paths
Print the planning paths.
false
ENGINE_DEBUG_PRINT_QUERY_PLANS
print_query_plans
Print the query plans.
false
ENGINE_DEBUG_PRINT_NODE_SUGGESTIONS
print_node_suggestions
Print the node suggestions.
false
ENGINE_DEBUG_CONFIGURATION_VISITOR
configuration_visitor
Print the configuration visitor.
false
ENGINE_DEBUG_PLANNING_VISITOR
planning_visitor
Print the planning visitor.
false
ENGINE_DEBUG_DATASOURCE_VISITOR
datasource_visitor
Print the datasource visitor.
false
ENGINE_DEBUG_REPORT_WEBSOCKET_CONNECTIONS
report_websocket_connections
Print the websocket connections.
false
ENGINE_DEBUG_REPORT_MEMORY_USAGE
report_memory_usage
Print the memory usage.
false
ENGINE_DEBUG_ENABLE_RESOLVER_DEBUGGING
enable_resolver_debugging
Enable verbose debug logging for the Resolver.
false
ENGINE_DEBUG_ENABLE_PERSISTED_OPERATIONS_CACHE_RESPONSE_HEADER
enable_persisted_operations_cache_response_header
Enable the persisted operations cache response header. The persisted operations cache response header is used to cache the persisted operations in the client.
false
ENGINE_DEBUG_ENABLE_NORMALIZATION_CACHE_RESPONSE_HEADER
enable_normalization_cache_response_header
Enable the normalization cache response header. The normalization cache response header is used to cache the normalized operations in the client.
false
ENGINE_DEBUG_ALWAYS_INCLUDE_QUERY_PLAN
always_include_query_plan
Always include the query plan in the response.
false
ENGINE_DEBUG_ALWAYS_SKIP_LOADER
always_skip_loader
Always skip the loader. This will return no data but only render response extensions, e.g. to expose the query plan.
false
Example YAML config:
Rate Limiting
Configures a rate limiter on outgoing subgraph requests. When enabled, a rate of 10 req/s with a burst of 10 requests is configured.
The rate limiter requires Redis version 3.2 or newer since it relies on the replicate_commands feature. ElastiCache for Redis only works in non-clustered mode. You can enable a failover instance to achieve high availability.
Key Suffix Expression
As you can see in the config table below, you can define an expression to generate a rate limiting key suffix. The evaluation of the expression must return a string, which is appended to the key prefix.
Using a key suffix expression, you're able to dynamically choose a rate limiting key, e.g. based on the user authentication, a header, or a combination. Here's an example expression that uses the sub
claim if available, and a Header as the fallback.
For more information on how to use the expression language, please refer to the Template Expressions section.
General Rate Limiting Configuration
RATE_LIMIT_ENABLED
enabled
Enable / Disable rate limiting globally
false
RATE_LIMIT_STRATEGY
strategy
The rate limit strategy
simple
simple_strategy
storage
RATE_LIMIT_KEY_SUFFIX_EXPRESSION
key_suffix_expression
The expression to define a key suffix for the rate limit, e.g. by using request headers, claims, or a combination of both with a fallback strategy. The expression is specified as a string and needs to evaluate to a string. Please see https://expr-lang.org/ for more information.
error_extension_code
Rate Limiting Redis Storage
RATE_LIMIT_REDIS_URLS
urls
List of the connection URL(s).
[redis://localhost:6379]
RATE_LIMIT_REDIS_CLUSTER_ENABLED
cluster_enabled
If the Redis instance is a Redis cluster
false
RATE_LIMIT_REDIS_KEY_PREFIX
key_prefix
This prefix is used to namespace the ratelimit keys
cosmo_rate_limit
Rate Limiting Simple Strategy
RATE_LIMIT_SIMPLE_RATE
rate
Allowed request rate (number)
10
RATE_LIMIT_SIMPLE_BURST
burst
Allowed burst rate (number) - max rate per one request
10
RATE_LIMIT_SIMPLE_PERIOD
period
The rate limiting period, e.g. "10s", "1m", etc...
1s
RATE_LIMIT_SIMPLE_REJECT_EXCEEDING_REQUESTS
reject_exceeding_requests
Reject the complete request if a sub-request exceeds the rate limit. If set to false, partial responses are possible.
false
RATE_LIMIT_SIMPLE_HIDE_STATS_FROM_RESPONSE_EXTENSION
hide_stats_from_response_extension
Hide the rate limit stats from the response extension. If the value is true, the rate limit stats are not included in the response extension.
false
Rate Limit Error Extension Code
RATE_LIMIT_ERROR_EXTENSION_CODE_ENABLED
enabled
If enabled, a code will be added to the extensions.code field of error objects related to rate limiting. This allows clients to easily identify if an error happened due to rate limiting.
true
RATE_LIMIT_ERROR_EXTENSION_CODE
code
The error extension code for the rate limit.
RATE_LIMIT_EXCEEDED
Rate Limiting Example YAML configuration
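A minimal sketch based on the options documented above; the Redis URL is illustrative:

```yaml
rate_limit:
  enabled: true
  strategy: "simple"
  simple_strategy:
    rate: 10
    burst: 10
    period: 1s
    reject_exceeding_requests: false
  storage:
    cluster_enabled: false
    urls:
      - "redis://localhost:6379"   # illustrative URL
    key_prefix: "cosmo_rate_limit"
```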
Subgraph Error Propagation
The configuration for the subgraph error propagation. Errors can be exposed to the client in a "wrapped" form to hide Subgraph internals, or it's possible to "pass-through" Subgraph errors directly to the client.
SUBGRAPH_ERROR_PROPAGATION_ENABLED
enabled
Enable error propagation. If the value is true (default: false), Subgraph errors will be propagated to the client.
false
SUBGRAPH_ERROR_PROPAGATION_MODE
mode
The mode of error propagation. The supported modes are 'wrapped' (default) and 'pass-through'. The 'wrapped' mode wraps the error in a custom error object to hide internals. The 'pass-through' mode returns the error as is from the Subgraph.
wrapped
SUBGRAPH_ERROR_PROPAGATION_REWRITE_PATHS
rewrite_paths
Rewrite the paths of the Subgraph errors. If the value is true (default), the paths of the Subgraph errors will be rewritten to match the Schema of the Federated Graph.
true
SUBGRAPH_ERROR_PROPAGATION_OMIT_LOCATIONS
omit_locations
Omit the location field of Subgraph errors. If the value is true, the location field of Subgraph errors will be omitted. This is useful because the locations of a Subgraph error is internal to the Subgraph and not relevant to the client.
true
SUBGRAPH_ERROR_PROPAGATION_OMIT_EXTENSIONS
omit_extensions
Omit the extensions field of Subgraph errors. If the value is true, the extensions field of Subgraph errors will be omitted. This is useful in case you want to avoid leaking internal information to the client. Some users of GraphQL leverage the errors.extensions.code field to implement error handling logic in the client, in which case you might want to set this to false.
false
SUBGRAPH_ERROR_PROPAGATION_STATUS_CODES
propagate_status_codes
Propagate Subgraph status codes. If the value is true, Subgraph Response status codes will be propagated to the client in the errors.extensions.code field.
false
SUBGRAPH_ERROR_PROPAGATION_ALLOWED_FIELDS
allowed_fields
In pass-through mode, by default only message and path are propagated. You can specify additional fields here.
SUBGRAPH_ERROR_PROPAGATION_DEFAULT_EXTENSION_CODE
default_extension_code
The default extension code. The default extension code is used to specify the default code for the Subgraph errors when the code is not present.
DOWNSTREAM_SERVICE_ERROR
SUBGRAPH_ERROR_PROPAGATION_ATTACH_SERVICE_NAME
attach_service_name
Attach the service name to each Subgraph error. If the value is true, the service name will be attached to the Subgraph errors.
true
SUBGRAPH_ERROR_PROPAGATION_ALLOWED_EXTENSION_FIELDS
allowed_extension_fields
The allowed extension fields. The allowed extension fields are used to specify which fields of the Subgraph errors are allowed to be propagated to the client.
["code"]
Example YAML configuration:
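A minimal sketch based on the options documented above:

```yaml
subgraph_error_propagation:
  enabled: true
  mode: "wrapped"
  rewrite_paths: true
  omit_locations: true
  omit_extensions: false
  attach_service_name: true
  allowed_extension_fields:
    - "code"
```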
Security
The security configuration for the Router.
SECURITY_BLOCK_MUTATIONS
block_mutations
Block mutation Operations.
SECURITY_BLOCK_MUTATIONS_ENABLED
block_mutations.enabled
If the value is true, the mutations are blocked.
false
SECURITY_BLOCK_MUTATIONS_CONDITION
block_mutations.condition
SECURITY_BLOCK_SUBSCRIPTIONS
block_subscriptions
Block subscription Operations.
block_subscriptions.enabled
If the value is true, the subscriptions are blocked.
false
block_subscriptions.condition
SECURITY_BLOCK_NON_PERSISTED_OPERATIONS
block_non_persisted_operations
Block non-persisted Operations.
SECURITY_BLOCK_NON_PERSISTED_OPERATIONS_ENABLED
block_non_persisted_operations.enabled
If the value is true, the non-persisted operations are blocked.
false
SECURITY_BLOCK_NON_PERSISTED_OPERATIONS_CONDITION
block_non_persisted_operations.condition
complexity_calculation_cache
Complexity Cache configuration
complexity_limits
Complexity limits configuration
Example YAML Configuration
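A minimal sketch based on the options documented in this section; the complexity limit key names (e.g. depth) are assumptions, so check the JSON schema for the exact keys:

```yaml
security:
  block_mutations:
    enabled: false
  block_subscriptions:
    enabled: false
  block_non_persisted_operations:
    enabled: false
  complexity_calculation_cache:
    enabled: true
    size: 1024
  complexity_limits:
    depth:                # assumed key name for the query depth limit
      enabled: true
      limit: 10
      ignore_persisted_operations: true
```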
Query Depth is now deprecated. We recommend using the security.complexity_calculation_cache
and security.complexity_limits
configurations instead, which provide that functionality.
Complexity Calculation Cache
The configuration for the in-memory complexity cache, to help speed up the calculation process in the event of a recurring query
SECURITY_COMPLEXITY_CACHE_ENABLED
enabled
Enable the complexity cache
false
SECURITY_COMPLEXITY_CACHE_SIZE
size
The size of the complexity cache
1024
Complexity Limits
The configuration for adding complexity limits for queries. We currently expose 4 limits:
Query Depth - How many nested levels you can have in a query. This limit prevents infinite querying, and also limits the size of the data returned.
Total Fields in Query
Root Fields in Query
Root Field Aliases in Query
For all of the limits, if the limit is 0, or enabled
isn't true, the limit isn't applied. All of them have the same configuration fields:
enabled
Enable the specific limit. If the value is true (default: false), and a valid limit value is set, the limit will be applied
false
limit
The limit amount for query. If the limit is 0, this limit isn't applied
0
ignore_persisted_operations
Disable the limit for persisted operations. Since persisted operations are stored intentionally, users may want to disable the limit to consciously allow nested persisted operations
false
File Upload
The configuration for file upload. Configure whether it should be enabled along with file size and number of files.
FILE_UPLOAD_ENABLED
enabled
Whether the feature is enabled or not
true
FILE_UPLOAD_MAX_FILE_SIZE
max_file_size
The maximum size of a file that can be uploaded. The size is specified as a string with a number and a unit, e.g. 10KB, 1MB, 1GB. The supported units are 'KB', 'MB', 'GB'.
50MB
FILE_UPLOAD_MAX_FILES
max_files
The maximum number of files that can be uploaded per request.
10
Example YAML Configuration
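A minimal sketch based on the options above, using the documented defaults:

```yaml
file_upload:
  enabled: true
  max_file_size: 50MB
  max_files: 10
```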
Client Header
The configuration for custom names for client name and client version headers.
name
The custom name of the client name header.
version
The custom name of the client version header.
Example YAML Configuration
By default, we support Graphql-Client-Name, Graphql-Client-Version, Apollo-Graphql-Client-Name, and Apollo-Graphql-Client-Version.
The custom names take precedence over the defaults.
Apollo Compatibility Flags
To enable full compatibility with Apollo Federation, Apollo Gateway, and Apollo Router, you can enable certain compatibility flags, allowing you to use Cosmo Router as a drop-in replacement for Apollo.
Apollo Compatibility Value Completion
Invalid __typename values will be returned in extensions.valueCompletion instead of errors.
Apollo Compatibility Truncate Floats
Truncate floats like 1.0 to 1, 2.0 to 2, etc. Values like 1.1 or 2.2 will not be truncated.
Apollo Compatibility Suppress Fetch Errors
Suppresses fetch errors. When enabled, only the ‘data’ object is returned, suppressing errors. If disabled, fetch errors are included in the ‘errors’ array.
Apollo Compatibility Replace Undefined Op Field Errors
Produces the same error message as Apollo when an invalid operation field is included in an operation selection set. Extension code: "GRAPHQL_VALIDATION_FAILED" Status code: 400
Apollo Compatibility Replace Invalid Var Errors
Produces the same error message as Apollo when an invalid variable is supplied. Extension code: "BAD_USER_INPUT"
Apollo Compatibility Replace Validation Error Status
Produces the same error status as Apollo when validation fails. Error status: 400 Bad Request. Minimum router version: 0.175.0.
APOLLO_COMPATIBILITY_ENABLE_ALL
apollo_compatibility_flags: enable_all: <bool>
Enables all the options of Apollo Compatibility.
false
APOLLO_COMPATIBILITY_VALUE_COMPLETION_ENABLED
value_completion: enabled: <bool>
Enables value completion.
false
APOLLO_COMPATIBILITY_TRUNCATE_FLOATS_ENABLED
truncate_floats: enabled: <bool>
Enables truncate floats.
false
APOLLO_COMPATIBILITY_SUPPRESS_FETCH_ERRORS_ENABLED
suppress_fetch_errors: enabled: <bool>
Enables suppress fetch errors.
false
APOLLO_COMPATIBILITY_REPLACE_UNDEFINED_OP_FIELD_ERRORS_ENABLED
replace_undefined_op_field_errors: enabled: <bool>
Replaces undefined operation field errors.
false
APOLLO_COMPATIBILITY_REPLACE_INVALID_VAR_ERRORS_ENABLED
replace_invalid_var_errors: enabled: <bool>
Replaces invalid variable errors.
false
APOLLO_COMPATIBILITY_REPLACE_VALIDATION_ERROR_STATUS_ENABLED
replace_validation_error_status_enabled: <bool>
Replaces validation error status with 400.
false
APOLLO_COMPATIBILITY_SUBSCRIPTION_MULTIPART_PRINT_BOUNDARY_ENABLED
subscription_multipart_print_boundary: enabled: <bool>
Prints the multipart boundary right after the message in multipart subscriptions. Without this flag, the Apollo client wouldn’t parse a message until the next one is pushed.
false
Example YAML Configuration
Apollo Router Compatibility Flags
Apollo Router Compatibility Flags can be enabled alongside Apollo Compatibility Flags, but some will override their counterpart's functionality. This means you can safely use enable_all: true
alongside these flags.
Apollo Router Compatibility Replace Invalid Var Errors
Produces the same error messages as Apollo Router when an invalid variable is supplied.
Extension code: "VALIDATION_INVALID_TYPE_VARIABLE"
APOLLO_ROUTER_COMPATIBILITY_REPLACE_INVALID_VAR_ERRORS_ENABLED
replace_invalid_var_errors
Replaces invalid variable errors.
false
Example YAML Configuration
Cache warmer
enabled
Set to true to enable the cache warmer.
false
workers
The number of workers for the cache warmup to run in parallel. Higher numbers decrease the time to warm up the cache but increase the load on the system.
8
items_per_second
The number of cache warmup items to process per second. Higher numbers decrease the time to warm up the cache but increase the load on the system.
50
timeout
The timeout for warming up the cache. This can be used to limit the amount of time cache warming will block deploying a new config. The period is specified as a string with a number and a unit, e.g. 10ms, 1s, 1m, 1h. The supported units are 'ms', 's', 'm', 'h'.
30s
source
The source of the cache warmup items. Only one can be specified. If empty, the cache warmup source is the Cosmo CDN and it requires a graph to be set.
Example YAML config:
Source
The source of the cache warmup items. Only one can be specified. If empty, the cache warmup source is the Cosmo CDN and it requires a graph to be set.
path
The path to the directory containing the cache warmup items.
Example YAML config: