Continuous log of changes to the nitty-gritty details of CloudAMQP.
oauth_disable_basic_auth option for OAuth 2.0 configuration

The OAuth 2.0 configuration API (/api/oauth2-configurations) accepts an oauth_disable_basic_auth boolean field. When enabled, static username/password access to the management interface is disabled, leaving only OAuth 2.0 authentication. See the OAuth 2.0 configuration for details.
The ra.wal_max_size_bytes setting controls the maximum size of the Raft write-ahead log kept in memory before it is rolled over to disk. A RabbitMQ restart is required to apply changes. Defaults are plan-dependent. The setting is reset to the plan default on plan changes, unless the value is lower than the plan default.
The value is accepted and returned in MiB via the /api/config endpoint.
The otel_scope_info metric and the otel_scope_name, otel_scope_schema_url, and otel_scope_version labels have been removed from the /metrics/node endpoint. These were recently introduced with empty values and carried no useful information.
The mqtt.max_session_expiry_interval_seconds setting controls how long the broker retains the state of an MQTT client after it disconnects.
The default value on CloudAMQP is 1800 seconds (30 minutes). Setting it to 0 ensures that the broker immediately removes the client’s state upon disconnection. Setting it to infinity (the literal string infinity in the API) means the broker will always retain the client’s state.
Be aware that using a high value can introduce risks: short-lived clients that do not use clean sessions may leave behind queues and messages. Over time, this can consume resources and may require manual cleanup.
Dedicated clusters connected via VPC Connect (AWS PrivateLink, Azure Private Link, or GCP Private Service Connect) now expose per-node Prometheus metrics endpoints. Because VPC Connect uses a single cluster-level DNS record, the standard /metrics endpoint only reaches one node at a time via round-robin. The new endpoints let you target each node individually:
- /node1/metrics — metrics from node 1
- /node2/metrics — metrics from node 2
- /node3/metrics — metrics from node 3

(additional nodes follow the same pattern, e.g. /node5/metrics)

These endpoints are authenticated the same way as the regular /metrics endpoint and are available on both RabbitMQ and LavinMQ clusters.
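A Prometheus scrape configuration targeting each node individually might look like the following sketch. The hostname and credentials are placeholders, not actual CloudAMQP values; substitute your VPC Connect hostname and the same credentials you already use for the regular /metrics endpoint:

```yaml
scrape_configs:
  - job_name: cloudamqp-node1
    scheme: https
    metrics_path: /node1/metrics       # per-node endpoint
    basic_auth:
      username: <monitoring-user>      # placeholder
      password: <monitoring-password>  # placeholder
    static_configs:
      - targets: ["<your-vpc-connect-hostname>"]
  - job_name: cloudamqp-node2
    scheme: https
    metrics_path: /node2/metrics
    basic_auth:
      username: <monitoring-user>
      password: <monitoring-password>
    static_configs:
      - targets: ["<your-vpc-connect-hostname>"]
```

One scrape job per node works around the round-robin DNS, since each job pins its own metrics_path while all jobs share the single cluster-level target.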
/api/health/checks API calls without overriding HTTP error codes

Previously, all RabbitMQ HTTP API 503 errors were overridden with our maintenance page. From now on, instances will forward the correct payload responses from /api/health/checks, which are expected to respond with 503. Older instances can be switched on demand by contacting support.
Two security vulnerabilities have been identified and patched in LavinMQ. These are relevant for deployments where non-admin users are granted management access.
All CloudAMQP shared LavinMQ instances have been updated. Customers with dedicated LavinMQ instances are encouraged to enable automatic updates or update to the latest version via the Console or API.
A new /api/trust-store-configuration endpoint allows you to configure the RabbitMQ trust store plugin for certificate validation on RabbitMQ instances on CloudAMQP. See the API documentation for details.
Let's Encrypt announced that they will stop including the "TLS Client Authentication" Extended Key Usage (EKU) in their certificates.
Effective February 11, 2026, Let's Encrypt certificates issued or renewed for your CloudAMQP clusters will no longer support client authentication. This date applies to all CloudAMQP users as the default ACME profile is used for issuance.
Impact:
Standard Connections: This does not affect standard AMQPS connections (publishers/consumers) connecting to your cluster, as these rely on TLS Server Authentication, which remains supported.
Outgoing Connections (Federation/Shovel): If you rely on your cluster's Let's Encrypt certificate to authenticate itself as a client (e.g., in mutual TLS setups for Federation or Shovels connecting to external upstream servers that enforce strict EKU checks), these connections may fail after the cut-off date.
We recommend reviewing any Federation or Shovel upstreams using mTLS to ensure they do not strictly require the Client Authentication EKU from the CloudAMQP server certificate.
raft.wal_max_size_bytes on RabbitMQ clusters

The default value for the raft.wal_max_size_bytes configuration setting on new RabbitMQ instances with between 2 and 4 GiB of RAM has been adjusted from 512 MiB to 256 MiB. This change aims to improve stability for clusters using Raft-based features.
The full list is as follows:
If you need to adjust these numbers for your cluster, feel free to contact support.
You can now configure multiple signing secrets for webhooks, enabling zero-downtime key rotation. Enter multiple secrets separated by spaces, and the backend will generate signatures for all of them. This allows webhook consumers to verify with any key during the transition period.
Key rotation workflow:
new-secret old-secret

Webhooks now support optional signature verification via a signing_secret parameter. When configured, webhook requests include the following headers:
- webhook-id: Unique message identifier
- webhook-timestamp: Unix timestamp (seconds)
- webhook-signature: HMAC-SHA256 signature (format: v1,<base64>)

The signature is computed over {webhook-id}.{webhook-timestamp}.{body}, allowing you to verify that webhook requests originate from CloudAMQP.
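A minimal verification sketch in Python, assuming the signing secret string is used directly as the HMAC key and that the webhook-signature header may carry several space-separated signatures. Accepting several space-separated secrets also covers zero-downtime key rotation:

```python
import base64
import hashlib
import hmac

def verify_webhook(secrets: str, msg_id: str, timestamp: str, body: str,
                   signature_header: str) -> bool:
    """Return True if any configured secret produced a valid signature.

    `secrets` may hold several space-separated signing secrets, which
    supports zero-downtime key rotation. Assumes the secret is used
    directly as the raw HMAC key (an assumption, not confirmed above).
    """
    # The signature is computed over "{webhook-id}.{webhook-timestamp}.{body}"
    signed_content = f"{msg_id}.{timestamp}.{body}".encode()
    # Header entries look like "v1,<base64>"; there may be several.
    received = {part.split(",", 1)[1]
                for part in signature_header.split()
                if part.startswith("v1,")}
    for secret in secrets.split():
        digest = hmac.new(secret.encode(), signed_content, hashlib.sha256).digest()
        expected = base64.b64encode(digest).decode()
        # Constant-time comparison to avoid timing side channels
        if any(hmac.compare_digest(expected, got) for got in received):
            return True
    return False
```

During rotation, calling verify_webhook("new-secret old-secret", ...) accepts requests signed with either key, matching the workflow described above.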
You can now configure the following settings on your RabbitMQ clusters, either via the API or the console:
- rabbitmq_mqtt.vhost - Define the virtual host used by the MQTT plugin.
- rabbitmq_mqtt.ssl_cert_login - Enable or disable certificate-based login for MQTT connections.
- rabbit.ssl_options.fail_if_no_peer_cert - Control whether clients without a valid certificate are rejected.
- rabbit.ssl_options.verify - Set the TLS verification mode for client certificates.
- rabbit.ssl_cert_login_from - Select whether the username should be extracted from the certificate's Common Name (common_name) or the full Distinguished Name (distinguished_name).

When enabling rabbitmq_mqtt.ssl_cert_login, rabbit.ssl_options.fail_if_no_peer_cert should be set to true and rabbit.ssl_options.verify to verify_peer.
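As an illustrative sketch only (the exact request shape of the config API is an assumption; the field names are the ones listed above), enabling certificate-based MQTT login could look like:

```json
{
  "rabbitmq_mqtt.ssl_cert_login": true,
  "rabbit.ssl_options.fail_if_no_peer_cert": true,
  "rabbit.ssl_options.verify": "verify_peer",
  "rabbit.ssl_cert_login_from": "common_name"
}
```

Setting all three TLS options together matters: ssl_cert_login without verify_peer and fail_if_no_peer_cert would accept connections that never present a certificate to log in with.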
Coralogix is deprecating legacy regional domains on December 12, 2025. We are updating all Coralogix log integration endpoints to the new regional format. Existing integrations will be migrated automatically.
| Old endpoint | New endpoint |
|---|---|
| syslog.coralogix.com:6514 | syslog.eu1.coralogix.com:6514 |
| syslog.coralogix.in:6514 | syslog.ap1.coralogix.com:6514 |
| syslog.coralogix.us:6514 | syslog.us1.coralogix.com:6514 |
| syslog.cx498.coralogix.com:6514 | syslog.us2.coralogix.com:6514 |
| syslog.coralogixsg.com:6514 | syslog.ap2.coralogix.com:6514 |
EU2 (syslog.eu2.coralogix.com:6514) remains unchanged. The AP3 region has been added.
rabbitmq_prometheus.return_per_object_metrics config setting

The rabbitmq_prometheus.return_per_object_metrics configuration setting has been removed. It is no longer possible to change it using the Console or the API. If you need per-object broker metrics, see Scraping Prometheus metrics. Please contact support if you have any questions.
rabbitmq_trust_store removed from list of available plugins

The rabbitmq_trust_store plugin has been removed from the list of available plugins since it requires manual configuration. Please contact support to enable and configure it.
From October 8, 2025, all shared accounts created at CloudAMQP will be set up with a max-length-bytes policy. This is applied in order to stop abusive usage of our limits.
It is now possible to stop and start all nodes (RabbitMQ) in multi node clusters from the CloudAMQP Console and API. This ensures that all nodes are stopped/started in the correct order.
The default metrics set exported to the Datadog V3 metrics integration has been changed to use the standard Prometheus metrics format.
If metrics in the Datadog RabbitMQ dashboard format are desired, you can now set the rabbitmq_dashboard_metrics_format parameter to true when creating the integration via the API. This transforms RabbitMQ metric names to match Datadog's RabbitMQ dashboard format (e.g., rabbitmq_channels becomes rabbitmq.channels), making them compatible with Datadog's built-in RabbitMQ dashboards.
This transformation applies to 27 specific RabbitMQ metrics and only affects metrics that are included in your metrics filter. See the Datadog V3 documentation for the complete list of transformed metrics.
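To illustrate the renaming, the sketch below generalizes the single documented example (rabbitmq_channels becomes rabbitmq.channels). This is a hypothetical helper, not the integration's actual mapping, which covers a fixed set of 27 metrics:

```python
# Illustrative only: the real integration maps a fixed list of 27 metrics,
# documented in the Datadog V3 documentation.
def to_dashboard_name(prometheus_name: str) -> str:
    """Rewrite a Prometheus-style metric name into Datadog dashboard style,
    e.g. "rabbitmq_channels" -> "rabbitmq.channels"."""
    prefix = "rabbitmq_"
    if prometheus_name.startswith(prefix):
        return "rabbitmq." + prometheus_name[len(prefix):]
    return prometheus_name  # metrics outside the mapping pass through
```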
All Prometheus-based metrics integrations now support custom metrics filtering via the metrics_filter endpoint through the API. This allows you to specify exactly which metrics you want to export instead of using the default set.
See the API documentation for implementation details and the metrics documentation for the complete list of available metrics.
The Datadog service event attribute can now be set to a custom value for log integrations created after July 1, 2025. The service attribute will be overridden by the custom value of the service tag. If a service tag is not set, the service attribute will be set to the cluster name.
From August 5, 2025 LavinMQ instances created via the API and Terraform will have automatic version updates enabled by default.
To opt-out your instance from automatic version updates, you can change your maintenance settings either using the API or the Console.
When you add a VPC peering, CloudAMQP automatically adds firewall rules for the peered CIDR block. These rules now include the MQTT ports: 1883 and 8883.
We had a mismatch in API metric naming that caused queue unacked and queue ready alarms not to trigger on LavinMQ clusters.
A fix was deployed to handle this on 2025-05-08 11:45 UTC; no change is needed for customers with these alarms.
LavinMQ issue related to queue metrics naming inconsistency: https://github.com/cloudamqp/lavinmq/issues/1098
All our metrics are scraped and forwarded every 60 seconds, except CPU and memory, which were scraped every 30 seconds.
Today we are deploying a fix to ensure these are scraped at a 60s interval as well.
This could affect some metrics integrations, e.g. getting fewer metric points for CPU and memory, but we do not expect this to cause any issues.
If you experience any issues please contact us.
We encountered an issue where we tried to send a string value in a field that is supposed to be a float. The bug resulted in an error for the integration: STRING_VALUE cannot be converted to Double.
We have now identified the issue and a fix has been deployed.
Context
The CloudWatch V2 integration supports batching values: it collects a number of metric values and from them calculates the minimum, the maximum, the sum, and the sample count, and sends those to CloudWatch. We do this to reduce the number of metrics sent, which reduces the cost for our customers without losing valuable insight into how the cluster is doing. When calculating these values, the Minimum and Maximum values were sometimes sent as strings, which resulted in an error in the integration, since only ints and floats are allowed for these values.
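The batching and the fix can be sketched as follows. This assumes a CloudWatch StatisticSet-style payload (Minimum/Maximum/Sum/SampleCount, as in PutMetricData); the actual integration code is not shown here:

```python
def batch_statistics(values):
    """Aggregate raw metric samples into a StatisticSet-style dict.

    Every sample is coerced to float first: CloudWatch rejects Minimum
    and Maximum values sent as strings with the error
    "STRING_VALUE cannot be converted to Double".
    """
    floats = [float(v) for v in values]  # guards against string samples
    return {
        "Minimum": min(floats),
        "Maximum": max(floats),
        "Sum": sum(floats),
        "SampleCount": len(floats),
    }
```

Sending one StatisticSet instead of every raw sample is what keeps the metric volume (and cost) down while preserving the aggregate view of the cluster.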
CloudAMQP has upgraded to a higher-performance disk type on Azure. This provides consistent performance regardless of disk size.
Consequently, new Happy Hare (hare) plan instances and above will now be provisioned with smaller default disk sizes.
Azure disk expansion can now be performed live on CloudAMQP clusters, eliminating downtime.
The following issue has been remediated:
Going forward, only the specific alarm(s) whose sole recipient was removed will be disabled.
The Erlang Ecosystem Foundation has published Exposed EPMD: A Hidden Security Risk for RabbitMQ and the BEAM Ecosystem to highlight the security risks of exposing epmd to the Internet.
At CloudAMQP we have strict firewalls on all instances, and a 24/7 security scanner to detect any faulty configuration. Thus, port 4369 is not accessible from the Internet, nor are other cluster distribution ports like 25672.
No change needed for CloudAMQP customers.
A small number (58) of accounts on shared LavinMQ plans reported having negative message counts. This was due to a bug in our accounting system that tracked the number of published/delivered messages.
Fix has been deployed.
The LavinMQ 2.0.2 rollout on shared servers enabled counting of get/delivered messages for shared accounts. We now count just as we do on the RabbitMQ shared clusters, thanks to the introduction of deliver_get metrics (LavinMQ pull request #793).
Note that both basic.deliver and basic.get count, e.g. if you poll with basic.get it will count even if there's no message in the queue. We recommend that you subscribe to a queue and get the messages delivered as they become available. Read more in Consume vs. Get.
Message limit is counted as published + deliver_get and checked against the monthly limit.
In the CloudAMQP Console, users with the member role or the monitor role ("tagged roles") can be assigned tags in order to grant access to instances. Before 2024-10-17, changing a user's role from one tagged role to another did not keep the assigned tags. Now the tags are kept.
The following attributes were added to the Instance API (List nodes): disk_size, additional_disk_size.