# Mintlify Starter Kit

Source: https://upstash.com/docs/README

Click on `Use this template` to copy the Mintlify starter kit. The starter kit contains examples including:

* Guide pages
* Navigation
* Customizations
* API Reference pages
* Use of popular components

### 👩‍💻 Development

Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview documentation changes locally. To install, use the following command:

```
npm i -g mintlify
```

Run the following command at the root of your documentation (where `mint.json` is located):

```
mintlify dev
```

### 😎 Publishing Changes

Changes are deployed to production automatically after pushing to the default branch. You can also preview changes using PRs, which generate a preview link of the docs.

#### Troubleshooting

* Mintlify dev isn't running - Run `mintlify install` to re-install dependencies.
* Page loads as a 404 - Make sure you are running in a folder with `mint.json`.

# Get QStash

Source: https://upstash.com/docs/api-reference/qstash/get-qstash

devops/developer-api/openapi.yml get /qstash/user

Retrieves detailed information about the authenticated user's QStash account, including plan details, limits, and configuration.

# Get QStash Stats

Source: https://upstash.com/docs/api-reference/qstash/get-qstash-stats

devops/developer-api/openapi.yml get /qstash/stats

Retrieves detailed usage statistics for the QStash account, including daily requests, billing, bandwidth, and workflow metrics over time.

# Reset QStash Token

Source: https://upstash.com/docs/api-reference/qstash/reset-qstash-token

devops/developer-api/openapi.yml post /qstash/user/rotatetoken

Resets the authentication credentials for the QStash user account. This invalidates the old password and token and generates new ones. Returns the updated user information with new credentials.
# Set QStash Plan Source: https://upstash.com/docs/api-reference/qstash/set-qstash-plan devops/developer-api/openapi.yml post /qstash-upgrade Changes the QStash account to a different plan type. This operation changes the plan and associated limits for the QStash account. # Create Search Index Source: https://upstash.com/docs/api-reference/search/create-search-index devops/developer-api/openapi.yml post /search Creates a new search index with the specified configuration # Delete Search Index Source: https://upstash.com/docs/api-reference/search/delete-search-index devops/developer-api/openapi.yml delete /search/{id} Permanently deletes a search index and all its data # Get Index Stats Source: https://upstash.com/docs/api-reference/search/get-index-stats devops/developer-api/openapi.yml get /search/{id}/stats Retrieves statistics and metrics for a specific search index # Get Search Index Source: https://upstash.com/docs/api-reference/search/get-search-index devops/developer-api/openapi.yml get /search/{id} Retrieves detailed information about a specific search index # Get Search Stats Source: https://upstash.com/docs/api-reference/search/get-search-stats devops/developer-api/openapi.yml get /search/stats Get search statistics for all the search indices associated with the authenticated user # List Search Indexes Source: https://upstash.com/docs/api-reference/search/list-search-indexes devops/developer-api/openapi.yml get /search Returns a list of all search indices belonging to the authenticated user. # Rename Search Index Source: https://upstash.com/docs/api-reference/search/rename-search-index devops/developer-api/openapi.yml post /search/{id}/rename Renames a search index. # Reset Password Source: https://upstash.com/docs/api-reference/search/reset-password devops/developer-api/openapi.yml post /search/{id}/reset-password This endpoint resets the regular and readonly tokens of a search index. 
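The search endpoints above are served by the Upstash developer API. As a rough sketch of calling one of them from code, assuming the developer API base URL is `https://api.upstash.com/v2` and that it uses HTTP basic auth with your account email and API key (the email and key below are illustrative placeholders, not real credentials; check the [DevOps](/devops) docs for the authoritative base path):

```typescript
// Sketch: building a request for the "List Search Indexes" endpoint.
// Base URL and auth scheme are assumptions; email/apiKey are placeholders.
const email = "you@example.com";
const apiKey = "YOUR_API_KEY";

function listSearchIndexesRequest(): Request {
  // The developer API authenticates with HTTP basic auth: email + API key.
  const auth = Buffer.from(`${email}:${apiKey}`).toString("base64");
  return new Request("https://api.upstash.com/v2/search", {
    method: "GET",
    headers: { Authorization: `Basic ${auth}` },
  });
}

// Send with: const res = await fetch(listSearchIndexesRequest());
```

The same pattern applies to the other endpoints listed here: substitute the HTTP method and path from the endpoint description.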
# Transfer Search Index

Source: https://upstash.com/docs/api-reference/search/transfer-search-index

devops/developer-api/openapi.yml post /search/{id}/transfer

Transfers ownership of a search index to another team. Transferring to a personal account is not supported; however, transferring from a personal account to a team is allowed.

# Get Index Stats

Source: https://upstash.com/docs/api-reference/vector/get-index-stats

devops/developer-api/openapi.yml get /vector/index/{id}/stats

Retrieves statistics and metrics for a specific vector index.

# Get Vector Stats

Source: https://upstash.com/docs/api-reference/vector/get-vector-stats

devops/developer-api/openapi.yml get /vector/index/stats

Gets vector statistics for all the vector indices associated with the authenticated user.

# Add a Payment Method

Source: https://upstash.com/docs/common/account/addapaymentmethod

Upstash does not require a credit card for Free databases. However, for paid databases, you need to add at least one payment method. To add a payment method, follow these steps:

1. Click on your profile at the top right.
2. Select `Account` from the dropdown menu.
3. Navigate to the `Billing` tab.
4. On the screen, click the `Add Your Card` button.
5. Enter your name and credit card information in the following form:

You can add multiple credit cards and set one of them as the default; payments will be charged to the default card.

## Payment Security

Upstash does not store users' credit card information on its servers. We use Stripe, a payment-processing company, to handle payments. You can read more about Stripe's payment security [here](https://stripe.com/docs/security/stripe).

# Audit Logs

Source: https://upstash.com/docs/common/account/auditlogs

Audit logs give you a chronological set of activity records that have affected your databases and Upstash account. You can see the list of all activities on a single page.
You can access your audit logs under `Account > Audit Logs` in your console. The `Source` column shows whether the action was performed via the console or an API key. The `Entity` column gives you the name of the resource that was affected by the action. For example, when you delete a database, the name of the database is shown here. You can also see the IP address that performed the action.

## Security

You can track your audit logs to detect any unusual activity on your account and databases. If you suspect a security breach, you should delete the API key related to the suspicious activity and inform us by emailing [support@upstash.com](mailto:support@upstash.com).

## Retention period

After the retention period, the audit logs are deleted. The retention period is 7 days for free databases, 30 days for pay-as-you-go databases, and one year for the Pro tier.

# AWS Marketplace

Source: https://upstash.com/docs/common/account/awsmarketplace

**Prerequisite** You need an Upstash account before subscribing on AWS; create one [here](https://console.upstash.com).

Upstash is available on the AWS Marketplace, which is particularly beneficial for users who already purchase other services through AWS Marketplace and want to consolidate Upstash under a single bill. You can search for "Upstash" on AWS Marketplace or just click [here](https://aws.amazon.com/marketplace/pp/prodview-fssqvkdcpycco).

Once you click subscribe, you will be prompted to select which personal or team account you wish to link with your AWS subscription. Once your account is linked, regardless of which Upstash product you use, all of your usage will be billed to your AWS account. You can also upgrade or downgrade your subscription through the Upstash console.

# Cost Explorer

Source: https://upstash.com/docs/common/account/costexplorer

The Cost Explorer page allows you to view your current and previous months' costs.
To access the Cost Explorer, navigate to the left menu and select Account > Cost Explorer. Below is an example report:

You can select a specific month to view the cost breakdown for that period. Here is the explanation of the fields in the report:

**Request:** This represents the total number of requests sent to the database.

**Storage:** This indicates the average size of the total storage consumed. Upstash databases include a persistence layer for data durability. For example, if you have 1 GB of data in your database throughout the entire month, this value will be 1 GB. Even if your database is empty for the first 29 days of the month and then expands to 30 GB on the last day, this value will still be 1 GB, since the average is taken over the whole month.

**Cost:** This field represents the total cost of your database in US dollars.

> The values for the current month are updated hourly, so they can be stale by up to 1 hour.

# Create an Account

Source: https://upstash.com/docs/common/account/createaccount

You can sign up for Upstash using your Amazon, GitHub, or Google accounts. Alternatively, if you prefer not to use these authentication providers or want to sign up with a corporate email address, you can also sign up using email and password.

We do not access your information other than:

* Your email
* Your name
* Your profile picture

and we never share your information with third parties.

# Developer API

Source: https://upstash.com/docs/common/account/developerapi

Using the Upstash API, you can develop applications that create and manage Upstash databases and Upstash Vector indexes. You can automate everything that you can do in the console. To use the Developer API, you need to create an API key in the console.

Note: The Developer API is only available to native Upstash accounts. Accounts created via third-party platforms like Vercel or Fly.io are not supported. See [DevOps](/devops) for details.

# Account and Billing FAQ

Source: https://upstash.com/docs/common/account/faq

## How can I delete my account?
You can delete your account from `Account` > `Settings` > `Delete Account`. You should first delete all your databases and clusters. After you delete your account, all your data and payment information will be deleted, and you will not be able to recover it.

## How can I delete my credit card?

You can delete your credit card from the `Account` > `Billing` page. However, you must first add a new credit card to be able to delete the existing one. If you want to delete all of your payment information, you should delete your account.

## How can I change my email address?

You can change your account email address on the `Account` > `Settings` page. To change your billing email address, see the `Account` > `Billing` page. If you encounter any issues, please contact us at [support@upstash.com](mailto:support@upstash.com) to change your email address.

## Can I set an upper spending limit, so I don't get surprises after an unexpected amount of high traffic?

On the pay-as-you-go model, you can set a budget for your Redis instances. When your monthly cost reaches the max budget, we send an email to inform you and throttle your instance. You will not be charged beyond your set budget. To set the budget, go to the "Usage" tab of your Redis instance and click "Change Budget" under the cost metric.

## What happens if my payment fails?

If a payment failure occurs, we will retry the payment three more times before suspending the account. During this time, you will receive email notifications about the payment failure. If the account is suspended, all resources in the account will be inaccessible. If you add a valid payment method after the account suspension, your account will be automatically unsuspended during the next payment attempt.

## What happens if I unsubscribe from AWS Marketplace but I don't have any other payment methods?

We send a warning email three times before suspending an account. If no valid payment method is added, we suspend the account.
Once the account is suspended, all resources within the account will be inaccessible. If you add a valid payment method after the account suspension, your account will be automatically unsuspended during the next system check.

## I have a question about my bill, who should I contact?

Please contact us at [support@upstash.com](mailto:support@upstash.com).

# Payment History

Source: https://upstash.com/docs/common/account/paymenthistory

The Payment History page gives you information about your payments. You can open your payment history in the left menu under Account > Payment History. Here is an example report:

You can download receipts. If one of your payments has failed, you can retry it on this page.

# Teams and Users

Source: https://upstash.com/docs/common/account/teams

Team management enables collaboration with other users. You can create a team and invite people to join by using their email addresses. Team members will have access to databases created under the team based on their assigned roles.

## Create Team

You can create a team using the menu `Account > Teams`
> A user can create up to 5 teams. You can be part of even more teams but only be the owner of 5 teams. If you need to own more teams, please email us at [support@upstash.com](mailto:support@upstash.com).

You can still continue using your personal account or switch to a team.

> The databases in your personal account are not shared with anyone. If you want your database to be accessible by other users, you need to create it under a team.

## Switch Team

You need to switch to the team to create databases shared with other team members. You can switch to the team via the switch button in the team table. Or you can click your profile picture in the top right and switch to any team listed there.

## Add/Remove Team Member

After switching to a team, if you are the Owner or an Admin of the team, you can add team members by navigating to `Account > Teams`. Simply enter their email addresses. It's not an issue if the email addresses are not yet registered with Upstash; once the user registers with that email, they will gain access to the team. We do not send invitations; when you add a member, they become a member directly. You can also remove members from the same page.

> Only Admins or the Owner can add/remove users.

## Roles

While adding a team member, you will need to select a role. Here are the access rights associated with each role:

* Admin: This role has full access, including the ability to add and remove members, manage databases, and manage payment methods.
* Dev: This role can create, manage, and delete databases but cannot manage users or payment methods.
* Finance: This role is limited to managing payment methods and cannot manage databases or users.
* Owner: The Owner role has all the access rights of an Admin and, in addition, the ability to delete the team. This role is automatically assigned to the user who created the team, and you cannot assign it to other members.
> If you want to change a user's role, you will need to delete and re-add them with the desired access rights.

## Delete Team

Only the original creator (owner) can delete a team. The team must also have no active databases; all databases under the team must be deleted first. To delete your team, first switch to your personal account, then delete your team in the team list under `Account > Teams`.

# Access Anywhere

Source: https://upstash.com/docs/common/concepts/access-anywhere

Upstash has integrated REST APIs into all its products to facilitate access from various runtime environments. This integration is particularly beneficial for edge runtimes like Cloudflare Workers and Vercel Edge, which do not permit TCP connections, and for serverless functions such as AWS Lambda, which are stateless and do not retain connection information between invocations.

### Rationale

The absence of TCP connection support in edge runtimes and the stateless nature of serverless functions call for a different approach than the persistent connections typically used in traditional server setups. The stateless REST API provided by Upstash addresses this gap, enabling consistent and reliable communication with data stores from these platforms.

### REST API Design

The REST APIs for Upstash services are thoughtfully designed to align closely with the conventions of each product. This ensures that users who are already familiar with these services will find the interactions intuitive and familiar. Our API endpoints are self-explanatory, following standard REST practices to guarantee ease of use and seamless integration.

### SDKs for Popular Languages

To enhance the developer experience, Upstash is developing SDKs in various popular programming languages. These SDKs simplify the process of integrating Upstash services with your applications by providing straightforward methods and functions that abstract the underlying REST API calls.
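To make the REST design concrete, here is a minimal sketch of how a Redis command maps onto an Upstash REST request, following the path-per-argument convention described in the Redis REST API docs. The database URL and token below are placeholders, not real credentials:

```typescript
// Sketch: mapping a Redis command to an Upstash REST request.
// The database URL and token are placeholders, not real credentials.
const url = "https://your-db.upstash.io";
const token = "YOUR_REST_TOKEN";

// Each command argument becomes a path segment: SET foo bar -> /set/foo/bar
function restRequest(command: string[]): Request {
  const path = command.map(encodeURIComponent).join("/");
  return new Request(`${url}/${path}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
}

// Send with:
//   const res = await fetch(restRequest(["SET", "foo", "bar"]));
//   const { result } = await res.json();
```

In practice you would use one of the SDKs below, which wrap exactly this kind of call for you.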
### Resources

[Redis REST API Docs](/redis/features/restapi)
[QStash REST API Docs](/qstash/api/authentication)
[Redis SDK - Typescript](https://github.com/upstash/upstash-redis)
[Redis SDK - Python](https://github.com/upstash/redis-python)
[QStash SDK - Typescript](https://github.com/upstash/sdk-qstash-ts)

# Global Replication

Source: https://upstash.com/docs/common/concepts/global-replication

Global Replication for Low Latency and High Availability

Upstash Redis automatically replicates your data to the regions you choose, so your application stays fast and responsive, no matter where your users are. Add or remove regions from a database at any time with zero downtime. Each region acts as a replica, holding a copy of your data for low latency and high availability.

***

## Built for Modern Serverless Architectures

In serverless computing, performance isn't just about fast code; it's also about fast, reliable data access from anywhere in the world. Whether you're using Vercel Functions, Cloudflare Workers, Fastly Compute, or Deno Deploy, your data layer needs to be as distributed and flexible as your compute for best performance.

Upstash Global replicates your Redis data across multiple regions to:

* Minimize round-trip latency
* Guarantee high availability at scale

...even under heavy or dynamic workloads.

Our HTTP-based Redis® client is optimized for serverless environments and delivers consistent performance under high concurrency or variable workloads. As serverless platforms evolve with features like in-function concurrency (e.g. [Vercel's Fluid Compute](https://vercel.com/fluid)), you need a data layer that can keep up. Upstash Redis is a globally distributed, low-latency database that scales with your compute, wherever it runs.

***

## How Global Replication Works

To minimize latency for read operations, we use a replica model. Our tests show sub-millisecond latency for read commands in the same AWS region as the Upstash Redis® instance.
**Read commands are automatically served from the geographically closest replica.**

**Write commands go to the primary database** for consistency. After a successful write, they are replicated to all read replicas.

***

## Available Regions

To create a globally distributed database, select a primary region and the number of read regions:

* Select the primary region where most write operations originate for best performance.
* Select read regions close to your users for optimized read speeds.

Each request is then automatically served by the closest read replica for maximum performance and minimum latency.

**You can create read replicas in the following regions:**

* AWS US-East-1 (North Virginia)
* AWS US-East-2 (Ohio)
* AWS US-West-1 (North California)
* AWS US-West-2 (Oregon)
* AWS EU-West-1 (Ireland)
* AWS EU-West-2 (London)
* AWS EU-Central-1 (Frankfurt)
* AWS AP-South-1 (Mumbai)
* AWS AP-Northeast-1 (Tokyo)
* AWS AP-Southeast-1 (Singapore)
* AWS AP-Southeast-2 (Sydney)
* AWS SA-East-1 (São Paulo)

Check out [our blog post](https://upstash.com/blog/global-database) to learn more about our global replication philosophy. You can also explore our [live benchmark](https://latency.upstash.com/) to see Upstash Redis latency from different locations around the world.

# Scale to Zero

Source: https://upstash.com/docs/common/concepts/scale-to-zero

Only pay for what you really use.

Traditionally, cloud services required users to predict their resource needs and provision servers or instances based on those predictions. This often led to over-provisioning to handle potential peak loads, resulting in paying for unused resources during periods of low demand. By *scaling to zero*, our pricing model aligns more closely with actual usage.

## Pay for usage

You're only charged for the resources you actively use. When your application experiences low activity or no incoming requests, the system automatically scales down resources to a minimal level.
This means you're no longer paying for idle capacity, resulting in cost savings. ## Flexibility "Scaling to zero" offers flexibility in scaling both up and down. As your application experiences traffic spikes, the system scales up resources to meet demand. Conversely, during quiet periods, resources scale down. ## Focus on Innovation Developers can concentrate on building and improving the application without constantly worrying about resource optimization. Upstash handles the scaling, allowing developers to focus on creating features that enhance user experiences. In essence, this aligns pricing with actual utilization, increases cost efficiency, and promotes a more sustainable approach to resource consumption. This model empowers businesses to leverage cloud resources without incurring unnecessary expenses, making cloud computing more accessible and attractive to a broader range of organizations. # Serverless Source: https://upstash.com/docs/common/concepts/serverless What do we mean by serverless? Upstash is a modern serverless data platform. But what do we mean by serverless? ## No Server Management In a serverless setup, developers don't need to worry about configuring or managing servers. We take care of server provisioning, scaling, and maintenance. ## Automatic Scaling As traffic or demand increases, Upstash automatically scales the required resources to handle the load. This means applications can handle sudden spikes in traffic without manual intervention. ## Granular Billing We charge based on the actual usage of resources rather than pre-allocated capacity. This can lead to more cost-effective solutions, as users only pay for what they consume. [Read more](/common/concepts/scale-to-zero) ## Stateless Functions In serverless architectures, functions are typically stateless. 
However, the traditional approach involves establishing long-lived connections to databases, which can lead to issues in serverless environments if connections aren't properly managed after use. Additionally, there are scenarios where TCP connections may not be feasible. Upstash addresses this issue by offering access via HTTP, a universally available protocol across all platforms.

## Rapid Deployment

Fast iteration is the key to success in today's competitive environment. You can create a new Upstash database in seconds, with minimal required configuration.

# Account & Teams

Source: https://upstash.com/docs/common/help/account

## Create an Account

You can sign up for Upstash using your Amazon, GitHub, or Google accounts. Alternatively, you can sign up with an email and password if you prefer not to use these auth providers or want to use a corporate email address.

We do not access your information other than:

* Your email
* Your name
* Your profile picture

and we never share your information with third parties.

Team management allows you to collaborate with other users. You can create a team and invite people to the team by their email addresses. The team members will have access to the databases created under the team depending on their roles.

## Teams

### Create Team

You can create a team using the menu `Account > Teams`
> A user can create up to 5 teams. You can be part of even more teams but only be the owner of 5 teams. If you need to own more teams, please email us at [support@upstash.com](mailto:support@upstash.com).

You can still continue using your personal account or switch to a team.

> The databases in your personal account are not shared with anyone. If you want your database to be accessible by other users, you need to create it under a team.

### Switch Team

You need to switch to the team to create databases shared with other team members. You can switch to the team via the switch button in the team table. Or you can click your profile picture in the top right and switch to any team listed there.

### Add/Remove Team Member

Once you have switched to a team, you can add team members in `Account > Teams` if you are the Owner or an Admin of the team. Entering an email address is enough; it is not a problem if the email is not registered with Upstash yet. Once the user registers with that email, they will be able to switch to the team. We do not send invitations, so when you add a member, they become a member directly. You can remove members from the same page.

> Only Admins or the Owner can add/remove users.

### Roles

While adding a team member, you need to select a role. Here are the privileges of each role:

* Admin: This role has full access, including adding and removing members, databases, and payment methods.
* Dev: This role can create, manage, and delete databases. It cannot manage users or payment methods.
* Finance: This role can only manage payment methods. It cannot manage databases or users.
* Owner: The Owner has all the privileges of an Admin and is, in addition, the only person who can delete the team. This role is assigned to the user who created the team, so you cannot create a member with the Owner role.

> If you want to change the role of a user, you need to delete and re-add them.

### Delete Team

Only the original creator (owner) can delete a team.
The team must also have no active databases; all databases under the team must be deleted first. To delete your team, first switch to your personal account, then delete your team in the team list under `Account > Teams`.

# Announcements

Source: https://upstash.com/docs/common/help/announcements

Upstash Announcements!

#### Removal of GraphQL API and edge caching (Redis) (October 1, 2022)

These two features have already been deprecated. We are planning to deactivate them completely on November 1st. We recommend using the REST API in place of the GraphQL API, and Global databases instead of edge caching.

#### Removal of strong consistency (Redis) (October 1, 2022)

Upstash supported a Strong Consistency mode for single-region databases. We decided to deprecate this feature because its effect on latency started to conflict with the performance expectations of Redis use cases. Moreover, we improved the consistency of replication to guarantee Read-Your-Writes consistency. Strong consistency will be disabled on existing databases on November 1st.

#### Redis pay-as-you-go usage cap (October 1, 2022)

We are increasing the max usage cap to \$160 from \$120 as of October 1st. This update is needed because of the increasing infrastructure cost due to replicating all databases to multiple instances. Once your database exceeds the max usage cost, it may be rate limited.

#### Replication is enabled (Sep 29, 2022)

All new and existing paid databases will be replicated to multiple replicas. Replication enables high availability in case of system and infrastructure failures. Starting from October 1st, we will gradually upgrade all databases without downtime. Free databases will stay single replica.
#### QStash Price Decrease (Sep 15, 2022) The price is \$1 per 100K requests.
#### [Pulumi Provider is available](https://upstash.com/blog/upstash-pulumi-provider) (August 4, 2022)
#### [QStash is released and announced](https://upstash.com/blog/qstash-announcement) (July 18, 2022)
#### [Announcing Upstash CLI](https://upstash.com/blog/upstash-cli) (May 16, 2022)
#### [Introducing Redis 6 Compatibility](https://upstash.com/blog/redis-6) (April 10, 2022)
#### Strong Consistency Deprecated (March 29, 2022) We have deprecated Strong Consistency mode for Redis databases due to its performance impact. This will not be available for new databases. We are planning to disable it on existing databases before the end of 2023. The database owners will be notified via email.
#### [Announcing Upstash Redis SDK v1.0.0](https://upstash.com/blog/upstash-redis-sdk-v1) (March 14, 2022)
#### Support for Google Cloud (June 8, 2021)

Google Cloud is available for Upstash Redis databases. We initially support the US-Central-1 (Iowa) region. Check the [get started guide](https://docs.upstash.com/redis/howto/getstartedgooglecloudfunctions).
#### Support for AWS Japan (March 1, 2021)

こんにちは日本 (Hello, Japan!)

Support for the AWS Tokyo Region was the most requested feature by our users. Now our users can create their databases in the AWS Asia Pacific (Tokyo) region (ap-northeast-1). In addition to Japan, Upstash is available in the regions us-west-1, us-east-1, and eu-west-1.

Click [here](https://console.upstash.com) to start your database for free. Click [here](https://roadmap.upstash.com) to request new regions to be supported.
#### Vercel Integration (February 22, 2021)

The Upstash-Vercel integration has been released. Now you can integrate Upstash into your project easily. We believe Upstash is the perfect database for your applications thanks to its:

* Low-latency data
* Per-request pricing
* Durable storage
* Ease of use

Below are the resources about the integration:

See [how to guide](https://docs.upstash.com/redis/howto/vercelintegration).
See [integration page](https://vercel.com/integrations/upstash).
See [Roadmap Voting app](https://github.com/upstash/roadmap) as a showcase for the integration.

# Compliance

Source: https://upstash.com/docs/common/help/compliance

## Upstash Legal & Security Documents

* [Upstash Terms of Service](https://upstash.com/static/trust/terms.pdf)
* [Upstash Privacy Policy](https://upstash.com/static/trust/privacy.pdf)
* [Upstash Data Processing Agreement](https://upstash.com/static/trust/dpa.pdf)
* [Upstash Technical and Organizational Security Measures](https://upstash.com/static/trust/security-measures.pdf)
* [Upstash Subcontractors](https://upstash.com/static/trust/subprocessors.pdf)

## Is Upstash SOC2 Compliant?

Upstash Redis databases under Pro and Enterprise support plans are SOC2 compliant. Check our [trust page](https://trust.upstash.com/) for details.

## Is Upstash ISO-27001 Compliant?

We are in the process of getting this certification. Contact us ([support@upstash.com](mailto:support@upstash.com)) to learn about the expected date.

## Is Upstash GDPR Compliant?

Yes. For more information, see our [Privacy Policy](https://upstash.com/static/trust/privacy.pdf). We acquire DPAs from each [subcontractor](https://upstash.com/static/trust/subprocessors.pdf) that we work with.

## Is Upstash HIPAA Compliant?

Yes. Upstash Redis is HIPAA compliant, and we are in the process of getting this compliance for our other products. See [Managing Healthcare Data](https://upstash.com/docs/redis/help/managing-healthcare-data) for more details.

## Is Upstash PCI Compliant?
Upstash does not store personal credit card information. We use Stripe for payment processing. Stripe is a certified PCI Service Provider Level 1, which is the highest level of certification in the payments industry. ## Does Upstash conduct vulnerability scanning and penetration tests? Yes, we use third party tools and work with pen testers. We share the results with Enterprise customers. Contact us ([support@upstash.com](mailto:support@upstash.com)) for more information. ## Does Upstash take backups? Yes, we take regular snapshots of the data cluster to the AWS S3 platform. ## Does Upstash encrypt data? Customers can enable TLS when creating a database or cluster, and we recommend this for production environments. Additionally, we encrypt data at rest upon customer request. # Legal Source: https://upstash.com/docs/common/help/legal ## Upstash Legal Documents * [Upstash Terms of Service](https://upstash.com/trust/terms.pdf) * [Upstash Privacy Policy](https://upstash.com/trust/privacy.pdf) * [Upstash Subcontractors](https://upstash.com/trust/subprocessors.pdf) * [Context7 Addendum](https://upstash.com/trust/context7addendum.pdf) * [Data Processing Addendum](https://upstash.com/static/trust/dpa.pdf) # Production Checklist Source: https://upstash.com/docs/common/help/production-checklist This checklist provides essential recommendations for securing and optimizing your Upstash databases for production workloads. ## Security Features ### Enable Prod Pack Prod Pack provides enterprise-grade security and monitoring features: * 99.99% uptime SLA * SOC-2 Type 2 report available * Role-Based Access Control (RBAC) * Encryption at Rest * Advanced monitoring (Prometheus, Datadog) * High availability for read regions Prod Pack is available as a \$200/month add-on per database for all paid plans except Free tier. 
### Enable Credential Protection Protect your database credentials (Prod Pack feature): * Credentials are never stored in Upstash infrastructure * Credentials are displayed only once during enablement * Console features requiring database access are disabled Disabling this feature will permanently revoke current credentials and generate new ones. ### Configure IP Allowlist Restrict database access to specific IP addresses: * Available on all plans except Free tier * Supports IPv4 addresses and CIDR blocks * Multiple IP ranges can be configured ### Implement Redis ACL Use Redis Access Control Lists to restrict user access: * Create users with minimal required permissions * Available for both TCP connections and REST API * Use `ACL RESTTOKEN` command to generate REST tokens ### Enable Multi-Factor Authentication Enable MFA on your Upstash account for enhanced security: * Use your existing authentication provider (Google, GitHub, Amazon) * Consider using a dedicated email/password account for production * Force MFA for all team members to ensure consistent security * Regularly review account access and team member permissions ### Secure Credential Management Follow these best practices: * Never hardcode credentials in your application code * Use environment variables or secret management systems * Reset passwords immediately if credentials are compromised * Use Read-Only tokens for public-facing applications ## Network Security ### TLS Encryption TLS is always enabled on Upstash Redis databases. 
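The credential-management practices above (environment variables, no hardcoded secrets) can be sketched in application code. A minimal TypeScript example; the variable names `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` follow a common Upstash convention, so adjust them to match your own deployment:

```typescript
// Read a credential from the environment and fail fast when it is
// missing, instead of hardcoding secrets in application code.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Assemble a REST client config from the environment.
function redisConfigFromEnv(): { url: string; token: string } {
  return {
    url: requireEnv("UPSTASH_REDIS_REST_URL"),
    token: requireEnv("UPSTASH_REDIS_REST_TOKEN"),
  };
}
```

Failing fast on a missing variable surfaces misconfiguration at startup rather than at the first Redis call, and keeps secrets out of source control.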
### VPC Peering (Enterprise) Connect databases to your VPCs using private IP: * Database becomes inaccessible from public networks * Minimizes data transfer costs * Available for Enterprise customers ## Monitoring & Observability ### Enable Advanced Monitoring Prod Pack includes comprehensive monitoring: * Prometheus integration * Datadog integration * Extended console metrics (up to one month) ## High Availability & Backup ### Enable Daily Backups Configure automated daily backups for data protection: * Available on all paid plans * Backup retention up to 3 days with Prod Pack * Hourly backups with customizable retention (Enterprise) ### Global Replication For global applications, consider using Global Database: * Distribute data across multiple regions * Minimize latency for users worldwide * Enhanced disaster recovery capabilities ## Compliance & Governance ### SOC-2 Compliance Prod Pack and Enterprise plans include SOC-2 Type 2 compliance: * Request SOC-2 report from [trust.upstash.com](https://trust.upstash.com/) * Available for production workloads ### Enterprise Features For enterprise customers: * HIPAA compliance available * SAML SSO integration * Access logs available * Custom resource allocation ## Pre-Production Checklist Before going live, ensure you have: * [ ] Prod Pack enabled (recommended) * [ ] Credential Protection enabled * [ ] IP Allowlist configured * [ ] MFA enabled on your account * [ ] Daily backups enabled * [ ] Monitoring and alerts configured * [ ] Environment variables secured * [ ] Error handling tested ## Additional Resources * [Security Features](/redis/features/security) * [Prod Pack & Enterprise](/redis/overall/enterprise) * [Backup & Restore](/redis/features/backup) * [Global Database](/redis/features/globaldatabase) * [Monitoring & Metrics](/redis/howto/metricsandcharts) * [Compliance Information](/common/help/compliance) * [Professional Support](/common/help/prosupport) For additional assistance with production deployment, 
contact our support team at [support@upstash.com](mailto:support@upstash.com). # Professional Support Source: https://upstash.com/docs/common/help/prosupport For all Upstash products, we manage everything for you and let you focus on more important things. If you ever need further help, our dedicated Professional Support team is here to ensure you get the most out of our platform, whether you’re just starting or scaling to new heights. Professional Support is strongly recommended, especially for customers who use Upstash as part of their production systems. # Expert Guidance Get direct access to our team of specialists who can provide insights, troubleshooting, and best practices tailored to your unique use case. In any urgent incident, our Support team will be standing by and ready to join you for troubleshooting. The Professional Support package includes: * **Guaranteed Response Time:** Rapid Response Time SLA for urgent support requests, ensuring your concerns are addressed promptly with **24/7 coverage**. * **Customer Onboarding:** A personalized session to guide you through utilizing our support services and reviewing your specific use case for a seamless start. * **Quarterly Use Case Review & Health Check:** On-request sessions every quarter to review your use case and ensure optimal performance. * **Dedicated Slack Channel:** Direct access to our team via a private Slack channel, so you can reach out whenever you need assistance. * **Incident Support:** Video call support during critical incidents to provide immediate help and resolution. * **Root Cause Analysis:** Comprehensive investigation and post-mortem analysis of critical incidents to identify and address the root cause. 
# Response Time SLA We understand that timely assistance is critical for production workloads, so your access to our Support team comes with 24/7 coverage and the following SLA: | Severity | Response Time | | ------------------------------- | ------------- | | P1 - Production system down | 30 minutes | | P2 - Production system impaired | 2 hours | | P3 - Minor issue | 12 hours | | P4 - General guidance | 24 hours | ## How to Reach Out? As a Professional Support customer, there are **two methods** to reach the Upstash Support Team when you need to use our services: #### Starting a Chat You will see a chatbox at the bottom right of the Upstash console, docs, and website. Once you initiate a chat, Professional Support customers are prompted to select a severity level. To see these options in chat, remember to sign in to your Upstash account first. If you select the "P1 - Production down, no workaround" or "P2 - Production impaired with workaround" option, you will trigger an alert for our team to step in urgently. #### Sending an Email Sending an email with details to [support@upstash.com](mailto:support@upstash.com) is another way to submit a support request. In urgent cases, include the word "urgent" in the email subject to alert our team to a possible incident. # Pricing For pricing and further details about Professional Support, please contact us at [support@upstash.com](mailto:support@upstash.com). # Uptime SLA Source: https://upstash.com/docs/common/help/sla This Service Level Agreement ("SLA") applies to Upstash resources with the Prod Pack add-on or Enterprise plans. It is clarified that this SLA is subject to the [terms of the Agreement](https://upstash.com/trust/terms.pdf), and does not derogate therefrom (capitalized terms, unless otherwise indicated herein, have the meaning specified in the Agreement). 
To receive uptime SLA guarantees, you need to enable the Prod Pack add-on or be on an Enterprise plan for your resource. Learn more about [Prod Pack and Enterprise features for Redis](/redis/overall/enterprise) or [QStash](/qstash/overall/enterprise). Upstash reserves the right to change the terms of this SLA by publishing updated terms on its website, such change to be effective as of the date of publication. ### Uptime Guarantee Upstash will use commercially reasonable efforts to make resources with Prod Pack add-on or Enterprise plans available with a Monthly Uptime Percentage of at least **99.99%**. In the event any of the services do not meet the SLA, you will be eligible to receive a Service Credit as described below. | Monthly Uptime Percentage | Service Credit Percentage | | --------------------------------------------------- | ------------------------- | | Less than 99.99% but equal to or greater than 99.0% | 10% | | Less than 99.0% but equal to or greater than 95.0% | 30% | | Less than 95.0% | 60% | ### SLA Credits Service Credits are calculated as a percentage of the monthly bill (excluding one-time payments such as upfront payments) for the resource in the affected region that did not meet the SLA. Uptime percentages are recorded and published in the [Upstash Status Page](https://status.upstash.com). To receive a Service Credit, you should submit a claim by sending an email to [support@upstash.com](mailto:support@upstash.com). Your credit request should be received by us before the end of the second billing cycle after the incident occurred. We will apply any service credits against future payments for the applicable services. At our discretion, we may issue the Service Credit to the credit card you used. Service Credits will not entitle you to any refund or other payment. A Service Credit will be applicable and issued only if the credit amount for the applicable monthly billing cycle is greater than one dollar (\$1 USD). 
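The credit tiers in the table above reduce to a simple lookup. A minimal sketch (not an official calculator — eligibility and credit amounts are determined by Upstash per the terms above):

```typescript
// Map a Monthly Uptime Percentage to the Service Credit Percentage
// from the SLA table above.
function serviceCreditPercent(monthlyUptimePercent: number): number {
  if (monthlyUptimePercent >= 99.99) return 0; // SLA met, no credit
  if (monthlyUptimePercent >= 99.0) return 10;
  if (monthlyUptimePercent >= 95.0) return 30;
  return 60;
}

// A Service Credit is issued only if it exceeds one US dollar.
function serviceCreditUSD(monthlyBillUSD: number, uptimePercent: number): number {
  const credit = (monthlyBillUSD * serviceCreditPercent(uptimePercent)) / 100;
  return credit > 1 ? credit : 0;
}
```

For example, a month at 99.5% uptime on a \$50 bill falls in the 10% tier, yielding a \$5 credit.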
Service Credits may not be transferred or applied to any other account. ### Getting Uptime SLA Coverage To receive uptime SLA guarantees for your resources, you need to upgrade to either: * **Prod Pack**: An add-on per resource, available to both pay-as-you-go and fixed-price plans * **Enterprise Plan**: A custom plan that can cover one or more of your resources You can activate Prod Pack on the resource details page in the console. For Enterprise plans, contact [support@upstash.com](mailto:support@upstash.com). Learn more about [Prod Pack and Enterprise features for Redis](/redis/overall/enterprise) or [QStash](/qstash/overall/enterprise). # Support & Contact Us Source: https://upstash.com/docs/common/help/support ## Community [Upstash Discord Channel](https://upstash.com/discord) is the best way to interact with the community. ## Team Regardless of your subscription plan, you can contact the team via [support@upstash.com](mailto:support@upstash.com) for technical support as well as questions and feedback. ## Follow Us Follow us on [X](https://x.com/upstash). ## Enterprise Support Get [Enterprise Support](/common/help/prosupport) for your organization from the Upstash team. # Uptime Monitor Source: https://upstash.com/docs/common/help/uptime ## Status Page You can track the uptime status of Upstash databases on the [Upstash Status Page](https://status.upstash.com). ## Latency Monitor You can see the average latencies for different regions on the [Upstash Latency Monitoring](https://latency.upstash.com) page. # Trials Source: https://upstash.com/docs/common/trials If you want to try Upstash's paid and pro plans, we can offer **Free Trials**. Email us at [support@upstash.com](mailto:support@upstash.com). # Overview Source: https://upstash.com/docs/devops/cli/overview Manage Upstash resources in your terminal or CI. You can find the GitHub Repository [here](https://github.com/upstash/cli).
# Installation ## npm You can install the Upstash CLI directly from npm ```bash theme={"system"} npm i -g @upstash/cli ``` It will be added as `upstash` to your system's path. ## Compiled binaries `upstash` is also available from the [releases page](https://github.com/upstash/cli/releases/latest), compiled for Windows, Linux, and macOS (both Intel and M1). # Usage ```bash theme={"system"} > upstash Usage: upstash Version: development Description: Official cli for Upstash products Options: -h, --help - Show this help. -V, --version - Show the version number for this program. -c, --config - Path to .upstash.json file Commands: auth - Login and logout redis - Manage redis database instances team - Manage your teams and their members Environment variables: UPSTASH_EMAIL - The email you use on upstash UPSTASH_API_KEY - The api key from upstash ``` ## Authentication When running `upstash` for the first time, you should log in using `upstash auth login`. Provide your email and an API key. [See here for how to get a key.](https://docs.upstash.com/redis/howto/developerapi#api-development) As an alternative to logging in, you can provide `UPSTASH_EMAIL` and `UPSTASH_API_KEY` as environment variables. 
## Usage Let's create a new Redis database: ``` > upstash redis create --name=my-db --region=eu-west-1 Database has been created database_id a3e25299-132a-45b9-b026-c73f5a807859 database_name my-db database_type Pay as You Go region eu-west-1 type paid port 37090 creation_time 1652687630 state active password 88ae6392a1084d1186a3da37fb5f5a30 user_email andreas@upstash.com endpoint eu1-magnetic-lacewing-37090.upstash.io edge false multizone false rest_token AZDiASQgYTNlMjUyOTktMTMyYS00NWI5LWIwMjYtYzczZjVhODA3ODU5ODhhZTYzOTJhMTA4NGQxMTg2YTNkYTM3ZmI1ZjVhMzA= read_only_rest_token ApDiASQgYTNlMjUyOTktMTMyYS00NWI5LWIwMjYtYzczZjVhODA3ODU5O_InFjRVX1XHsaSjq1wSerFCugZ8t8O1aTfbF6Jhq1I= You can visit your database details page: https://console.upstash.com/redis/a3e25299-132a-45b9-b026-c73f5a807859 Connect to your database with redis-cli: redis-cli -u redis://88ae6392a1084d1186a3da37fb5f5a30@eu1-magnetic-lacewing-37090.upstash.io:37090 ``` ## Output Most commands support the `--json` flag to return the raw API response as JSON, which you can parse to automate your workflows. ```bash theme={"system"} > upstash redis create --name=test2113 --region=us-central1 --json | jq '.endpoint' "gusc1-clean-gelding-30208.upstash.io" ``` # List Audit Logs Source: https://upstash.com/docs/devops/developer-api/account/list_audit_logs devops/developer-api/openapi.yml get /auditlogs This endpoint lists all audit logs of the user. # Authentication Source: https://upstash.com/docs/devops/developer-api/authentication Authentication for the Upstash Developer API The Upstash API requires API keys to authenticate requests. You can view and manage API keys at the Upstash Console. The Upstash API uses HTTP Basic authentication. You should pass `EMAIL` and `API_KEY` as the basic authentication username and password, respectively. 
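The same Basic credentials can also be assembled in application code. A minimal TypeScript sketch of the header construction and a call to the `/v2/redis/databases` endpoint from this reference (the request itself is only sketched, not executed):

```typescript
// Build the Authorization header the Upstash API expects:
// HTTP Basic auth, i.e. base64("EMAIL:API_KEY").
function basicAuthHeader(email: string, apiKey: string): string {
  const encoded = Buffer.from(`${email}:${apiKey}`).toString("base64");
  return `Basic ${encoded}`;
}

// Sketch of a request to list Redis databases. Uses the global fetch
// available in Node 18+; not invoked here.
async function listRedisDatabases(email: string, apiKey: string): Promise<unknown> {
  const res = await (globalThis as any).fetch("https://api.upstash.com/v2/redis/databases", {
    headers: { Authorization: basicAuthHeader(email, apiKey) },
  });
  if (!res.ok) throw new Error(`Upstash API error: ${res.status}`);
  return res.json();
}
```

Keep the email and API key out of source control (for example, in environment variables) and pass them in at the call site.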
With a client such as `curl`, you can pass your credentials with the `-u` option, as the following example shows: ```curl theme={"system"} curl https://api.upstash.com/v2/redis/databases -u EMAIL:API_KEY ``` Replace `EMAIL` and `API_KEY` with your email and API key. # HTTP Status Codes Source: https://upstash.com/docs/devops/developer-api/http_status_codes The Upstash API uses the following HTTP Status codes: | Code | Status | Description | | ---- | ------------------------- | ------------------------------------------------------------------------------- | | 200 | **OK** | Indicates that a request completed successfully and the response contains data. | | 400 | **Bad Request** | Your request is invalid. | | 401 | **Unauthorized** | Your API key is wrong. | | 403 | **Forbidden** | You do not have permission to access the requested resource. | | 404 | **Not Found** | The specified resource could not be found. | | 405 | **Method Not Allowed** | You tried to access a resource with an invalid method. | | 406 | **Not Acceptable** | You requested a format that isn't JSON. | | 429 | **Too Many Requests** | You're sending too many requests! Slow down! | | 500 | **Internal Server Error** | We had a problem with our server. Try again later. | | 503 | **Service Unavailable** | We're temporarily offline for maintenance. Please try again later. | # Getting Started Source: https://upstash.com/docs/devops/developer-api/introduction Using the Upstash API, you can develop applications that create and manage Upstash products and resources. You can automate everything that you can do in the console. To use the Developer API, you need to create an API key in the console. The Developer API is only available to native Upstash accounts. Accounts created via third-party platforms like Vercel or Fly.io are not supported. ### Create an API key 1. Log in to the console, then click the `Account > Management API` link in the left menu. 2. Click the `Create API Key` button. 3. Enter a name for your key. 
You cannot use the same name for multiple keys. Download or copy your API key when it is created; for security reasons, Upstash does not store it. If you lose your API key, it cannot be recovered and you will need to create a new one.
You can create multiple keys. It is recommended to use different keys in different applications. By default, one user can create up to 37 API keys. If you need more than that, please send us an email at [support@upstash.com](mailto:support@upstash.com). ### Deleting an API key When an API key is exposed (e.g. accidentally shared in a public repository) or is no longer in use, you should delete it. You can delete API keys on the `Account > API Keys` screen. ### Roadmap **Role-based access:** You will be able to create API keys with specific privileges. For example, you will be able to create a key with read-only access. **Stats:** We will provide reports based on the usage of your API keys. # Create Backup Source: https://upstash.com/docs/devops/developer-api/redis/backup/create_backup devops/developer-api/openapi.yml post /redis/create-backup/{id} This endpoint creates a backup for a Redis database. # Delete Backup Source: https://upstash.com/docs/devops/developer-api/redis/backup/delete_backup devops/developer-api/openapi.yml delete /redis/delete-backup/{id}/{backup_id} This endpoint deletes a backup of a Redis database. # Disable Daily Backup Source: https://upstash.com/docs/devops/developer-api/redis/backup/disable_dailybackup devops/developer-api/openapi.yml patch /redis/disable-dailybackup/{id} This endpoint disables daily backup for a Redis database. # Enable Daily Backup Source: https://upstash.com/docs/devops/developer-api/redis/backup/enable_dailybackup devops/developer-api/openapi.yml patch /redis/enable-dailybackup/{id} This endpoint enables daily backup for a Redis database. # List Backup Source: https://upstash.com/docs/devops/developer-api/redis/backup/list_backup devops/developer-api/openapi.yml get /redis/list-backup/{id} This endpoint lists all backups for a Redis database. 
# Restore Backup Source: https://upstash.com/docs/devops/developer-api/redis/backup/restore_backup devops/developer-api/openapi.yml post /redis/restore-backup/{id} This endpoint restores data from an existing backup. # Change Database Plan Source: https://upstash.com/docs/devops/developer-api/redis/change_plan devops/developer-api/openapi.yml post /redis/change-plan/{id} This endpoint changes the plan of a Redis database. # Create Redis Database Source: https://upstash.com/docs/devops/developer-api/redis/create_database_global devops/developer-api/openapi.yml post /redis/database This endpoint creates a new Redis database. # Delete Database Source: https://upstash.com/docs/devops/developer-api/redis/delete_database devops/developer-api/openapi.yml delete /redis/database/{id} This endpoint deletes a database. # Disable Auto Upgrade Source: https://upstash.com/docs/devops/developer-api/redis/disable_autoscaling devops/developer-api/openapi.yml post /redis/disable-autoupgrade/{id} This endpoint disables Auto Upgrade for the given database. # Disable Eviction Source: https://upstash.com/docs/devops/developer-api/redis/disable_eviction devops/developer-api/openapi.yml post /redis/disable-eviction/{id} This endpoint disables eviction for the given database. # Enable Auto Upgrade Source: https://upstash.com/docs/devops/developer-api/redis/enable_autoscaling devops/developer-api/openapi.yml post /redis/enable-autoupgrade/{id} This endpoint enables Auto Upgrade for the given database. # Enable Eviction Source: https://upstash.com/docs/devops/developer-api/redis/enable_eviction devops/developer-api/openapi.yml post /redis/enable-eviction/{id} This endpoint enables eviction for the given database. # Enable TLS Source: https://upstash.com/docs/devops/developer-api/redis/enable_tls devops/developer-api/openapi.yml post /redis/enable-tls/{id} This endpoint enables TLS on a database. 
# Get Database Source: https://upstash.com/docs/devops/developer-api/redis/get_database devops/developer-api/openapi.yml get /redis/database/{id} This endpoint gets details of a database. # Get Database Stats Source: https://upstash.com/docs/devops/developer-api/redis/get_database_stats devops/developer-api/openapi.yml get /redis/stats/{id} This endpoint gets detailed stats of a database. # List Databases Source: https://upstash.com/docs/devops/developer-api/redis/list_databases devops/developer-api/openapi.yml get /redis/databases This endpoint lists all databases of the user. # Move To Team Source: https://upstash.com/docs/devops/developer-api/redis/moveto_team devops/developer-api/openapi.yml post /redis/move-to-team This endpoint moves a database under a target team. # Rename Database Source: https://upstash.com/docs/devops/developer-api/redis/rename_database devops/developer-api/openapi.yml post /redis/rename/{id} This endpoint renames a database. # Reset Password Source: https://upstash.com/docs/devops/developer-api/redis/reset_password devops/developer-api/openapi.yml post /redis/reset-password/{id} This endpoint updates the password of a database. # Update Database Budget Source: https://upstash.com/docs/devops/developer-api/redis/update_budget devops/developer-api/openapi.yml patch /redis/update-budget/{id} This endpoint updates the monthly budget of a Redis database. # Update Regions Source: https://upstash.com/docs/devops/developer-api/redis/update_regions devops/developer-api/openapi.yml post /redis/update-regions/{id} Updates the regions of a database. # Add Team Member Source: https://upstash.com/docs/devops/developer-api/teams/add_team_member devops/developer-api/openapi.yml post /teams/member This endpoint adds a new team member to the specified team. # Create Team Source: https://upstash.com/docs/devops/developer-api/teams/create_team devops/developer-api/openapi.yml post /team This endpoint creates a new team. 
# Delete Team Source: https://upstash.com/docs/devops/developer-api/teams/delete_team devops/developer-api/openapi.yml delete /team/{id} This endpoint deletes a team. # Delete Team Member Source: https://upstash.com/docs/devops/developer-api/teams/delete_team_member devops/developer-api/openapi.yml delete /teams/member This endpoint deletes a team member from the specified team. # Get Team Members Source: https://upstash.com/docs/devops/developer-api/teams/get_team_members devops/developer-api/openapi.yml get /teams/{team_id} This endpoint lists all members of a team. # List Teams Source: https://upstash.com/docs/devops/developer-api/teams/list_teams devops/developer-api/openapi.yml get /teams This endpoint lists all teams of the user. # Create Index Source: https://upstash.com/docs/devops/developer-api/vector/create_index devops/developer-api/openapi.yml post /vector/index This endpoint creates an index. # Delete Index Source: https://upstash.com/docs/devops/developer-api/vector/delete_index devops/developer-api/openapi.yml delete /vector/index/{id} This endpoint deletes an index. # Get Index Source: https://upstash.com/docs/devops/developer-api/vector/get_index devops/developer-api/openapi.yml get /vector/index/{id} This endpoint returns the data associated with an index. # List Indices Source: https://upstash.com/docs/devops/developer-api/vector/list_indices devops/developer-api/openapi.yml get /vector/index This endpoint returns the data related to all indices of an account as a list. # Rename Index Source: https://upstash.com/docs/devops/developer-api/vector/rename_index devops/developer-api/openapi.yml post /vector/index/{id}/rename This endpoint is used to change the name of an index. # Reset Index Passwords Source: https://upstash.com/docs/devops/developer-api/vector/reset_index_passwords devops/developer-api/openapi.yml post /vector/index/{id}/reset-password This endpoint is used to reset the regular and readonly tokens of an index. 
# Set Index Plan Source: https://upstash.com/docs/devops/developer-api/vector/set_index_plan devops/developer-api/openapi.yml post /vector/index/{id}/setplan This endpoint is used to change the plan of an index. # Transfer Index Source: https://upstash.com/docs/devops/developer-api/vector/transfer_index devops/developer-api/openapi.yml post /vector/index/{id}/transfer This endpoint is used to transfer an index to another team. Transferring to a personal account is not supported. However, transferring an index from a personal account to a team is allowed. # Overview Source: https://upstash.com/docs/devops/pulumi/overview The Upstash Pulumi Provider lets you manage [Upstash](https://upstash.com) Redis resources programmatically. You can find the Github Repository [here](https://github.com/upstash/pulumi-upstash). ## Installing This package is available for several languages/platforms: ### Node.js (JavaScript/TypeScript) To use from JavaScript or TypeScript in Node.js, install using either `npm`: ```bash theme={"system"} npm install @upstash/pulumi ``` or `yarn`: ```bash theme={"system"} yarn add @upstash/pulumi ``` ### Python To use from Python, install using `pip`: ```bash theme={"system"} pip install upstash_pulumi ``` ### Go To use from Go, use `go get` to grab the latest version of the library: ```bash theme={"system"} go get github.com/upstash/pulumi-upstash/sdk/go/... ``` ## Configuration The following configuration points are available for the `upstash` provider: * `upstash:apiKey` (environment: `UPSTASH_API_KEY`) - the API key for `upstash`. Can be obtained from the [console](https://console.upstash.com). 
* `upstash:email` (environment: `UPSTASH_EMAIL`) - owner email of the resources ## Some Examples ### TypeScript: ```typescript theme={"system"} import * as pulumi from "@pulumi/pulumi"; import * as upstash from "@upstash/pulumi"; // multiple redis databases in a single for loop for (let i = 0; i < 5; i++) { new upstash.RedisDatabase("mydb" + i, { databaseName: "pulumi-ts-db" + i, region: "eu-west-1", tls: true, }); } ``` ### Go: ```go theme={"system"} package main import ( "github.com/pulumi/pulumi/sdk/v3/go/pulumi" "github.com/upstash/pulumi-upstash/sdk/go/upstash" ) func main() { pulumi.Run(func(ctx *pulumi.Context) error { createdTeam, err := upstash.NewTeam(ctx, "exampleTeam", &upstash.TeamArgs{ TeamName: pulumi.String("pulumi go team"), CopyCc: pulumi.Bool(false), TeamMembers: pulumi.StringMap{ "X@Y.Z": pulumi.String("owner"), "A@B.C": pulumi.String("dev"), }, }) if err != nil { return err } ctx.Export("teamId", createdTeam.ID()) return nil }) } ``` # null Source: https://upstash.com/docs/devops/terraform # upstash_qstash_endpoint_data Source: https://upstash.com/docs/devops/terraform/data_sources/upstash_qstash_endpoint_data ```hcl example.tf theme={"system"} data "upstash_qstash_endpoint_data" "exampleQStashEndpointData" { endpoint_id = resource.upstash_qstash_endpoint.exampleQStashEndpoint.endpoint_id } ``` ## Schema ### Required Topic ID that the endpoint is added to ### Read-Only Unique QStash Endpoint ID The ID of this resource. Unique QStash Topic Name for Endpoint # upstash_qstash_schedule_data Source: https://upstash.com/docs/devops/terraform/data_sources/upstash_qstash_schedule_data ```hcl example.tf theme={"system"} data "upstash_qstash_schedule_data" "exampleQStashScheduleData" { schedule_id = resource.upstash_qstash_schedule.exampleQStashSchedule.schedule_id } ``` ## Schema ### Required Unique QStash Schedule ID for requested schedule ### Read-Only Body to send for the POST request in string format. Double quotes need to be escaped. 
Creation time for QStash Schedule Cron string for QStash Schedule Destination for QStash Schedule. Either Topic ID or valid URL Forward headers to your API The ID of this resource. Start time for QStash Scheduling. Retries for QStash Schedule requests. # upstash_qstash_topic_data Source: https://upstash.com/docs/devops/terraform/data_sources/upstash_qstash_topic_data ```hcl example.tf theme={"system"} data "upstash_qstash_topic_data" "exampleQstashTopicData" { topic_id = resource.upstash_qstash_topic.exampleQstashTopic.topic_id } ``` ## Schema ### Required Unique QStash Topic ID for requested topic ### Read-Only Endpoints for the QStash Topic The ID of this resource. Name of the QStash Topic # upstash_redis_database_data Source: https://upstash.com/docs/devops/terraform/data_sources/upstash_redis_database_data ```hcl example.tf theme={"system"} data "upstash_redis_database_data" "exampleDBData" { database_id = resource.upstash_redis_database.exampleDB.database_id } ``` ## Schema ### Required Unique Database ID for created database ### Read-Only Upgrade to higher plans automatically when it hits quotas Creation time of the database Name of the database Type of the database Daily bandwidth limit for the database Disk threshold for the database Max clients for the database Max commands per second for the database Max entry size for the database Max request size for the database Memory threshold for the database Database URL for connection The ID of this resource. Password of the database Port of the endpoint Primary region for the database (Only works if region='global'. Can be one of \[us-east-1, us-west-1, us-west-2, eu-central-1, eu-west-1, sa-east-1, ap-southeast-1, ap-southeast-2]) Rest Token for the database. Read regions for the database (Only works if region='global' and primary\_region is set. Can be any combination of \[us-east-1, us-west-1, us-west-2, eu-central-1, eu-west-1, sa-east-1, ap-southeast-1, ap-southeast-2], excluding the one given as primary.) 
Region of the database. Possible values are: `global`, `eu-west-1`, `us-east-1`, `us-west-1`, `ap-northeast-1`, `eu-central-1` Rest Token for the database. State of the database When enabled, data is encrypted in transit. (If changed to false from true, results in deletion and recreation of the resource) User email for the database # upstash_team_data Source: https://upstash.com/docs/devops/terraform/data_sources/upstash_team_data ```hcl example.tf theme={"system"} data "upstash_team_data" "teamData" { team_id = resource.upstash_team.exampleTeam.team_id } ``` ## Schema ### Required Unique Team ID for the requested team ### Read-Only Whether Credit Card is copied The ID of this resource. Members of the team. (Owner must be specified, which is the owner of the api key.) Name of the team # Overview Source: https://upstash.com/docs/devops/terraform/overview The Upstash Terraform Provider lets you manage Upstash Redis resources programmatically. You can find the GitHub Repository for the Terraform Provider [here](https://github.com/upstash/terraform-provider-upstash). ## Installation ```hcl theme={"system"} terraform { required_providers { upstash = { source = "upstash/upstash" version = "x.x.x" } } } provider "upstash" { email = var.email api_key = var.api_key } ``` `email` is your registered email in Upstash. `api_key` can be generated from the Upstash Console. For more information, please check our [docs](https://docs.upstash.com/howto/developerapi). 
## Create Database Using Terraform Here is an example code snippet that creates a database: ```hcl theme={"system"} resource "upstash_redis_database" "redis" { database_name = "db-name" region = "eu-west-1" tls = "true" multi_zone = "false" } ``` ## Import Resources From Outside of Terraform To import resources created outside of the terraform provider, simply create the resource in a .tf file as follows: ```hcl theme={"system"} resource "upstash_redis_database" "redis" {} ``` After this, you can run the command: ``` terraform import upstash_redis_database.redis <database-id> ``` The example above is for an Upstash Redis database. You can import any of the resources by changing the resource type and providing the resource ID. You can check the full spec and [docs here](https://registry.terraform.io/providers/upstash/upstash/latest/docs). ## Support, Bug Reports, Feature Requests If you need support, you can ask your questions to the Upstash Team in the [upstash.com](https://upstash.com) chat widget. There is also a Discord channel available for the community. [Please check here](https://docs.upstash.com/help/support) for more information. # upstash_qstash_endpoint Source: https://upstash.com/docs/devops/terraform/resources/upstash_qstash_endpoint Create and manage QStash endpoints. ```hcl example.tf theme={"system"} resource "upstash_qstash_endpoint" "exampleQStashEndpoint" { url = "https://***.***" topic_id = resource.upstash_qstash_topic.exampleQstashTopic.topic_id } ``` ## Schema ### Required Topic ID that the endpoint is added to URL of the endpoint ### Read-Only Unique QStash endpoint ID The ID of this resource. Unique QStash topic name for endpoint # upstash_qstash_schedule Source: https://upstash.com/docs/devops/terraform/resources/upstash_qstash_schedule Create and manage QStash schedules. 
```hcl example.tf theme={"system"}
resource "upstash_qstash_schedule" "exampleQStashSchedule" {
  destination = resource.upstash_qstash_topic.exampleQstashTopic.topic_id
  cron        = "* * * * */2"

  # or simply provide a link
  # destination = "https://***.***"
}
```

## Schema

### Required

Cron string for QStash Schedule
Destination for QStash Schedule. Either Topic ID or valid URL

### Optional

Body to send for the POST request in string format. Double quotes need to be escaped (`\"`).
Callback URL for QStash Schedule.
Content based deduplication for QStash Scheduling.
Content type for QStash Scheduling.
Deduplication ID for QStash Scheduling.
Delay for QStash Schedule.
Forward headers to your API
Start time for QStash Scheduling.
Retries for QStash Schedule requests.

### Read-Only

Creation time for QStash Schedule.
The ID of this resource.
Unique QStash Schedule ID for requested schedule

# upstash_qstash_topic

Source: https://upstash.com/docs/devops/terraform/resources/upstash_qstash_topic

Create and manage QStash topics

```hcl example.tf theme={"system"}
resource "upstash_qstash_topic" "exampleQStashTopic" {
  name = "exampleQStashTopicName"
}
```

## Schema

### Required

Name of the QStash topic

### Read-Only

Endpoints for the QStash topic
The ID of this resource.
Unique QStash topic ID for requested topic

# upstash_redis_database

Source: https://upstash.com/docs/devops/terraform/resources/upstash_redis_database

Create and manage Upstash Redis databases.

```hcl example.tf theme={"system"}
resource "upstash_redis_database" "exampleDB" {
  database_name = "Terraform DB6"
  region        = "eu-west-1"
  tls           = "true"
  multizone     = "true"
}
```

## Schema

### Required

Name of the database
Region of the database.
Possible values are: `global`, `eu-west-1`, `us-east-1`, `us-west-1`, `ap-northeast-1`, `eu-central-1`

### Optional

Upgrade to higher plans automatically when it hits quotas
Enable eviction, to evict keys when your database reaches the max size
Primary region for the database (Only works if region='global'. Can be one of \[us-east-1, us-west-1, us-west-2, eu-central-1, eu-west-1, sa-east-1, ap-southeast-1, ap-southeast-2])
Read regions for the database (Only works if region='global' and primary\_region is set. Can be any combination of \[us-east-1, us-west-1, us-west-2, eu-central-1, eu-west-1, sa-east-1, ap-southeast-1, ap-southeast-2], excluding the one given as primary.)
When enabled, data is encrypted in transit. (Changing this from true to false deletes and recreates the resource)

### Read-Only

Creation time of the database
Unique Database ID for created database
Type of the database
Daily bandwidth limit for the database
Disk threshold for the database
Max clients for the database
Max commands per second for the database
Max entry size for the database
Max request size for the database
Memory threshold for the database
Database URL for connection
The ID of this resource.
Password of the database
Port of the endpoint
Rest Token for the database.
Read-only Rest Token for the database.
State of the database
User email for the database

# upstash_team

Source: https://upstash.com/docs/devops/terraform/resources/upstash_team

Create and manage teams on Upstash.

```hcl example.tf theme={"system"}
resource "upstash_team" "exampleTeam" {
  team_name = "TerraformTeam"
  copy_cc   = false
  team_members = {
    # Owner is the owner of the api_key.
    "X@Y.Z": "owner",
    "A@B.C": "dev",
    "E@E.F": "finance",
  }
}
```

## Schema

### Required

Whether Credit Card is copied
Members of the team. (Owner must be specified, which is the owner of the API key.)
Name of the team

### Read-Only

The ID of this resource.
Unique Cluster ID for created cluster

# Get Started

Source: https://upstash.com/docs/introduction

Create a Redis Database within seconds
Create a Vector Database for AI & LLMs
Publish your first message
Write durable serverless functions

## Concepts

Upstash is serverless. You don't need to provision any infrastructure. Just create a database and start using it.
Pricing scales to zero. You don't pay for idle or unused resources. You pay only for what you use.
Upstash Redis replicates your data for the best latency all over the world.
Upstash REST APIs enable access from all types of runtimes.

## Get in Touch

Follow us on X for the latest news and updates.
Join our Discord community and ask your questions to the team and other developers.

# API Rate Limit Response

Source: https://upstash.com/docs/qstash/api/api-ratelimiting

This page documents the rate limiting behavior of our API and explains how to handle different types of rate limit errors.

## Overview

There is no request-per-second limit for the operational APIs listed below:

* trigger, publish, enqueue, notify, wait, batch
* Other endpoints (such as listing logs, flow controls, queues, and schedules) have an RPS limit. This is a short-term limit **per second** to prevent rapid bursts of requests.
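When a burst-limited request is rejected, the simplest recovery is to wait until the window resets before retrying. A minimal sketch, assuming the `Burst-RateLimit-Reset` header described in this section carries a unix timestamp in seconds (adjust the conversion if your responses use milliseconds):

```python
import time

def seconds_until_reset(reset_ts, now=None):
    """Seconds to wait before retrying, given a Burst-RateLimit-Reset
    value (assumed here to be a unix timestamp in seconds)."""
    now = time.time() if now is None else now
    return max(0.0, reset_ts - now)

# After a rejected request `resp`, back off until the burst window resets:
# time.sleep(seconds_until_reset(int(resp.headers["Burst-RateLimit-Reset"])))
```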
**Headers**: * `Burst-RateLimit-Limit`: Maximum number of requests allowed in the burst window (1 second) * `Burst-RateLimit-Remaining`: Remaining number of requests in the burst window (1 second) * `Burst-RateLimit-Reset`: Time (in unix timestamp) when the burst limit will reset ### Example Rate Limit Error Handling ```typescript Handling Daily Rate Limit Error theme={"system"} import { QstashDailyRatelimitError } from "@upstash/qstash"; try { // Example of a publish request that could hit the daily rate limit const result = await client.publishJSON({ url: "https://my-api...", // or urlGroup: "the name or id of a url group" body: { hello: "world", }, }); } catch (error) { if (error instanceof QstashDailyRatelimitError) { console.log("Daily rate limit exceeded. Retry after:", error.reset); // Implement retry logic or notify the user } else { console.error("An unexpected error occurred:", error); } } ``` ```typescript Handling Burst Rate Limit Error theme={"system"} import { QstashRatelimitError } from "@upstash/qstash"; try { // Example of a request that could hit the burst rate limit const result = await client.publishJSON({ url: "https://my-api...", // or urlGroup: "the name or id of a url group" body: { hello: "world", }, }); } catch (error) { if (error instanceof QstashRatelimitError) { console.log("Burst rate limit exceeded. Retry after:", error.reset); // Implement exponential backoff or delay before retrying } else { console.error("An unexpected error occurred:", error); } } ``` # Authentication Source: https://upstash.com/docs/qstash/api/authentication Authentication for the QStash API You'll need to authenticate your requests to access any of the endpoints in the QStash API. In this guide, we'll look at how authentication works. ## Bearer Token When making requests to QStash, you will need your `QSTASH_TOKEN` — you will find it in the [console](https://console.upstash.com/qstash). 
Here's how to add the token to the request header using cURL:

```bash theme={"system"}
curl https://qstash.upstash.io/v2/publish/... \
  -H "Authorization: Bearer "
```

## Query Parameter

In environments where setting the header is not possible, you can use the `qstash_token` query parameter instead.

```bash theme={"system"}
curl https://qstash.upstash.io/v2/publish/...?qstash_token=
```

Always keep your token safe and reset it if you suspect it has been compromised.

# Delete a message from the DLQ

Source: https://upstash.com/docs/qstash/api/dlq/deleteMessage

DELETE https://qstash.upstash.io/v2/dlq/{dlqId} Manually remove a message

Delete a message from the DLQ.

## Request

The dlq id of the message you want to remove. You will see this id when listing all messages in the dlq with the [/v2/dlq](/qstash/api/dlq/listMessages) endpoint.

## Response

The endpoint doesn't return anything; a status code of 200 means the message is removed from the DLQ. If the message is not found in the DLQ (either it has been removed by you, or automatically), the endpoint returns a 404 status code.

```sh theme={"system"}
curl -X DELETE https://qstash.upstash.io/v2/dlq/my-dlq-id \
  -H "Authorization: Bearer "
```

# Delete multiple messages from the DLQ

Source: https://upstash.com/docs/qstash/api/dlq/deleteMessages

DELETE https://qstash.upstash.io/v2/dlq Manually remove messages

Delete multiple messages from the DLQ. You can get the `dlqId` from the [list DLQs endpoint](/qstash/api/dlq/listMessages).

## Request

The list of DLQ message IDs to remove.

## Response

A deleted object with the number of deleted messages.
```JSON theme={"system"}
{ "deleted": number }
```

```json 200 OK theme={"system"}
{ "deleted": 3 }
```

```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/dlq \
  -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -d '{ "dlqIds": ["11111-0", "22222-0", "33333-0"] }'
```

```js Node theme={"system"}
const response = await fetch("https://qstash.upstash.io/v2/dlq", {
  method: "DELETE",
  headers: {
    Authorization: "Bearer ",
    "Content-Type": "application/json",
  },
  // fetch expects a string body, so the payload must be serialized
  body: JSON.stringify({
    dlqIds: ["11111-0", "22222-0", "33333-0"],
  }),
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
    'Content-Type': 'application/json',
}

data = {
    "dlqIds": [
        "11111-0",
        "22222-0",
        "33333-0"
    ]
}

# use json= so the payload is serialized as JSON
response = requests.delete(
    'https://qstash.upstash.io/v2/dlq',
    headers=headers,
    json=data
)
```

```go Go theme={"system"}
var data = strings.NewReader(`{
  "dlqIds": [
    "11111-0",
    "22222-0",
    "33333-0"
  ]
}`)

req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/dlq", data)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

# Get a message from the DLQ

Source: https://upstash.com/docs/qstash/api/dlq/getMessage

GET https://qstash.upstash.io/v2/dlq/{dlqId} Get a message from the DLQ

Get a message from the DLQ.

## Request

The dlq id of the message you want to retrieve. You will see this id when listing all messages in the dlq with the [/v2/dlq](/qstash/api/dlq/listMessages) endpoint, as well as in the content of [the failure callback](https://docs.upstash.com/qstash/features/callbacks#what-is-a-failure-callback)

## Response

If the message is not found in the DLQ (either it has been removed by you, or automatically), the endpoint returns a 404 status code.
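Since a 404 here simply means the message has already left the DLQ, it can be convenient to surface that as an "absent" result rather than an exception. A minimal Python sketch — the helper names `interpret_dlq_lookup` and `get_dlq_message` are illustrative, not part of any SDK:

```python
import requests

def interpret_dlq_lookup(status_code, payload):
    """Map a DLQ lookup response to a result: a 404 means the message is no
    longer in the DLQ (removed by you, or automatically), so return None."""
    if status_code == 404:
        return None
    if status_code != 200:
        raise RuntimeError(f"unexpected status: {status_code}")
    return payload

def get_dlq_message(dlq_id, token):
    resp = requests.get(
        f"https://qstash.upstash.io/v2/dlq/{dlq_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    payload = resp.json() if resp.status_code == 200 else None
    return interpret_dlq_lookup(resp.status_code, payload)
```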
```sh theme={"system"} curl -X GET https://qstash.upstash.io/v2/dlq/my-dlq-id \ -H "Authorization: Bearer " ``` # List messages in the DLQ Source: https://upstash.com/docs/qstash/api/dlq/listMessages GET https://qstash.upstash.io/v2/dlq List and paginate through all messages currently inside the DLQ List all messages currently inside the DLQ ## Request By providing a cursor you can paginate through all of the messages in the DLQ Filter DLQ messages by message id. Filter DLQ messages by url. Filter DLQ messages by url group. Filter DLQ messages by schedule id. Filter DLQ messages by queue name. Filter DLQ messages by API name. Filter DLQ messages by starting date, in milliseconds (Unix timestamp). This is inclusive. Filter DLQ messages by ending date, in milliseconds (Unix timestamp). This is inclusive. Filter DLQ messages by HTTP response status code. Filter DLQ messages by IP address of the publisher. The number of messages to return. Default and maximum is 100. The sorting order of DLQ messages by timestamp. Valid values are "earliestFirst" and "latestFirst". The default is "earliestFirst". Filter DLQ messages by the label of the message assigned by the user. ## Response A cursor which you can use in subsequent requests to paginate through all messages. If no cursor is returned, you have reached the end of the messages. 
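The cursor loop described above can be sketched as a small generator. `fetch_page` is a stand-in for whatever function performs the GET request shown in the examples and returns the decoded JSON body; iteration stops once the response no longer contains a cursor:

```python
def iterate_dlq(fetch_page):
    """Yield every message in the DLQ by following the cursor.
    `fetch_page(cursor)` must return the decoded JSON response
    (a dict with "messages" and, while more pages remain, "cursor")."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page.get("messages", [])
        cursor = page.get("cursor")
        if not cursor:
            break
```

With `requests`, `fetch_page` would issue `GET /v2/dlq` (adding `?cursor=...` on subsequent calls) with the bearer token shown in the examples.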
```sh theme={"system"}
curl https://qstash.upstash.io/v2/dlq \
  -H "Authorization: Bearer "
```

```sh with cursor theme={"system"}
curl https://qstash.upstash.io/v2/dlq?cursor=xxx \
  -H "Authorization: Bearer "
```

```json 200 OK theme={"system"}
{
  "messages": [
    {
      "messageId": "msg_123",
      "topicId": "tpc_123",
      "url": "https://example.com",
      "method": "POST",
      "header": { "My-Header": ["my-value"] },
      "body": "{\"foo\":\"bar\"}",
      "createdAt": 1620000000000,
      "state": "failed"
    }
  ]
}
```

# Enqueue a Message

Source: https://upstash.com/docs/qstash/api/enqueue

POST https://qstash.upstash.io/v2/enqueue/{queueName}/{destination} Enqueue a message

## Request

The name of the queue that the message will be enqueued on. If it doesn't exist, it will be created automatically.

Destination can either be a topic name or id that you configured in the Upstash console, a valid url where the message gets sent to, or a valid QStash API name like `api/llm`. If the destination is a URL, make sure the URL is prefixed with a valid protocol (`http://` or `https://`)

Id to use while deduplicating messages, so that only one message with the given deduplication id is published.

When set to true, automatically deduplicates messages based on their content, so that only one message with the same content is published.
Content based deduplication creates unique deduplication ids based on the following message fields: * Destination * Body * Headers ## Response ```sh curl theme={"system"} curl -X POST "https://qstash.upstash.io/v2/enqueue/myQueue/https://www.example.com" \ -H "Authorization: Bearer " \ -H "Content-Type: application/json" \ -H "Upstash-Method: POST" \ -H "Upstash-Retries: 3" \ -H "Upstash-Forward-Custom-Header: custom-value" \ -d '{"message":"Hello, World!"}' ``` ```js Node theme={"system"} const response = await fetch( "https://qstash.upstash.io/v2/enqueue/myQueue/https://www.example.com", { method: "POST", headers: { Authorization: "Bearer ", "Content-Type": "application/json", "Upstash-Method": "POST", "Upstash-Retries": "3", "Upstash-Forward-Custom-Header": "custom-value", }, body: JSON.stringify({ message: "Hello, World!", }), } ); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', 'Content-Type': 'application/json', 'Upstash-Method': 'POST', 'Upstash-Retries': '3', 'Upstash-Forward-Custom-Header': 'custom-value', } json_data = { 'message': 'Hello, World!', } response = requests.post( 'https://qstash.upstash.io/v2/enqueue/myQueue/https://www.example.com', headers=headers, json=json_data ) ``` ```go Go theme={"system"} var data = strings.NewReader(`{"message":"Hello, World!"}`) req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/enqueue/myQueue/https://www.example.com", data) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") req.Header.Set("Content-Type", "application/json") req.Header.Set("Upstash-Method", "POST") req.Header.Set("Upstash-Retries", "3") req.Header.Set("Upstash-Forward-Custom-Header", "custom-value") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` ```json URL theme={"system"} { "messageId": "msd_1234", "url": "https://www.example.com" } ``` ```json URL Group theme={"system"} [ { "messageId": "msd_1234", "url": 
"https://www.example.com" }, { "messageId": "msd_5678", "url": "https://www.somewhere-else.com", "deduplicated": true } ] ``` # List Events Source: https://upstash.com/docs/qstash/api/events/list GET https://qstash.upstash.io/v2/events List all events that happened, such as message creation or delivery QStash events are being renamed to [Logs](/qstash/api/logs/list) to better reflect their purpose and to not get confused with [Workflow Events](/workflow/howto/events). ## Request By providing a cursor you can paginate through all of the events. Filter events by message id. Filter events by [state](/qstash/howto/debug-logs) | Value | Description | | ------------------ | ---------------------------------------------------------------------------------------- | | `CREATED` | The message has been accepted and stored in QStash | | `ACTIVE` | The task is currently being processed by a worker. | | `RETRY` | The task has been scheduled to retry. | | `ERROR` | The execution threw an error and the task is waiting to be retried or failed. | | `IN_PROGRESS` | The task is in one of `ACTIVE`, `RETRY` or `ERROR` state. | | `DELIVERED` | The message was successfully delivered. | | `FAILED` | The task has errored too many times or encountered an error that it cannot recover from. | | `CANCEL_REQUESTED` | The cancel request from the user is recorded. | | `CANCELLED` | The cancel request from the user is honored. | Filter events by url. Filter events by URL Group (topic) name. Filter events by schedule id. Filter events by queue name. Filter events by starting date, in milliseconds (Unix timestamp). This is inclusive. Filter events by ending date, in milliseconds (Unix timestamp). This is inclusive. The number of events to return. Default and max is 1000. The sorting order of events by timestamp. Valid values are "earliestFirst" and "latestFirst". The default is "latestFirst". ## Response A cursor which you can use in subsequent requests to paginate through all events. 
If no cursor is returned, you have reached the end of the events.

Timestamp of this log entry, in milliseconds
The associated message id
The headers of the message.
Base64 encoded body of the message.
The current state of the message at this point in time.

| Value              | Description                                                                              |
| ------------------ | ---------------------------------------------------------------------------------------- |
| `CREATED`          | The message has been accepted and stored in QStash                                       |
| `ACTIVE`           | The task is currently being processed by a worker.                                       |
| `RETRY`            | The task has been scheduled to retry.                                                    |
| `ERROR`            | The execution threw an error and the task is waiting to be retried or failed.            |
| `DELIVERED`        | The message was successfully delivered.                                                  |
| `FAILED`           | The task has errored too many times or encountered an error that it cannot recover from. |
| `CANCEL_REQUESTED` | The cancel request from the user is recorded.                                            |
| `CANCELLED`        | The cancel request from the user is honored.                                             |

An explanation of what went wrong
The next scheduled time of the message. (Unix timestamp in milliseconds)
The destination url
The name of the URL Group (topic) if this message was sent through a topic
The name of the endpoint if this message was sent through a URL Group
The scheduleId of the message if the message is triggered by a schedule
The name of the queue if this message is enqueued on a queue
The headers that are forwarded to the user's endpoint
Base64 encoded body of the message
The status code of the response. Only set if the state is `ERROR`
The base64 encoded body of the response. Only set if the state is `ERROR`
The headers of the response.
Only set if the state is `ERROR` The timeout(in milliseconds) of the outgoing http requests, after which Qstash cancels the request Method is the HTTP method of the message for outgoing request Callback is the URL address where QStash sends the response of a publish The headers that are passed to the callback url Failure Callback is the URL address where QStash sends the response of a publish The headers that are passed to the failure callback url The number of retries that should be attempted in case of delivery failure The mathematical expression used to calculate delay between retry attempts. If not set, [the default backoff](/qstash/features/retry) is used. ```sh curl theme={"system"} curl https://qstash.upstash.io/v2/events \ -H "Authorization: Bearer " ``` ```javascript Node theme={"system"} const response = await fetch("https://qstash.upstash.io/v2/events", { headers: { Authorization: "Bearer ", }, }); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', } response = requests.get( 'https://qstash.upstash.io/v2/events', headers=headers ) ``` ```go Go theme={"system"} req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/events", nil) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` ```json 200 OK theme={"system"} { "cursor": "1686652644442-12", "events":[ { "time": "1686652644442", "messageId": "msg_123", "state": "delivered", "url": "https://example.com", "header": { "Content-Type": [ "application/x-www-form-urlencoded" ] }, "body": "bWVyaGFiYSBiZW5pbSBhZGltIHNhbmNhcg==" } ] } ``` # Get Flow-Control Keys Source: https://upstash.com/docs/qstash/api/flow-control/get GET https://qstash.upstash.io/v2/flowControl/{flowControlKey} Get Information on Flow-Control ## Request The key of the flow control. See the [flow control](/qstash/features/flowcontrol) for more details. 
## Response

The key of the flow control.
The number of messages in the wait list that waits for `parallelism`/`rate` set in the flow control.

```sh theme={"system"}
curl -X GET https://qstash.upstash.io/v2/flowControl/YOUR_FLOW_CONTROL_KEY -H "Authorization: Bearer "
```

# List Flow-Control Keys

Source: https://upstash.com/docs/qstash/api/flow-control/list

GET https://qstash.upstash.io/v2/flowControl/ List all Flow Control keys

## Response

The key of the flow control. See the [flow control](/qstash/features/flowcontrol) for more details.
The number of messages in the wait list that waits for `parallelism`/`rate` set in the flow control.

```sh theme={"system"}
curl -X GET https://qstash.upstash.io/v2/flowControl/ -H "Authorization: Bearer "
```

# List Logs

Source: https://upstash.com/docs/qstash/api/logs/list

GET https://qstash.upstash.io/v2/logs Paginate through logs of published messages

## Request

By providing a cursor you can paginate through all of the logs.

Filter logs by message id.

Filter logs by [state](/qstash/howto/debug-logs)

| Value              | Description                                                                              |
| ------------------ | ---------------------------------------------------------------------------------------- |
| `CREATED`          | The message has been accepted and stored in QStash                                       |
| `ACTIVE`           | The task is currently being processed by a worker.                                       |
| `RETRY`            | The task has been scheduled to retry.                                                    |
| `ERROR`            | The execution threw an error and the task is waiting to be retried or failed.            |
| `IN_PROGRESS`      | The task is in one of `ACTIVE`, `RETRY` or `ERROR` state.                                |
| `DELIVERED`        | The message was successfully delivered.                                                  |
| `FAILED`           | The task has errored too many times or encountered an error that it cannot recover from. |
| `CANCEL_REQUESTED` | The cancel request from the user is recorded.                                            |
| `CANCELLED`        | The cancel request from the user is honored.                                             |

Filter logs by url.
Filter logs by URL Group (topic) name.
Filter logs by schedule id.
Filter logs by queue name.
Filter logs by starting date, in milliseconds (Unix timestamp). This is inclusive.
Filter logs by ending date, in milliseconds (Unix timestamp). This is inclusive.
The number of logs to return. Default and max is 1000.
The sorting order of logs by timestamp. Valid values are "earliestFirst" and "latestFirst". The default is "latestFirst".
Filter logs by the label of the message assigned by the user.

## Response

A cursor which you can use in subsequent requests to paginate through all logs. If no cursor is returned, you have reached the end of the logs.

Timestamp of this log entry, in milliseconds
The associated message id
The headers of the message.
Base64 encoded body of the message.
The current state of the message at this point in time.

| Value              | Description                                                                              |
| ------------------ | ---------------------------------------------------------------------------------------- |
| `CREATED`          | The message has been accepted and stored in QStash                                       |
| `ACTIVE`           | The task is currently being processed by a worker.                                       |
| `RETRY`            | The task has been scheduled to retry.                                                    |
| `ERROR`            | The execution threw an error and the task is waiting to be retried or failed.            |
| `DELIVERED`        | The message was successfully delivered.                                                  |
| `FAILED`           | The task has errored too many times or encountered an error that it cannot recover from. |
| `CANCEL_REQUESTED` | The cancel request from the user is recorded.                                            |
| `CANCELLED`        | The cancel request from the user is honored.                                             |

An explanation of what went wrong
The next scheduled time of the message. (Unix timestamp in milliseconds)
The destination url
The name of the URL Group (topic) if this message was sent through a topic
The name of the endpoint if this message was sent through a URL Group
The scheduleId of the message if the message is triggered by a schedule
The name of the queue if this message is enqueued on a queue
The headers that are forwarded to the user's endpoint
Base64 encoded body of the message
The status code of the response.
Only set if the state is `ERROR` The base64 encoded body of the response. Only set if the state is `ERROR` The headers of the response. Only set if the state is `ERROR` The timeout(in milliseconds) of the outgoing http requests, after which Qstash cancels the request Method is the HTTP method of the message for outgoing request Callback is the URL address where QStash sends the response of a publish The headers that are passed to the callback url Failure Callback is the URL address where QStash sends the response of a publish The headers that are passed to the failure callback url The number of retries that should be attempted in case of delivery failure The mathematical expression used to calculate delay between retry attempts. If not set, [the default backoff](/qstash/features/retry) is used. The label of the message assigned by the user. ```sh curl theme={"system"} curl https://qstash.upstash.io/v2/logs \ -H "Authorization: Bearer " ``` ```javascript Node theme={"system"} const response = await fetch("https://qstash.upstash.io/v2/logs", { headers: { Authorization: "Bearer ", }, }); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', } response = requests.get( 'https://qstash.upstash.io/v2/logs', headers=headers ) ``` ```go Go theme={"system"} req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/logs", nil) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` ```json 200 OK theme={"system"} { "cursor": "1686652644442-12", "events":[ { "time": "1686652644442", "messageId": "msg_123", "state": "delivered", "url": "https://example.com", "header": { "Content-Type": [ "application/x-www-form-urlencoded" ] }, "body": "bWVyaGFiYSBiZW5pbSBhZGltIHNhbmNhcg==" } ] } ``` # Batch Messages Source: https://upstash.com/docs/qstash/api/messages/batch POST https://qstash.upstash.io/v2/batch Send multiple 
messages in a single request

You can learn more about batching in the [batching section](/qstash/features/batch).

API playground is not available for this endpoint. You can use the cURL example below.

You can publish to a destination, a URL Group, or a queue in the same batch request.

## Request

The endpoint is `POST https://qstash.upstash.io/v2/batch` and the body is an array of messages. Each message has the following fields:

```
destination: string
headers: headers object
body: string
```

The headers are identical to the headers in the [create](/qstash/api/publish#request) endpoint.

```shell cURL theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/batch \
  -H "Authorization: Bearer XXX" \
  -H "Content-Type: application/json" \
  -d '
  [
    {
      "destination": "myUrlGroup",
      "headers": {
        "Upstash-Delay": "5s",
        "Upstash-Forward-Hello": "123456"
      },
      "body": "Hello World"
    },
    {
      "queue": "test",
      "destination": "https://example.com/destination",
      "headers": {
        "Upstash-Forward-Hello": "789"
      }
    },
    {
      "destination": "https://example.com/destination1",
      "headers": {
        "Upstash-Delay": "7s",
        "Upstash-Forward-Hello": "789"
      }
    },
    {
      "destination": "https://example.com/destination2",
      "headers": {
        "Upstash-Delay": "9s",
        "Upstash-Forward-Hello": "again"
      }
    }
  ]'
```

## Response

```json theme={"system"}
[
  [
    {
      "messageId": "msg_...",
      "url": "https://myUrlGroup-endpoint1.com"
    },
    {
      "messageId": "msg_...",
      "url": "https://myUrlGroup-endpoint2.com"
    }
  ],
  {
    "messageId": "msg_..."
  },
  {
    "messageId": "msg_..."
  },
  {
    "messageId": "msg_..."
  }
]
```

# Bulk Cancel Messages

Source: https://upstash.com/docs/qstash/api/messages/bulk-cancel

DELETE https://qstash.upstash.io/v2/messages Stop delivery of multiple messages at once

Bulk cancel allows you to cancel multiple messages at once.

Cancelling a message will remove it from QStash and stop it from being delivered in the future. If a message is in flight to your API, it might be too late to cancel.
If you provide a set of message IDs in the body of the request, only those messages will be cancelled. If you include filter parameters in the request body, only the messages that match the filters will be cancelled. If the `messageIds` array is empty, QStash will cancel all of your messages. If no body is sent, QStash will also cancel all of your messages.

This operation scans all of your messages and attempts to cancel them. If an individual message cannot be cancelled, the operation stops and returns an error message, so some messages may remain uncancelled. In such cases, you can run the bulk cancel operation again.

You can filter the messages to cancel by including filter parameters in the request body.

## Request

The list of message IDs to cancel.
Filter messages to cancel by queue name.
Filter messages to cancel by destination URL.
Filter messages to cancel by URL Group (topic) name.
Filter messages to cancel by starting date, in milliseconds (Unix timestamp). This is inclusive.
Filter messages to cancel by ending date, in milliseconds (Unix timestamp). This is inclusive.
Filter messages to cancel by schedule ID.
Filter messages to cancel by IP address of publisher.

## Response

A cancelled object with the number of cancelled messages.
```JSON theme={"system"}
{ "cancelled": number }
```

```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/messages \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer " \
  -d '{"messageIds": ["msg_id_1", "msg_id_2", "msg_id_3"]}'
```

```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/messages', {
  method: 'DELETE',
  headers: {
    'Authorization': 'Bearer ',
    'Content-Type': 'application/json',
  },
  // the body belongs at the top level of the options object, serialized as JSON
  body: JSON.stringify({
    messageIds: ["msg_id_1", "msg_id_2", "msg_id_3"],
  }),
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
    'Content-Type': 'application/json',
}

data = {
    "messageIds": [
        "msg_id_1",
        "msg_id_2",
        "msg_id_3"
    ]
}

# use json= so the payload is serialized as JSON
response = requests.delete(
    'https://qstash.upstash.io/v2/messages',
    headers=headers,
    json=data
)
```

```go Go theme={"system"}
var data = strings.NewReader(`{
  "messageIds": [
    "msg_id_1",
    "msg_id_2",
    "msg_id_3"
  ]
}`)

req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/messages", data)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

```json 202 Accepted theme={"system"}
{ "cancelled": 10 }
```

# Cancel Message

Source: https://upstash.com/docs/qstash/api/messages/cancel

DELETE https://qstash.upstash.io/v2/messages/{messageId} Stop delivery of an existing message

Cancelling a message will remove it from QStash and stop it from being delivered in the future. If a message is in flight to your API, it might be too late to cancel.

## Request

The id of the message to cancel.
## Response

This endpoint only returns `202 OK`

```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/messages/msg_123 \
  -H "Authorization: Bearer "
```

```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/messages/msg_123', {
  method: 'DELETE',
  headers: {
    'Authorization': 'Bearer '
  }
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
}

response = requests.delete(
    'https://qstash.upstash.io/v2/messages/msg_123',
    headers=headers
)
```

```go Go theme={"system"}
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/messages/msg_123", nil)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

```text 202 Accepted theme={"system"}
OK
```

# Get Message

Source: https://upstash.com/docs/qstash/api/messages/get

GET https://qstash.upstash.io/v2/messages/{messageId} Retrieve a message by its id

## Request

The id of the message to retrieve.

Messages are removed from the database shortly after they're delivered, so you will not be able to retrieve a message afterwards. This endpoint is intended to be used for accessing messages that are in the process of being delivered/retried.
## Response ```sh curl theme={"system"} curl https://qstash.upstash.io/v2/messages/msg_123 \ -H "Authorization: Bearer " ``` ```js Node theme={"system"} const response = await fetch("https://qstash.upstash.io/v2/messages/msg_123", { headers: { Authorization: "Bearer ", }, }); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', } response = requests.get( 'https://qstash.upstash.io/v2/messages/msg_123', headers=headers ) ``` ```go Go theme={"system"} req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/messages/msg_123", nil) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` ```json 200 OK theme={"system"} { "messageId": "msg_123", "topicName": "myTopic", "url":"https://example.com", "method": "POST", "header": { "My-Header": ["my-value"] }, "body": "{\"foo\":\"bar\"}", "createdAt": 1620000000000 } ``` # Publish a Message Source: https://upstash.com/docs/qstash/api/publish POST https://qstash.upstash.io/v2/publish/{destination} Publish a message ## Request Destination can either be a topic name or id that you configured in the Upstash console, a valid url where the message gets sent to, or a valid QStash API name like `api/llm`. If the destination is a URL, make sure the URL is prefixed with a valid protocol (`http://` or `https://`) Delay the message delivery. Format for this header is a number followed by duration abbreviation, like `10s`. Available durations are `s` (seconds), `m` (minutes), `h` (hours), `d` (days). example: "50s" | "3m" | "10h" | "1d" Delay the message delivery until a certain time in the future. The format is a unix timestamp in seconds, based on the UTC timezone. When both `Upstash-Not-Before` and `Upstash-Delay` headers are provided, `Upstash-Not-Before` will be used. 
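As a quick illustration of the absolute-time header above: the `Upstash-Not-Before` value is simply a unix timestamp in seconds, UTC. A minimal Python sketch for computing it (the token is a placeholder, not a real credential):

```python
from datetime import datetime, timezone

def not_before_header(deliver_at: datetime) -> str:
    """Format an absolute delivery time as the Upstash-Not-Before value:
    a unix timestamp in seconds, based on UTC."""
    return str(int(deliver_at.astimezone(timezone.utc).timestamp()))

# Deliver no earlier than 2030-01-01 00:00:00 UTC.
headers = {
    "Authorization": "Bearer <QSTASH_TOKEN>",  # placeholder token
    "Upstash-Not-Before": not_before_header(
        datetime(2030, 1, 1, tzinfo=timezone.utc)
    ),
}
print(headers["Upstash-Not-Before"])  # 1893456000
```

If both `Upstash-Not-Before` and `Upstash-Delay` were set on the same publish, the absolute timestamp would win, per the precedence rule above.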
Id to use while deduplicating messages, so that only one message with the given deduplication id is published. When set to true, automatically deduplicates messages based on their content, so that only one message with the same content is published. Content based deduplication creates unique deduplication ids based on the following message fields: * Destination * Body * Headers ## Response ```sh curl theme={"system"} curl -X POST "https://qstash.upstash.io/v2/publish/https://www.example.com" \ -H "Authorization: Bearer " \ -H "Content-Type: application/json" \ -H "Upstash-Method: POST" \ -H "Upstash-Delay: 10s" \ -H "Upstash-Retries: 3" \ -H "Upstash-Retry-Delay: pow(2, retried) * 1000" \ -H "Upstash-Forward-Custom-Header: custom-value" \ -d '{"message":"Hello, World!"}' ``` ```js Node theme={"system"} const response = await fetch( "https://qstash.upstash.io/v2/publish/https://www.example.com", { method: "POST", headers: { Authorization: "Bearer ", "Content-Type": "application/json", "Upstash-Method": "POST", "Upstash-Delay": "10s", "Upstash-Retries": "3", "Upstash-Retry-Delay": "pow(2, retried) * 1000", "Upstash-Forward-Custom-Header": "custom-value", }, body: JSON.stringify({ message: "Hello, World!", }), } ); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', 'Content-Type': 'application/json', 'Upstash-Method': 'POST', 'Upstash-Delay': '10s', 'Upstash-Retries': '3', 'Upstash-Retry-Delay': 'pow(2, retried) * 1000', 'Upstash-Forward-Custom-Header': 'custom-value', } json_data = { 'message': 'Hello, World!', } response = requests.post( 'https://qstash.upstash.io/v2/publish/https://www.example.com', headers=headers, json=json_data ) ``` ```go Go theme={"system"} var data = strings.NewReader(`{"message":"Hello, World!"}`) req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/publish/https://www.example.com", data) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") 
req.Header.Set("Content-Type", "application/json") req.Header.Set("Upstash-Method", "POST") req.Header.Set("Upstash-Delay", "10s") req.Header.Set("Upstash-Retries", "3") req.Header.Set("Upstash-Retry-Delay", "pow(2, retried) * 1000") req.Header.Set("Upstash-Forward-Custom-Header", "custom-value") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` ```json URL theme={"system"} { "messageId": "msd_1234", "url": "https://www.example.com" } ``` ```json URL Group theme={"system"} [ { "messageId": "msd_1234", "url": "https://www.example.com" }, { "messageId": "msd_5678", "url": "https://www.somewhere-else.com", "deduplicated": true } ] ``` # Get a Queue Source: https://upstash.com/docs/qstash/api/queues/get GET https://qstash.upstash.io/v2/queues/{queueName} Retrieves a queue ## Request The name of the queue to retrieve. ## Response The creation time of the queue. UnixMilli The update time of the queue. UnixMilli The name of the queue. The number of parallel consumers consuming from [the queue](/qstash/features/queues). The number of unprocessed messages that exist in [the queue](/qstash/features/queues). 
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/queues/my-queue \
  -H "Authorization: Bearer "
```

```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/queues/my-queue', {
  headers: {
    'Authorization': 'Bearer '
  }
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
}

response = requests.get(
  'https://qstash.upstash.io/v2/queues/my-queue',
  headers=headers
)
```

```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/queues/my-queue", nil)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

```json 200 OK theme={"system"}
{
  "createdAt": 1623345678001,
  "updatedAt": 1623345678001,
  "name": "my-queue",
  "parallelism": 5,
  "lag": 100
}
```

# List Queues

Source: https://upstash.com/docs/qstash/api/queues/list

GET https://qstash.upstash.io/v2/queues

List all your queues

## Request

No parameters

## Response

The creation time of the queue. UnixMilli

The update time of the queue. UnixMilli

The name of the queue.

The number of parallel consumers consuming from [the queue](/qstash/features/queues).

The number of unprocessed messages that exist in [the queue](/qstash/features/queues).
```sh curl theme={"system"} curl https://qstash.upstash.io/v2/queues \ -H "Authorization: Bearer " ``` ```js Node theme={"system"} const response = await fetch("https://qstash.upstash.io/v2/queues", { headers: { Authorization: "Bearer ", }, }); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', } response = requests.get( 'https://qstash.upstash.io/v2/queues', headers=headers ) ``` ```go Go theme={"system"} req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/queues", nil) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` ```json 200 OK theme={"system"} [ { "createdAt": 1623345678001, "updatedAt": 1623345678001, "name": "my-queue", "parallelism" : 5, "lag" : 100 }, // ... ] ``` # Pause Queue Source: https://upstash.com/docs/qstash/api/queues/pause POST https://qstash.upstash.io/v2/queues/{queueName}/pause Pause an active queue Pausing a queue stops the delivery of enqueued messages. The queue will still accept new messages, but they will wait until the queue becomes active for delivery. If the queue is already paused, this action has no effect. Resuming or creating a queue may take up to a minute. Therefore, it is not recommended to pause or delete a queue during critical operations. ## Request The name of the queue to pause. ## Response This endpoint simply returns 200 OK if the queue is paused successfully. ```sh curl theme={"system"} curl -X POST https://qstash.upstash.io/v2/queues/queue_1234/pause \ -H "Authorization: Bearer " ``` ```js Node theme={"system"} import { Client } from "@upstash/qstash"; /** * Import a fetch polyfill only if you are using node prior to v18. * This is not necessary for nextjs, deno or cloudflare workers. 
 */
import "isomorphic-fetch";

const c = new Client({
  token: "",
});

c.queue({ queueName: "" }).pause()
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.queue.pause("")
```

```go Go theme={"system"}
package main

import (
  "github.com/upstash/qstash-go"
)

func main() {
  client := qstash.NewClient("")

  // error checking is omitted for brevity
  err := client.Queues().Pause("")
}
```

# Remove a Queue

Source: https://upstash.com/docs/qstash/api/queues/remove

DELETE https://qstash.upstash.io/v2/queues/{queueName}

Removes a queue

Resuming or creating a queue may take up to a minute. Therefore, it is not recommended to pause or delete a queue during critical operations.

## Request

The name of the queue to remove.

## Response

This endpoint returns 200 if the queue is removed successfully, or if it doesn't exist.

```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/queues/my-queue \
  -H "Authorization: Bearer "
```

```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/queues/my-queue', {
  method: "DELETE",
  headers: {
    'Authorization': 'Bearer '
  }
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
}

response = requests.delete(
  'https://qstash.upstash.io/v2/queues/my-queue',
  headers=headers
)
```

```go Go theme={"system"}
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/queues/my-queue", nil)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

# Resume Queue

Source: https://upstash.com/docs/qstash/api/queues/resume

POST https://qstash.upstash.io/v2/queues/{queueName}/resume

Resume a paused queue

Resuming a queue starts the delivery of enqueued messages from the earliest undelivered message. If the queue is already active, this action has no effect.

## Request

The name of the queue to resume.
## Response

This endpoint simply returns 200 OK if the queue is resumed successfully.

```sh curl theme={"system"}
curl -X POST https://qstash.upstash.io/v2/queues/queue_1234/resume \
  -H "Authorization: Bearer "
```

```js Node theme={"system"}
import { Client } from "@upstash/qstash";
/**
 * Import a fetch polyfill only if you are using node prior to v18.
 * This is not necessary for nextjs, deno or cloudflare workers.
 */
import "isomorphic-fetch";

const c = new Client({
  token: "",
});

c.queue({ queueName: "" }).resume()
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.queue.resume("")
```

```go Go theme={"system"}
package main

import (
  "github.com/upstash/qstash-go"
)

func main() {
  client := qstash.NewClient("")

  // error checking is omitted for brevity
  err := client.Queues().Resume("")
}
```

# Upsert a Queue

Source: https://upstash.com/docs/qstash/api/queues/upsert

POST https://qstash.upstash.io/v2/queues/

Updates or creates a queue

## Request

The name of the queue.

The number of parallel consumers consuming from [the queue](/qstash/features/queues).

For limiting parallelism, a simpler and less restrictive API is now available with publish. See the [Flow Control](/qstash/features/flowcontrol) page for details. Setting parallelism through queues will be deprecated at some point.

## Response

This endpoint returns

* 200 if the queue is added successfully.
* 412 if it fails because the allowed number of queues has been reached

```sh curl theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/queues/ \
  -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -d '{
    "queueName": "my-queue",
    "parallelism": 5
  }'
```

```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/queues/', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    "queueName": "my-queue",
    "parallelism": 5
  })
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
    'Content-Type': 'application/json',
}

json_data = {
    "queueName": "my-queue",
    "parallelism": 5,
}

response = requests.post(
  'https://qstash.upstash.io/v2/queues/',
  headers=headers,
  json=json_data
)
```

```go Go theme={"system"}
var data = strings.NewReader(`{
  "queueName": "my-queue",
  "parallelism": 5
}`)
req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/queues/", data)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

# Create Schedule

Source: https://upstash.com/docs/qstash/api/schedules/create

POST https://qstash.upstash.io/v2/schedules/{destination}

Create a schedule to send messages periodically

## Request

Destination can either be a topic name or id that you configured in the Upstash console or a valid url where the message gets sent to. If the destination is a URL, make sure the URL is prefixed with a valid protocol (`http://` or `https://`)

Cron allows you to send this message periodically on a schedule. Add a Cron expression and we will requeue this message automatically whenever the Cron expression triggers.
We offer an easy-to-use UI for creating Cron expressions in our [console](https://console.upstash.com/qstash), or you can check out [Crontab.guru](https://crontab.guru).

Note: it can take up to 60 seconds until the schedule is registered on an available QStash node.

Example: `*/5 * * * *`

Timezones are also supported. You can specify a timezone together with the cron expression as follows:

Example: `CRON_TZ=America/New_York 0 4 * * *`

Delay the message delivery. Delay applies to the delivery of the scheduled messages. For example, with the delay set to 10 minutes for a schedule that runs every day at 00:00, the scheduled message will be created at 00:00 and it will be delivered at 00:10.

Format for this header is a number followed by a duration abbreviation, like `10s`. Available durations are `s` (seconds), `m` (minutes), `h` (hours), `d` (days).

example: "50s" | "3m" | "10h" | "1d"

Assign a schedule id to the created schedule. This header allows you to set the schedule id yourself instead of QStash assigning a random id. If a schedule with the provided id exists, the settings of the existing schedule will be updated with the new settings.

## Response

The unique id of this schedule. You can use it to delete the schedule later.
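Putting the request options above together, a timezone-qualified cron expression can be combined with a caller-chosen schedule id. A minimal Python sketch of the headers (the token is a placeholder, and `Upstash-Schedule-Id` as the name of the custom-id header is an assumption, since the header name is not spelled out above):

```python
# Sketch: headers for a schedule that fires at 04:00 New York time daily,
# under a schedule id we choose ourselves. Placeholder token; the
# "Upstash-Schedule-Id" header name is an assumption.
headers = {
    "Authorization": "Bearer <QSTASH_TOKEN>",
    # Timezone-qualified cron expression, as described above:
    "Upstash-Cron": "CRON_TZ=America/New_York 0 4 * * *",
    # Re-using this id on a later request updates the existing schedule:
    "Upstash-Schedule-Id": "daily-report",
}
```

Choosing your own schedule id makes schedule creation idempotent: re-running deployment code with the same id updates the schedule instead of creating duplicates.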
```sh curl theme={"system"} curl -XPOST https://qstash.upstash.io/v2/schedules/https://www.example.com/endpoint \ -H "Authorization: Bearer " \ -H "Upstash-Cron: */5 * * * *" ``` ```js Node theme={"system"} const response = await fetch('https://qstash.upstash.io/v2/schedules/https://www.example.com/endpoint', { method: 'POST', headers: { 'Authorization': 'Bearer ', 'Upstash-Cron': '*/5 * * * *' } }); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', 'Upstash-Cron': '*/5 * * * *' } response = requests.post( 'https://qstash.upstash.io/v2/schedules/https://www.example.com/endpoint', headers=headers ) ``` ```go Go theme={"system"} req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/schedules/https://www.example.com/endpoint", nil) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") req.Header.Set("Upstash-Cron", "*/5 * * * *") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` ```json 200 OK theme={"system"} { "scheduleId": "scd_1234" } ``` # Get Schedule Source: https://upstash.com/docs/qstash/api/schedules/get GET https://qstash.upstash.io/v2/schedules/{scheduleId} Retrieves a schedule by id. ## Request The id of the schedule to retrieve. ## Response The id of the schedule. The cron expression used to schedule the message. The creation time of the object. UnixMilli Url or URL Group name The HTTP method to use for the message. The headers of the message. The body of the message. The base64 encoded body of the message. The number of retries that should be attempted in case of delivery failure. The delay in seconds before the message is delivered. The url where we send a callback to after the message is delivered The url where we send a callback to after the message delivery fails IP address where this schedule was created from. Whether the schedule is paused or not. The flow control key for rate limiting. 
The maximum number of parallel executions.

The rate limit for this schedule.

The time interval during which the specified rate of requests can be activated using the same flow control key. In seconds.

The retry delay expression for this schedule, if retry\_delay was set when creating the schedule.

The label assigned to the schedule for filtering purposes.

The timestamp of the last scheduled execution.

The timestamp of the next scheduled execution.

The states of the last scheduled messages. Maps message id to state (IN\_PROGRESS, SUCCESS, FAIL).

The IP address of the caller who created the schedule.

```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/schedules/scd_1234 \
  -H "Authorization: Bearer "
```

```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/schedules/scd_1234', {
  headers: {
    'Authorization': 'Bearer '
  }
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
}

response = requests.get(
  'https://qstash.upstash.io/v2/schedules/scd_1234',
  headers=headers
)
```

```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/schedules/scd_1234", nil)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

```json 200 OK theme={"system"}
{
  "createdAt": 1754565618803,
  "scheduleId": "schedule-id",
  "cron": "* * * * *",
  "destination": "https://your-website/api",
  "method": "GET",
  "header": {
    "Content-Type": ["application/json"]
  },
  "retries": 3,
  "delay": 25,
  "lastScheduleTime": 1755095280020,
  "nextScheduleTime": 1759909800000,
  "lastScheduleStates": {
    "msg_7YoJxFpwk": "SUCCESS"
  },
  "callerIP": "127.43.12.54",
  "isPaused": true,
  "parallelism": 0
}
```

# List Schedules

Source: https://upstash.com/docs/qstash/api/schedules/list

GET https://qstash.upstash.io/v2/schedules

List all your schedules

## Response

The id of the schedule.
The cron expression used to schedule the message. The creation time of the object. UnixMilli Url or URL Group (topic) name The HTTP method to use for the message. The headers of the message. The body of the message. The number of retries that should be attempted in case of delivery failure. The delay in seconds before the message is delivered. The url where we send a callback to after the message is delivered The url where we send a callback to after the message delivery fails IP address where this schedule was created from. Whether the schedule is paused or not. The flow control key for rate limiting. The maximum number of parallel executions. The rate limit for this schedule. The time interval during which the specified rate of requests can be activated using the same flow control key. In seconds. The retry delay expression for this schedule, if retry\_delay was set when creating the schedule. The label assigned to the schedule for filtering purposes. The timestamp of the last scheduled execution. The timestamp of the next scheduled execution. The states of the last scheduled messages. Maps message id to state (IN\_PROGRESS, SUCCESS, FAIL). The IP address of the caller who created the schedule. 
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/schedules \
  -H "Authorization: Bearer "
```

```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/schedules', {
  headers: {
    'Authorization': 'Bearer '
  }
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
}

response = requests.get(
  'https://qstash.upstash.io/v2/schedules',
  headers=headers
)
```

```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/schedules", nil)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

```json 200 OK theme={"system"}
[
  {
    "createdAt": 1754565618803,
    "scheduleId": "schedule-id",
    "cron": "* * * * *",
    "destination": "https://your-website/api",
    "method": "GET",
    "header": {
      "Content-Type": ["application/json"]
    },
    "retries": 3,
    "delay": 25,
    "lastScheduleTime": 1755095280020,
    "nextScheduleTime": 1759909800000,
    "lastScheduleStates": {
      "msg_7YoJxFpwk": "SUCCESS"
    },
    "callerIP": "127.43.12.54",
    "isPaused": true,
    "parallelism": 0
  }
]
```

# Pause Schedule

Source: https://upstash.com/docs/qstash/api/schedules/pause

POST https://qstash.upstash.io/v2/schedules/{scheduleId}/pause

Pause an active schedule

Pausing a schedule will not change the next delivery time, but the delivery will be ignored. If the schedule is already paused, this action has no effect.

## Request

The id of the schedule to pause.

## Response

This endpoint simply returns 200 OK if the schedule is paused successfully.

```sh curl theme={"system"}
curl -X POST https://qstash.upstash.io/v2/schedules/scd_1234/pause \
  -H "Authorization: Bearer "
```

```js Node theme={"system"}
import { Client } from "@upstash/qstash";
/**
 * Import a fetch polyfill only if you are using node prior to v18.
 * This is not necessary for nextjs, deno or cloudflare workers.
*/ import "isomorphic-fetch"; const c = new Client({ token: "", }); c.schedules.pause({ schedule: "" }); ``` ```python Python theme={"system"} from qstash import QStash client = QStash("") client.schedule.pause("") ``` ```go Go theme={"system"} package main import "github.com/upstash/qstash-go" func main() { client := qstash.NewClient("") // error checking is omitted for brevity err := client.Schedules().Pause("") } ``` # Remove Schedule Source: https://upstash.com/docs/qstash/api/schedules/remove DELETE https://qstash.upstash.io/v2/schedules/{scheduleId} Remove a schedule ## Request The schedule id to remove ## Response This endpoint simply returns 200 OK if the schedule is removed successfully. ```sh curl theme={"system"} curl -XDELETE https://qstash.upstash.io/v2/schedules/scd_123 \ -H "Authorization: Bearer " ``` ```javascript Node theme={"system"} const response = await fetch('https://qstash.upstash.io/v2/schedules/scd_123', { method: 'DELETE', headers: { 'Authorization': 'Bearer ' } }); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', } response = requests.delete( 'https://qstash.upstash.io/v2/schedules/scd_123', headers=headers ) ``` ```go Go theme={"system"} req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/schedules/scd_123", nil) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` # Resume Schedule Source: https://upstash.com/docs/qstash/api/schedules/resume POST https://qstash.upstash.io/v2/schedules/{scheduleId}/resume Resume a paused schedule Resuming a schedule marks the schedule as active. This means the upcoming messages will be delivered and will not be ignored. If the schedule is already active, this action has no effect. ## Request The id of the schedule to resume. ## Response This endpoint simply returns 200 OK if the schedule is resumed successfully. 
```sh curl theme={"system"} curl -X POST https://qstash.upstash.io/v2/schedules/scd_1234/resume \ -H "Authorization: Bearer " ``` ```js Node theme={"system"} import { Client } from "@upstash/qstash"; /** * Import a fetch polyfill only if you are using node prior to v18. * This is not necessary for nextjs, deno or cloudflare workers. */ import "isomorphic-fetch"; const c = new Client({ token: "", }); c.schedules.resume({ schedule: "" }); ``` ```python Python theme={"system"} from qstash import QStash client = QStash("") client.schedule.resume("") ``` ```go Go theme={"system"} package main import "github.com/upstash/qstash-go" func main() { client := qstash.NewClient("") // error checking is omitted for brevity err := client.Schedules().Resume("") } ``` # Get Signing Keys Source: https://upstash.com/docs/qstash/api/signingKeys/get GET https://qstash.upstash.io/v2/keys Retrieve your signing keys ## Response Your current signing key. The next signing key. ```sh curl theme={"system"} curl https://qstash.upstash.io/v2/keys \ -H "Authorization: Bearer " ``` ```javascript Node theme={"system"} const response = await fetch('https://qstash.upstash.io/v2/keys', { headers: { 'Authorization': 'Bearer ' } }); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', } response = requests.get( 'https://qstash.upstash.io/v2/keys', headers=headers ) ``` ```go Go theme={"system"} req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/keys", nil) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` ```json 200 OK theme={"system"} { "current": "sig_123", "next": "sig_456" } ``` # Rotate Signing Keys Source: https://upstash.com/docs/qstash/api/signingKeys/rotate POST https://qstash.upstash.io/v2/keys/rotate Rotate your signing keys ## Response Your current signing key. The next signing key. 
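Whether fetched from `/v2/keys` or obtained after a rotation, these signing keys exist to verify the `Upstash-Signature` JWT that QStash attaches to the requests it delivers to your endpoint. The sketch below (standard library only) checks just the HS256 signature, trying the current key and falling back to the next one, which keeps verification working mid-rotation. A real handler should also validate the token's claims (issuer, expiry, body hash), which are not shown here; the `sign` helper exists only to produce test input.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWTs use unpadded base64url.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, key: str) -> str:
    """Build an HS256 JWT. Used here only to produce test input."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(key.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, current_key: str, next_key: str) -> bool:
    """Check the token's HS256 signature against the current signing key,
    falling back to the next key (useful while a rotation is in flight)."""
    try:
        header_b64, body_b64, sig_b64 = token.split(".")
    except ValueError:
        return False  # not a three-part JWT
    signing_input = f"{header_b64}.{body_b64}".encode()
    for key in (current_key, next_key):
        expected = _b64url(
            hmac.new(key.encode(), signing_input, hashlib.sha256).digest()
        )
        if hmac.compare_digest(expected, sig_b64):
            return True
    return False

token = sign({"iss": "Upstash"}, "sig_456")  # signed with the *next* key
assert verify(token, "sig_123", "sig_456")   # accepted via the fallback
assert not verify(token, "sig_123", "sig_999")
```

Trying both keys is what makes rotation safe: immediately after calling `/v2/keys/rotate`, in-flight deliveries may still be signed with the previous key.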
```sh curl theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/keys/rotate \
  -H "Authorization: Bearer "
```

```javascript Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/keys/rotate', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer '
  }
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
}

response = requests.post(
  'https://qstash.upstash.io/v2/keys/rotate',
  headers=headers
)
```

```go Go theme={"system"}
req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/keys/rotate", nil)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

```json 200 OK theme={"system"}
{
  "current": "sig_123",
  "next": "sig_456"
}
```

# Upsert URL Group and Endpoint

Source: https://upstash.com/docs/qstash/api/url-groups/add-endpoint

POST https://qstash.upstash.io/v2/topics/{urlGroupName}/endpoints

Add an endpoint to a URL Group

If the URL Group does not exist, it will be created. If the endpoint does not exist, it will be created.

## Request

The name of your URL Group (topic). If it doesn't exist yet, it will be created.

The endpoints to add to the URL Group.

The name of the endpoint

The URL of the endpoint

## Response

This endpoint returns 200 if the endpoints are added successfully.
```sh curl theme={"system"} curl -XPOST https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints \ -H "Authorization: Bearer " \ -H "Content-Type: application/json" \ -d '{ "endpoints": [ { "name": "endpoint1", "url": "https://example.com" }, { "name": "endpoint2", "url": "https://somewhere-else.com" } ] }' ``` ```js Node theme={"system"} const response = await fetch('https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints', { method: 'POST', headers: { 'Authorization': 'Bearer ', 'Content-Type': 'application/json' }, body: JSON.stringify({ 'endpoints': [ { 'name': 'endpoint1', 'url': 'https://example.com' }, { 'name': 'endpoint2', 'url': 'https://somewhere-else.com' } ] }) }); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', 'Content-Type': 'application/json', } json_data = { 'endpoints': [ { 'name': 'endpoint1', 'url': 'https://example.com', }, { 'name': 'endpoint2', 'url': 'https://somewhere-else.com', }, ], } response = requests.post( 'https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints', headers=headers, json=json_data ) ``` ```go Go theme={"system"} var data = strings.NewReader(`{ "endpoints": [ { "name": "endpoint1", "url": "https://example.com" }, { "name": "endpoint2", "url": "https://somewhere-else.com" } ] }`) req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints", data) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") req.Header.Set("Content-Type", "application/json") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` # Get a URL Group Source: https://upstash.com/docs/qstash/api/url-groups/get GET https://qstash.upstash.io/v2/topics/{urlGroupName} Retrieves a URL Group ## Request The name of the URL Group (topic) to retrieve. ## Response The creation time of the URL Group. UnixMilli The update time of the URL Group. UnixMilli The name of the URL Group. 
The name of the endpoint The URL of the endpoint ```sh curl theme={"system"} curl https://qstash.upstash.io/v2/topics/my-url-group \ -H "Authorization: Bearer " ``` ```js Node theme={"system"} const response = await fetch('https://qstash.upstash.io/v2/topics/my-url-group', { headers: { 'Authorization': 'Bearer ' } }); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', } response = requests.get( 'https://qstash.upstash.io/v2/topics/my-url-group', headers=headers ) ``` ```go Go theme={"system"} req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/topics/my-url-group", nil) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` ```json 200 OK theme={"system"} { "createdAt": 1623345678001, "updatedAt": 1623345678001, "name": "my-url-group", "endpoints": [ { "name": "my-endpoint", "url": "https://my-endpoint.com" } ] } ``` # List URL Groups Source: https://upstash.com/docs/qstash/api/url-groups/list GET https://qstash.upstash.io/v2/topics List all your URL Groups ## Request No parameters ## Response The creation time of the URL Group. UnixMilli The update time of the URL Group. UnixMilli The name of the URL Group. The name of the endpoint. 
The URL of the endpoint ```sh curl theme={"system"} curl https://qstash.upstash.io/v2/topics \ -H "Authorization: Bearer " ``` ```js Node theme={"system"} const response = await fetch("https://qstash.upstash.io/v2/topics", { headers: { Authorization: "Bearer ", }, }); ``` ```python Python theme={"system"} import requests headers = { 'Authorization': 'Bearer ', } response = requests.get( 'https://qstash.upstash.io/v2/topics', headers=headers ) ``` ```go Go theme={"system"} req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/topics", nil) if err != nil { log.Fatal(err) } req.Header.Set("Authorization", "Bearer ") resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatal(err) } defer resp.Body.Close() ``` ```json 200 OK theme={"system"} [ { "createdAt": 1623345678001, "updatedAt": 1623345678001, "name": "my-url-group", "endpoints": [ { "name": "my-endpoint", "url": "https://my-endpoint.com" } ] }, // ... ] ``` # Remove URL Group Source: https://upstash.com/docs/qstash/api/url-groups/remove DELETE https://qstash.upstash.io/v2/topics/{urlGroupName} Remove a URL group and all its endpoints The URL Group and all its endpoints are removed. In flight messages in the URL Group are not removed but you will not be able to send messages to the topic anymore. ## Request The name of the URL Group to remove. ## Response This endpoint returns 200 if the URL Group is removed successfully. 
```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/topics/my-url-group \
  -H "Authorization: Bearer "
```

```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/topics/my-url-group', {
  method: 'DELETE',
  headers: {
    'Authorization': 'Bearer '
  }
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
}

response = requests.delete(
  'https://qstash.upstash.io/v2/topics/my-url-group',
  headers=headers
)
```

```go Go theme={"system"}
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/topics/my-url-group", nil)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

# Remove Endpoints

Source: https://upstash.com/docs/qstash/api/url-groups/remove-endpoint

DELETE https://qstash.upstash.io/v2/topics/{urlGroupName}/endpoints

Remove one or more endpoints

Remove one or multiple endpoints from a URL Group. If all endpoints have been removed, the URL Group will be deleted.

## Request

The name of your URL Group. If it doesn't exist, we return an error.

The endpoints to be removed from the URL Group. Either `name` or `url` must be provided

The name of the endpoint

The URL of the endpoint

## Response

This endpoint simply returns 200 OK if the endpoints have been removed successfully.
```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints \
  -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -d '{
    "endpoints": [
      {
        "name": "endpoint1"
      },
      {
        "url": "https://somewhere-else.com"
      }
    ]
  }'
```

```js Node theme={"system"}
const response = await fetch("https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints", {
  method: "DELETE",
  headers: {
    Authorization: "Bearer ",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    endpoints: [
      {
        name: "endpoint1",
      },
      {
        url: "https://somewhere-else.com",
      },
    ],
  }),
});
```

```python Python theme={"system"}
import requests

headers = {
    'Authorization': 'Bearer ',
    'Content-Type': 'application/json',
}

data = {
    "endpoints": [
        {
            "name": "endpoint1"
        },
        {
            "url": "https://somewhere-else.com"
        }
    ]
}

response = requests.delete(
    'https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints',
    headers=headers,
    json=data
)
```

```go Go theme={"system"}
var data = strings.NewReader(`{
  "endpoints": [
    {
      "name": "endpoint1"
    },
    {
      "url": "https://somewhere-else.com"
    }
  ]
}`)
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints", data)
if err != nil {
  log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
  log.Fatal(err)
}
defer resp.Body.Close()
```

# Background Jobs
Source: https://upstash.com/docs/qstash/features/background-jobs

## When do you need background jobs

Background jobs are essential for executing tasks that are too time-consuming to run in the main execution thread without affecting the user experience. These tasks might include data processing, sending batch emails, performing scheduled maintenance, or any other operations that are not immediately required to respond to user requests.
Utilizing background jobs allows your application to remain responsive and scalable, handling more requests simultaneously by offloading heavy lifting to background processes.

In serverless frameworks, your hosting provider will likely have a limit for how long each task can last. Try searching for the maximum execution time for your hosting provider to find out more.

## How to use QStash for background jobs

QStash provides a simple and efficient way to run background jobs. You can think of it as a two-step process:

1. **Public API** Create a public API endpoint within your application. The endpoint should contain the logic for the background job. QStash requires a public endpoint to trigger background jobs, which means it cannot directly access localhost APIs. To get around this, you have two options:
   * Run the QStash [development server](/qstash/howto/local-development) locally
   * Set up a [local tunnel](/qstash/howto/local-tunnel) for your API

2. **QStash Request** Invoke QStash to start/schedule the execution of the API endpoint.

Here's what this looks like in a simple Next.js application:

```tsx app/page.tsx theme={"system"}
"use client"

export default function Home() {
  async function handleClick() {
    // Send a request to our server to start the background job.
    // For proper error handling, refer to the quick start.
    // Note: This can also be a server action instead of a route handler
    await fetch("/api/start-email-job", {
      method: "POST",
      body: JSON.stringify({
        users: ["a@gmail.com", "b@gmail.com", "c@gmail.com"]
      }),
    })
  }

  return (
    <button onClick={handleClick}>Start background job</button>
  );
}
```

```typescript app/api/start-email-job/route.ts theme={"system"}
import { Client } from "@upstash/qstash";

const qstashClient = new Client({
  token: "YOUR_TOKEN",
});

export async function POST(request: Request) {
  const body = await request.json();
  const users: string[] = body.users;

  // If you know the public URL of the email API, you can use it directly
  const rootDomain = request.url.split('/').slice(0, 3).join('/');
  const emailAPIURL = `${rootDomain}/api/send-email`; // i.e. https://yourapp.com/api/send-email

  // Tell QStash to start the background job.
  // For proper error handling, refer to the quick start.
  await qstashClient.publishJSON({
    url: emailAPIURL,
    body: {
      users
    }
  });

  return new Response("Job started", { status: 200 });
}
```

```typescript app/api/send-email/route.ts theme={"system"}
// This is a public API endpoint that will be invoked by QStash.
// It contains the logic for the background job and may take a long time to execute.
import { sendEmail } from "your-email-library";

export async function POST(request: Request) {
  const body = await request.json();
  const users: string[] = body.users;

  // Send emails to the users
  for (const user of users) {
    await sendEmail(user);
  }

  return new Response("Emails sent", { status: 200 });
}
```
To better understand the application, let's break it down:

1. **Client**: The client application contains a button that, when clicked, sends a request to the server to start the background job.
2. **Next.js server**: The first endpoint, `/api/start-email-job`, is invoked by the client to start the background job.
3. **QStash**: The QStash client is used to invoke the `/api/send-email` endpoint, which contains the logic for the background job.

Here is a visual representation of the process:

Background job diagram

To view a more detailed Next.js quick start guide for setting up QStash, refer to the [quick start](/qstash/quickstarts/vercel-nextjs) guide.

It's also possible to schedule a background job to run at a later time using [schedules](/qstash/features/schedules).

If you'd like to invoke another endpoint when the background job is complete, you can use [callbacks](/qstash/features/callbacks).

# Batching
Source: https://upstash.com/docs/qstash/features/batch

[Publishing](/qstash/howto/publishing) is great for sending one message at a time, but sometimes you want to send a batch of messages at once. This can be useful when you want to send messages to one or many destinations. QStash provides the `batch` endpoint to help you with this.

If the format of the messages is valid, the response will be an array of responses for each message in the batch. When batching URL Groups, the response will be an array of responses for each destination in the URL Group. If one message fails to be sent, that message will have an error response, but the other messages will still be sent.

You can publish to a destination, a URL Group, or a queue in the same batch request.

## Batching messages with destinations

You can also send messages to the same destination!
```shell cURL theme={"system"} curl -XPOST https://qstash.upstash.io/v2/batch \ -H 'Authorization: Bearer XXX' \ -H "Content-type: application/json" \ -d ' [ { "destination": "https://example.com/destination1" }, { "destination": "https://example.com/destination2" } ]' ``` ```typescript TypeScript theme={"system"} import { Client } from "@upstash/qstash"; // Each message is the same as the one you would send with the publish endpoint const client = new Client({ token: "" }); const res = await client.batchJSON([ { url: "https://example.com/destination1", }, { url: "https://example.com/destination2", }, ]); ``` ```python Python theme={"system"} from qstash import QStash client = QStash("") client.message.batch_json( [ {"url": "https://example.com/destination1"}, {"url": "https://example.com/destination2"}, ] ) ``` ## Batching messages with URL Groups If you have a [URL Group](/qstash/howto/url-group-endpoint), you can batch send with the URL Group as well. ```shell cURL theme={"system"} curl -XPOST https://qstash.upstash.io/v2/batch \ -H 'Authorization: Bearer XXX' \ -H "Content-type: application/json" \ -d ' [ { "destination": "myUrlGroup" }, { "destination": "https://example.com/destination2" } ]' ``` ```typescript TypeScript theme={"system"} const client = new Client({ token: "" }); // Each message is the same as the one you would send with the publish endpoint const res = await client.batchJSON([ { urlGroup: "myUrlGroup", }, { url: "https://example.com/destination2", }, ]); ``` ```python Python theme={"system"} from qstash import QStash client = QStash("") client.message.batch_json( [ {"url_group": "my-url-group"}, {"url": "https://example.com/destination2"}, ] ) ``` ## Batching messages with queue If you have a [queue](/qstash/features/queues), you can batch send with the queue. It is the same as publishing to a destination, but you need to set the queue name. 
```shell cURL theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/batch \
   -H 'Authorization: Bearer XXX' \
   -H "Content-type: application/json" \
   -d '
    [
      {
        "queue": "my-queue",
        "destination": "https://example.com/destination1"
      },
      {
        "queue": "my-second-queue",
        "destination": "https://example.com/destination2"
      }
    ]'
```

```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
const res = await client.batchJSON([
  {
    queueName: "my-queue",
    url: "https://example.com/destination1",
  },
  {
    queueName: "my-second-queue",
    url: "https://example.com/destination2",
  },
]);
```

```python Python theme={"system"}
from upstash_qstash import QStash
from upstash_qstash.message import BatchRequest

qstash = QStash("")
messages = [
    BatchRequest(
        queue="my-queue",
        url="https://httpstat.us/200",
        body="hi 1",
        retries=0
    ),
    BatchRequest(
        queue="my-second-queue",
        url="https://httpstat.us/200",
        body="hi 2",
        retries=0
    ),
]
qstash.message.batch(messages)
```

## Batching messages with headers and body

You can provide custom headers and a body for each message in the batch.
```shell cURL theme={"system"} curl -XPOST https://qstash.upstash.io/v2/batch -H "Authorization: Bearer XXX" \ -H "Content-Type: application/json" \ -d ' [ { "destination": "myUrlGroup", "headers":{ "Upstash-Delay":"5s", "Upstash-Forward-Hello":"123456" }, "body": "Hello World" }, { "destination": "https://example.com/destination1", "headers":{ "Upstash-Delay":"7s", "Upstash-Forward-Hello":"789" } }, { "destination": "https://example.com/destination2", "headers":{ "Upstash-Delay":"9s", "Upstash-Forward-Hello":"again" } } ]' ``` ```typescript TypeScript theme={"system"} const client = new Client({ token: "" }); // Each message is the same as the one you would send with the publish endpoint const msgs = [ { urlGroup: "myUrlGroup", delay: 5, body: "Hello World", headers: { hello: "123456", }, }, { url: "https://example.com/destination1", delay: 7, headers: { hello: "789", }, }, { url: "https://example.com/destination2", delay: 9, headers: { hello: "again", }, body: { Some: "Data", }, }, ]; const res = await client.batchJSON(msgs); ``` ```python Python theme={"system"} from qstash import QStash client = QStash("") client.message.batch_json( [ { "url_group": "my-url-group", "delay": "5s", "body": {"hello": "world"}, "headers": {"random": "header"}, }, { "url": "https://example.com/destination1", "delay": "1m", }, { "url": "https://example.com/destination2", "body": {"hello": "again"}, }, ] ) ``` #### The response for this will look like ```json theme={"system"} [ [ { "messageId": "msg_...", "url": "https://myUrlGroup-endpoint1.com" }, { "messageId": "msg_...", "url": "https://myUrlGroup-endpoint2.com" } ], { "messageId": "msg_..." }, { "messageId": "msg_..." } ] ``` # Callbacks Source: https://upstash.com/docs/qstash/features/callbacks All serverless function providers have a maximum execution time for each function. Usually you can extend this time by paying more, but it's still limited. QStash provides a way to go around this problem by using callbacks. 
## What is a callback?

A callback allows you to call a long-running function without having to wait for its response. Instead of waiting for the request to finish, you can add a callback URL to your published message, and when the request finishes, we will call your callback URL with the response.

1. You publish a message to QStash using the `/v2/publish` endpoint
2. QStash will enqueue the message and deliver it to the destination
3. QStash waits for the response from the destination
4. When the response is ready, QStash calls your callback URL with the response

Callbacks publish a new message with the response to the callback URL. Messages created by callbacks are charged as any other message.

## How do I use Callbacks?

You can add a callback URL in the `Upstash-Callback` header when publishing a message. The value must be a valid URL.

```bash cURL theme={"system"}
curl -X POST \
  https://qstash.upstash.io/v2/publish/https://my-api... \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer ' \
  -H 'Upstash-Callback: ' \
  -d '{ "hello": "world" }'
```

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  callback: "https://my-callback...",
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    callback="https://my-callback...",
)
```

The callback body sent to you will be a JSON object with the following fields:

```json theme={"system"}
{
  "status": 200,
  "header": { "key": ["value"] },          // Response header
  "body": "YmFzZTY0IGVuY29kZWQgcm9keQ==", // base64 encoded response body
  "retried": 2,                            // How many times we retried to deliver the original message
  "maxRetries": 3,                         // Number of retries before the message is considered to have failed
  "sourceMessageId": "msg_xxx",            // The ID of the message that triggered the callback
  "topicName": "myTopic",                  // The name of the URL Group (topic) if the request was part of a URL Group
  "endpointName": "myEndpoint",            // The endpoint name if the endpoint is given a name within a topic
  "url": "http://myurl.com",               // The destination URL of the message that triggered the callback
  "method": "GET",                         // The HTTP method of the message that triggered the callback
  "sourceHeader": { "key": "value" },      // The HTTP headers of the message that triggered the callback
  "sourceBody": "YmFzZTY0kZWQgcm9keQ==",   // The base64 encoded body of the message that triggered the callback
  "notBefore": "1701198458025",            // The unix timestamp (in milliseconds) at which the message that triggered the callback is/will be delivered
  "createdAt": "1701198447054",            // The unix timestamp (in milliseconds) at which the message that triggered the callback was created
  "scheduleId": "scd_xxx",                 // The scheduleId of the message if the message is triggered by a schedule
  "callerIP": "178.247.74.179"             // The IP address from which the message that triggered the callback was published
}
```

In Next.js you could use the following code to handle the callback:

```js theme={"system"}
// pages/api/callback.js

import { verifySignature } from "@upstash/qstash/nextjs";

function handler(req, res) {
  // responses from qstash are base64-encoded
  const decoded = atob(req.body.body);
  console.log(decoded);

  return res.status(200).end();
}

export default verifySignature(handler);

export const config = {
  api: {
    bodyParser: false,
  },
};
```

We may truncate the response body if it exceeds your plan limits. You can check your `Max Message Size` in the [console](https://console.upstash.com/qstash?tab=details).

Make sure you verify the authenticity of the callback request made to your API by [verifying the signature](/qstash/features/security/#request-signing-optional).

## What is a Failure-Callback?
Failure callbacks are similar to regular callbacks, but they are called only when all retries are exhausted and the message still could not be delivered to the given endpoint.

This is designed to be a serverless alternative to [List messages to DLQ](/qstash/api/dlq/listMessages).

You can add a failure callback URL in the `Upstash-Failure-Callback` header when publishing a message. The value must be a valid URL.

```bash cURL theme={"system"}
curl -X POST \
  https://qstash.upstash.io/v2/publish/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer ' \
  -H 'Upstash-Failure-Callback: ' \
  -d '{ "hello": "world" }'
```

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  failureCallback: "https://my-callback...",
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    failure_callback="https://my-callback...",
)
```

The callback body sent to you will be a JSON object with the following fields:

```json theme={"system"}
{
  "status": 400,
  "header": { "key": ["value"] },          // Response header
  "body": "YmFzZTY0IGVuY29kZWQgcm9keQ==", // base64 encoded response body
  "retried": 3,                            // How many times we retried to deliver the original message
  "maxRetries": 3,                         // Number of retries before the message is considered to have failed
  "dlqId": "1725323658779-0",              // Dead Letter Queue ID, which can be used to retrieve/remove the related message from the DLQ
  "sourceMessageId": "msg_xxx",            // The ID of the message that triggered the callback
  "topicName": "myTopic",                  // The name of the URL Group (topic) if the request was part of a topic
  "endpointName": "myEndpoint",            // The endpoint name if the endpoint is given a name within a topic
  "url": "http://myurl.com",               // The destination URL of the message that triggered the callback
  "method": "GET",                         // The HTTP method of the message that triggered the callback
  "sourceHeader": { "key": "value" },      // The HTTP headers of the message that triggered the callback
  "sourceBody": "YmFzZTY0kZWQgcm9keQ==",   // The base64 encoded body of the message that triggered the callback
  "notBefore": "1701198458025",            // The unix timestamp (in milliseconds) at which the message that triggered the callback is/will be delivered
  "createdAt": "1701198447054",            // The unix timestamp (in milliseconds) at which the message that triggered the callback was created
  "scheduleId": "scd_xxx",                 // The scheduleId of the message if the message is triggered by a schedule
  "callerIP": "178.247.74.179"             // The IP address from which the message that triggered the callback was published
}
```

You can also use a callback and a failure callback together!

## Configuring Callbacks

Publishes/enqueues for callbacks can also be configured with the same HTTP headers that are used to configure direct publishes/enqueues.
You can refer to the headers used to configure `publishes` [here](/qstash/api/publish) and `enqueues` [here](/qstash/api/enqueue).

Instead of the `Upstash` prefix for headers, the `Upstash-Callback`/`Upstash-Failure-Callback` prefix can be used to configure callbacks as follows:

```
Upstash-Callback-Timeout
Upstash-Callback-Retries
Upstash-Callback-Delay
Upstash-Callback-Method

Upstash-Failure-Callback-Timeout
Upstash-Failure-Callback-Retries
Upstash-Failure-Callback-Delay
Upstash-Failure-Callback-Method
```

You can also forward headers to your callback endpoints as follows:

```
Upstash-Callback-Forward-MyCustomHeader
Upstash-Failure-Callback-Forward-MyCustomHeader
```

# Deduplication
Source: https://upstash.com/docs/qstash/features/deduplication

Messages can be deduplicated to prevent duplicate messages from being sent. When a duplicate message is detected, it is accepted by QStash but not enqueued. This can be useful when the connection between your service and QStash fails and you never receive the acknowledgement. You can simply retry publishing and be sure that the message will be enqueued only once.

In case a message is a duplicate, we will accept the request and return the messageID of the existing message. The only difference will be the response status code: we'll send an HTTP `202 Accepted` code for a duplicate message.

## Deduplication ID

To deduplicate a message, you can send the `Upstash-Deduplication-Id` header when publishing the message.
```shell cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Deduplication-Id: abcdef" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/publish/https://my-api...'
```

```typescript TypeScript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  deduplicationId: "abcdef",
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    deduplication_id="abcdef",
)
```

## Content Based Deduplication

If you want to deduplicate messages automatically, you can set the `Upstash-Content-Based-Deduplication` header to `true`.

```shell cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Content-Based-Deduplication: true" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/publish/...'
```

```typescript TypeScript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  contentBasedDeduplication: true,
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    content_based_deduplication=True,
)
```

Content-based deduplication creates a unique deduplication ID for the message based on the following fields:

* **Destination**: The URL Group or endpoint you are publishing the message to.
* **Body**: The body of the message.
* **Header**: This includes the `Content-Type` header and all headers that you forwarded with the `Upstash-Forward-` prefix. See the [custom HTTP headers section](/qstash/howto/publishing#sending-custom-http-headers).
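Conceptually, this means the deduplication ID is a pure function of those three fields: two publishes that agree on destination, body, and forwarded headers collapse into one message, while any difference produces a new ID. The sketch below only illustrates that idea — the function name and canonicalization are made up, and QStash's actual ID derivation is internal:

```python
import hashlib
import json

def content_dedup_id(destination: str, body: str, forward_headers: dict) -> str:
    # Illustrative only: derive a deterministic ID from the same fields
    # QStash uses for content-based deduplication. QStash's real scheme
    # is internal; this just shows the "same content -> same ID" property.
    canonical = json.dumps(
        {
            "destination": destination,
            "body": body,
            "headers": {k.lower(): v for k, v in forward_headers.items()},
        },
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

# Identical content -> identical ID (deduplicated); any change -> new ID.
a = content_dedup_id("https://my-api...", '{"hello": "world"}', {"Upstash-Forward-X": "1"})
b = content_dedup_id("https://my-api...", '{"hello": "world"}', {"Upstash-Forward-X": "1"})
c = content_dedup_id("https://my-api...", '{"hello": "mars"}', {"Upstash-Forward-X": "1"})
```

Here `a == b` (the second publish would be accepted but not enqueued), while `c` differs because the body changed.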
The deduplication window is 10 minutes. After that, messages with the same ID or content can be sent again.

# Delay
Source: https://upstash.com/docs/qstash/features/delay

When publishing a message, you can delay it for a certain amount of time before it will be delivered to your API. See the [pricing table](https://upstash.com/pricing/qstash) for more information:

* Free: the maximum allowed delay is **7 days**.
* Pay-as-you-go: the maximum allowed delay is **1 year**.
* Fixed pricing: the maximum allowed delay is **custom** (you may delay as much as needed).

## Relative Delay

Delay a message by a certain amount of time relative to the time the message was published. The duration is written as a number followed by a unit suffix. Here are some examples:

* `10s` = 10 seconds
* `1m` = 1 minute
* `30m` = half an hour
* `2h` = 2 hours
* `7d` = 7 days

You can send this duration inside the `Upstash-Delay` header.

```shell cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Delay: 1m" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/publish/https://my-api...'
```

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  delay: 60,
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    headers={
        "test-header": "test-value",
    },
    delay="60s",
)
```

`Upstash-Delay` will get overridden by the `Upstash-Not-Before` header when both are used together.

## Absolute Delay

Delay a message until a certain time in the future. The format is a unix timestamp in seconds, based on the UTC timezone.

You can send the timestamp inside the `Upstash-Not-Before` header.
```shell cURL theme={"system"} curl -XPOST \ -H 'Authorization: Bearer XXX' \ -H "Content-type: application/json" \ -H "Upstash-Not-Before: 1657104947" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/publish/https://my-api...' ``` ```typescript Typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const res = await client.publishJSON({ url: "https://my-api...", body: { hello: "world" }, notBefore: 1657104947, }); ``` ```python Python theme={"system"} from qstash import QStash client = QStash("") client.message.publish_json( url="https://my-api...", body={ "hello": "world", }, headers={ "test-header": "test-value", }, not_before=1657104947, ) ``` `Upstash-Not-Before` will override the `Upstash-Delay` header when both are used together. ## Delays in Schedules Adding a delay in schedules is only possible via `Upstash-Delay`. The delay will affect the messages that will be created by the schedule and not the schedule itself. For example when you create a new schedule with a delay of `30s`, the messages will be created when the schedule triggers but only delivered after 30 seconds. # Dead Letter Queues Source: https://upstash.com/docs/qstash/features/dlq At times, your API may fail to process a request. This could be due to a bug in your code, a temporary issue with a third-party service, or even network issues. QStash automatically retries messages that fail due to a temporary issue but eventually stops and moves the message to a dead letter queue to be handled manually. Read more about retries [here](/qstash/features/retry). ## How to Use the Dead Letter Queue You can manually republish messages from the dead letter queue in the console. 1. **Retry** - Republish the message and remove it from the dead letter queue. Republished messages are just like any other message and will be retried automatically if they fail. 2. **Delete** - Delete the message from the dead letter queue. 
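The console flow above can also be driven programmatically via the DLQ REST API linked earlier. As a minimal sketch — assuming the list endpoint lives at `GET /v2/dlq` with the usual bearer token (check the DLQ API reference for the exact contract) — the request can be built like this:

```python
import urllib.request

def build_dlq_list_request(token: str) -> urllib.request.Request:
    # Assumed endpoint path; see the "List messages to DLQ" API reference
    # for the authoritative URL, query parameters, and response shape.
    return urllib.request.Request(
        "https://qstash.upstash.io/v2/dlq",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

req = build_dlq_list_request("YOUR_TOKEN")  # placeholder token
# urllib.request.urlopen(req) would then return the JSON list of DLQ messages.
```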
## Limitations

Dead letter queues are subject only to a retention period that depends on your plan. Messages are deleted when their retention period expires. See the “Max DLQ Retention” row on the [QStash Pricing](https://upstash.com/pricing/qstash) page.

# Flow Control
Source: https://upstash.com/docs/qstash/features/flowcontrol

FlowControl enables you to limit the number of messages sent to your endpoint by delaying the delivery. There are two limits that you can set with the FlowControl feature: [Rate](#rate-limit) and [Parallelism](#parallelism-limit). If needed, both parameters can be [combined](#rate-and-parallelism-together).

To use `FlowControl`, you first need to choose a key. This key is used to count the number of calls made to your endpoint, and there is no limit to the number of keys you can use. The rate/parallelism limits are not applied per `url`; they are applied per `Flow-Control-Key`.

Keep in mind that the rate/period and parallelism settings are stored separately on each publish. If you change the rate/period or parallelism on a new publish, messages that were already published are not affected; they keep their original flow-control configuration. While older publishes are still being delivered alongside publishes that carry the new settings, QStash will effectively allow the highest rate/period or the highest parallelism among them. Once the older publishes are delivered, only the new rate/period and parallelism are used.

## Rate and Period Parameters

The `rate` parameter specifies the maximum number of calls allowed within a given period. The `period` parameter allows you to specify the time window over which the rate limit is enforced. By default, the period is set to 1 second, but you can adjust it to control how frequently calls are allowed.
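To make the rate/period semantics concrete, here is a toy model (not QStash code) of "at most `rate` deliveries in any rolling `period`". Note that QStash enforces this server-side and *delays* excess deliveries rather than rejecting them; the sketch only counts whether a delivery could go out immediately:

```python
from collections import deque

class RateWindow:
    """Toy illustration of rate/period counting. QStash enforces this
    server-side and holds excess deliveries instead of dropping them."""

    def __init__(self, rate: int, period_seconds: float):
        self.rate = rate
        self.period = period_seconds
        self.sent = deque()  # timestamps of deliveries in the rolling window

    def can_deliver(self, now: float) -> bool:
        # Drop timestamps that have fallen out of the rolling window.
        while self.sent and now - self.sent[0] >= self.period:
            self.sent.popleft()
        if len(self.sent) < self.rate:
            self.sent.append(now)
            return True
        return False  # QStash would delay this delivery until the window clears

window = RateWindow(rate=10, period_seconds=60)
first_minute = [window.can_deliver(0.0) for _ in range(11)]  # 10 pass, the 11th must wait
later = window.can_deliver(60.0)  # window has rolled over, delivery allowed again
```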
For example, you can set a rate of 10 calls per minute as follows:

```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });

await client.publishJSON({
  url: "https://example.com",
  body: { hello: "world" },
  flowControl: { key: "USER_GIVEN_KEY", rate: 10, period: "1m" },
});
```

```bash cURL theme={"system"}
curl -XPOST -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Flow-Control-Key:USER_GIVEN_KEY" \
  -H "Upstash-Flow-Control-Value:rate=10,period=1m" \
  'https://qstash.upstash.io/v2/publish/https://example.com' \
  -d '{"message":"Hello, World!"}'
```

## Parallelism Limit

The parallelism limit is the number of calls that can be active at the same time. Active means that the call has been made to your endpoint and the response has not been received yet.

You can set the parallelism limit to 10 calls active at the same time as follows:

```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });

await client.publishJSON({
  url: "https://example.com",
  body: { hello: "world" },
  flowControl: { key: "USER_GIVEN_KEY", parallelism: 10 },
});
```

```bash cURL theme={"system"}
curl -XPOST -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Flow-Control-Key:USER_GIVEN_KEY" \
  -H "Upstash-Flow-Control-Value:parallelism=10" \
  'https://qstash.upstash.io/v2/publish/https://example.com' \
  -d '{"message":"Hello, World!"}'
```

You can also use the REST API to get information about how many messages are waiting because of the parallelism limit. See the [API documentation](/qstash/api/flow-control/get) for more details.

## Rate, Parallelism, and Period Together

All three parameters can be combined. For example, with a rate of 10 per minute, a parallelism of 20, and a period of 1 minute, QStash will trigger 10 calls in the first minute and another 10 in the next. If none of them have finished by then, the parallelism limit is reached and the system will wait until one completes before triggering another.
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });

await client.publishJSON({
  url: "https://example.com",
  body: { hello: "world" },
  flowControl: { key: "USER_GIVEN_KEY", rate: 10, parallelism: 20, period: "1m" },
});
```

```bash cURL theme={"system"}
curl -XPOST -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Flow-Control-Key:USER_GIVEN_KEY" \
  -H "Upstash-Flow-Control-Value:rate=10,parallelism=20,period=1m" \
  'https://qstash.upstash.io/v2/publish/https://example.com' \
  -d '{"message":"Hello, World!"}'
```

## Monitor

You can monitor the wait list size of your flow control keys from the `FlowControl` tab in the console. You can also get the same info using the REST API:

* [List All Flow Control Keys](/qstash/api/flow-control/list)
* [Single Flow Control Key](/qstash/api/flow-control/get)

# Queues
Source: https://upstash.com/docs/qstash/features/queues

The queue concept in QStash allows ordered delivery (FIFO). See the [API doc](/qstash/api/queues/upsert) for the full list of related REST APIs. Here we list common use cases for queues and how to use them.

## Ordered Delivery

With queues, ordered delivery is guaranteed by default. This means:

* Your messages will be queued without blocking the REST API and sent one by one in FIFO order. Queued means a [CREATED](/qstash/howto/debug-logs) event will be logged.
* The next message will wait for retries of the current one if it can not be delivered because your endpoint returns a non-2xx code. In other words, the next message will be [ACTIVE](/qstash/howto/debug-logs) only after the last message is either [DELIVERED](/qstash/howto/debug-logs) or [FAILED](/qstash/howto/debug-logs).
* The next message will also wait for [callbacks](/qstash/features/callbacks#what-is-a-callback) or [failure callbacks](/qstash/features/callbacks#what-is-a-failure-callback) to finish.
```bash cURL theme={"system"}
curl -XPOST -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  'https://qstash.upstash.io/v2/enqueue/my-queue/https://example.com' \
  -d '{"message":"Hello, World!"}'
```

```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
const queue = client.queue({ queueName: "my-queue" })

await queue.enqueueJSON({
  url: "https://example.com",
  body: { "Hello": "World" }
})
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.enqueue_json(
    queue="my-queue",
    url="https://example.com",
    body={
        "Hello": "World",
    },
)
```

## Controlled Parallelism

For the parallelism limit, we introduced an easier and less limited API with publish. Please check the [Flow Control](/qstash/features/flowcontrol) page for detailed information. Setting parallelism with queues will be deprecated at some point.

If you want to ensure that your endpoint is not overwhelmed but still want better throughput than one-by-one delivery, you can achieve controlled parallelism with queues.

By default, queues have parallelism 1. Depending on your [plan](https://upstash.com/pricing/qstash), you can configure the parallelism of your queues as follows:

```bash cURL theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/queues/ \
  -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -d '{
    "queueName": "my-queue",
    "parallelism": 5
  }'
```

```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
const queue = client.queue({ queueName: "my-queue" })

await queue.upsert({
  parallelism: 5,
})
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.queue.upsert("my-queue", parallelism=5)
```

After that, you can use the `enqueue` path to send your messages.
```bash cURL theme={"system"}
curl -XPOST -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  'https://qstash.upstash.io/v2/enqueue/my-queue/https://example.com' \
  -d '{"message":"Hello, World!"}'
```

```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
const queue = client.queue({ queueName: "my-queue" });

await queue.enqueueJSON({
  url: "https://example.com",
  body: { "Hello": "World" },
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.enqueue_json(
    queue="my-queue",
    url="https://example.com",
    body={
        "Hello": "World",
    },
)
```

You can check the parallelism of your queues with the following API:

```bash cURL theme={"system"}
curl https://qstash.upstash.io/v2/queues/my-queue \
  -H "Authorization: Bearer "
```

```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
const queue = client.queue({ queueName: "my-queue" });
const res = await queue.get();
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.queue.get("my-queue")
```

# Retry

Source: https://upstash.com/docs/qstash/features/retry

QStash will abort a delivery attempt if **the HTTP call to your endpoint does not return within the plan-specific Max HTTP Response Duration**.\
See the current limits on the QStash pricing page.

Many things can go wrong in a serverless environment. If your API does not respond with a success status code (2XX), we retry the request to ensure every message will be delivered.

The maximum number of retries depends on your current plan. By default, we retry the maximum number of times, but you can set a lower value by sending the `Upstash-Retries` header:

```shell cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Retries: 2" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/publish/https://my-api...'
```

```typescript TypeScript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  retries: 2,
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    retries=2,
)
```

The backoff algorithm calculates the retry delay based on the number of retries. Each delay is capped at 1 day.

```
n = how many times this request has been retried
delay = min(86400, e ** (2.5*n)) // in seconds
```

| n | delay  |
| - | ------ |
| 1 | 12s    |
| 2 | 2m28s  |
| 3 | 30m8s  |
| 4 | 6h7m6s |
| 5 | 24h    |
| 6 | 24h    |

## Custom Retry Delay

You can customize the delay between retry attempts by using the `Upstash-Retry-Delay` header when publishing a message. This allows you to override the default exponential backoff with your own mathematical expressions.

```shell cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Retries: 3" \
  -H "Upstash-Retry-Delay: pow(2, retried) * 1000" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/publish/https://my-api...'
```

```typescript TypeScript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  retries: 3,
  retryDelay: "pow(2, retried) * 1000", // 2^retried * 1000ms
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    retries=3,
    retry_delay="pow(2, retried) * 1000",  # 2^retried * 1000ms
)
```

The `retryDelay` expression can use mathematical functions and the special variable `retried` (the current retry attempt count, starting from 0).
**Supported functions:**

* `pow` - Power function
* `sqrt` - Square root
* `abs` - Absolute value
* `exp` - Exponential
* `floor` - Floor function
* `ceil` - Ceiling function
* `round` - Rounding function
* `min` - Minimum of values
* `max` - Maximum of values

**Examples:**

* `1000` - Fixed 1 second delay
* `1000 * (1 + retried)` - Linear backoff: 1s, 2s, 3s, 4s...
* `pow(2, retried) * 1000` - Exponential backoff: 1s, 2s, 4s, 8s...
* `max(1000, pow(2, retried) * 100)` - Exponential with minimum 1s delay

## Retry-After Headers

Instead of using the default backoff algorithm, you can specify when QStash should retry your message. To do this, include one of the following headers in your response to the QStash request.

* Retry-After
* X-RateLimit-Reset
* X-RateLimit-Reset-Requests
* X-RateLimit-Reset-Tokens

These headers can be set to a value in seconds, the RFC1123 date format, or a duration format (e.g., 6m5s). For the duration format, valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Note that you can only delay retries up to the maximum value of the default backoff algorithm, which is one day. If you specify a value beyond this limit, the backoff algorithm will be applied.

This feature is particularly useful if your application has rate limits, ensuring retries are scheduled appropriately without wasting attempts during restricted periods.

```
Retry-After: 0                             // Next retry will be scheduled immediately without any delay.
Retry-After: 10                            // Next retry will be scheduled after a 10-second delay.
Retry-After: 6m5s                          // Next retry will be scheduled after 6 minutes 5 seconds delay.
Retry-After: Thu, 27 Jun 2024 12:16:24 GMT // Next retry will be scheduled for the specified date, within the allowable limits.
```

## Upstash-Retried Header

QStash adds the `Upstash-Retried` header to requests sent to your API. This indicates how many times the request has been retried.
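This retried count is also the `retried` variable available in `Upstash-Retry-Delay` expressions. As a sanity check, the expression examples above can be evaluated locally; this sketch assumes Python's `math` functions behave like the functions QStash supports, which is an assumption rather than a guarantee:

```python
import math

# Functions the docs list as supported in retry-delay expressions.
ALLOWED = {
    "pow": pow, "sqrt": math.sqrt, "abs": abs, "exp": math.exp,
    "floor": math.floor, "ceil": math.ceil, "round": round,
    "min": min, "max": max,
}

def retry_delay_ms(expression, retried):
    """Locally evaluate a retry-delay expression (a sketch, not QStash's parser)."""
    return eval(expression, {"__builtins__": {}}, dict(ALLOWED, retried=retried))

# Exponential backoff example from above: 1s, 2s, 4s, 8s ...
print([retry_delay_ms("pow(2, retried) * 1000", n) for n in range(4)])
# [1000, 2000, 4000, 8000]

# Exponential with a 1s floor:
print([retry_delay_ms("max(1000, pow(2, retried) * 100)", n) for n in range(5)])
# [1000, 1000, 1000, 1000, 1600]
```

Evaluating the expression for a few values of `retried` before publishing is a cheap way to catch a delay curve that grows much faster (or slower) than intended.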
```
Upstash-Retried: 0 // This is the first attempt
Upstash-Retried: 1 // This request has been sent once before and this is the second attempt
Upstash-Retried: 2 // This request has been sent twice before and this is the third attempt
```

## Non-Retryable Error

By default, QStash retries requests for any response that does not return a successful 2XX status code. To explicitly disable retries for a given message, respond with a 489 status code and include the header `Upstash-NonRetryable-Error: true`.

When this header is present, QStash will immediately mark the message as failed and skip any further retry attempts. The message will then be forwarded to the Dead Letter Queue (DLQ) for manual review and resolution.

This mechanism is particularly useful in scenarios where retries are generally enabled but should be bypassed for specific known errors, such as invalid payloads or non-recoverable conditions.

# Schedules

Source: https://upstash.com/docs/qstash/features/schedules

In addition to sending a message once, you can create a schedule, and we will publish the message repeatedly on the schedule you define.

To create a schedule, you simply need to add the `Upstash-Cron` header to your `publish` request.

Schedules can be configured using `cron` expressions. [crontab.guru](https://crontab.guru/) is a great tool for understanding and creating cron expressions.

By default, we evaluate cron expressions in `UTC`.\
If you want to run your schedule in a specific timezone, see the section on [Timezones](#timezones).
The following request would create a schedule that automatically publishes the message every minute:

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
await client.schedules.create({
  destination: "https://example.com",
  cron: "* * * * *",
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.schedule.create(
    destination="https://example.com",
    cron="* * * * *",
)
```

```shell cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Cron: * * * * *" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/schedules/https://example.com'
```

All of the [other config options](/qstash/howto/publishing#optional-parameters-and-configuration) can still be used.

It can take up to 60 seconds for the schedule to be loaded on an active node and triggered for the first time.

You can see and manage your schedules in the [Upstash Console](https://console.upstash.com/qstash).

### Scheduling to a URL Group

Instead of scheduling a message to a specific URL, you can also create a schedule that publishes to a URL Group. Simply use either the URL Group name or its ID:

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
await client.schedules.create({
  destination: "urlGroupName",
  cron: "* * * * *",
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.schedule.create(
    destination="url-group-name",
    cron="* * * * *",
)
```

```bash cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Cron: * * * * *" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/schedules/'
```

### Scheduling to a Queue

You can schedule an item to be added to a queue at a specified time.
```typescript TypeScript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
await client.schedules.create({
  destination: "https://example.com",
  cron: "* * * * *",
  queueName: "yourQueueName",
});
```

```bash cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Cron: * * * * *" \
  -H "Upstash-Queue-Name: yourQueueName" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/schedules/https://example.com'
```

### Overwriting an existing schedule

You can pass a `scheduleId` explicitly to overwrite an existing schedule, or simply to create the schedule with a given schedule ID.

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
await client.schedules.create({
  destination: "https://example.com",
  scheduleId: "existingScheduleId",
  cron: "* * * * *",
});
```

```shell cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -H "Upstash-Cron: * * * * *" \
  -H "Upstash-Schedule-Id: existingScheduleId" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/schedules/https://example.com'
```

### Timezones

By default, cron expressions are evaluated in `UTC`.\
You can specify a different timezone using the `CRON_TZ` prefix directly inside the cron expression. All [IANA timezones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) are supported.
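One thing to be aware of: a fixed wall-clock time in a `CRON_TZ` schedule corresponds to different UTC times across daylight saving transitions. A quick check with Python's standard `zoneinfo` module (illustrative only):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# 04:00 in New York falls on a different UTC hour in winter and summer.
new_york = ZoneInfo("America/New_York")
winter = datetime(2024, 1, 15, 4, 0, tzinfo=new_york).astimezone(timezone.utc)
summer = datetime(2024, 7, 15, 4, 0, tzinfo=new_york).astimezone(timezone.utc)
print(winter.hour, summer.hour)  # 9 8 (EST is UTC-5, EDT is UTC-4)
```

A `CRON_TZ` schedule tracks this shift for you, so a `0 4 * * *` schedule in `America/New_York` keeps firing at 04:00 local time year-round.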
For example, this schedule runs every day at `04:00 AM` in New York time: ```typescript Typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); await client.schedules.create({ destination: "https://example.com", cron: "CRON_TZ=America/New_York 0 4 * * *", }); ``` ```python Python theme={"system"} from qstash import QStash client = QStash("") client.schedule.create( destination="https://example.com", cron="CRON_TZ=America/New_York 0 4 * * *", ) ``` ```shell cURL theme={"system"} curl -XPOST \ -H 'Authorization: Bearer XXX' \ -H "Content-type: application/json" \ -H "Upstash-Cron: CRON_TZ=America/New_York 0 4 * * *" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/schedules/https://example.com' ``` # Security Source: https://upstash.com/docs/qstash/features/security ### Request Authorization When interacting with the QStash API, you will need an authorization token. You can get your token from the [Console](https://console.upstash.com/qstash). Send this token along with every request made to `QStash` inside the `Authorization` header like this: ``` "Authorization": "Bearer " ``` ### Request Signing (optional) Because your endpoint needs to be publicly available, we recommend you verify the authenticity of each incoming request. #### The `Upstash-Signature` header With each request we are sending a JWT inside the `Upstash-Signature` header. You can learn more about them [here](https://jwt.io). 
An example token would be:

**Header**

```json theme={"system"}
{
  "alg": "HS256",
  "typ": "JWT"
}
```

**Payload**

```json theme={"system"}
{
  "iss": "Upstash",
  "sub": "https://qstash-remote.requestcatcher.com/test",
  "exp": 1656580612,
  "nbf": 1656580312,
  "iat": 1656580312,
  "jti": "jwt_67kxXD6UBAk7DqU6hzuHMDdXFXfP",
  "body": "qK78N0k3pNKI8zN62Fq2Gm-_LtWkJk1z9ykio3zZvY4="
}
```

The JWT is signed using the `HMAC SHA256` algorithm with your current signing key and includes the following claims:

#### Claims

##### `iss`

The issuer field is always `Upstash`.

##### `sub`

The URL of your endpoint, where this request is sent to.

For example, when you are using a Next.js app on Vercel, this would look something like `https://my-app.vercel.app/api/endpoint`

##### `exp`

A unix timestamp in seconds after which you should no longer accept this request. Our JWTs have a lifetime of 5 minutes by default.

##### `iat`

A unix timestamp in seconds when this JWT was created.

##### `nbf`

A unix timestamp in seconds before which you should not accept this request.

##### `jti`

A unique id for this token.

##### `body`

The body field is a base64-encoded SHA-256 hash of the request body. We use URL encoding as specified in [RFC 4648](https://datatracker.ietf.org/doc/html/rfc4648#section-5).

#### Verifying the signature

See [how to verify the signature](/qstash/howto/signature).

# URL Groups

Source: https://upstash.com/docs/qstash/features/url-groups

Sending messages to a single endpoint and not having to worry about retries is already quite useful, but we also added the concept of URL Groups to QStash.

In short, a URL Group is just a namespace you can publish messages to, the same way as publishing a message to an endpoint directly.

After creating a URL Group, you can create one or multiple endpoints. An endpoint is defined by a publicly available URL; after a message is published to the URL Group, a request is sent to each endpoint.
When you publish a message to a URL Group, it will be fanned out and sent to all the subscribed endpoints.

## When should I use URL Groups?

URL Groups decouple your message producers from consumers by grouping one or more endpoints into a single namespace.

Here's an example: You have a serverless function which is invoked with each purchase on your e-commerce site, and you want to send an email to the customer after the purchase. Inside the function, you submit the URL `api/sendEmail` to QStash. Later, if you also want to send a Slack notification, you need to update the serverless function, adding another call to QStash to submit `api/sendNotification`. In this example, you need to update and redeploy the serverless function each time you change (or add) endpoints.

If you create a URL Group `product-purchase` and produce messages to that URL Group in the function, then you can add or remove endpoints by only updating the URL Group. URL Groups give you the freedom to modify endpoints without touching the backend implementation.

Check [here](/qstash/howto/publishing#publish-to-url-group) to learn how to publish to URL Groups.

## How URL Groups work

When you publish a message to a URL Group, we will enqueue a unique task for each subscribed endpoint and guarantee successful delivery to each one of them.
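In pseudocode, this fan-out amounts to creating one independent delivery task per subscribed endpoint; the endpoint URLs below are hypothetical:

```python
def publish_to_url_group(endpoints, message):
    """One delivery task per endpoint; each task retries independently (sketch)."""
    return [{"endpoint": url, "message": message, "retried": 0}
            for url in endpoints]

group = ["https://a.example/api", "https://b.example/api", "https://c.example/api"]
tasks = publish_to_url_group(group, {"hello": "world"})
print(len(tasks))  # 3
```

Because each endpoint gets its own task, a failure on one endpoint only affects that endpoint's retries; the other tasks are delivered on their own schedules.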
[![](https://mermaid.ink/img/pako:eNp1kl1rgzAUhv9KyOWoddXNtrkYVNdf0F0U5ijRHDVMjctHoRT_-2KtaztUQeS8j28e8JxxKhhggpWmGt45zSWtnKMX13GN7PX59IUc5w19iIanBDUmKbkq-qwfXuKdSVQqeQLssK1ZI3itVQ9dekdzdO6Ja9ntKKq-DxtEoP4xYGCIr-OOGCoOG4IYlPwIcqBu0V0XQRK0PE0w9lyCvP1-iB1n1CgcNwofjcJpo_Cua8ooHDWadIrGnaJHp2jaKbrrmnKK_jl1d9s98AxXICvKmd2fy8-MsS6gghgT-5oJCUrH2NKWNA2zi7BlXAuJSUZLBTNMjRa7U51ioqWBAbpu4R9VCsrAfnTG-tR0u5pzpW1lKuqM593cyNKOC60bRVy3i-c514VJ5qmoXMVZQaUujuvADbxgRT0fgqVPX32fpclivcq8l0XGls8Lj-K2bX8Bx2nzPg)](https://mermaid.live/edit#pako:eNp1kl1rgzAUhv9KyOWoddXNtrkYVNdf0F0U5ijRHDVMjctHoRT_-2KtaztUQeS8j28e8JxxKhhggpWmGt45zSWtnKMX13GN7PX59IUc5w19iIanBDUmKbkq-qwfXuKdSVQqeQLssK1ZI3itVQ9dekdzdO6Ja9ntKKq-DxtEoP4xYGCIr-OOGCoOG4IYlPwIcqBu0V0XQRK0PE0w9lyCvP1-iB1n1CgcNwofjcJpo_Cua8ooHDWadIrGnaJHp2jaKbrrmnKK_jl1d9s98AxXICvKmd2fy8-MsS6gghgT-5oJCUrH2NKWNA2zi7BlXAuJSUZLBTNMjRa7U51ioqWBAbpu4R9VCsrAfnTG-tR0u5pzpW1lKuqM593cyNKOC60bRVy3i-c514VJ5qmoXMVZQaUujuvADbxgRT0fgqVPX32fpclivcq8l0XGls8Lj-K2bX8Bx2nzPg) Consider this scenario: You have a URL Group and 3 endpoints that are subscribed to it. Now when you publish a message to the URL Group, internally we will create a task for each subscribed endpoint and handle all retry mechanism isolated from each other. ## How to create a URL Group Please refer to the howto [here](/qstash/howto/url-group-endpoint). # Debug Logs Source: https://upstash.com/docs/qstash/howto/debug-logs To debug the logs, first you need to understand the different states a message can be in. Only the last 10.000 logs are kept and older logs are removed automatically. 
## Lifecycle of a Message

To understand the lifecycle of each message, we'll look at the following chart:

[comment]: # "https://mermaid.live/edit#pako:eNptU9uO2jAQ_RXLjxVXhyTED5UQpBUSZdtAK7VNtfLGTmIpsZHjrEoR_17HBgLdztPMmXPm4ssJZpIyiGGjiWYrTgpF6uErSgUw9vPdLzAcvgfLJF7s45UDL4FNbEnN6FLWB9lwzVz-EbO0xXK__hb_L43Bevv8OXn6mMS7nSPYSf6tcgIXc5zOkniffH9TvrM4SZ4Sm3GcXne-rLDYLuPNcxJ_-Rrvrrs4cGMiRxLS9K1YroHM3yowqFnTkIKBjIiMVYA3xqsqRp3azWQLu3EwaFUFFNOtEg3ICa9uU91xV_HGuIltcM9v2iwz_fpN-u0_LNYbyzdcdQQVr7k2PsnK6yx90Y5vLtXBF-ED1h_CA5wKOICF4hRirVo2gDVTNelCeOoYKdQlq1kKsXEpy0lb6RSm4mxkByJ-SFlflUq2RQlxTqrGRO2B9u_uhpJWy91RZFeNY8WUa6lupEoSykx4gvp46J5wwRtt-mVS5LzocHOABi61PjR4PO7So4Lrsn0ZZbIeN5yWROnyNQrGAQrmBHksCD3iex7NXqbRPEezaU7DyRQReD4PILP9P7n_Yr-N2YYJM8RStkJDHHqRXbfr_RviaDbyQg9NJz7yg9ksCAfwCHGARn6AfC9CKJqiiT83lf_Y85mM5uEsurfzX7VrENs"

Either you or a previously set up schedule will create a message. When a message is ready for execution, it will become `ACTIVE` and a delivery to your API is attempted. If your API responds with a status code between `200 - 299`, the task is considered successful and the message will be marked as `DELIVERED`.

Otherwise, the message moves to `RETRY` and is retried as long as there are retries left. If all retries are exhausted, the task has `FAILED` and the message will be moved to the DLQ.

During all of this, a message can be cancelled via [DELETE /v2/messages/:messageId](https://docs.upstash.com/qstash/api/messages/cancel). When the request is received, `CANCEL_REQUESTED` will be logged first. If retries are not exhausted yet, at the next delivery time the message will be marked as `CANCELLED` and completely removed from the system.

## Console

Head over to the [Upstash Console](https://console.upstash.com/qstash) and go to the `Logs` tab, where you can see the latest status of your messages.

# Delete Schedules

Source: https://upstash.com/docs/qstash/howto/delete-schedule

Deleting schedules can be done using the [schedules API](/qstash/api/schedules/remove).
```shell cURL theme={"system"}
curl -XDELETE \
  -H 'Authorization: Bearer XXX' \
  'https://qstash.upstash.io/v2/schedules/'
```

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
await client.schedules.delete("");
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.schedule.delete("")
```

Deleting a schedule does not stop existing messages from being delivered. It only stops the schedule from creating new messages.

## Schedule ID

If you don't know the schedule ID, you can get a list of all of your schedules from [here](/qstash/api/schedules/list).

```shell cURL theme={"system"}
curl \
  -H 'Authorization: Bearer XXX' \
  'https://qstash.upstash.io/v2/schedules'
```

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const allSchedules = await client.schedules.list();
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.schedule.list()
```

# Handling Failures

Source: https://upstash.com/docs/qstash/howto/handling-failures

Sometimes, endpoints fail due to various reasons such as network or server issues. In such cases, QStash offers a few options to handle these failures.

## Failure Callbacks

When publishing a message, you can provide a failure callback that will be called if the message fails to be delivered. You can read more about callbacks [here](/qstash/features/callbacks).

With the failure callback, you can add custom logic such as logging the failure or sending an alert to the team. Once you handle the failure, you can [delete it from the dead letter queue](/qstash/api/dlq/deleteMessage).
```bash cURL theme={"system"}
curl -X POST \
  https://qstash.upstash.io/v2/publish/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer ' \
  -H 'Upstash-Failure-Callback: ' \
  -d '{ "hello": "world" }'
```

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  failureCallback: "https://my-callback...",
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    failure_callback="https://my-callback...",
)
```

## Dead Letter Queue

If you don't want to handle the failure immediately, you can use the dead letter queue (DLQ) to store the failed messages. You can read more about the dead letter queue [here](/qstash/features/dlq).

Failed messages are automatically moved to the dead letter queue upon failure, and can be retried from the console or the API by [retrieving the message](/qstash/api/dlq/getMessage) then [publishing it](/qstash/api/publish).

# Local Development

Source: https://upstash.com/docs/qstash/howto/local-development

QStash requires a publicly available API to send messages to. During development, when applications are not yet deployed, developers typically need to expose their local API by creating a public tunnel. While local tunneling works seamlessly, it requires code changes between development and production environments and increases friction for developers.

To simplify the development process, Upstash provides QStash CLI, which allows you to run a development server locally for testing and development. The development server fully supports all QStash features including Schedules, URL Groups, Workflows, and Event Logs. Since the development server operates entirely in-memory, all data is reset when the server restarts.
You can download and run the QStash CLI executable binary in several ways:

## NPX (Node Package Executable)

Install the binary via the `@upstash/qstash-cli` NPM package:

```bash theme={"system"}
npx @upstash/qstash-cli dev

# Start on a different port
npx @upstash/qstash-cli dev -port=8081
```

Once you start the local server, you can go to the QStash tab on the Upstash Console and enable local mode, which will allow you to publish requests and monitor messages with the local server.

## Docker

QStash CLI is available as a Docker image through our public AWS ECR repository:

```bash theme={"system"}
# Pull the image
docker pull public.ecr.aws/upstash/qstash:latest

# Run the image
docker run -p 8080:8080 public.ecr.aws/upstash/qstash:latest qstash dev
```

## Artifact Repository

You can download the binary directly from our artifact repository without using a package manager:

[https://artifacts.upstash.com/#qstash/versions/](https://artifacts.upstash.com/#qstash/versions/)

Select the appropriate version, architecture, and operating system for your platform. After extracting the archive file, run the executable:

```
$ ./qstash dev
```

## QStash CLI

Currently, the only available command for the QStash CLI is `dev`, which starts a development server instance.

```
$ ./qstash dev --help
Usage of dev:
  -port int
        The port to start HTTP server at [env QSTASH_DEV_PORT] (default 8080)
  -quota string
        The quota of users [env QSTASH_DEV_QUOTA] (default "payg")
```

There are predefined test users available. You can configure the quota type of the users using the `-quota` option, with the available options being `payg` and `pro`. These quotas don't affect performance but allow you to simulate different server limits based on the subscription tier.

After starting the development server using any of the methods above, it will display the necessary environment variables.
Select and copy the credentials from one of the following test users:

```bash User 1 theme={"system"}
QSTASH_URL="http://localhost:8080"
QSTASH_TOKEN="eyJVc2VySUQiOiJkZWZhdWx0VXNlciIsIlBhc3N3b3JkIjoiZGVmYXVsdFBhc3N3b3JkIn0="
QSTASH_CURRENT_SIGNING_KEY="sig_7kYjw48mhY7kAjqNGcy6cr29RJ6r"
QSTASH_NEXT_SIGNING_KEY="sig_5ZB6DVzB1wjE8S6rZ7eenA8Pdnhs"
```

```bash User 2 theme={"system"}
QSTASH_URL="http://localhost:8080"
QSTASH_TOKEN="eyJVc2VySUQiOiJ0ZXN0VXNlcjEiLCJQYXNzd29yZCI6InRlc3RQYXNzd29yZCJ9"
QSTASH_CURRENT_SIGNING_KEY="sig_7GVPjvuwsfqF65iC8fSrs1dfYruM"
QSTASH_NEXT_SIGNING_KEY="sig_5NoELc3EFnZn4DVS5bDs2Nk4b7Ua"
```

```bash User 3 theme={"system"}
QSTASH_URL="http://localhost:8080"
QSTASH_TOKEN="eyJVc2VySUQiOiJ0ZXN0VXNlcjIiLCJQYXNzd29yZCI6InRlc3RQYXNzd29yZCJ9"
QSTASH_CURRENT_SIGNING_KEY="sig_6jWGaWRxHsw4vMSPJprXadyvrybF"
QSTASH_NEXT_SIGNING_KEY="sig_7qHbvhmahe5GwfePDiS5Lg3pi6Qx"
```

```bash User 4 theme={"system"}
QSTASH_URL="http://localhost:8080"
QSTASH_TOKEN="eyJVc2VySUQiOiJ0ZXN0VXNlcjMiLCJQYXNzd29yZCI6InRlc3RQYXNzd29yZCJ9"
QSTASH_CURRENT_SIGNING_KEY="sig_5T8FcSsynBjn9mMLBsXhpacRovJf"
QSTASH_NEXT_SIGNING_KEY="sig_7GFR4YaDshFcqsxWRZpRB161jguD"
```

Currently, there is no GUI client available for the development server. You can use the QStash SDKs to fetch resources like event logs.

## License

The QStash development server is licensed under the [Development Server License](/qstash/misc/license), which restricts its use to development and testing purposes only. It is not permitted to use it in production environments. Please refer to the full license text for details.

# Local Tunnel

Source: https://upstash.com/docs/qstash/howto/local-tunnel

QStash requires a publicly available API to send messages to. The recommended approach is to run a [development server](/qstash/howto/local-development) locally and use it for development purposes.
Alternatively, you can set up a local tunnel to expose your API, enabling QStash to send requests directly to your application during development.

## localtunnel.me

[localtunnel.me](https://github.com/localtunnel/localtunnel) is a free service that provides a public endpoint for your local development. It's as simple as running

```
npx localtunnel --port 3000
```

replacing `3000` with the port your application is running on. This will give you a public URL like `https://good-months-leave.loca.lt`, which can be used as your QStash URL.

If you run into issues, you may need to set the `Upstash-Forward-bypass-tunnel-reminder` header to any value to bypass the reminder message.

## ngrok

[ngrok](https://ngrok.com) is a free service that provides you with a public endpoint and forwards all traffic to your localhost.

### Sign up

Create a new account on [dashboard.ngrok.com/signup](https://dashboard.ngrok.com/signup) and follow the [instructions](https://dashboard.ngrok.com/get-started/setup) to download the ngrok CLI and connect your account:

```bash theme={"system"}
ngrok config add-authtoken XXX
```

### Start the tunnel

Choose the port where your application is running. Here we are forwarding to port 3000, because that is where the Next.js app is running.

```bash theme={"system"}
$ ngrok http 3000

Session Status                online
Account                       Andreas Thomas (Plan: Free)
Version                       3.1.0
Region                        Europe (eu)
Latency                       -
Web Interface                 http://127.0.0.1:4040
Forwarding                    https://e02f-2a02-810d-af40-5284-b139-58cc-89df-b740.eu.ngrok.io -> http://localhost:3000

Connections                   ttl     opn     rt1     rt5     p50     p90
                              0       0       0.00    0.00    0.00    0.00
```

### Publish a message

Now copy the `Forwarding` URL and use it as the destination in QStash. Make sure to add the path of your API at the end.
(`/api/webhooks` in this case)

```
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/publish/https://e02f-2a02-810d-af40-5284-b139-58cc-89df-b740.eu.ngrok.io/api/webhooks'
```

### Debug

In case messages are not delivered or something else doesn't work as expected, you can go to [http://127.0.0.1:4040](http://127.0.0.1:4040) to see what ngrok is doing.

# Publish Messages

Source: https://upstash.com/docs/qstash/howto/publishing

Publishing a message is as easy as sending an HTTP request to the `/publish` endpoint. All you need is a valid URL for your destination.

Destination URLs must always include the protocol (`http://` or `https://`)

## The message

The message you want to send is passed in the request body. Upstash does not use, parse, or validate the body, so you can send any kind of data you want. We suggest you add a `Content-Type` header to your request to make sure your destination API knows what kind of data you are sending.

## Sending custom HTTP headers

In addition to sending the message itself, you can also forward HTTP headers. Simply add them prefixed with `Upstash-Forward-` and we will include them in the message.
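On the receiving side, the prefix is stripped before delivery. The documented behavior boils down to this transformation (a sketch, not QStash's implementation):

```python
PREFIX = "Upstash-Forward-"

def forwarded_headers(request_headers):
    """Strip the Upstash-Forward- prefix from headers selected for delivery."""
    return {k[len(PREFIX):]: v
            for k, v in request_headers.items()
            if k.startswith(PREFIX)}

sent = {
    "Authorization": "Bearer XXX",           # consumed by QStash, not forwarded
    "Upstash-Forward-My-Header": "my-value", # delivered as My-Header: my-value
}
print(forwarded_headers(sent))  # {'My-Header': 'my-value'}
```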
#### Here's an example

```shell cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H 'Upstash-Forward-My-Header: my-value' \
  -H "Content-type: application/json" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/publish/https://example.com'
```

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  url: "https://example.com",
  body: { "hello": "world" },
  headers: { "my-header": "my-value" },
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://example.com",
    body={
        "hello": "world",
    },
    headers={
        "my-header": "my-value",
    },
)
```

In this case, we would deliver a `POST` request to `https://example.com` with the following body and headers:

```json theme={"system"}
// body
{ "hello": "world" }

// headers
My-Header: my-value
Content-Type: application/json
```

#### What happens after publishing?

When you publish a message, it will be durably stored in an [Upstash Redis database](https://upstash.com/redis). Then we try to deliver the message to your chosen destination API. If your API is down or does not respond with a success status code (200-299), the message will be retried and delivered when it comes back online. You do not need to worry about retrying messages or ensuring that they are delivered.

By default, multiple messages published to QStash are sent to your API in parallel.

## Publish to URL Group

URL Groups allow you to publish a single message to more than one API endpoint. To learn more about URL Groups, check the [URL Groups section](/qstash/features/url-groups).

Publishing to a URL Group is very similar to publishing to a single destination. All you need to do is replace the `URL` in the `/publish` endpoint with the URL Group name.
```
https://qstash.upstash.io/v2/publish/https://example.com
https://qstash.upstash.io/v2/publish/my-url-group
```

```shell cURL theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -d '{ "hello": "world" }' \
  'https://qstash.upstash.io/v2/publish/my-url-group'
```

```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  urlGroup: "my-url-group",
  body: { "hello": "world" },
});
```

```python Python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url_group="my-url-group",
    body={
        "hello": "world",
    },
)
```

## Optional parameters and configuration

QStash supports a number of optional parameters and configurations that you can use to customize the delivery of your message. All configuration is done using HTTP headers.

# Receiving Messages

Source: https://upstash.com/docs/qstash/howto/receiving

What do we send to your API?

When you publish a message, QStash will deliver it to your chosen destination. This is a brief overview of what a request to your API looks like.

## Headers

We are forwarding all headers that have been prefixed with `Upstash-Forward-` to your API. [Learn more](/qstash/howto/publishing#sending-custom-http-headers)

In addition to your custom headers, we're sending these headers as well:

| Header                | Description                                                          |
| --------------------- | -------------------------------------------------------------------- |
| `User-Agent`          | Will be set to `Upstash-QStash`                                       |
| `Content-Type`        | The original `Content-Type` header                                    |
| `Upstash-Topic-Name`  | The URL Group (topic) name if sent to a URL Group                     |
| `Upstash-Signature`   | The signature you need to verify [See here](/qstash/howto/signature)  |
| `Upstash-Retried`     | How many times the message has been retried so far. Starts at 0.      |
| `Upstash-Message-Id`  | The message id of the message.
| | `Upstash-Schedule-Id` | The schedule id of the message if it is related to a schedule. | | `Upstash-Caller-Ip` | The IP address of the publisher of this message. | ## Body The body is passed as is; we do not modify it at all. If you send a JSON body, you will receive a JSON body. If you send a string, you will receive a string. ## Verifying the signature [See here](/qstash/howto/signature) # Reset Token Source: https://upstash.com/docs/qstash/howto/reset-token Your token is used to interact with the QStash API. You need it to publish messages as well as to create, read, update, or delete other resources, such as URL Groups and endpoints. Resetting your token will invalidate your current token, and all future requests with the old token will be rejected. To reset your token, simply click on the "Reset token" button at the bottom of the [QStash UI](https://console.upstash.com/qstash) and confirm the dialog. Afterwards, you should immediately update your token in all your applications. # Roll Your Signing Keys Source: https://upstash.com/docs/qstash/howto/roll-signing-keys Because your API needs to be publicly accessible from the internet, you should make sure to verify the authenticity of each request. Upstash provides a JWT with each request. This JWT is signed by your individual secret signing keys. [Read more](/qstash/howto/signature). We use 2 signing keys: * current: This is the key used to sign the JWT. * next: This key will be used to sign after you have rolled your keys. If we used only a single key, there would be a window between rolling your keys and updating the key in your applications during which all requests would fail verification. To minimize this downtime, we use 2 keys, and you should always try to verify with both. ## What happens when I roll my keys? When you roll your keys, the current key will be replaced with the next key and a new next key will be generated.
``` currentKey = nextKey nextKey = generateNewKey() ``` Rolling your keys twice without updating your applications will cause your apps to reject all requests, because both the current and next keys will have been replaced. ## How to roll your keys Rolling your keys can be done by going to the [QStash UI](https://console.upstash.com/qstash) and clicking on the "Roll keys" button. # Verify Signatures Source: https://upstash.com/docs/qstash/howto/signature We send a JWT with each request. This JWT is signed by your individual secret signing key and sent in the `Upstash-Signature` HTTP header. You can use this signature to verify the request is coming from QStash. You need to keep your signing keys in a secure location. Otherwise some malicious actor could use them to send requests to your API as if they were coming from QStash. ## Verifying You can use the official QStash SDKs or implement a custom verifier either by using [an open source library](https://jwt.io/libraries) or by processing the JWT manually. ### Via SDK (Recommended) QStash SDKs provide a `Receiver` type that simplifies signature verification. ```typescript Typescript theme={"system"} import { Receiver } from "@upstash/qstash"; const receiver = new Receiver({ currentSigningKey: "YOUR_CURRENT_SIGNING_KEY", nextSigningKey: "YOUR_NEXT_SIGNING_KEY", }); // ... in your request handler const signature = req.headers["Upstash-Signature"]; const body = req.body; const isValid = await receiver.verify({ body, signature, url: "YOUR-SITE-URL", }); ``` ```python Python theme={"system"} from qstash import Receiver receiver = Receiver( current_signing_key="YOUR_CURRENT_SIGNING_KEY", next_signing_key="YOUR_NEXT_SIGNING_KEY", ) # ... in your request handler signature, body = req.headers["Upstash-Signature"], req.body receiver.verify( body=body, signature=signature, url="YOUR-SITE-URL", ) ``` ```go Golang theme={"system"} import "github.com/qstash/qstash-go" receiver := qstash.NewReceiver("", "NEXT_SIGNING_KEY") // ... 
in your request handler signature := req.Header.Get("Upstash-Signature") body, err := io.ReadAll(req.Body) // handle err err = receiver.Verify(qstash.VerifyOptions{ Signature: signature, Body: string(body), Url: "YOUR-SITE-URL", // optional }) // handle err ``` Depending on the environment, the body might be parsed into an object by the HTTP handler if it is JSON. Ensure you use the raw body string as is. For example, converting the parsed object back to a string (e.g., `JSON.stringify(object)`) may cause inconsistencies and result in verification failure. ### Manual verification If you don't want to use the SDKs, you can implement your own verifier, either by using an open-source library or by manually processing the JWT. The exact implementation depends on the language of your choice and the library you use, if any. Instead, here are the steps you need to follow: 1. Split the JWT into its header, payload and signature 2. Verify the signature 3. Decode the payload and verify the claims * `iss`: The issuer must be `Upstash`. * `sub`: The subject must be the URL of your API. * `exp`: Verify the token has not expired yet. * `nbf`: Verify the token is already valid. * `body`: Hash the raw request body using `SHA-256` and compare it with the `body` claim. You can also reference the implementation in our [Typescript SDK](https://github.com/upstash/sdk-qstash-ts/blob/main/src/receiver.ts#L82). After you have verified the signature and the claims, you can be sure the request came from Upstash and process it accordingly. ## Claims All claims in the JWT are listed [here](/qstash/features/security#claims). # Create URL Groups and Endpoints Source: https://upstash.com/docs/qstash/howto/url-group-endpoint QStash allows you to group multiple APIs together into a single namespace, called a `URL Group` (previously called `Topics`). Read more about URL Groups [here](/qstash/features/url-groups). There are two ways to create endpoints and URL Groups: the UI and the REST API.
## UI Go to [console.upstash.com/qstash](https://console.upstash.com/qstash) and click on the `URL Groups` tab. Afterwards you can create a new URL Group by giving it a name. Keep in mind that URL Group names are restricted to alphanumeric, underscore, hyphen and dot characters. After creating the URL Group, you can add endpoints to it: ## API You can create a URL Group and endpoint using the [console](https://console.upstash.com/qstash) or [REST API](/qstash/api/url-groups/add-endpoint). ```bash cURL theme={"system"} curl -XPOST https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints \ -H "Authorization: Bearer " \ -H "Content-Type: application/json" \ -d '{ "endpoints": [ { "name": "endpoint1", "url": "https://example.com" }, { "name": "endpoint2", "url": "https://somewhere-else.com" } ] }' ``` ```typescript Typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const urlGroups = client.urlGroups; await urlGroups.addEndpoints({ name: "urlGroupName", endpoints: [ { name: "endpoint1", url: "https://example.com" }, { name: "endpoint2", url: "https://somewhere-else.com" }, ], }); ``` ```python Python theme={"system"} from qstash import QStash client = QStash("") client.url_group.upsert_endpoints( url_group="url-group-name", endpoints=[ {"name": "endpoint1", "url": "https://example.com"}, {"name": "endpoint2", "url": "https://somewhere-else.com"}, ], ) ``` # Use as Webhook Receiver Source: https://upstash.com/docs/qstash/howto/webhook You can configure QStash to receive and process your webhook calls. Instead of having the webhook service call your endpoint directly, QStash acts as an intermediary, receiving the request and forwarding it to your endpoint. QStash provides additional control over webhook requests, allowing you to configure properties such as delay, retries, timeouts, callbacks, and flow control. There are multiple ways to configure QStash to receive webhook requests. ## 1. 
Publish You can configure your webhook URL as a QStash publish request. For example, if your webhook endpoint is: `https://example.com/api/webhook` Instead of using this URL directly as the webhook address, use: `https://qstash.upstash.io/v2/publish/https://example.com/api/webhook?qstash_token=` Request configurations such as custom retries, timeouts, and other settings can be specified using HTTP headers in the publish request. Refer to the [REST API documentation](/qstash/api/publish) for a full list of available configuration headers. It’s also possible to pass configuration via query parameters, using the lowercase form of the header name as the key, such as `?upstash-retries=3&upstash-delay=100s`. This makes it easier to configure webhook messages. By default, any headers in the publish request that are prefixed with `Upstash-Forward-` will be forwarded to your endpoint. However, since most webhook services do not allow header prefixing, we introduced a configuration option to enable forwarding all incoming request headers. To enable this, set `Upstash-Header-Forward: true` in the publish request or append the query parameter `?upstash-header-forward=true` to the request URL. This ensures that all headers are forwarded to your endpoint without requiring the `Upstash-Forward-` prefix. ## 2. URL Group URL Groups allow you to define server-side templates for publishing messages. You can create a URL Group either through the UI or programmatically. For example, if your webhook endpoint is: `https://example.com/api/webhook` Instead of using this URL directly, you can create a URL Group and add this URL as an endpoint. `https://qstash.upstash.io/v2/publish/?qstash_token=` You can define default headers for a URL Group, which will automatically apply to all requests sent to that group.
``` curl -X PATCH https://qstash.upstash.io/v2/topics/ \ -H "Authorization: Bearer " -d '{ "headers": { "Upstash-Header-Forward": ["true"], "Upstash-Retries": ["3"] } }' ``` When you save these headers for your URL Group, all headers will be forwarded as needed for your webhook processing. A URL Group also enables you to define multiple endpoints within a group. When a publish request is made to a URL Group, all associated endpoints will be triggered, allowing you to fan out a single webhook call to multiple destinations. # LLM with Anthropic Source: https://upstash.com/docs/qstash/integrations/anthropic QStash integrates smoothly with Anthropic's API, allowing you to send LLM requests and leverage QStash features like retries, callbacks, and batching. This is especially useful when working in serverless environments where LLM response times vary and traditional timeouts may be limiting. QStash provides an HTTP timeout of up to 2 hours, which is ideal for most LLM use cases. ### Example: Publishing and Enqueueing Requests Specify the `api` as `llm` with the provider set to `anthropic()` when publishing requests. Use the `Upstash-Callback` header to handle responses asynchronously, as streaming completions aren’t supported for this integration. #### Publishing a Request ```typescript theme={"system"} import { anthropic, Client } from "@upstash/qstash"; const client = new Client({ token: "" }); await client.publishJSON({ api: { name: "llm", provider: anthropic({ token: "" }) }, body: { model: "claude-3-5-sonnet-20241022", messages: [{ role: "user", content: "Summarize recent tech trends." }], }, callback: "https://example.com/callback", }); ``` ### Enqueueing a Chat Completion Request Use `enqueueJSON` with Anthropic as the provider to enqueue requests for asynchronous processing.
```typescript theme={"system"} import { anthropic, Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const result = await client.queue({ queueName: "your-queue-name" }).enqueueJSON({ api: { name: "llm", provider: anthropic({ token: "" }) }, body: { model: "claude-3-5-sonnet-20241022", messages: [ { role: "user", content: "Generate ideas for a marketing campaign.", }, ], }, callback: "https://example.com/callback", }); console.log(result); ``` ### Sending Chat Completion Requests in Batches Use `batchJSON` to send multiple requests at once. Each request in the batch specifies the same Anthropic provider and includes a callback URL. ```typescript theme={"system"} import { anthropic, Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const result = await client.batchJSON([ { api: { name: "llm", provider: anthropic({ token: "" }) }, body: { model: "claude-3-5-sonnet-20241022", messages: [ { role: "user", content: "Describe the latest in AI research.", }, ], }, callback: "https://example.com/callback1", }, { api: { name: "llm", provider: anthropic({ token: "" }) }, body: { model: "claude-3-5-sonnet-20241022", messages: [ { role: "user", content: "Outline the future of remote work.", }, ], }, callback: "https://example.com/callback2", }, // Add more requests as needed ]); console.log(result); ``` #### Analytics with Helicone To monitor usage, include Helicone analytics by passing your Helicone API key under `analytics`: ```typescript theme={"system"} await client.publishJSON({ api: { name: "llm", provider: anthropic({ token: "" }), analytics: { name: "helicone", token: process.env.HELICONE_API_KEY! }, }, body: { model: "claude-3-5-sonnet-20241022", messages: [{ role: "user", content: "Hello!" }] }, callback: "https://example.com/callback", }); ``` With this setup, Anthropic can be used seamlessly in any LLM workflow in QStash.
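When the callback fires, QStash posts the LLM response wrapped in its callback envelope, with the raw response body base64-encoded under the `body` field. Below is a minimal sketch of unwrapping an Anthropic reply in a callback handler — the envelope field names (`status`, `body`) follow QStash's documented callback format, and the helper name and demo payload are illustrative, not part of any SDK:

```python
import base64
import json

def extract_anthropic_reply(callback_payload: bytes) -> str:
    """Unwrap an Anthropic completion from a QStash callback payload.

    QStash delivers the destination's response in a JSON envelope;
    the raw response body is base64-encoded under the `body` key.
    """
    envelope = json.loads(callback_payload)
    if not 200 <= int(envelope["status"]) < 300:
        raise RuntimeError(f"LLM request failed with status {envelope['status']}")
    completion = json.loads(base64.b64decode(envelope["body"]))
    # Anthropic's Messages API returns a list of content blocks;
    # take the text of the first one.
    return completion["content"][0]["text"]

# Illustrative payload, shaped like a QStash callback for a successful request
demo = json.dumps({
    "status": 200,
    "body": base64.b64encode(json.dumps(
        {"content": [{"type": "text", "text": "Summary of recent tech trends..."}]}
    ).encode()).decode(),
}).encode()

reply = extract_anthropic_reply(demo)
```

In a real handler you would also verify the `Upstash-Signature` header on the callback request before trusting the payload.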
# Datadog - Upstash QStash Integration Source: https://upstash.com/docs/qstash/integrations/datadog This guide walks you through connecting your Datadog account with Upstash QStash for monitoring and analytics of your message delivery, retries, DLQ, and schedules. **Integration Scope** The Upstash Datadog integration is available as part of the Prod Pack. ## **Step 1: Log in to Your Datadog Account** 1. Go to [Datadog](https://www.datadoghq.com/) and sign in. ## **Step 2: Install the Upstash Application** 1. In Datadog, open the Integrations page. 2. Search for "Upstash" and open the integration. integration-tab.png Click "Install" to add Upstash to your Datadog account. installation.png ## **Step 3: Connect Accounts** After installing Upstash, click "Connect Accounts". Datadog will redirect you to Upstash to complete account linking. connect-acc.png ## **Step 4: Select Account to Integrate** 1. On Upstash, select the Datadog account to integrate. 2. Personal and team accounts are supported. **Caveats** * Only one integration can be established at a time. To change the account scope (e.g., to add or remove teams), re-establish the integration from scratch. personal.png team.png ## **Step 5: Wait for Metrics Availability** Once the integration is completed, metrics from QStash (publish counts, success/error rates, retries, DLQ, schedule executions) will start appearing in Datadog dashboards shortly. upstash-dashboard.png ## **Step 6: Datadog Integration Removal Process** From Datadog → Integrations → Upstash, click "Remove" to break the connection. ### Confirm Removal Upstash will stop publishing metrics after removal. Ensure any Datadog API keys/configurations for this integration are also removed on the Datadog side. ## **Conclusion** You’ve connected Datadog with Upstash QStash. Explore Datadog dashboards to monitor message delivery performance and reliability. If you need help, contact support.
# LLM - OpenAI Source: https://upstash.com/docs/qstash/integrations/llm QStash has built-in support for calling LLM APIs. This allows you to take advantage of QStash features such as retries, callbacks, and batching while using LLM APIs. QStash is especially useful for LLM processing because LLM response times are often highly variable. When accessing LLM APIs from serverless runtimes, invocation timeouts are a common issue. QStash offers an HTTP timeout of 2 hours, which is sufficient for most LLM use cases. By using callbacks and workflows, you can easily manage the asynchronous nature of LLM APIs. ## QStash LLM API You can publish (or enqueue) a single LLM request or a batch of LLM requests using all existing QStash features natively. To do this, specify the destination `api` as `llm` with a valid provider. The body of the published or enqueued message should contain a valid chat completion request. For these integrations, you must specify the `Upstash-Callback` header so that you can process the response asynchronously. Note that streaming chat completions cannot be used with these integrations. Use [the chat API](#chat-api) for streaming completions. All the examples below can be used with **OpenAI-compatible LLM providers**.
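The message body is just an ordinary chat-completion payload, independent of how it is sent (publish, enqueue, or batch). A minimal sketch of assembling one — the helper function is illustrative, not part of any SDK; the field names follow the OpenAI chat-completion format used throughout this page:

```python
def chat_completion_body(model: str, prompt: str) -> dict:
    """Build the chat-completion payload that QStash forwards to the provider."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The same body shape works for publish_json, enqueue_json, and batch_json calls
body = chat_completion_body("gpt-3.5-turbo", "Write a hello world program in Rust.")
```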
### Publishing a Chat Completion Request ```js JavaScript theme={"system"} import { Client, openai } from "@upstash/qstash"; const client = new Client({ token: "", }); const result = await client.publishJSON({ api: { name: "llm", provider: openai({ token: "_OPEN_AI_TOKEN_" }) }, body: { model: "gpt-3.5-turbo", messages: [ { role: "user", content: "Write a hello world program in Rust.", }, ], }, callback: "https://abc.requestcatcher.com/", }); console.log(result); ``` ```python Python theme={"system"} from qstash import QStash from qstash.chat import openai q = QStash("") result = q.message.publish_json( api={"name": "llm", "provider": openai("")}, body={ "model": "gpt-3.5-turbo", "messages": [ { "role": "user", "content": "Write a hello world program in Rust.", } ], }, callback="https://abc.requestcatcher.com/", ) print(result) ``` ### Enqueueing a Chat Completion Request ```js JavaScript theme={"system"} import { Client, openai } from "@upstash/qstash"; const client = new Client({ token: "", }); const result = await client.queue({ queueName: "queue-name" }).enqueueJSON({ api: { name: "llm", provider: openai({ token: "_OPEN_AI_TOKEN_" }) }, body: { model: "gpt-3.5-turbo", messages: [ { role: "user", content: "Write a hello world program in Rust.", }, ], }, callback: "https://abc.requestcatcher.com", }); console.log(result); ``` ```python Python theme={"system"} from qstash import QStash from qstash.chat import openai q = QStash("") result = q.message.enqueue_json( queue="queue-name", api={"name": "llm", "provider": openai("")}, body={ "model": "gpt-3.5-turbo", "messages": [ { "role": "user", "content": "Write a hello world program in Rust.", } ], }, callback="https://abc.requestcatcher.com", ) print(result) ``` ### Sending Chat Completion Requests in Batches ```js JavaScript theme={"system"} import { Client, openai } from "@upstash/qstash"; const client = new Client({ token: "", }); const result = await client.batchJSON([ { api: { name: "llm", provider:
openai({ token: "_OPEN_AI_TOKEN_" }) }, body: { ... }, callback: "https://abc.requestcatcher.com", }, ... ]); console.log(result); ``` ```python Python theme={"system"} from qstash import QStash from qstash.chat import openai q = QStash("") result = q.message.batch_json( [ { "api": {"name": "llm", "provider": openai("")}, "body": {...}, "callback": "https://abc.requestcatcher.com", }, ... ] ) print(result) ``` ```shell curl theme={"system"} curl "https://qstash.upstash.io/v2/batch" \ -X POST \ -H "Authorization: Bearer QSTASH_TOKEN" \ -H "Content-Type: application/json" \ -d '[ { "destination": "api/llm", "body": {...}, "callback": "https://abc.requestcatcher.com" }, ... ]' ``` ### Retrying After Rate Limit Resets When rate limits are exceeded, QStash automatically schedules the retry of published or enqueued chat completion tasks based on the reset time of the rate limit. This avoids premature retries that are guaranteed to fail while the rate limit is still in effect. ## Analytics via Helicone Helicone is a powerful observability platform that provides valuable insights into your LLM usage. Integrating Helicone with QStash is straightforward. To enable Helicone observability in QStash, you simply need to pass your Helicone API key when initializing your model. Here's how to do it for both custom models and OpenAI: ```ts theme={"system"} import { Client, custom } from "@upstash/qstash"; const client = new Client({ token: "", }); await client.publishJSON({ api: { name: "llm", provider: custom({ token: "XXX", baseUrl: "https://api.together.xyz", }), analytics: { name: "helicone", token: process.env.HELICONE_API_KEY!
}, }, body: { model: "meta-llama/Llama-3-8b-chat-hf", messages: [ { role: "user", content: "hello", }, ], }, callback: "https://oz.requestcatcher.com/", }); ``` # n8n with QStash Source: https://upstash.com/docs/qstash/integrations/n8n Leverage your n8n workflows with Upstash QStash. Here is how to make QStash requests using the HTTP Request node. ### Step 1: Set Up an n8n Project 1. Go to [https://n8n.io](https://n8n.io) and create a new project. 2. Create a Webhook trigger with default settings; this will be our entry point. 3. Create an HTTP Request node. *** ### Step 2: Import QStash Configurations to the HTTP Node 1. Go to the Upstash Console and open the QStash Request Builder tab. 2. Fill out the fields to create a QStash request (Publish, Enqueue, or Schedule). 3. Copy the cURL snippet created for you, representing your request. 4. Back in n8n, in the HTTP Request node's Parameters tab, use Import cURL. 5. Paste the cURL snippet you copied from the console and let n8n fill out the form for you. *** ### Step 3: Test the Workflow 1. Execute the workflow. 2. Visit the webhook URL. 3. That's it! You can check the logs in the QStash Console to confirm your QStash request is working. # Pipedream Source: https://upstash.com/docs/qstash/integrations/pipedream Build and run workflows with 1000s of open source triggers and actions across 900+ apps. [Pipedream](https://pipedream.com) allows you to build and run workflows with 1000s of open source triggers and actions across 900+ apps. Check out the [official integration](https://pipedream.com/apps/qstash). ## Trigger a Pipedream workflow from a QStash topic message This is a step-by-step guide on how to trigger a Pipedream workflow from a QStash topic message. Alternatively, [click here](https://pipedream.com/new?h=tch_3egfAX) to create a new workflow with this QStash topic trigger added. ### 1.
Create a Topic in QStash If you haven't already, create a **Topic** in the [QStash dashboard](https://console.upstash.com/qstash?tab=topics). ### 2. Create a new Pipedream workflow Sign into [Pipedream](https://pipedream.com) and create a new workflow. ### 3. Add QStash Topic Message as a trigger In the workflow **Trigger**, search for QStash and select the **Create Topic Endpoint** trigger. ![Select the QStash Create Topic Endpoint trigger](https://res.cloudinary.com/pipedreamin/image/upload/v1664298855/docs/components/CleanShot_2022-09-27_at_13.13.56_x6gzgk.gif) Then, connect your QStash account by clicking the QStash prop and retrieving your token from the [QStash dashboard](https://console.upstash.com/qstash?tab=details). After connecting your QStash account, click the **Topic** prop; a dropdown will appear containing the QStash topics on your account. Then *click* the specific topic to listen to for new messages. ![Selecting a QStash topic to subscribe to](https://res.cloudinary.com/pipedreamin/image/upload/v1664299016/docs/components/CleanShot_2022-09-27_at_13.16.35_rewzbo.gif) Finally, *click* **Continue**. Pipedream will create a unique HTTP endpoint and add it to your QStash topic. ### 4. Test with a sample message Use the *Request Builder* in the [QStash dashboard](https://console.upstash.com/qstash?tab=details) to publish a test message to your topic. Alternatively, you can use the **Create topic message** action in a Pipedream workflow to send a message to your topic. *Don't forget* to use this action in a separate workflow, otherwise you might cause an infinite loop of messages between QStash and Pipedream. ### 5. Add additional steps Add additional steps to the workflow by clicking the plus icon beneath the Trigger step.
Build a workflow with the 1,000+ pre-built components available in Pipedream, including [Airtable](https://pipedream.com/apps/airtable), [Google Sheets](https://pipedream.com/apps/google-sheets), [Slack](https://pipedream.com/apps/slack) and many more. Alternatively, use [Node.js](https://pipedream.com/docs/code/nodejs) or [Python](https://pipedream.com/docs/code/python) code steps to retrieve, transform, or send data to other services. ### 6. Deploy your Pipedream workflow After you're satisfied with your changes, click the **Deploy** button in the top right of your Pipedream workflow. Your deployed workflow will now automatically process new messages to your QStash topic. ### Video tutorial If you prefer video, you can check out this tutorial by [pipedream](https://pipedream.com). [![Video](https://img.youtube.com/vi/-oXlWuxNG5A/0.jpg)](https://www.youtube.com/watch?v=-oXlWuxNG5A) ## Trigger a Pipedream workflow from a QStash endpoint message This is a step-by-step guide on how to trigger a Pipedream workflow from a QStash endpoint message. Alternatively, [click here](https://pipedream.com/new?h=tch_m5ofX6) to create a pre-configured workflow with the HTTP trigger and QStash webhook verification step already added. ### 1. Create a new Pipedream workflow Sign into [Pipedream](https://pipedream.com) and create a new workflow. ### 2. Configure the workflow with an HTTP trigger In the workflow **Trigger**, select the **New HTTP / Webhook Requests** option. ![Create new HTTP Webhook trigger](https://res.cloudinary.com/pipedreamin/image/upload/v1664296111/docs/components/CleanShot_2022-09-27_at_12.27.42_cqzolg.png) Pipedream will create a unique HTTP endpoint for your workflow. Then configure the HTTP trigger to *return a custom response*.
By default, Pipedream always returns a 200 response; configuring a custom response allows the workflow to return a non-200 status so that QStash retries the message if an error occurs during execution. ![Configure the webhook to return a custom response](https://res.cloudinary.com/pipedreamin/image/upload/v1664296210/docs/components/CleanShot_2022-09-27_at_12.29.45_jbwtcm.png) Lastly, set the **Event Body** to be a **Raw request**. This will make sure the QStash verify webhook action receives the data in the correct format. ![Set the event body to a raw body](https://res.cloudinary.com/pipedreamin/image/upload/v1664302540/docs/components/CleanShot_2022-09-27_at_14.15.15_o4xinz.png) ### 3. Test with a sample message Use the *Request Builder* in the [QStash dashboard](https://console.upstash.com/qstash?tab=details) to publish a test message to your workflow's HTTP endpoint. Alternatively, you can use the **Create topic message** action in a Pipedream workflow to send a message. *Don't forget* to use this action in a separate workflow, otherwise you might cause an infinite loop of messages between QStash and Pipedream. ### 4. Verify the QStash webhook Pipedream has a pre-built QStash action that will verify the content of incoming webhooks from QStash. First, search for **QStash** in the step search bar, then select the QStash app. Of the available actions, select the **Verify Webhook** action. Then connect your QStash account and select the **HTTP request** prop. In the dropdown, click **Enter custom expression** and then paste in `{{ steps.trigger.event }}`. This step will automatically verify the incoming HTTP requests and exit the workflow early if requests are not from QStash. ### 5. Add additional steps Add additional steps to the workflow by clicking the plus icon beneath the Trigger step.
Build a workflow with the 1,000+ pre-built components available in Pipedream, including [Airtable](https://pipedream.com/apps/airtable), [Google Sheets](https://pipedream.com/apps/google-sheets), [Slack](https://pipedream.com/apps/slack) and many more. Alternatively, use [Node.js](https://pipedream.com/docs/code/nodejs) or [Python](https://pipedream.com/docs/code/python) code steps to retrieve, transform, or send data to other services. ### 6. Return a 200 response In the final step of your workflow, return a 200 response by adding a new step and selecting **Return an HTTP Response**. ![Returning an HTTP response](https://res.cloudinary.com/pipedreamin/image/upload/v1664296812/docs/components/CleanShot_2022-09-27_at_12.39.25_apkngf.png) This will generate Node.js code to return an HTTP response to QStash using the `$.respond` helper in Pipedream. ### 7. Deploy your Pipedream workflow After you're satisfied with your changes, click the **Deploy** button in the top right of your Pipedream workflow. Your deployed workflow will now automatically process new messages to your QStash topic. ### Video tutorial If you prefer video, you can check out this tutorial by [pipedream](https://pipedream.com). [![Video](https://img.youtube.com/vi/uG8eO7BNok4/0.jpg)](https://youtu.be/uG8eO7BNok4) # Prometheus - Upstash QStash Integration Source: https://upstash.com/docs/qstash/integrations/prometheus To monitor your QStash metrics in Prometheus and visualize them in Grafana, follow these steps: **Integration Scope** The Upstash Prometheus integration is available as part of the Prod Pack. ## **Step 1: Enable Prometheus in Upstash Console** 1. Open the Upstash Console and navigate to QStash. 2. Go to Settings → Monitoring. 3. Enable Prometheus to allow scraping QStash metrics. configuration.png ## **Step 2: Copy Monitoring Token** 1. After enabling, a monitoring token is generated and displayed. 2. Copy the token. It will be used to authenticate Prometheus requests.
**Header Format** Send the token as `Authorization: Bearer `. monitoring-token.png ## **Step 3: Configure Prometheus (via Grafana Data Source)** 1. In Grafana, add a Prometheus data source. 2. Set the address to `https://api.upstash.com/monitoring/prometheus`. 3. In HTTP headers, add the monitoring token. datasource.png headers.png Click Test and Save. datasource-final.png ## **Step 4: Import Dashboard** You can use the Upstash Grafana dashboard to visualize QStash metrics. Open the import dialog and use: Upstash QStash Dashboard grafana-dashboard.png ## **Conclusion** You’ve integrated QStash with Prometheus. Use Grafana to explore message throughput, retries, DLQ, schedules, and Upstash Workflows. If you encounter issues, contact support. # Email - Resend Source: https://upstash.com/docs/qstash/integrations/resend The `qstash-js` SDK offers an integration to easily send emails using [Resend](https://resend.com/), streamlining email delivery in your applications. ## Basic Email Sending To send a single email, use the `publishJSON` method with the `resend` provider. Ensure your `QSTASH_TOKEN` and `RESEND_TOKEN` are set for authentication. ```typescript theme={"system"} import { Client, resend } from "@upstash/qstash"; const client = new Client({ token: "" }); await client.publishJSON({ api: { name: "email", provider: resend({ token: "" }), }, body: { from: "Acme <onboarding@resend.dev>", to: ["delivered@resend.dev"], subject: "Hello World", html: "<p>It works!</p>", }, }); ``` In the `body` field, specify any parameters supported by [the Resend Send Email API](https://resend.com/docs/api-reference/emails/send-email), such as `from`, `to`, `subject`, and `html`. ## Sending Batch Emails To send multiple emails at once, use Resend’s [Batch Email API](https://resend.com/docs/api-reference/emails/send-batch-emails). Set the `batch` option to `true` to enable batch sending. Each email configuration is defined as an object within the `body` array. ```typescript theme={"system"} await client.publishJSON({ api: { name: "email", provider: resend({ token: "", batch: true }), }, body: [ { from: "Acme <onboarding@resend.dev>", to: ["foo@gmail.com"], subject: "Hello World", html: "<p>It works!</p>", }, { from: "Acme <onboarding@resend.dev>", to: ["bar@outlook.com"], subject: "World Hello", html: "<p>It works!</p>", }, ], }); ``` Each entry in the `body` array represents an individual email, allowing customization of `from`, `to`, `subject`, `html`, and any other Resend-supported fields. # Development Server License Agreement Source: https://upstash.com/docs/qstash/misc/license ## 1. Purpose and Scope This software is a development server implementation of the QStash API ("Development Server") provided for testing and development purposes only. It is not intended for production use, commercial deployment, or as a replacement for the official QStash service. ## 2. Usage Restrictions By using this Development Server, you agree to the following restrictions: a) The Development Server may only be used for: * Local development and testing * Continuous Integration (CI) testing * Educational purposes * API integration development b) The Development Server may NOT be used for: * Production environments * Commercial service offerings * Public-facing applications * Operating as a Software-as-a-Service (SaaS) * Reselling or redistributing as a service ## 3. Restrictions on Modification and Reverse Engineering You may not: * Decompile, reverse engineer, disassemble, or attempt to derive the source code of the Development Server * Modify, adapt, translate, or create derivative works based upon the Development Server * Remove, obscure, or alter any proprietary rights notices within the Development Server * Attempt to bypass or circumvent any technical limitations or security measures in the Development Server ## 4. Technical Limitations Users acknowledge that the Development Server: * Operates entirely in-memory without persistence * Provides limited functionality compared to the official service * Offers no data backup or recovery mechanisms * Has no security guarantees * May have performance limitations * Does not implement all features of the official service ## 5. Warranty Disclaimer THE DEVELOPMENT SERVER IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED.
THE AUTHORS OR COPYRIGHT HOLDERS SHALL NOT BE LIABLE FOR ANY CLAIMS, DAMAGES, OR OTHER LIABILITY ARISING FROM THE USE OF THE SOFTWARE IN VIOLATION OF THIS LICENSE. ## 6. Termination Your rights under this license will terminate automatically if you fail to comply with any of its terms. Upon termination, you must cease all use of the Development Server. ## 7. Acknowledgment By using the Development Server, you acknowledge that you have read this license, understand it, and agree to be bound by its terms. # API Examples Source: https://upstash.com/docs/qstash/overall/apiexamples ### Use QStash via: * cURL * [Typescript SDK](https://github.com/upstash/sdk-qstash-ts) * [Python SDK](https://github.com/upstash/qstash-python) Below are some examples to get you started. You can also check the [how to](/qstash/howto/publishing) section for more technical details or the [API reference](/qstash/api/messages) to test the API. ### Publish a message to an endpoint Simple example to [publish](/qstash/howto/publishing) a message to an endpoint. ```shell theme={"system"} curl -XPOST \ -H 'Authorization: Bearer XXX' \ -H "Content-type: application/json" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/publish/https://example.com' ``` ```typescript theme={"system"} const client = new Client({ token: "" }); await client.publishJSON({ url: "https://example.com", body: { hello: "world", }, }); ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.message.publish_json( url="https://example.com", body={ "hello": "world", }, ) # Async version is also available ``` ### Publish a message to a URL Group The [URL Group](/qstash/features/url-groups) is a way to publish a message to multiple endpoints in a fan out pattern. 
```shell theme={"system"} curl -XPOST \ -H 'Authorization: Bearer XXX' \ -H "Content-type: application/json" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/publish/myUrlGroup' ``` ```typescript theme={"system"} const client = new Client({ token: "" }); await client.publishJSON({ urlGroup: "myUrlGroup", body: { hello: "world", }, }); ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.message.publish_json( url_group="my-url-group", body={ "hello": "world", }, ) # Async version is also available ``` ### Publish a message with 5 minutes delay Add a delay to the message to be published. After QStash receives the message, it will wait for the specified time (5 minutes in this example) before sending the message to the endpoint. ```shell theme={"system"} curl -XPOST \ -H 'Authorization: Bearer XXX' \ -H "Content-type: application/json" \ -H "Upstash-Delay: 5m" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/publish/https://example.com' ``` ```typescript theme={"system"} const client = new Client({ token: "" }); await client.publishJSON({ url: "https://example.com", body: { hello: "world", }, delay: 300, }); ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.message.publish_json( url="https://example.com", body={ "hello": "world", }, delay="5m", ) # Async version is also available ``` ### Send a custom header Add a custom header to the message to be published. 
```shell theme={"system"} curl -XPOST \ -H 'Authorization: Bearer XXX' \ -H 'Upstash-Forward-My-Header: my-value' \ -H "Content-type: application/json" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/publish/https://example.com' ``` ```typescript theme={"system"} const client = new Client({ token: "" }); await client.publishJSON({ url: "https://example.com", body: { hello: "world", }, headers: { "My-Header": "my-value", }, }); ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.message.publish_json( url="https://example.com", body={ "hello": "world", }, headers={ "My-Header": "my-value", }, ) # Async version is also available ``` ### Schedule to run once a day ```shell theme={"system"} curl -XPOST \ -H 'Authorization: Bearer XXX' \ -H "Upstash-Cron: 0 0 * * *" \ -H "Content-type: application/json" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/schedules/https://example.com' ``` ```typescript theme={"system"} const client = new Client({ token: "" }); await client.schedules.create({ destination: "https://example.com", cron: "0 0 * * *", }); ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.schedule.create( destination="https://example.com", cron="0 0 * * *", ) # Async version is also available ``` ### Publish messages to a FIFO queue By default, messages are published concurrently. With a [queue](/qstash/features/queues), you can enqueue messages in FIFO order.
```shell theme={"system"} curl -XPOST -H 'Authorization: Bearer XXX' \ -H "Content-type: application/json" \ 'https://qstash.upstash.io/v2/enqueue/my-queue/https://example.com' -d '{"message":"Hello, World!"}' ``` ```typescript theme={"system"} const client = new Client({ token: "" }); const queue = client.queue({ queueName: "my-queue" }) await queue.enqueueJSON({ url: "https://example.com", body: { "Hello": "World" } }) ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.message.enqueue_json( queue="my-queue", url="https://example.com", body={ "Hello": "World", }, ) # Async version is also available ``` ### Publish messages in a [batch](/qstash/features/batch) Publish multiple messages in a single request. ```shell theme={"system"} curl -XPOST https://qstash.upstash.io/v2/batch \ -H 'Authorization: Bearer XXX' \ -H "Content-type: application/json" \ -d ' [ { "destination": "https://example.com/destination1" }, { "destination": "https://example.com/destination2" } ]' ``` ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const res = await client.batchJSON([ { url: "https://example.com/destination1", }, { url: "https://example.com/destination2", }, ]); ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.message.batch_json( [ { "url": "https://example.com/destination1", }, { "url": "https://example.com/destination2", }, ] ) # Async version is also available ``` ### Set max retry count to 3 Configure how many times QStash should retry to send the message to the endpoint before sending it to the [dead letter queue](/qstash/features/dlq). 
```shell theme={"system"} curl -XPOST \ -H 'Authorization: Bearer XXX' \ -H "Upstash-Retries: 3" \ -H "Content-type: application/json" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/publish/https://example.com' ``` ```typescript theme={"system"} const client = new Client({ token: "" }); await client.publishJSON({ url: "https://example.com", body: { hello: "world", }, retries: 3, }); ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.message.publish_json( url="https://example.com", body={ "hello": "world", }, retries=3, ) # Async version is also available ``` ### Set custom retry delay Configure the delay between retry attempts when message delivery fails. [By default, QStash uses exponential backoff](/qstash/features/retry). You can customize this using mathematical expressions with the special variable `retried` (current retry attempt count starting from 0). ```shell theme={"system"} curl -XPOST \ -H 'Authorization: Bearer XXX' \ -H "Upstash-Retries: 3" \ -H "Upstash-Retry-Delay: pow(2, retried) * 1000" \ -H "Content-type: application/json" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/publish/https://example.com' ``` ```typescript theme={"system"} const client = new Client({ token: "" }); await client.publishJSON({ url: "https://example.com", body: { hello: "world", }, retries: 3, retryDelay: "pow(2, retried) * 1000", // 2^retried * 1000ms }); ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.message.publish_json( url="https://example.com", body={ "hello": "world", }, retries=3, retry_delay="pow(2, retried) * 1000", # 2^retried * 1000ms ) # Async version is also available ``` **Supported functions for retry delay expressions:** * `pow` - Power function * `sqrt` - Square root * `abs` - Absolute value * `exp` - Exponential * `floor` - Floor function * `ceil` - Ceiling function * `round` - Rounding function * `min` - Minimum of values * `max` - Maximum of values 
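Since this expression grammar happens to overlap with Python's, you can preview the delay schedule an expression produces before publishing. A minimal sketch (illustrative only: QStash evaluates the expression on its servers, and `preview_delays` is a hypothetical helper, not part of the SDK):

```python
import math

# Functions permitted in retry-delay expressions, mapped to Python equivalents.
ALLOWED = {
    "pow": pow, "sqrt": math.sqrt, "abs": abs, "exp": math.exp,
    "floor": math.floor, "ceil": math.ceil, "round": round,
    "min": min, "max": max,
}

def preview_delays(expression: str, attempts: int) -> list:
    """Evaluate a retry-delay expression (in milliseconds) for each retry attempt."""
    return [
        eval(expression, {"__builtins__": {}}, {**ALLOWED, "retried": attempt})
        for attempt in range(attempts)
    ]

# Exponential backoff: 2^retried * 1000 ms
print(preview_delays("pow(2, retried) * 1000", 4))  # [1000, 2000, 4000, 8000]
```

Printing a schedule like this is a quick sanity check that your expression grows (or caps) the way you expect before you attach it to a real publish.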
**Examples:** * `1000` - Fixed 1 second delay * `1000 * (1 + retried)` - Linear backoff: 1s, 2s, 3s, 4s... * `pow(2, retried) * 1000` - Exponential backoff: 1s, 2s, 4s, 8s... * `max(1000, pow(2, retried) * 100)` - Exponential with minimum 1s delay ### Set callback URL Receive the response from the endpoint and have it delivered to the specified callback URL. If delivery to the endpoint fails after all retries, QStash sends the result to the failure callback URL instead. ```shell theme={"system"} curl -XPOST \ -H 'Authorization: Bearer XXX' \ -H "Content-type: application/json" \ -H "Upstash-Callback: https://example.com/callback" \ -H "Upstash-Failure-Callback: https://example.com/failure" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/publish/https://example.com' ``` ```typescript theme={"system"} const client = new Client({ token: "" }); await client.publishJSON({ url: "https://example.com", body: { hello: "world", }, callback: "https://example.com/callback", failureCallback: "https://example.com/failure", }); ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.message.publish_json( url="https://example.com", body={ "hello": "world", }, callback="https://example.com/callback", failure_callback="https://example.com/failure", ) # Async version is also available ``` ### Get message logs Retrieve logs for all messages that have been published (filtering is also available).
```shell theme={"system"} curl https://qstash.upstash.io/v2/logs \ -H "Authorization: Bearer XXX" ``` ```typescript theme={"system"} const client = new Client({ token: "" }); const logs = await client.logs() ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.event.list() # Async version is also available ``` ### List all schedules ```shell theme={"system"} curl https://qstash.upstash.io/v2/schedules \ -H "Authorization: Bearer XXX" ``` ```typescript theme={"system"} const client = new Client({ token: "" }); const scheds = await client.schedules.list(); ``` ```python theme={"system"} from qstash import QStash client = QStash("") client.schedule.list() # Async version is also available ``` # Changelog Source: https://upstash.com/docs/qstash/overall/changelog We have moved the roadmap and the changelog to [GitHub Discussions](https://github.com/orgs/upstash/discussions) starting from October 2025. Now you can follow `In Progress` features. You can see that your `Feature Requests` are recorded. You can vote for them and comment with your specific use cases to shape the feature to your needs. * **TypeScript SDK (`qstash-js`):** * `Label` feature is added. This enables our users to label their publishes so that: * Logs can be filtered by a user-given label. * The DLQ can be filtered by a user-given label. * **Console:** * `Flat view` on the `Logs` tab is removed. The purpose is to simplify the `Logs` tab. All the information is already available on the default (grouped) view. Let us know if there is something missing via Discord/Support so that we can fill in the gaps. * **Console:** * Added ability to hide/show columns on the Schedules tab. * Local mode is added to enable our users to use the console with their local development environment. See [docs](/qstash/howto/local-development) for details. * **TypeScript SDK (`qstash-js`):** * Added `retryDelay` option to dynamically program the retry duration of a failed message.
The new parameter is available in publish/batch/enqueue/schedules. See [here](/qstash/features/retry#custom-retry-delay) * Full changelog, including all fixes, is available [here](https://github.com/upstash/qstash-js/compare/v2.8.1...v2.8.2). * No new features for QStash this month. We are mostly focused on stability and performance. * **TypeScript SDK (`qstash-js`):** * Added `flow control period` and deprecated `ratePerSecond`. See [here](https://github.com/upstash/qstash-js/pull/237). * Added `IN_PROGRESS` state filter. See [here](https://github.com/upstash/qstash-js/pull/236). * Full changelog, including all fixes, is available [here](https://github.com/upstash/qstash-js/compare/v2.7.23...v2.8.1). * **Python SDK (`qstash-py`):** * Added `IN_PROGRESS` state filter. See [here](https://github.com/upstash/qstash-js/pull/236). * Added various missing features: Callback Headers, Schedule with Queue, Overwrite Schedule ID, Flow Control Period. See [here](https://github.com/upstash/qstash-py/pull/41). * Full changelog, including all fixes, is available [here](https://github.com/upstash/qstash-py/compare/v2.0.5...v3.0.0). * **Console:** * Improved logs tab behavior to prevent collapsing or unnecessary refreshes, increasing usability. * **QStash Server:** * Added support for filtering messages by `FlowControlKey` (Console and SDK support in progress). * Applied performance improvements for bulk cancel operations. * Applied performance improvements for bulk publish operations. * Fixed an issue where scheduled publishes with queues would reset queue parallelism to 1. * Added support for updating existing queue parallelisms even when the max queue limit is reached. * Applied several additional performance optimizations. * **QStash Server:** * Added support for `flow-control period`, allowing users to define a period for a given rate—up to 1 week.\ Previously, the period was fixed at 1 second.\ For example, `rate: 3 period: 1d` means publishes will be throttled to 3 per day. 
* Applied several performance optimizations. * **Console:** * Added `IN_PROGRESS` as a filter option when grouping by message ID, making it easier to query in-flight messages.\ See [here](/qstash/howto/debug-logs#lifecycle-of-a-message) for an explanation of message states. * **TypeScript SDK (`qstash-js`):** * Renamed `events` to `logs` for clarity when referring to QStash features. `client.events()` is now deprecated, and `client.logs()` has been introduced. See [details here](https://github.com/upstash/qstash-js/pull/225). * For all fixes, see the full changelog [here](https://github.com/upstash/qstash-js/compare/v2.7.22...v2.7.23). * **QStash Server:** * Fixed an issue where messages with delayed callbacks were silently failing. Now, such messages are explicitly rejected during insertion. * **Python SDK (`qstash-py`):** * Flow Control Parallelism and Rate. See [here](https://github.com/upstash/qstash-py/pull/36) * Addressed a few minor bugs. See the full changelog [here](https://github.com/upstash/qstash-py/compare/v2.0.3...v2.0.5) * **QStash Server:** * Introduced RateLimit and Parallelism controls to manage the rate and concurrency of message processing. Learn more [here](/qstash/features/flowcontrol). * Improved connection timeout detection mechanism to enhance scalability. * Added several new features to better support webhook use cases: * Support for saving headers in a URL group. See [here](/qstash/howto/webhook#2-url-group). * Ability to pass configuration parameters via query strings instead of headers. See [here](/qstash/howto/webhook#1-publish). * Introduced a new `Upstash-Header-Forward` header to forward all headers from the incoming request. See [here](/qstash/howto/webhook#1-publish). * **Python SDK (`qstash-py`):** * Addressed a few minor bugs. See the full changelog [here](https://github.com/upstash/qstash-py/compare/v2.0.2...v2.0.3). * **Local Development Server:** * The local development server is now publicly available. 
This server allows you to test your Qstash setup locally. Learn more about the local development server [here](/qstash/howto/local-development). * **Console:** * Separated the Workflow and QStash consoles for an improved user experience. * Separated their DLQ messages as well. * **QStash Server:** * The core team focused on RateLimit and Parallelism features. These features are ready on the server and will be announced next month after the documentation and SDKs are completed. * **TypeScript SDK (`qstash-js`):** * Added global headers to the client, which are automatically included in every publish request. * Resolved issues related to the Anthropics and Resend integrations. * Full changelog, including all fixes, is available [here](https://github.com/upstash/qstash-js/compare/v2.7.17...v2.7.20). * **Python SDK (`qstash-py`):** * Introduced support for custom `schedule_id` values. * Enabled passing headers to callbacks using the `Upstash-Callback-Forward-...` prefix. * Full changelog, including all fixes, is available [here](https://github.com/upstash/qstash-py/compare/v2.0.0...v2.0.1). * **Qstash Server:** * Finalized the local development server, now almost ready for public release. * Improved error reporting by including the field name in cases of invalid input. * Increased the maximum response body size for batch use cases to 100 MB per REST call. * Extended event retention to up to 14 days, instead of limiting to the most recent 10,000 events. Learn more on the [Pricing page](https://upstash.com/pricing/qstash). * **TypeScript SDK (qstash-js):** * Added support for the Anthropics provider and refactored the `api` field of `publishJSON`. See the documentation [here](/qstash/integrations/anthropic). * Full changelog, including fixes, is available [here](https://github.com/upstash/qstash-js/compare/v2.7.14...v2.7.17). * **Qstash Server:** * Fixed a bug in schedule reporting. 
The Upstash-Caller-IP header now correctly reports the user’s IP address instead of an internal IP for schedules. * Validated the scheduleId parameter. The scheduleId must now be alphanumeric or include hyphens, underscores, or periods. * Added filtering support to bulk message cancellation. Users can now delete messages matching specific filters. See Rest API [here](/qstash/api/messages/bulk-cancel). * Resolved a bug that caused the DLQ Console to become unusable when data was too large. * Fixed an issue with queues that caused them to stop during temporary network communication problems with the storage layer. * **TypeScript SDK (qstash-js):** * Fixed a bug on qstash-js where we skipped using the next signing key when the current signing key fails to verify the `upstash-signature`. Released with qstash-js v2.7.14. * Added resend API. See [here](/qstash/integrations/resend). Released with qstash-js v2.7.14. * Added `schedule to queues` feature to the qstash-js. See [here](/qstash/features/schedules#scheduling-to-a-queue). Released with qstash-js v2.7.14. * **Console:** * Optimized the console by trimming event bodies, reducing resource usage and enabling efficient querying of events with large payloads. * **Qstash Server:** * Began development on a new architecture to deliver faster event processing on the server. * Added more fields to events in the [REST API](/qstash/api/events/list), including `Timeout`, `Method`, `Callback`, `CallbackHeaders`, `FailureCallback`, `FailureCallbackHeaders`, and `MaxRetries`. * Enhanced retry backoff logic by supporting additional headers for retry timing. Along with `Retry-After`, Qstash now recognizes `X-RateLimit-Reset`, `X-RateLimit-Reset-Requests`, and `X-RateLimit-Reset-Tokens` as backoff time indicators. See [here](/qstash/features/retry#retry-after-headers) for more details. * Improved performance, resulting in reduced latency for average publish times. * Set the `nbf` (not before) claim on Signing Keys to 0. 
This claim specifies the time before which the JWT must not be processed. Previously, this was incorrectly used, causing validation issues when there were minor clock discrepancies between systems. * Fixed queue name validation. Queue names must now be alphanumeric or include hyphens, underscores, or periods, consistent with other API resources. * Resolved bugs related to [overwriting a schedule](/qstash/features/schedules#overwriting-an-existing-schedule). * Released [Upstash Workflow](/qstash/workflow). * Fixed a bug where paused schedules were mistakenly resumed after a process restart (typically occurring during new version releases). * Big update on the UI, where all the REST functionality is now exposed in the Console. * Added the `order` query parameter to the [/v2/events](/qstash/api/events/list) and [/v2/dlq](/qstash/api/dlq/listMessages) endpoints. * Added [ability to configure](/qstash/features/callbacks#configuring-callbacks) callbacks (and failure callbacks) * A critical fix for the schedule pause and resume REST APIs, where the endpoints were not working at all before the fix. * Pause and resume for scheduled messages * Pause and resume for queues * [Bulk cancel](/qstash/api/messages/bulk-cancel) messages * Body and headers on [events](/qstash/api/events/list) * Fixed inaccurate queue lag * [Retry-After](/qstash/features/retry#retry-after-header) support for rate-limited endpoints * [Upstash-Timeout](/qstash/api/publish) header * [Queues and parallelism](/qstash/features/queues) * [Event filtering](/qstash/api/events/list) * [Batch publish messages](/qstash/api/messages/batch) * [Bulk delete](/qstash/api/dlq/deleteMessages) for DLQ * Added [failure callback support](/qstash/api/schedules/create) to scheduled messages * Added Upstash-Caller-IP header to outgoing messages.
See [the receiving documentation](https://upstash.com/docs/qstash/howto/receiving) for all headers * Added Schedule ID to [events](/qstash/api/events/list) and [messages](/qstash/api/messages/get) * Put last response in DLQ * DLQ [get message](/qstash/api/dlq/getMessage) * Pass schedule ID to the header when calling the user's endpoint * Added more information to [callbacks](/qstash/features/callbacks) * Added [Upstash-Failure-Callback](/qstash/features/callbacks#what-is-a-failure-callback) # Compare Source: https://upstash.com/docs/qstash/overall/compare In this section, we will compare QStash with alternative solutions. ### BullMQ BullMQ is a message queue for Node.js based on Redis. BullMQ is an open source project; you can run it yourself. * Using BullMQ in serverless environments is problematic due to the stateless nature of serverless. QStash is designed for serverless environments. * With BullMQ, you need to run a stateful application to consume messages. QStash calls the API endpoints, so you do not need your application to consume messages continuously. * You need to run and maintain BullMQ and Redis yourself. QStash is completely serverless; you maintain nothing and pay only for what you use. ### Zeplo Zeplo is a message queue targeting serverless. Just like QStash, it allows users to queue and schedule HTTP requests. While Zeplo targets serverless, its paid plans have a fixed monthly price of \$39/month. With QStash, the price scales to zero: you do not pay if you are not using it. With Zeplo, you can send messages to a single endpoint. With QStash, in addition to a single endpoint, you can submit messages to a URL Group, which groups one or more endpoints into a single namespace. Zeplo does not have URL Group functionality. ### Quirrel Quirrel is a job queueing service for serverless. It has similar functionality to QStash. Quirrel was acquired by Netlify, and some of its functionality is available as Netlify scheduled functions.
QStash is platform independent, you can use it anywhere. # Prod Pack & Enterprise Source: https://upstash.com/docs/qstash/overall/enterprise Upstash has Prod Pack and Enterprise plans for customers with critical production workloads. Prod Pack and Enterprise plans include additional monitoring and security features in addition to higher capacity limits and more powerful resources. Prod Pack add-on is available for both pay-as-you-go and fixed-price plans. Enterprise plans are custom plans with additional features and higher limits. All features of Prod Pack and Enterprise plan for Upstash QStash are detailed below. ## How to Upgrade You can activate Prod Pack in the QStash settings page in the [Upstash Console](https://upstash.com/dashboard/qstash). For the Enterprise plan, please create a request through the Upstash Console or contact [support@upstash.com](mailto:support@upstash.com). # Prod Pack Features Below QStash features are enabled with Prod Pack. ### Uptime SLA All Prod Pack accounts come with an SLA guaranteeing 99.99% uptime. For mission-critical messaging where uptime is crucial, we recommend Prod Pack plans. Learn more about [Uptime SLA](/common/help/sla). ### SOC-2 Type 2 Compliance & Report Upstash QStash is SOC-2 Type 2 compliant with Prod Pack. Once you enable Prod Pack, you can request access to the report by going to [Upstash Trust Center](https://trust.upstash.com/) or contacting [support@upstash.com](mailto:support@upstash.com). ### Encryption at Rest Encrypts the storage where your QStash message data is persisted and stored. ### Prometheus Metrics Prometheus is an open-source monitoring system widely used for monitoring and alerting in cloud-native and containerized environments. Upstash Prod Pack and Enterprise plans offer Prometheus metrics collection, enabling you to monitor your QStash messages with Prometheus in addition to console metrics. Learn more about [Prometheus integration](/qstash/integrations/prometheus). 
### Datadog Integration Upstash Prod Pack and Enterprise plans include integration with Datadog, allowing you to monitor your QStash messages with Datadog in addition to console metrics. Learn more about [Datadog integration](/qstash/integrations/datadog). # Enterprise Features All Prod Pack features are included in the Enterprise plan. Additionally, Enterprise plans include: ### 100M+ Messages Daily Enterprise plans support 100 million or more messages per day, suitable for high-volume production workloads. ### Unlimited Bandwidth Enterprise plans include unlimited bandwidth, ensuring no data transfer limits for your messaging needs. ### SAML SSO Single Sign-On (SSO) allows you to use your existing identity provider to authenticate users for your Upstash account. This feature is available upon request for Enterprise customers. ### Professional Support with SLA Enterprise plans include access to our professional support with response time SLAs and priority access to our support team. Check out the [support page](/common/help/prosupport) for more details. ### Dedicated Resources for Isolation Enterprise customers receive dedicated resources to ensure isolation and consistent performance for their messaging workloads. # Getting Started Source: https://upstash.com/docs/qstash/overall/getstarted QStash is a **serverless messaging and scheduling solution**. It fits easily into your existing workflow and allows you to build reliable systems without managing infrastructure. Instead of calling an endpoint directly, QStash acts as a middleman between you and an API to guarantee delivery, perform automatic retries on failure, and more. We have a new SDK called [Upstash Workflow](/workflow/getstarted). **Upstash Workflow SDK** is **QStash** simplified for your complex applications * Skip the details of preparing a complex dependent endpoints. * Focus on the essential parts. * Enjoy automatic retries and delivery guarantees. * Avoid platform-specific timeouts. 
Check out [Upstash Workflow Getting Started](/workflow/getstarted) for more. ## Quick Start Check out these Quick Start guides to get started with QStash in your application. Build a Next.js application that uses QStash to start a long-running job on your platform Build a Python application that uses QStash to schedule a daily job that cleans up a database Or continue reading to learn how to send your first message! ## Send your first message **Prerequisite** You need an Upstash account before publishing messages; create one [here](https://console.upstash.com). ### Public API Make sure you have a publicly available HTTP API that you want to send your messages to. If you don't, you can use something like [requestcatcher.com](https://requestcatcher.com/), [webhook.site](https://webhook.site/) or [webhook-test.com](https://webhook-test.com/) to try it out. For example, you can use this URL to test your messages: [https://firstqstashmessage.requestcatcher.com](https://firstqstashmessage.requestcatcher.com) ### Get your token Go to the [Upstash Console](https://console.upstash.com/qstash) and copy the `QSTASH_TOKEN`. ### Publish a message A message can take any shape or form: JSON, XML, binary, or anything else that can be transmitted in the HTTP request body. We do not impose any restrictions other than a size limit of 1 MB (which can be customized at your request). In addition to the request body itself, you can also send HTTP headers. Learn more about this in the [message publishing section](/qstash/howto/publishing).
```bash cURL theme={"system"} curl -XPOST \ -H 'Authorization: Bearer ' \ -H "Content-type: application/json" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/publish/https://' ``` ```bash cURL RequestCatcher theme={"system"} curl -XPOST \ -H 'Authorization: Bearer ' \ -H "Content-type: application/json" \ -d '{ "hello": "world" }' \ 'https://qstash.upstash.io/v2/publish/https://firstqstashmessage.requestcatcher.com/test' ``` Don't worry, we have SDKs for different languages so you don't have to make these requests manually. ### Check Response You should receive a response with a unique message ID. ### Check Message Status Head over to [Upstash Console](https://console.upstash.com/qstash) and go to the `Logs` tab where you can see your message activities. Learn more about different states [here](/qstash/howto/debug-logs). ## Features and Use Cases Run long-running tasks in the background, without blocking your application Schedule messages to be delivered at a time in the future Publish messages to multiple endpoints, in parallel, using URL Groups Enqueue messages to be delivered one by one in the order they were enqueued. Custom rate per second and parallelism limits to avoid overflowing your endpoint. Get a response delivered to your API when a message is delivered Use a Dead Letter Queue to have full control over failed messages Prevent duplicate messages from being delivered Publish, enqueue, or batch chat completion requests using large language models with QStash features. # llms.txt Source: https://upstash.com/docs/qstash/overall/llms-txt # Pricing & Limits Source: https://upstash.com/docs/qstash/overall/pricing Please check our [pricing page](https://upstash.com/pricing/qstash) for the most up-to-date information on pricing and limits.
# Roadmap Source: https://upstash.com/docs/qstash/overall/roadmap We have moved the roadmap and the changelog to [GitHub Discussions](https://github.com/orgs/upstash/discussions) starting from October 2025. Now you can follow `In Progress` features. You can see that your `Feature Requests` are recorded. You can vote for them and comment with your specific use cases to shape the feature to your needs. # Use Cases Source: https://upstash.com/docs/qstash/overall/usecases This section is still a work in progress. We will be adding detailed tutorials for each use case soon. Tell us on [Discord](https://discord.gg/w9SenAtbme) or [X](https://x.com/upstash) what you would like to see here. ### Triggering Next.js Functions on a schedule Create a schedule in QStash that runs every hour and calls a Next.js serverless function hosted on Vercel. ### Reset Billing Cycle in your Database Once a month, reset database entries to start a new billing cycle. ### Fanning out alerts to Slack, email, Opsgenie, etc. Create a QStash URL Group that receives alerts from a single source and delivers them to multiple destinations. ### Send delayed message when a new user signs up Publish delayed messages whenever a new user signs up in your app. After a certain delay (e.g. 10 minutes), QStash will send a request to your API, allowing you to email the user a welcome message. # AWS Lambda (Node) Source: https://upstash.com/docs/qstash/quickstarts/aws-lambda/nodejs ## Setting up a Lambda The [AWS CDK](https://aws.amazon.com/cdk/) is the most convenient way to create a new project on AWS Lambda. For example, it lets you directly define integrations such as API Gateway, a tool to make our Lambda publicly available as an API, in your code.
```bash Terminal theme={"system"}
mkdir my-app
cd my-app
cdk init app -l typescript
npm i esbuild @upstash/qstash
mkdir lambda
touch lambda/index.ts
```

## Webhook verification

### Using the SDK (recommended)

Edit `lambda/index.ts`, the file containing our core Lambda logic:

```ts lambda/index.ts theme={"system"}
import { Receiver } from "@upstash/qstash"
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda"

const receiver = new Receiver({
  currentSigningKey: process.env.QSTASH_CURRENT_SIGNING_KEY ?? "",
  nextSigningKey: process.env.QSTASH_NEXT_SIGNING_KEY ?? "",
})

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const signature = event.headers["upstash-signature"]
  const lambdaFunctionUrl = `https://${event.requestContext.domainName}`

  if (!signature) {
    return {
      statusCode: 401,
      body: JSON.stringify({ message: "Missing signature" }),
    }
  }

  try {
    await receiver.verify({
      signature: signature,
      body: event.body ?? "",
      url: lambdaFunctionUrl,
    })
  } catch (err) {
    return {
      statusCode: 401,
      body: JSON.stringify({ message: "Invalid signature" }),
    }
  }

  // Request is valid, perform business logic
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Request processed successfully" }),
  }
}
```

We'll set the `QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY` environment variables together when deploying our Lambda.

### Manual Verification

In this section, we'll manually verify our incoming QStash requests without additional packages. Also see our [manual verification example](https://github.com/upstash/qstash-examples/tree/main/aws-lambda).

1. Implement the handler function:

```ts lambda/index.ts theme={"system"}
import type { APIGatewayEvent, APIGatewayProxyResult } from "aws-lambda"
import { createHash, createHmac } from "node:crypto"

export const handler = async (
  event: APIGatewayEvent,
): Promise<APIGatewayProxyResult> => {
  const signature = event.headers["upstash-signature"] ?? ""
  const currentSigningKey = process.env.QSTASH_CURRENT_SIGNING_KEY ?? ""
  const nextSigningKey = process.env.QSTASH_NEXT_SIGNING_KEY ?? ""
  const url = `https://${event.requestContext.domainName}`

  try {
    // Try to verify the signature with the current signing key and, if that
    // fails, try the next signing key. This allows you to roll your signing
    // keys once without downtime.
    await verify(signature, currentSigningKey, event.body, url).catch((err) => {
      console.error(
        `Failed to verify signature with current signing key: ${err}`
      )
      return verify(signature, nextSigningKey, event.body, url)
    })
  } catch (err) {
    const message = err instanceof Error ? err.toString() : err
    return {
      statusCode: 400,
      body: JSON.stringify({ error: message }),
    }
  }

  // Add your business logic here

  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Request processed successfully" }),
  }
}
```

2. Implement the `verify` function:

```ts lambda/index.ts theme={"system"}
/**
 * @param jwt - The content of the `upstash-signature` header (JWT)
 * @param signingKey - The signing key to use to verify the signature (Get it from Upstash Console)
 * @param body - The raw body of the request
 * @param url - The public URL of the lambda function
 */
async function verify(
  jwt: string,
  signingKey: string,
  body: string | null,
  url: string
): Promise<void> {
  const split = jwt.split(".")
  if (split.length != 3) {
    throw new Error("Invalid JWT")
  }
  const [header, payload, signature] = split

  if (
    signature !=
    createHmac("sha256", signingKey)
      .update(`${header}.${payload}`)
      .digest("base64url")
  ) {
    throw new Error("Invalid JWT signature")
  }

  // JWT is verified, start looking at payload claims
  const p: {
    sub: string
    iss: string
    exp: number
    nbf: number
    body: string
  } = JSON.parse(Buffer.from(payload, "base64url").toString())

  if (p.iss !== "Upstash") {
    throw new Error(`invalid issuer: ${p.iss}, expected "Upstash"`)
  }
  if (p.sub !== url) {
    throw new Error(`invalid subject: ${p.sub}, expected "${url}"`)
  }

  const now = Math.floor(Date.now() / 1000)
  if (now > p.exp) {
    throw new Error("token has expired")
  }
  if (now < p.nbf) {
    throw new Error("token is not yet valid")
  }

  if (body != null) {
    if (
      p.body.replace(/=+$/, "") !=
      createHash("sha256").update(body).digest("base64url")
    ) {
      throw new Error("body hash does not match")
    }
  }
}
```

You can find the complete example [here](https://github.com/upstash/qstash-examples/blob/main/aws-lambda/typescript-example/index.ts).

## Deploying a Lambda

### Using the AWS CDK (recommended)

Because we used the AWS CDK to initialize our project, deployment is straightforward. Edit the `lib/.ts` file the CDK created when bootstrapping the project. For example, if our Lambda webhook does video processing, it could look like this:

```ts lib/.ts theme={"system"}
import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import { Construct } from "constructs";
import path from "path";
import * as apigateway from 'aws-cdk-lib/aws-apigateway';

export class VideoProcessingStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props)

    // Create the Lambda function
    const videoProcessingLambda = new NodejsFunction(this, 'VideoProcessingLambda', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'handler',
      entry: path.join(__dirname, '../lambda/index.ts'),
    });

    // Create the API Gateway
    const api = new apigateway.RestApi(this, 'VideoProcessingApi', {
      restApiName: 'Video Processing Service',
      description: 'This service handles video processing.',
      defaultMethodOptions: {
        authorizationType: apigateway.AuthorizationType.NONE,
      },
    });

    api.root.addMethod('POST', new apigateway.LambdaIntegration(videoProcessingLambda));
  }
}
```

Every time we now run the following deployment command in our terminal, our changes are deployed right to a publicly available API, protected by our QStash webhook verification logic from before.
```bash Terminal theme={"system"}
cdk deploy
```

You may be prompted to confirm the necessary AWS permissions during this process, for example allowing API Gateway to invoke your Lambda function. Once your code has been deployed to Lambda, you'll receive a live URL to your endpoint via the CLI and can see the new API Gateway connection in your AWS dashboard.

The URL you use to invoke your function typically follows this format, especially if you follow the same stack configuration as shown above: `https://.execute-api..amazonaws.com/prod/`

To provide our `QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY` environment variables, navigate to your QStash dashboard and make these two variables available to your Lambda in your function configuration.

Tada, we just deployed a live Lambda with the AWS CDK! 🎉

### Manual Deployment

1. Create a new Lambda function by going to the [AWS dashboard](https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1#/create/function) for your desired Lambda region. Give your new function a name and select `Node.js 20.x` as runtime, then create the function.
2. To make this Lambda available under a public URL, navigate to the `Configuration` tab and click `Function URL`.
3. In the following dialog, you'll be asked to select one of two authentication types. Select `NONE`, because we are handling authentication ourselves. Then, click `Save`. You'll see the function URL on the right side of your function overview.
4. Get your current and next signing key from the [Upstash Console](https://console.upstash.com/qstash).
5. Still under the `Configuration` tab, set the `QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY` environment variables.
6. Add the following script to your `package.json` file to build and zip your code:

```json package.json theme={"system"}
{
  "scripts": {
    "build": "rm -rf ./dist; esbuild index.ts --bundle --minify --sourcemap --platform=node --target=es2020 --outfile=dist/index.js && cd dist && zip -r index.zip index.js*"
  }
}
```

7. Click the `Upload from` button for your Lambda and deploy the code to AWS. Select `./dist/index.zip` as the upload file.

Tada, you've manually deployed a zip file to AWS Lambda! 🎉

## Testing the Integration

To make sure everything works as expected, navigate to your QStash request builder and send a request to your freshly deployed Lambda function. Alternatively, you can also send a request via cURL:

```bash Terminal theme={"system"}
curl --request POST "https://qstash.upstash.io/v2/publish/" \
     -H "Authorization: Bearer " \
     -H "Content-Type: application/json" \
     -d "{ \"hello\": \"world\"}"
```

# AWS Lambda (Python)

Source: https://upstash.com/docs/qstash/quickstarts/aws-lambda/python

[Source Code](https://github.com/upstash/qstash-examples/tree/main/aws-lambda/python-example)

This is a step-by-step guide on how to receive webhooks from QStash in your Lambda function on AWS.

### 1. Create a new project

Let's create a new folder called `aws-lambda` and initialize a new project by creating `lambda_function.py`. This example uses a Makefile, but the scripts can also be written for `Pipenv`.

```bash theme={"system"}
mkdir aws-lambda
cd aws-lambda
touch lambda_function.py
```

### 2. Dependencies

We are using `PyJWT` for decoding the JWT token in our code. We will install the package in the zipping stage.

### 3. Creating the handler function

In this example we will show how to receive a webhook from QStash and verify the signature. First, let's import everything we need:

```python theme={"system"}
import json
import os
import hmac
import hashlib
import base64
import time
import jwt
```

Now, we create the handler function.
In the handler we prepare all the variables we need for verification: the signature, the signing keys, and the URL of the Lambda function. Then we try to verify the request using the current signing key and, if that fails, we try the next one. If the signature could be verified, we can start processing the request.

```python theme={"system"}
def lambda_handler(event, context):
    # parse the inputs
    current_signing_key = os.environ['QSTASH_CURRENT_SIGNING_KEY']
    next_signing_key = os.environ['QSTASH_NEXT_SIGNING_KEY']

    headers = event['headers']
    signature = headers['upstash-signature']
    url = "https://{}{}".format(event["requestContext"]["domainName"], event["rawPath"])

    body = None
    if 'body' in event:
        body = event['body']

    # check verification now
    try:
        verify(signature, current_signing_key, body, url)
    except Exception as e:
        print("Failed to verify signature with current signing key:", e)
        try:
            verify(signature, next_signing_key, body, url)
        except Exception as e2:
            return {
                "statusCode": 400,
                "body": json.dumps({
                    "error": str(e2),
                }),
            }

    # Your logic here...

    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": "ok",
        }),
    }
```

The `verify` function will handle the actual verification of the signature. The signature itself is actually a [JWT](https://jwt.io) and includes claims about the request. See [here](/qstash/features/security#claims).

```python theme={"system"}
# @param jwt_token - The content of the `upstash-signature` header
# @param signing_key - The signing key to use to verify the signature (Get it from Upstash Console)
# @param body - The raw body of the request
# @param url - The public URL of the lambda function
def verify(jwt_token, signing_key, body, url):
    split = jwt_token.split(".")
    if len(split) != 3:
        raise Exception("Invalid JWT.")

    header, payload, signature = split

    message = header + '.' + payload
    generated_signature = base64.urlsafe_b64encode(hmac.new(bytes(signing_key, 'utf-8'), bytes(message, 'utf-8'), digestmod=hashlib.sha256).digest()).decode()

    if generated_signature != signature and signature + "=" != generated_signature:
        raise Exception("Invalid JWT signature.")

    decoded = jwt.decode(jwt_token, options={"verify_signature": False})
    sub = decoded['sub']
    iss = decoded['iss']
    exp = decoded['exp']
    nbf = decoded['nbf']
    decoded_body = decoded['body']

    if iss != "Upstash":
        raise Exception("Invalid issuer: {}".format(iss))

    if sub.rstrip("/") != url.rstrip("/"):
        raise Exception("Invalid subject: {}".format(sub))

    now = time.time()
    if now > exp:
        raise Exception("Token has expired.")

    if now < nbf:
        raise Exception("Token is not yet valid.")

    if body != None:
        while decoded_body[-1] == "=":
            decoded_body = decoded_body[:-1]

        m = hashlib.sha256()
        m.update(bytes(body, 'utf-8'))
        m = m.digest()
        generated_hash = base64.urlsafe_b64encode(m).decode()

        if generated_hash != decoded_body and generated_hash != decoded_body + "=":
            raise Exception("Body hash doesn't match.")
```

You can find the complete file [here](https://github.com/upstash/qstash-examples/tree/main/aws-lambda/python-example/lambda_function.py).

That's it, now we can create the function on AWS and test it.

### 4. Create a Lambda function on AWS

Create a new Lambda function from scratch by going to the [AWS console](https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1#/create/function). (Make sure you select your desired region.) Give it a name and select `Python 3.8` as runtime, then create the function.

Afterwards, add a public URL to this Lambda by going to the `Configuration` tab. Select `Auth Type = NONE` because we are handling authentication ourselves. After creating the URL, you should see it on the right side of the overview of your function.

### 5. Set Environment Variables

Get your current and next signing key from the [Upstash Console](https://console.upstash.com/qstash). On the same `Configuration` tab from earlier, set the required environment variables.

### 6. Deploy your Lambda function

We need to bundle our code and zip it to deploy it to AWS. Add the following target to your `Makefile` (or a corresponding pipenv script):

```make theme={"system"}
zip:
	rm -rf dist
	pip3 install --target ./dist pyjwt
	cp lambda_function.py ./dist/lambda_function.py
	cd dist && zip -r lambda.zip .
	mv ./dist/lambda.zip ./
```

Calling `make zip` installs PyJWT and zips the code. Afterwards, click the `Upload from` button in the lower right corner and deploy the code to AWS. Select `lambda.zip` as the upload file.

### 7. Publish a message

Open a different terminal and publish a message to QStash. Note that the destination URL is the URL from step 4.

```bash theme={"system"}
curl --request POST "https://qstash.upstash.io/v2/publish/https://urzdbfn4et56vzeasu3fpcynym0zerme.lambda-url.eu-west-1.on.aws" \
     -H "Authorization: Bearer " \
     -H "Content-Type: application/json" \
     -d "{ \"hello\": \"world\"}"
```

## Next Steps

That's it, you have successfully created a secure AWS Lambda function that receives and verifies incoming webhooks from QStash.

Learn more about publishing a message to QStash [here](/qstash/howto/publishing).

# Cloudflare Workers

Source: https://upstash.com/docs/qstash/quickstarts/cloudflare-workers

This is a step-by-step guide on how to receive webhooks from QStash in your Cloudflare Worker.

### Project Setup

We will use the **C3 (create-cloudflare-cli)** command-line tool to create our functions. Open a new terminal window and run C3 using the prompt below.

```shell npm theme={"system"}
npm create cloudflare@latest
```

```shell yarn theme={"system"}
yarn create cloudflare@latest
```

This will install the `create-cloudflare` package and lead you through setup.
C3 also installs Wrangler in new projects by default, which helps us test and deploy the project.

```text theme={"system"}
➜ npm create cloudflare@latest
Need to install the following packages:
create-cloudflare@2.52.3
Ok to proceed? (y) y

using create-cloudflare version 2.52.3

╭ Create an application with Cloudflare Step 1 of 3
│
├ In which directory do you want to create your application?
│ dir ./cloudflare_starter
│
├ What would you like to start with?
│ category Hello World example
│
├ Which template would you like to use?
│ type Worker only
│
├ Which language do you want to use?
│ lang TypeScript
│
├ Do you want to use git for version control?
│ yes git
│
╰ Application created
```

We will also install the **Upstash QStash library**.

```bash theme={"system"}
npm install @upstash/qstash
```

### Use QStash in your handler

First we import the library:

```ts src/index.ts theme={"system"}
import { Receiver } from "@upstash/qstash";
```

Then we adjust the `Env` interface to include the `QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY` environment variables.

```ts src/index.ts theme={"system"}
export interface Env {
  QSTASH_CURRENT_SIGNING_KEY: string;
  QSTASH_NEXT_SIGNING_KEY: string;
}
```

Then we validate the signature in the `fetch` handler. First we create a new receiver and provide it with the signing keys.

```ts src/index.ts theme={"system"}
const receiver = new Receiver({
  currentSigningKey: env.QSTASH_CURRENT_SIGNING_KEY,
  nextSigningKey: env.QSTASH_NEXT_SIGNING_KEY,
});
```

Then we verify the signature.
```ts src/index.ts theme={"system"}
const body = await request.text();

const isValid = await receiver.verify({
  signature: request.headers.get("Upstash-Signature")!,
  body,
});
```

The entire file looks like this now:

```ts src/index.ts theme={"system"}
import { Receiver } from "@upstash/qstash";

export interface Env {
  QSTASH_CURRENT_SIGNING_KEY: string;
  QSTASH_NEXT_SIGNING_KEY: string;
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const receiver = new Receiver({
      currentSigningKey: env.QSTASH_CURRENT_SIGNING_KEY,
      nextSigningKey: env.QSTASH_NEXT_SIGNING_KEY,
    });

    const body = await request.text();

    const isValid = await receiver.verify({
      signature: request.headers.get("Upstash-Signature")!,
      body,
    });

    if (!isValid) {
      return new Response("Invalid signature", { status: 401 });
    }

    // signature is valid
    return new Response("Hello World!");
  },
} satisfies ExportedHandler<Env>;
```

### Configure Credentials

There are two methods for setting up the credentials for QStash: one at the worker level, the other at the account level.

#### Using Cloudflare Secrets (Worker Level Secrets)

This is the common way of creating secrets for your worker, see [Worker Secrets](https://developers.cloudflare.com/workers/configuration/secrets/):

* Navigate to the [Upstash Console](https://console.upstash.com) and get your QStash credentials.
* In the [Cloudflare Dashboard](https://dash.cloudflare.com/), go to **Compute (Workers)** > **Workers & Pages**.
* Select your worker and go to **Settings** > **Variables and Secrets**.
* Add your QStash credentials as secrets here.

#### Using Cloudflare Secrets Store (Account Level Secrets)

This method requires a few modifications in the worker code, see [Access the Secret on the Env Object](https://developers.cloudflare.com/secrets-store/integrations/workers/#3-access-the-secret-on-the-env-object):

```ts src/index.ts theme={"system"}
import { Receiver } from "@upstash/qstash";

export interface Env {
  QSTASH_CURRENT_SIGNING_KEY: SecretsStoreSecret;
  QSTASH_NEXT_SIGNING_KEY: SecretsStoreSecret;
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const c = new Receiver({
      currentSigningKey: await env.QSTASH_CURRENT_SIGNING_KEY.get(),
      nextSigningKey: await env.QSTASH_NEXT_SIGNING_KEY.get(),
    });

    // Rest of the code
  },
};
```

After making these modifications, you can deploy the worker to Cloudflare with `npx wrangler deploy` and follow the steps below to define the secrets:

* Navigate to the [Upstash Console](https://console.upstash.com) and get your QStash credentials.
* In the [Cloudflare Dashboard](https://dash.cloudflare.com/), go to **Secrets Store** and add your QStash credentials as secrets.
* Under **Compute (Workers)** > **Workers & Pages**, find your worker and add these secrets as bindings.

### Deployment

Newer deployments may revert the configuration you did in the dashboard. While worker-level secrets persist, the bindings will be gone!

Deploy your function to Cloudflare with `npx wrangler deploy`. The endpoint of the function will be provided to you once the deployment is done.

### Publish a message

Open a different terminal and publish a message to QStash. Note that the destination URL is the same one that was printed in the previous deploy step.
```bash theme={"system"}
curl --request POST "https://qstash.upstash.io/v2/publish/https://..workers.dev" \
     -H "Authorization: Bearer " \
     -H "Content-Type: application/json" \
     -d "{ \"hello\": \"world\"}"
```

In the logs you should see something like this:

```bash theme={"system"}
$ npx wrangler tail
⛅️ wrangler 4.43.0
--------------------
Successfully created tail, expires at 2025-10-16T00:25:17Z
Connected to , waiting for logs...
POST https://..workers.dev/ - Ok @ 10/15/2025, 10:34:55 PM
```

## Next Steps

That's it, you have successfully created a secure Cloudflare Worker that receives and verifies incoming webhooks from QStash.

Learn more about publishing a message to QStash [here](/qstash/howto/publishing). You can find the source code [here](https://github.com/upstash/qstash-examples/tree/main/cloudflare-workers).

# Deno Deploy

Source: https://upstash.com/docs/qstash/quickstarts/deno-deploy

[Source Code](https://github.com/upstash/qstash-examples/tree/main/deno-deploy)

This is a step-by-step guide on how to receive webhooks from QStash in your Deno Deploy project.

### 1. Create a new project

Go to [https://dash.deno.com/projects](https://dash.deno.com/projects) and create a new playground project.

### 2. Edit the handler function

Then paste the following code into the browser editor:

```ts theme={"system"}
import { serve } from "https://deno.land/std@0.142.0/http/server.ts";
import { Receiver } from "https://deno.land/x/upstash_qstash@v0.1.4/mod.ts";

serve(async (req: Request) => {
  const r = new Receiver({
    currentSigningKey: Deno.env.get("QSTASH_CURRENT_SIGNING_KEY")!,
    nextSigningKey: Deno.env.get("QSTASH_NEXT_SIGNING_KEY")!,
  });

  const isValid = await r
    .verify({
      signature: req.headers.get("Upstash-Signature")!,
      body: await req.text(),
    })
    .catch((err: Error) => {
      console.error(err);
      return false;
    });

  if (!isValid) {
    return new Response("Invalid signature", { status: 401 });
  }

  console.log("The signature was valid");

  // do work

  return new Response("OK", { status: 200 });
});
```

### 3. Add your signing keys

Click on the `Settings` button at the top of the screen and then click `+ Add Variable`. Get your current and next signing key from [Upstash](https://console.upstash.com/qstash) and then set them in Deno Deploy.

### 4. Deploy

Simply click on `Save & Deploy` at the top of the screen.

### 5. Publish a message

Make note of the URL displayed in the top right. This is the public URL of your project.

```bash theme={"system"}
curl --request POST "https://qstash.upstash.io/v2/publish/https://early-frog-33.deno.dev" \
     -H "Authorization: Bearer " \
     -H "Content-Type: application/json" \
     -d "{ \"hello\": \"world\"}"
```

In the logs you should see something like this:

```bash theme={"system"}
europe-west3
isolate start time: 2.21 ms
Listening on http://localhost:8000/
The signature was valid
```

## Next Steps

That's it, you have successfully created a secure Deno API that receives and verifies incoming webhooks from QStash.
Learn more about publishing a message to QStash [here](/qstash/howto/publishing).

# Golang

Source: https://upstash.com/docs/qstash/quickstarts/fly-io/go

[Source Code](https://github.com/upstash/qstash-examples/tree/main/fly.io/go)

This is a step-by-step guide on how to receive webhooks from QStash in your Golang application running on [fly.io](https://fly.io).

## 0. Prerequisites

* [flyctl](https://fly.io/docs/getting-started/installing-flyctl/) - The fly.io CLI

## 1. Create a new project

Let's create a new folder called `flyio-go` and initialize a new project.

```bash theme={"system"}
mkdir flyio-go
cd flyio-go
go mod init flyio-go
```

## 2. Creating the main function

In this example we will show how to receive a webhook from QStash and verify the signature using the popular [golang-jwt/jwt](https://github.com/golang-jwt/jwt) library.

First, let's import everything we need:

```go theme={"system"}
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"github.com/golang-jwt/jwt/v4"
	"io"
	"net/http"
	"os"
	"time"
)
```

Next we create `main.go`. Ignore the `verify` function for now, we will add it next. In the handler we prepare all the variables we need for verification: the signature and the signing keys. Then we try to verify the request using the current signing key and, if that fails, we try the next one. If the signature could be verified, we can start processing the request.
```go theme={"system"}
func main() {
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		defer r.Body.Close()

		currentSigningKey := os.Getenv("QSTASH_CURRENT_SIGNING_KEY")
		nextSigningKey := os.Getenv("QSTASH_NEXT_SIGNING_KEY")

		tokenString := r.Header.Get("Upstash-Signature")

		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}

		err = verify(body, tokenString, currentSigningKey)
		if err != nil {
			fmt.Printf("Unable to verify signature with current signing key: %v", err)
			err = verify(body, tokenString, nextSigningKey)
		}

		if err != nil {
			http.Error(w, err.Error(), http.StatusUnauthorized)
			return
		}

		// handle your business logic here

		w.WriteHeader(http.StatusOK)
	})

	fmt.Println("listening on", port)
	err := http.ListenAndServe(":"+port, nil)
	if err != nil {
		panic(err)
	}
}
```

The `verify` function handles verification of the [JWT](https://jwt.io), which includes claims about the request. See [here](/qstash/features/security#claims).
```go theme={"system"}
func verify(body []byte, tokenString, signingKey string) error {
	token, err := jwt.Parse(
		tokenString,
		func(token *jwt.Token) (interface{}, error) {
			if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
				return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
			}
			return []byte(signingKey), nil
		})
	if err != nil {
		return err
	}

	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok || !token.Valid {
		return fmt.Errorf("invalid token")
	}

	if !claims.VerifyIssuer("Upstash", true) {
		return fmt.Errorf("invalid issuer")
	}
	if !claims.VerifyExpiresAt(time.Now().Unix(), true) {
		return fmt.Errorf("token has expired")
	}
	if !claims.VerifyNotBefore(time.Now().Unix(), true) {
		return fmt.Errorf("token is not valid yet")
	}

	bodyHash := sha256.Sum256(body)
	if claims["body"] != base64.URLEncoding.EncodeToString(bodyHash[:]) {
		return fmt.Errorf("body hash does not match")
	}

	return nil
}
```

You can find the complete file [here](https://github.com/upstash/qstash-examples/blob/main/fly.io/go/main.go).

That's it, now we can deploy our API and test it.

## 3. Create app on fly.io

[Login](https://fly.io/docs/getting-started/log-in-to-fly/) with `flyctl` and then `flyctl launch` the new app. This will create the necessary `fly.toml` for us. It will ask you a bunch of questions; I chose all defaults here except for the last question. We do not want to deploy just yet.

```bash theme={"system"}
$ flyctl launch
Creating app in /Users/andreasthomas/github/upstash/qstash-examples/fly.io/go
Scanning source code
Detected a Go app
Using the following build configuration:
	Builder: paketobuildpacks/builder:base
	Buildpacks: gcr.io/paketo-buildpacks/go
? App Name (leave blank to use an auto-generated name):
Automatically selected personal organization: Andreas Thomas
? Select region: fra (Frankfurt, Germany)
Created app winter-cherry-9545 in organization personal
Wrote config file fly.toml
? Would you like to setup a Postgresql database now? No
? Would you like to deploy now? No
Your app is ready. Deploy with `flyctl deploy`
```

## 4. Set Environment Variables

Get your current and next signing key from the [Upstash Console](https://console.upstash.com/qstash), then set them using `flyctl secrets set ...`:

```bash theme={"system"}
flyctl secrets set QSTASH_CURRENT_SIGNING_KEY=...
flyctl secrets set QSTASH_NEXT_SIGNING_KEY=...
```

## 5. Deploy the app

Fly.io makes this step really simple. Just `flyctl deploy` and enjoy.

```bash theme={"system"}
flyctl deploy
```

## 6. Publish a message

Now you can publish a message to QStash. Note that the destination URL is basically your app name; if you are not sure what it is, you can go to [fly.io/dashboard](https://fly.io/dashboard) and find out. In my case the app is named "winter-cherry-9545" and the public URL is [https://winter-cherry-9545.fly.dev](https://winter-cherry-9545.fly.dev).

```bash theme={"system"}
curl --request POST "https://qstash.upstash.io/v2/publish/https://winter-cherry-9545.fly.dev" \
     -H "Authorization: Bearer " \
     -H "Content-Type: application/json" \
     -d "{ \"hello\": \"world\"}"
```

## Next Steps

That's it, you have successfully created a Go API hosted on fly.io that receives and verifies incoming webhooks from QStash.

Learn more about publishing a message to QStash [here](/qstash/howto/publishing).

# Python on Vercel

Source: https://upstash.com/docs/qstash/quickstarts/python-vercel

## Introduction

This quickstart will guide you through setting up QStash to run a daily script to clean up your database. This is useful for testing and development environments where you want to reset the database every day.

## Prerequisites

* Create an Upstash account and get your [QStash token](https://console.upstash.com/qstash)

First, we'll create a new directory for our Python app. We'll call it `clean-db-cron`. The database we'll be using is Redis, so we'll need to install the `upstash_redis` package.
```bash theme={"system"}
mkdir clean-db-cron
```

```bash theme={"system"}
cd clean-db-cron
```

```bash theme={"system"}
pip install upstash-redis
```

Let's write the Python code to clean up the database. We'll use the `upstash_redis` package to connect to the database and delete all keys.

```python index.py theme={"system"}
from upstash_redis import Redis

redis = Redis(url="https://YOUR_REDIS_URL", token="YOUR_TOKEN")

def delete_all_entries():
    keys = redis.keys("*") # Match all keys
    redis.delete(*keys)

delete_all_entries()
```

Try running the code to see if it works. Your database keys should be deleted!

In order to use QStash, we need to make the Python code into a public endpoint. There are many ways to do this, such as using Flask, FastAPI, or Django. In this example, we'll use the Python `http.server` module to create a simple HTTP server.

```python api/index.py theme={"system"}
from http.server import BaseHTTPRequestHandler
from upstash_redis import Redis

redis = Redis(url="https://YOUR_REDIS_URL", token="YOUR_TOKEN")

def delete_all_entries():
    keys = redis.keys("*") # Match all keys
    redis.delete(*keys)

class handler(BaseHTTPRequestHandler):
    def do_POST(self):
        delete_all_entries()

        self.send_response(200)
        self.end_headers()
```

For the purpose of this tutorial, I'll deploy the application to Vercel using the [Python Runtime](https://vercel.com/docs/functions/runtimes/python), but feel free to use any other hosting provider. There are many ways to [deploy to Vercel](https://vercel.com/docs/deployments/overview), but I'm going to use the Vercel CLI.

```bash theme={"system"}
npm install -g vercel
```

```bash theme={"system"}
vercel
```

Once deployed, you can find the public URL in the dashboard.

There are two ways we can go about configuring QStash: the QStash dashboard or the QStash API. In this example, it makes more sense to use the dashboard since we only need to set up a single cron job.
However, you can imagine a scenario where you have a large number of cronjobs and you'd want to automate the process. In that case, you'd want to use the QStash Python SDK. To create the schedule, go to the [QStash dashboard](https://console.upstash.com/qstash) and enter the URL of the public endpoint you created. Then, set the type to schedule and change the `Upstash-Cron` header to run daily at a time of your choosing. ``` URL: https://your-vercel-app.vercel.app/api Type: Schedule Every: every day at midnight (feel free to customize) ``` QStash console scheduling Once you start the schedule, QStash will invoke the endpoint at the specified time. You can scroll down and verify the job has been created! If you have a use case where you need to automate the creation of jobs, you can use the SDK instead. ```python theme={"system"} from qstash import QStash client = QStash("") client.schedule.create( destination="https://YOUR_URL.vercel.app/api", cron="0 12 * * *", ) ``` Now, go ahead and try it out for yourself! Try using some of the other features of QStash, such as [callbacks](/qstash/features/callbacks) and [URL Groups](/qstash/features/url-groups). # Next.js Source: https://upstash.com/docs/qstash/quickstarts/vercel-nextjs QStash is a robust message queue and task-scheduling service that integrates perfectly with Next.js. This guide will show you how to use QStash in your Next.js projects, including a quickstart and a complete example. ## Quickstart At its core, each QStash message contains two pieces of information: * URL (which endpoint to call) * Request body (e.g. IDs of items you want to process) The following endpoint could be used to upload an image and then asynchronously queue a processing task to optimize the image in the background. ```tsx upload-image/route.ts theme={"system"} import { Client } from "@upstash/qstash" import { NextResponse } from "next/server" const client = new Client({ token: process.env.QSTASH_TOKEN! 
})

export const POST = async (req: Request) => {
  // Image uploading logic

  // 👇 Once uploading is done, queue an image processing task
  const result = await client.publishJSON({
    url: "https://your-api-endpoint.com/process-image",
    body: { imageId: "123" },
  })

  return NextResponse.json({
    message: "Image queued for processing!",
    qstashMessageId: result.messageId,
  })
}
```

Note that the URL needs to be publicly available for QStash to call, either as a deployed project or by [developing with QStash locally](/qstash/howto/local-tunnel).

Because QStash calls our image processing task, we get automatic retries whenever the API throws an error. These retries make our function very reliable. We also let the user know immediately that their image has been successfully queued.

Now, let's **receive the QStash message** in our image processing endpoint:

```tsx process-image/route.ts theme={"system"}
import { verifySignatureAppRouter } from "@upstash/qstash/nextjs"

// 👇 Verify that this message comes from QStash
export const POST = verifySignatureAppRouter(async (req: Request) => {
  const body = await req.json()
  const { imageId } = body as { imageId: string }

  // Image processing logic, e.g. using sharp

  return new Response(`Image with id "${imageId}" processed successfully.`)
})
```

```bash .env theme={"system"}
# Copy all three from your QStash dashboard
QSTASH_TOKEN=
QSTASH_CURRENT_SIGNING_KEY=
QSTASH_NEXT_SIGNING_KEY=
```

Just like that, we set up a reliable and asynchronous image processing system in Next.js. The same logic works for email queues, reliable webhook processing, long-running report generation, and more.
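The upload route and the processing route share an implicit payload shape (`{ imageId: string }`), and QStash simply forwards the JSON body between them, so nothing enforces that contract at runtime. A small guard — a sketch using a hypothetical `isImageJob` helper, not part of the QStash SDK — can reject malformed bodies inside the verified handler before the processing logic runs:

```typescript
// Hypothetical payload guard for the { imageId } body shared by both routes.
// Illustration only — the QStash SDK neither provides nor requires this.
type ImageJob = { imageId: string }

function isImageJob(body: unknown): body is ImageJob {
  if (typeof body !== "object" || body === null) return false
  const { imageId } = body as Record<string, unknown>
  return typeof imageId === "string" && imageId.length > 0
}

console.log(isImageJob({ imageId: "123" })) // true
console.log(isImageJob({ imageId: 123 }))   // false
```

Inside the processing endpoint, you could run such a check right after `await req.json()` and return an early error response when it fails, instead of letting the processing logic crash on an unexpected body.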
## Example project * Create an Upstash account and get your [QStash token](https://console.upstash.com/qstash) * Node.js installed ```bash theme={"system"} npx create-next-app@latest qstash-bg-job ``` ```bash theme={"system"} cd qstash-bg-job ``` ```bash theme={"system"} npm install @upstash/qstash ``` ```bash theme={"system"} npm run dev ``` After removing the default content in `src/app/page.tsx`, let's create a simple UI to trigger the background job using a button. ```tsx src/app/page.tsx theme={"system"} "use client" export default function Home() { return (
    <main>
      <button>Start Background Job</button>
    </main>
  )
}
```

Quickstart UI
We can use QStash to start a background job by calling the `publishJSON` method. In this example, we're using Next.js server actions, but you can also use route handlers. Since we don't have our public API endpoint yet, we can use [Request Catcher](https://requestcatcher.com/) to test the background job. This will eventually be replaced with our own API endpoint. ```ts src/app/actions.ts theme={"system"} "use server" import { Client } from "@upstash/qstash" const qstashClient = new Client({ // Add your token to a .env file token: process.env.QSTASH_TOKEN!, }) export async function startBackgroundJob() { await qstashClient.publishJSON({ url: "https://firstqstashmessage.requestcatcher.com/test", body: { hello: "world", }, }) } ``` Now let's invoke the `startBackgroundJob` function when the button is clicked. ```tsx src/app/page.tsx theme={"system"} "use client" import { startBackgroundJob } from "@/app/actions" export default function Home() { async function handleClick() { await startBackgroundJob() } return (
    <main>
      <button onClick={handleClick}>Start Background Job</button>
    </main>
  )
}
```

To test the background job, click the button and check Request Catcher for the incoming request. You can also head over to the [Upstash Console](https://console.upstash.com/qstash) and go to the `Logs` tab, where you can see your message activities.
Now that we know QStash is working, let's create our own endpoint to handle a background job. This is the endpoint that will be invoked by QStash. The job will be responsible for sending 10 requests, each with a 500ms delay. Since we're deploying to Vercel, we have to be cautious of the [time limit for serverless functions](https://vercel.com/docs/functions/runtimes#max-duration).

```ts src/app/api/long-task/route.ts theme={"system"}
export async function POST(request: Request) {
  const data = await request.json()

  for (let i = 0; i < 10; i++) {
    await fetch("https://firstqstashmessage.requestcatcher.com/test", {
      method: "POST",
      body: JSON.stringify(data),
      headers: { "Content-Type": "application/json" },
    })
    await new Promise((resolve) => setTimeout(resolve, 500))
  }

  return Response.json({ success: true })
}
```

Now let's update our `startBackgroundJob` function to use our new endpoint. There's one problem: our endpoint is not public. We need to make it public so that QStash can call it. We have two options:

1. Deploy our application to a platform like Vercel and use the public URL.
2. Create a [local tunnel](/qstash/howto/local-tunnel) to test the endpoint locally.

For the purpose of this tutorial, I'll deploy the application to Vercel, but feel free to use a local tunnel if you prefer.

There are many ways to [deploy to Vercel](https://vercel.com/docs/deployments/overview), but I'm going to use the Vercel CLI.

```bash theme={"system"}
npm install -g vercel
```

```bash theme={"system"}
vercel
```

Once deployed, you can find the public URL in the Vercel dashboard. Now that we have a public URL, we can update the URL.
```ts src/app/actions.ts theme={"system"} "use server" import { Client } from "@upstash/qstash" const qstashClient = new Client({ token: process.env.QSTASH_TOKEN!, }) export async function startBackgroundJob() { await qstashClient.publishJSON({ // Replace with your public URL url: "https://qstash-bg-job.vercel.app/api/long-task", body: { hello: "world", }, }) } ``` And voila! You've created a Next.js app that calls a long-running background job using QStash. QStash is a great way to handle background jobs, but it's important to remember that it's a public API. This means that anyone can call your endpoint. Make sure to add security measures to your endpoint to ensure that QStash is the sender of the request. Luckily, our SDK provides a way to verify the sender of the request. Make sure to get your signing keys from the QStash console and add them to your environment variables. The `verifySignatureAppRouter` will try to load `QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY` from the environment. If one of them is missing, an error is thrown. ```ts src/app/api/long-task/route.ts theme={"system"} import { verifySignatureAppRouter } from "@upstash/qstash/nextjs" async function handler(request: Request) { const data = await request.json() for (let i = 0; i < 10; i++) { await fetch("https://firstqstashmessage.requestcatcher.com/test", { method: "POST", body: JSON.stringify(data), headers: { "Content-Type": "application/json" }, }) await new Promise((resolve) => setTimeout(resolve, 500)) } return Response.json({ success: true }) } export const POST = verifySignatureAppRouter(handler) ``` Let's also add error catching to our action and a loading state to our UI. 
```ts src/app/actions.ts theme={"system"} "use server" import { Client } from "@upstash/qstash"; const qstashClient = new Client({ token: process.env.QSTASH_TOKEN!, }); export async function startBackgroundJob() { try { const response = await qstashClient.publishJSON({ "url": "https://qstash-bg-job.vercel.app/api/long-task", body: { "hello": "world" } }); return response.messageId; } catch (error) { console.error(error); return null; } } ``` ```tsx src/app/page.tsx theme={"system"} "use client" import { startBackgroundJob } from "@/app/actions"; import { useState } from "react"; export default function Home() { const [loading, setLoading] = useState(false); const [msg, setMsg] = useState(""); async function handleClick() { setLoading(true); const messageId = await startBackgroundJob(); if (messageId) { setMsg(`Started job with ID ${messageId}`); } else { setMsg("Failed to start background job"); } setLoading(false); } return (
    <main>
      <button onClick={handleClick} disabled={loading}>
        Start Background Job
      </button>
      {loading && <p>Loading...</p>}
      {msg && <p>{msg}</p>}
    </main>
); } ```
## Result We have now created a Next.js app that calls a long-running background job using QStash! Here's the app in action: Quickstart Result Gif We can also view the logs on Vercel and QStash Vercel Vercel Logs QStash Vercel Logs And the code for the 3 files we created: ```tsx src/app/page.tsx theme={"system"} "use client" import { startBackgroundJob } from "@/app/actions"; import { useState } from "react"; export default function Home() { const [loading, setLoading] = useState(false); const [msg, setMsg] = useState(""); async function handleClick() { setLoading(true); const messageId = await startBackgroundJob(); if (messageId) { setMsg(`Started job with ID ${messageId}`); } else { setMsg("Failed to start background job"); } setLoading(false); } return (
    <main>
      <button onClick={handleClick} disabled={loading}>
        Start Background Job
      </button>
      {loading && <p>Loading...</p>}
      {msg && <p>{msg}</p>}
    </main>
); } ``` ```ts src/app/actions.ts theme={"system"} "use server" import { Client } from "@upstash/qstash"; const qstashClient = new Client({ token: process.env.QSTASH_TOKEN!, }); export async function startBackgroundJob() { try { const response = await qstashClient.publishJSON({ "url": "https://qstash-bg-job.vercel.app/api/long-task", body: { "hello": "world" } }); return response.messageId; } catch (error) { console.error(error); return null; } } ``` ```ts src/app/api/long-task/route.ts theme={"system"} import { verifySignatureAppRouter } from "@upstash/qstash/nextjs" async function handler(request: Request) { const data = await request.json() for (let i = 0; i < 10; i++) { await fetch("https://firstqstashmessage.requestcatcher.com/test", { method: "POST", body: JSON.stringify(data), headers: { "Content-Type": "application/json" }, }) await new Promise((resolve) => setTimeout(resolve, 500)) } return Response.json({ success: true }) } export const POST = verifySignatureAppRouter(handler) ```
Now, go ahead and try it out for yourself! Try using some of the other features of QStash, like [schedules](/qstash/features/schedules), [callbacks](/qstash/features/callbacks), and [URL Groups](/qstash/features/url-groups).

# Periodic Data Updates

Source: https://upstash.com/docs/qstash/recipes/periodic-data-updates

* Code: [Repository](https://github.com/upstash/qstash-examples/tree/main/periodic-data-updates)
* App: [qstash-examples-periodic-data-updates.vercel.app](https://qstash-examples-periodic-data-updates.vercel.app)

This recipe shows how to use QStash as a trigger for a Next.js API route that fetches data from somewhere and stores it in your database. For the database we will use Redis because it is very simple to set up and not really the main focus of this recipe.

## What will we build?

Let's assume there is a 3rd party API that provides some data. One approach would be to just query the API whenever you or your users need it; however, that might not work well if the API is slow, unavailable, or rate limited.

A better approach would be to continuously fetch fresh data from the API and store it in your database. Traditionally this would require a long-running process that continuously calls the API. With QStash you can do this inside your Next.js app, and you don't need to worry about maintaining anything.

For the purpose of this recipe we will build a simple app that scrapes the current Bitcoin price from a public API, stores it in Redis, and then displays a chart in the browser.

## Setup

If you don't have one already, create a new Next.js project with `npx create-next-app@latest --ts`.

Then install the required packages:

```bash theme={"system"}
npm install @upstash/qstash @upstash/redis
```

You can replace `@upstash/redis` with any kind of database client you want.
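Before wiring up the API route, it helps to pin down the shape of the data we'll consume: the ticker endpoint used in the next section returns a map of currency codes to quote objects (e.g. `{ "USD": { "last": 123 } }`). A small pure helper for extracting the USD spot price — a sketch under that assumed shape, with a hypothetical `usdPrice` name — keeps the route itself focused on storage:

```typescript
// Assumed (simplified) shape of the https://blockchain.info/ticker response.
type Ticker = Record<string, { last: number }>

// Hypothetical helper: pull the USD spot price out of the ticker payload,
// failing loudly if the payload does not match the assumed shape.
function usdPrice(ticker: Ticker): number {
  const usd = ticker["USD"]
  if (!usd || typeof usd.last !== "number") {
    throw new Error("ticker response missing USD.last")
  }
  return usd.last
}

console.log(usdPrice({ USD: { last: 123 } })) // 123
```

Throwing on an unexpected payload means the route returns an error (and QStash retries later) instead of silently storing a bogus price.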
## Scraping the API

Create a new serverless function in `/pages/api/cron.ts`

````ts theme={"system"}
import { NextApiRequest, NextApiResponse } from "next";
import { Redis } from "@upstash/redis";
import { verifySignature } from "@upstash/qstash/nextjs";

/**
 * You can use any database you want, in this case we use Redis
 */
const redis = Redis.fromEnv();

/**
 * Load the current bitcoin price in USD and store it in our database at the
 * current timestamp
 */
async function handler(_req: NextApiRequest, res: NextApiResponse) {
  try {
    /**
     * The API returns something like this:
     * ```json
     * {
     *   "USD": {
     *     "last": 123
     *   },
     *   ...
     * }
     * ```
     */
    const raw = await fetch("https://blockchain.info/ticker");
    const prices = await raw.json();
    const bitcoinPrice = prices["USD"]["last"] as number;

    /**
     * After we have loaded the current bitcoin price, we can store it in the
     * database together with the current time
     */
    await redis.zadd("bitcoin-prices", {
      score: Date.now(),
      member: bitcoinPrice,
    });

    res.send("OK");
  } catch (err) {
    res.status(500).send(err);
  } finally {
    res.end();
  }
}

/**
 * Wrap your handler with `verifySignature` to automatically reject all
 * requests that are not coming from Upstash.
 */
export default verifySignature(handler);

/**
 * To verify the authenticity of the incoming request in the `verifySignature`
 * function, we need access to the raw request body.
 */
export const config = {
  api: {
    bodyParser: false,
  },
};
````

## Deploy to Vercel

That's all we need to fetch fresh data. Let's deploy our app to Vercel. You can either push your code to a git repository and deploy it to Vercel, or you can deploy it directly from your local machine using the [Vercel CLI](https://vercel.com/docs/cli).

For a more in-depth tutorial on how to deploy to Vercel, check out this [quickstart](/qstash/quickstarts/vercel-nextjs#4-deploy-to-vercel).

After you have deployed your app, it is time to add your secrets to your environment variables.
## Secrets

Head over to [QStash](https://console.upstash.com/qstash) and copy the `QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY` to Vercel's environment variables.

If you are not using a custom database, you can quickly create a new [Redis database](https://console.upstash.com/redis). Afterwards, copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to Vercel.

In the near future we will update our [Vercel integration](https://vercel.com/integrations/upstash) to do this for you.

## Redeploy

To use the environment variables, you need to redeploy your app. Either with `npx vercel --prod` or in the UI.

## Create cron trigger in QStash

The last part is to add the trigger in QStash. Go to [QStash](https://console.upstash.com/qstash) and create a new schedule. QStash will then call your API function whenever your schedule is triggered.

## Adding frontend UI

This part is probably the least interesting and would require more dependencies for styling etc. Check out the [index.tsx](https://github.com/upstash/qstash-examples/blob/main/periodic-data-updates/pages/index.tsx) file, where we load the data from the database and display it in a chart.

## Hosted example

You can find a running example of this recipe [here](https://qstash-examples-periodic-data-updates.vercel.app/).

# DLQ

Source: https://upstash.com/docs/qstash/sdks/py/examples/dlq

You can run the async code by importing `AsyncQStash` from `qstash` and awaiting the methods.

#### Get all messages with pagination using cursor

Since the DLQ can have a large number of messages, they are paginated. You can go through the results using the `cursor`.
```python theme={"system"} from qstash import QStash client = QStash("") all_messages = [] cursor = None while True: res = client.dlq.list(cursor=cursor) all_messages.extend(res.messages) cursor = res.cursor if cursor is None: break ``` #### Get a message from the DLQ ```python theme={"system"} from qstash import QStash client = QStash("") msg = client.dlq.get("") ``` #### Delete a message from the DLQ ```python theme={"system"} from qstash import QStash client = QStash("") client.dlq.delete("") ``` # Events Source: https://upstash.com/docs/qstash/sdks/py/examples/events You can run the async code by importing `AsyncQStash` from `qstash` and awaiting the methods. #### Get all events with pagination using cursor Since there can be a large number of events, they are paginated. You can go through the results using the `cursor`. ```python theme={"system"} from qstash import QStash client = QStash("") all_events = [] cursor = None while True: res = client.event.list(cursor=cursor) all_events.extend(res.events) cursor = res.cursor if cursor is None: break ``` # Keys Source: https://upstash.com/docs/qstash/sdks/py/examples/keys You can run the async code by importing `AsyncQStash` from `qstash` and awaiting the methods. #### Retrieve your signing Keys ```python theme={"system"} from qstash import QStash client = QStash("") signing_key = client.signing_key.get() print(signing_key.current, signing_key.next) ``` #### Rotate your signing Keys ```python theme={"system"} from qstash import QStash client = QStash("") new_signing_key = client.signing_key.rotate() print(new_signing_key.current, new_signing_key.next) ``` # Messages Source: https://upstash.com/docs/qstash/sdks/py/examples/messages You can run the async code by importing `AsyncQStash` from `qstash` and awaiting the methods. Messages are removed from the database shortly after they're delivered, so you will not be able to retrieve a message after. 
This endpoint is intended to be used for accessing messages that are in the process of being delivered/retried. #### Retrieve a message ```python theme={"system"} from qstash import QStash client = QStash("") msg = client.message.get("") ``` #### Cancel/delete a message ```python theme={"system"} from qstash import QStash client = QStash("") client.message.cancel("") ``` #### Cancel messages in bulk Cancel many messages at once or cancel all messages ```python theme={"system"} from qstash import QStash client = QStash("") # cancel more than one message client.message.cancel_many(["", ""]) # cancel all messages client.message.cancel_all() ``` # Overview Source: https://upstash.com/docs/qstash/sdks/py/examples/overview These are example usages of each method in the QStash SDK. You can also reference the [examples repo](https://github.com/upstash/qstash-py/tree/main/examples) and [API examples](/qstash/overall/apiexamples) for more. # Publish Source: https://upstash.com/docs/qstash/sdks/py/examples/publish You can run the async code by importing `AsyncQStash` from `qstash` and awaiting the methods. 
#### Publish to a URL with a 3 second delay and headers/body

```python theme={"system"}
from qstash import QStash

client = QStash("")
res = client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    headers={
        "test-header": "test-value",
    },
    delay="3s",
)

print(res.message_id)
```

#### Publish to a URL group with a 3 second delay and headers/body

You can make a URL group on the QStash console or using the [URL group API](/qstash/sdks/py/examples/url-groups)

```python theme={"system"}
from qstash import QStash

client = QStash("")
res = client.message.publish_json(
    url_group="my-url-group",
    body={
        "hello": "world",
    },
    headers={
        "test-header": "test-value",
    },
    delay="3s",
)

# When publishing to a URL group, the response is an array of messages for each URL in the group
print(res[0].message_id)
```

#### Publish a message with a callback URL

[Callbacks](/qstash/features/callbacks) are useful for long-running functions. Here, QStash will return the response of the publish request to the callback URL. We also change the `method` to `GET` in this use case so QStash will make a `GET` request to the `url`. The default is `POST`.

```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    callback="https://my-callback...",
    failure_callback="https://my-failure-callback...",
    method="GET",
)
```

#### Configure the number of retries

The max number of retries is based on your [QStash plan](https://upstash.com/pricing/qstash)

```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    retries=1,
)
```

By default, the delay between retries is calculated using an exponential backoff algorithm. You can customize this using the `retryDelay` parameter. Check out [the retries page to learn more about custom retry delay values](/qstash/features/retry#custom-retry-delay).
#### Publish HTML content instead of JSON

```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish(
    url="https://my-api...",
    body="<h1>Hello World</h1>",
    content_type="text/html",
)
```

#### Publish a message with [content-based-deduplication](/qstash/features/deduplication)

```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    content_based_deduplication=True,
)
```

#### Publish a message with timeout

Timeout value to use when calling a URL ([see `Upstash-Timeout` in the Publish Message page](/qstash/api/publish#request))

```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    timeout="30s",
)
```

# Queues

Source: https://upstash.com/docs/qstash/sdks/py/examples/queues

#### Create a queue with parallelism

```python theme={"system"}
from qstash import QStash

client = QStash("")
queue_name = "upstash-queue"
client.queue.upsert(queue_name, parallelism=2)

print(client.queue.get(queue_name))
```

#### Delete a queue

```python theme={"system"}
from qstash import QStash

client = QStash("")
queue_name = "upstash-queue"
client.queue.delete(queue_name)
```

Resuming or creating a queue may take up to a minute. Therefore, it is not recommended to pause or delete a queue during critical operations.

#### Pause/Resume a queue

```python theme={"system"}
from qstash import QStash

client = QStash("")
queue_name = "upstash-queue"

client.queue.upsert(queue_name, parallelism=1)

client.queue.pause(queue_name)

queue = client.queue.get(queue_name)
print(queue.paused)  # prints True

client.queue.resume(queue_name)
```

Resuming or creating a queue may take up to a minute. Therefore, it is not recommended to pause or delete a queue during critical operations.

# Receiver

Source: https://upstash.com/docs/qstash/sdks/py/examples/receiver

When receiving a message from QStash, you should [verify the signature](/qstash/howto/signature). The QStash Python SDK provides a helper function for this.
```python theme={"system"} from qstash import Receiver receiver = Receiver( current_signing_key="YOUR_CURRENT_SIGNING_KEY", next_signing_key="YOUR_NEXT_SIGNING_KEY", ) # ... in your request handler signature, body = req.headers["Upstash-Signature"], req.body receiver.verify( body=body, signature=signature, url="YOUR-SITE-URL", ) ``` # Schedules Source: https://upstash.com/docs/qstash/sdks/py/examples/schedules You can run the async code by importing `AsyncQStash` from `qstash` and awaiting the methods. #### Create a schedule that runs every 5 minutes ```python theme={"system"} from qstash import QStash client = QStash("") schedule_id = client.schedule.create( destination="https://my-api...", cron="*/5 * * * *", ) print(schedule_id) ``` #### Create a schedule that runs every hour and sends the result to a [callback URL](/qstash/features/callbacks) ```python theme={"system"} from qstash import QStash client = QStash("") client.schedule.create( destination="https://my-api...", cron="0 * * * *", callback="https://my-callback...", failure_callback="https://my-failure-callback...", ) ``` #### Create a schedule to a URL group that runs every minute ```python theme={"system"} from qstash import QStash client = QStash("") client.schedule.create( destination="my-url-group", cron="0 * * * *", ) ``` #### Get a schedule by schedule id ```python theme={"system"} from qstash import QStash client = QStash("") schedule = client.schedule.get("") print(schedule.cron) ``` #### List all schedules ```python theme={"system"} from qstash import QStash client = QStash("") all_schedules = client.schedule.list() print(all_schedules) ``` #### Delete a schedule ```python theme={"system"} from qstash import QStash client = QStash("") client.schedule.delete("") ``` #### Create a schedule with timeout Timeout value to use when calling a schedule URL ([See `Upstash-Timeout` in Create Schedule page](/qstash/api/schedules/create)). 
```python theme={"system"} from qstash import QStash client = QStash("") schedule_id = client.schedule.create( destination="https://my-api...", cron="*/5 * * * *", timeout="30s", ) print(schedule_id) ``` #### Pause/Resume a schedule ```python theme={"system"} from qstash import QStash client = QStash("") schedule_id = "scd_1234" client.schedule.pause(schedule_id) schedule = client.schedule.get(schedule_id) print(schedule.paused) # prints True client.schedule.resume(schedule_id) ``` # URL Groups Source: https://upstash.com/docs/qstash/sdks/py/examples/url-groups You can run the async code by importing `AsyncQStash` from `qstash` and awaiting the methods. #### Create a URL group and add 2 endpoints ```python theme={"system"} from qstash import QStash client = QStash("") client.url_group.upsert_endpoints( url_group="my-url-group", endpoints=[ {"url": "https://my-endpoint-1"}, {"url": "https://my-endpoint-2"}, ], ) ``` #### Get URL group by name ```python theme={"system"} from qstash import QStash client = QStash("") url_group = client.url_group.get("my-url-group") print(url_group.name, url_group.endpoints) ``` #### List URL groups ```python theme={"system"} from qstash import QStash client = QStash("") all_url_groups = client.url_group.list() for url_group in all_url_groups: print(url_group.name, url_group.endpoints) ``` #### Remove an endpoint from a URL group ```python theme={"system"} from qstash import QStash client = QStash("") client.url_group.remove_endpoints( url_group="my-url-group", endpoints=[ {"url": "https://my-endpoint-1"}, ], ) ``` #### Delete a URL group ```python theme={"system"} from qstash import QStash client = QStash("") client.url_group.delete("my-url-group") ``` # Getting Started Source: https://upstash.com/docs/qstash/sdks/py/gettingstarted ## Install ### PyPI ```bash theme={"system"} pip install qstash ``` ## Get QStash token Follow the instructions [here](/qstash/overall/getstarted) to get your QStash token and signing keys. 
## Usage

#### Synchronous Client

```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(...)
```

#### Asynchronous Client

```python theme={"system"}
import asyncio

from qstash import AsyncQStash


async def main():
    client = AsyncQStash("")
    await client.message.publish_json(...)


asyncio.run(main())
```

#### RetryConfig

You can configure the retry policy of the client by passing the configuration to the client constructor. Note: this configures how the client retries requests sent to QStash, not QStash's own retry policy for delivering your messages.

The default number of retries is **5**, and the default backoff function is `lambda retry_count: math.exp(retry_count) * 50`. You can also pass `False` to disable retrying.

```python theme={"system"}
from qstash import QStash

client = QStash(
    "",
    retry={
        "retries": 3,
        "backoff": lambda retry_count: (2**retry_count) * 20,
    },
)
```

# Overview

Source: https://upstash.com/docs/qstash/sdks/py/overview

`qstash` is a Python SDK for QStash, allowing for easy access to the QStash API.

Using `qstash` you can:

* Publish a message to a URL/URL group/API
* Publish a message with a delay
* Schedule a message to be published
* Access logs for the messages that have been published
* Create, read, update, or delete URL groups
* Read or remove messages from the [DLQ](/qstash/features/dlq)
* Read or cancel messages
* Verify the signature of a message

You can find the GitHub repository [here](https://github.com/upstash/qstash-py).

# DLQ

Source: https://upstash.com/docs/qstash/sdks/ts/examples/dlq

#### Get all messages with pagination using cursor

Since the DLQ can have a large number of messages, they are paginated. You can go through the results using the `cursor`.
```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client(""); const dlq = client.dlq; const all_messages = []; let cursor = null; while (true) { const res = await dlq.listMessages({ cursor }); all_messages.push(...res.messages); cursor = res.cursor; if (!cursor) { break; } } ``` #### Delete a message from the DLQ ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const dlq = client.dlq; await dlq.delete("dlqId"); ``` # Logs Source: https://upstash.com/docs/qstash/sdks/ts/examples/logs #### Get all logs with pagination using cursor Since there can be a large number of logs, they are paginated. You can go through the results using the `cursor`. ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const logs = []; let cursor = null; while (true) { const res = await client.logs({ cursor }); logs.push(...res.logs); cursor = res.cursor; if (!cursor) { break; } } ``` #### Filter logs by state and only return the first 50. More filters can be found in the [API Reference](/qstash/api/events/list). ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const res = await client.logs({ filter: { state: "DELIVERED", count: 50 } }); ``` # Messages Source: https://upstash.com/docs/qstash/sdks/ts/examples/messages Messages are removed from the database shortly after they're delivered, so you will not be able to retrieve a message after. This endpoint is intended to be used for accessing messages that are in the process of being delivered/retried. 
#### Retrieve a message ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const messages = client.messages const msg = await messages.get("msgId"); ``` #### Cancel/delete a message ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const messages = client.messages const msg = await messages.delete("msgId"); ``` #### Cancel messages in bulk Cancel many messages at once or cancel all messages ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); // deleting two messages at once await client.messages.deleteMany([ "message-id-1", "message-id-2", ]) // deleting all messages await client.messages.deleteAll() ``` # Overview Source: https://upstash.com/docs/qstash/sdks/ts/examples/overview These are example usages of each method in the QStash SDK. You can also reference the [examples repo](https://github.com/upstash/sdk-qstash-ts/tree/main/examples) and [API examples](/qstash/overall/apiexamples) for more. 
# Publish

Source: https://upstash.com/docs/qstash/sdks/ts/examples/publish

#### Publish to a URL with a 3 second delay and headers/body

```typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  headers: { "test-header": "test-value" },
  delay: "3s",
});
```

#### Publish to a URL group with a 3 second delay and headers/body

You can create a URL group on the QStash console or using the [URL Group API](/qstash/sdks/ts/examples/url-groups#create-a-url-group-and-add-2-endpoints)

```typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "" });
const res = await client.publishJSON({
  urlGroup: "my-url-group",
  body: { hello: "world" },
  headers: { "test-header": "test-value" },
  delay: "3s",
});

// When publishing to a URL Group, the response is an array of messages for each URL in the URL Group
console.log(res[0].messageId);
```

#### Publish a message with a callback URL

[Callbacks](/qstash/features/callbacks) are useful for long-running functions. Here, QStash will return the response of the publish request to the callback URL. We also change the `method` to `GET` in this use case so QStash will make a `GET` request to the `url`. The default is `POST`.
```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const res = await client.publishJSON({ url: "https://my-api...", body: { hello: "world" }, callback: "https://my-callback...", failureCallback: "https://my-failure-callback...", method: "GET", }); ``` #### Configure the number of retries The max number of retries is based on your [QStash plan](https://upstash.com/pricing/qstash) ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const res = await client.publishJSON({ url: "https://my-api...", body: { hello: "world" }, retries: 1, }); ``` By default, the delay between retries is calculated using an exponential backoff algorithm. You can customize this using the `retry_delay` parameter. Check out [the retries documentation to learn more about custom retry delay values](/qstash/features/retry#custom-retry-delay). #### Publish HTML content instead of JSON ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const res = await client.publish({ url: "https://my-api...", body: "

<h1>Hello World</h1>

", headers: { "Content-Type": "text/html", }, }); ``` #### Publish a message with [content-based-deduplication](/qstash/features/deduplication) ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const res = await client.publishJSON({ url: "https://my-api...", body: { hello: "world" }, contentBasedDeduplication: true, }); ``` #### Publish a message with timeout The timeout value, in seconds, to use when calling the URL ([see `Upstash-Timeout` on the Publish Message page](/qstash/api/publish#request)). ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const res = await client.publishJSON({ url: "https://my-api...", body: { hello: "world" }, timeout: "30s" // 30 seconds timeout }); ``` # Queues Source: https://upstash.com/docs/qstash/sdks/ts/examples/queues #### Create a queue with parallelism 2 ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const queueName = "upstash-queue"; await client.queue({ queueName }).upsert({ parallelism: 2 }); const queueDetails = await client.queue({ queueName }).get(); ``` #### Delete a queue ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const queueName = "upstash-queue"; await client.queue({ queueName: queueName }).delete(); ``` Resuming or creating a queue may take up to a minute. Therefore, it is not recommended to pause or delete a queue during critical operations.
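The `parallelism` option above caps how many messages from a queue are delivered concurrently. To build intuition for what `parallelism: 2` means, here is a standalone sketch in plain TypeScript (no QStash calls; the helper name is ours, not part of the SDK) of a worker pool that never has more than two jobs in flight:

```typescript
// Standalone illustration: run jobs with at most `parallelism` in flight,
// mimicking how a queue with parallelism: 2 paces its deliveries.
async function processWithParallelism<T>(
  jobs: Array<() => Promise<T>>,
  parallelism: number
): Promise<T[]> {
  const results: T[] = new Array(jobs.length);
  let next = 0;
  const worker = async () => {
    while (next < jobs.length) {
      const i = next++; // claim the next job index
      results[i] = await jobs[i]();
    }
  };
  // start `parallelism` workers that drain the job list together
  await Promise.all(Array.from({ length: parallelism }, worker));
  return results;
}
```

With six jobs and a parallelism of 2, at most two jobs ever run at once; the rest wait until a worker frees up, which is the same back-pressure a parallelism-limited queue applies to your endpoint.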
#### Pause/Resume a queue ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const name = "upstash-pause-resume-queue"; const queue = client.queue({ queueName: name }); await queue.upsert({ parallelism: 1 }); // pause queue await queue.pause(); const queueInfo = await queue.get(); console.log(queueInfo.paused); // prints true // resume queue await queue.resume(); ``` Resuming or creating a queue may take up to a minute. Therefore, it is not recommended to pause or delete a queue during critical operations. # Receiver Source: https://upstash.com/docs/qstash/sdks/ts/examples/receiver When receiving a message from QStash, you should [verify the signature](/qstash/howto/signature). The QStash TypeScript SDK provides a helper function for this. ```typescript theme={"system"} import { Receiver } from "@upstash/qstash"; const receiver = new Receiver({ currentSigningKey: "YOUR_CURRENT_SIGNING_KEY", nextSigningKey: "YOUR_NEXT_SIGNING_KEY", }); // ...
// in your request handler
const signature = req.headers["Upstash-Signature"]; const body = req.body; const isValid = await receiver.verify({ body, signature, url: "YOUR-SITE-URL", }); ``` # Schedules Source: https://upstash.com/docs/qstash/sdks/ts/examples/schedules #### Create a schedule that runs every 5 minutes ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); await client.schedules.create({ destination: "https://my-api...", cron: "*/5 * * * *", }); ``` #### Create a schedule that runs every hour and sends the result to a [callback URL](/qstash/features/callbacks) ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); await client.schedules.create({ destination: "https://my-api...", cron: "0 * * * *", callback: "https://my-callback...", failureCallback: "https://my-failure-callback...", }); ``` #### Create a schedule to a URL Group that runs every minute ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); await client.schedules.create({ destination: "my-url-group", cron: "* * * * *", }); ``` #### Get a schedule by schedule ID ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const res = await client.schedules.get("scheduleId"); console.log(res.cron); ``` #### List all schedules ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const allSchedules = await client.schedules.list(); console.log(allSchedules); ``` #### Create/overwrite a schedule with a user-chosen schedule ID Note that if a schedule already exists with the same ID, the old one will be discarded and the new schedule will be used.
```typescript TypeScript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); await client.schedules.create({ destination: "https://example.com", scheduleId: "USER_PROVIDED_SCHEDULE_ID", cron: "* * * * *", }); ``` #### Delete a schedule ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); await client.schedules.delete("scheduleId"); ``` #### Create a schedule with timeout The timeout value, in seconds, to use when calling the schedule URL ([see `Upstash-Timeout` on the Create Schedule page](/qstash/api/schedules/create)). ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); await client.schedules.create({ destination: "https://my-api...", cron: "* * * * *", timeout: "30" // 30 seconds timeout }); ``` #### Pause/Resume a schedule ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const scheduleId = "my-schedule"; // pause schedule await client.schedules.pause({ schedule: scheduleId }); // check if paused const result = await client.schedules.get(scheduleId); console.log(result.isPaused); // prints true // resume schedule await client.schedules.resume({ schedule: scheduleId }); ``` # URL Groups Source: https://upstash.com/docs/qstash/sdks/ts/examples/url-groups #### Create a URL Group and add 2 endpoints ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const urlGroups = client.urlGroups; await urlGroups.addEndpoints({ name: "url_group_name", endpoints: [ { url: "https://my-endpoint-1" }, { url: "https://my-endpoint-2" }, ], }); ``` #### Get URL Group by name ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const urlGroups = client.urlGroups; const urlGroup = await urlGroups.get("urlGroupName"); console.log(urlGroup.name,
urlGroup.endpoints); ``` #### List URL Groups ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const allUrlGroups = await client.urlGroups.list(); for (const urlGroup of allUrlGroups) { console.log(urlGroup.name, urlGroup.endpoints); } ``` #### Remove an endpoint from a URL Group ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const urlGroups = client.urlGroups; await urlGroups.removeEndpoints({ name: "urlGroupName", endpoints: [{ url: "https://my-endpoint-1" }], }); ``` #### Delete a URL Group ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "" }); const urlGroups = client.urlGroups; await urlGroups.delete("urlGroupName"); ``` # Getting Started Source: https://upstash.com/docs/qstash/sdks/ts/gettingstarted ## Install ### NPM ```bash theme={"system"} npm install @upstash/qstash ``` ## Get QStash token Follow the instructions [here](/qstash/overall/getstarted) to get your QStash token and signing keys. ## Usage ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "", }); ``` #### RetryConfig You can configure the retry policy of the client by passing the configuration to the client constructor. Note: this configures retries for requests the client sends to QStash; it does not change QStash's own retry policy when delivering your messages. The default number of attempts is **6** and the default backoff function is `(retry_count) => (Math.exp(retry_count) * 50)`. You can also pass in `false` to disable retrying. ```typescript theme={"system"} import { Client } from "@upstash/qstash"; const client = new Client({ token: "", retry: { retries: 3, backoff: retry_count => 2 ** retry_count * 20, }, }); ``` ## Telemetry This SDK sends anonymous telemetry headers to help us improve your experience.
We collect the following: * SDK version * Platform (Cloudflare, AWS, or Vercel) * Runtime version (e.g. `node@18.x`) You can opt out by setting the `UPSTASH_DISABLE_TELEMETRY` environment variable to any truthy value, or by setting `enableTelemetry: false` in the client options. ```ts theme={"system"} const client = new Client({ token: "", enableTelemetry: false, }); ``` # Overview Source: https://upstash.com/docs/qstash/sdks/ts/overview `@upstash/qstash` is a TypeScript SDK for QStash, allowing for easy access to the QStash API. Using `@upstash/qstash` you can: * Publish a message to a URL/URL Group * Publish a message with a delay * Schedule a message to be published * Access logs for the messages that have been published * Create, read, update, or delete URL groups * Read or remove messages from the [DLQ](/qstash/features/dlq) * Read or cancel messages * Verify the signature of a message You can find the GitHub repository [here](https://github.com/upstash/sdk-qstash-ts). # Channels Source: https://upstash.com/docs/realtime/features/channels Channels allow you to scope events to specific people or rooms. For example: * Chat rooms * Emitting events to a specific user ## Default Channel By default, events are sent to the `default` channel.
If we emit an event without specifying a channel like so: ```typescript theme={"system"} await realtime.emit("notification.alert", "hello world!") ``` it can automatically be read using the default channel: ```typescript theme={"system"} useRealtime({ events: ["notification.alert"], onData({ event, data, channel }) { console.log(data) }, }) ``` *** ## Custom Channels Emit events to a specific channel: ```typescript route.ts theme={"system"} const channel = realtime.channel("user-123") await channel.emit("notification.alert", "hello world!") ``` Subscribe to one or more channels: ```tsx page.tsx theme={"system"} "use client" import { useRealtime } from "@/lib/realtime-client" export default function Page() { useRealtime({ channels: ["user-123"], events: ["notification.alert"], onData({ event, data, channel }) { console.log(data) }, }) return <>...</> } ``` ## Channel Patterns Send notifications to individual users: ```typescript route.ts theme={"system"} const channel = realtime.channel(`user-${userId}`) await channel.emit("notification.alert", "hello world!") ``` ```typescript page.tsx theme={"system"} useRealtime({ channels: [`user-${user.id}`], events: ["notification.alert"], onData({ data }) {}, }) ``` Broadcast to all users in a room: ```typescript route.ts theme={"system"} await realtime.channel(`room-${roomId}`).emit("room.message", { text: "Hello everyone!", sender: "Alice", }) ``` Scope events to team workspaces: ```typescript route.ts theme={"system"} await realtime.channel(`team-${teamId}`).emit("project.update", { project: "Website Redesign", status: "In Progress", }) ``` ## Dynamic Channels Subscribe to multiple channels at the same time: ```tsx page.tsx theme={"system"} "use client" import { useState } from "react" import { useRealtime } from "@/lib/realtime-client" export default function Page() { const [channels, setChannels] = useState(["lobby"]) useRealtime({ channels, events: ["chat.message"], onData({ event, data, channel }) { console.log(`Message
from ${channel}:`, data) }, }) const joinRoom = (roomId: string) => { setChannels((prev) => [...prev, roomId]) } const leaveRoom = (roomId: string) => { setChannels((prev) => prev.filter((c) => c !== roomId)) } return (

<div>
  <p>Active channels: {channels.join(", ")}</p>
  <button onClick={() => joinRoom("room-1")}>Join room-1</button>
  <button onClick={() => leaveRoom("room-1")}>Leave room-1</button>
</div>

) } ``` ## Broadcasting to Multiple Channels Emit to multiple channels at the same time: ```typescript route.ts theme={"system"} const rooms = ["lobby", "room-1", "room-2"] await Promise.all( rooms.map((room) => { const channel = realtime.channel(room) return channel.emit("chat.message", `Hi channel ${room}!`) }) ) ``` ## Channel Security Combine channels with [middleware](/realtime/features/middleware) for secure access control: ```typescript title="app/api/realtime/route.ts" theme={"system"} import { handle } from "@upstash/realtime" import { realtime } from "@/lib/realtime" import { currentUser } from "@/auth" export const GET = handle({ realtime, middleware: async ({ request, channels }) => { const user = await currentUser(request) for (const channel of channels) { if (!user.canAccessChannel(channel)) { return new Response("Unauthorized", { status: 401 }) } } }, }) ``` See the middleware documentation for authentication examples. # Client-Side Usage Source: https://upstash.com/docs/realtime/features/client-side The `useRealtime` hook connects your React components to realtime events with full type safety. ## Setup ### 1. Add the Provider Wrap your app in the `RealtimeProvider`: ```tsx providers.tsx theme={"system"} "use client" import { RealtimeProvider } from "@upstash/realtime/client" export function Providers({ children }: { children: React.ReactNode }) { return <RealtimeProvider>{children}</RealtimeProvider> } ``` ```tsx layout.tsx theme={"system"} import { Providers } from "./providers" export default function RootLayout({ children }: { children: React.ReactNode }) { return ( <html> <body> <Providers>{children}</Providers> </body> </html> ) } ``` ### 2.
Create Typed Hook Create a typed `useRealtime` hook using `createRealtime`: ```typescript lib/realtime-client.ts theme={"system"} "use client" import { createRealtime } from "@upstash/realtime/client" import type { RealtimeEvents } from "./realtime" export const { useRealtime } = createRealtime<RealtimeEvents>() ``` ## Basic Usage Subscribe to events in any client component: ```tsx page.tsx theme={"system"} "use client" import { useRealtime } from "@/lib/realtime-client" export default function Page() { useRealtime({ events: ["notification.alert"], onData({ event, data, channel }) { console.log(`Received ${event}:`, data) }, }) return

<div>Listening for events...</div>

} ``` ## Provider Options

* API configuration: `url` (the realtime endpoint URL) and `withCredentials` (whether to send cookies with requests)
* Maximum number of reconnection attempts before giving up

```tsx providers.tsx theme={"system"} "use client" import { RealtimeProvider } from "@upstash/realtime/client" export function Providers({ children }: { children: React.ReactNode }) { return ( <RealtimeProvider> {children} </RealtimeProvider> ) } ``` ## Hook Options

* `events`: Array of event names to subscribe to (e.g. `["notification.alert", "chat.message"]`)
* `onData`: Callback when an event is received. Receives an object with `event`, `data`, and `channel`.
* `channels`: Array of channel names to subscribe to
* `enabled`: Whether the subscription is active. Set to `false` to disconnect.

## Return Value The hook returns an object with: * `status`: Current connection state: `"connecting"`, `"connected"`, `"disconnected"`, or `"error"` ```tsx page.tsx theme={"system"} import { useRealtime } from "@/lib/realtime-client" const { status } = useRealtime({ events: ["notification.alert"], onData({ event, data, channel }) {}, }) console.log(status) ``` ## Connection Control Enable or disable connections dynamically: ```tsx page.tsx theme={"system"} "use client" import { useState } from "react" import { useRealtime } from "@/lib/realtime-client" export default function Page() { const [enabled, setEnabled] = useState(true) const { status } = useRealtime({ enabled, events: ["notification.alert"], onData({ event, data, channel }) { console.log(event, data, channel) }, }) return (

<div>
  <p>Status: {status}</p>
  <button onClick={() => setEnabled(!enabled)}>{enabled ? "Disable" : "Enable"}</button>
</div>

) } ``` ### Conditional Connections Connect only when certain conditions are met: ```tsx page.tsx theme={"system"} "use client" import { useRealtime } from "@/lib/realtime-client" import { useUser } from "@/hooks/auth" export default function Page() { const { user } = useUser() useRealtime({ enabled: Boolean(user), channels: [`user-${user?.id}`], events: ["notification.alert"], onData({ event, data, channel }) { console.log(data) }, }) return

<div>Notifications {user ? "enabled" : "disabled"}</div>

} ``` ## Multiple Events Subscribe to multiple events at once: ```tsx page.tsx theme={"system"} "use client" import { useRealtime } from "@/lib/realtime-client" export default function Page() { useRealtime({ events: ["chat.message", "chat.reaction", "user.joined"], onData({ event, data, channel }) { // 👇 data is automatically typed based on the event if (event === "chat.message") console.log("New message:", data) if (event === "chat.reaction") console.log("New reaction:", data) if (event === "user.joined") console.log("User joined:", data) }, }) return

<div>Listening to multiple events</div>

} ``` ## Multiple Channels Subscribe to multiple channels at once: ```tsx page.tsx theme={"system"} "use client" import { useRealtime } from "@/lib/realtime-client" export default function Page() { useRealtime({ channels: ["global", "announcements", "user-123"], events: ["notification.alert"], onData({ event, data, channel }) { console.log(`Message from ${channel}:`, data) }, }) return

<div>Listening to multiple channels</div>

} ``` ### Dynamic Channel Management Add and remove channels dynamically: ```tsx page.tsx theme={"system"} "use client" import { useState } from "react" import { useRealtime } from "@/lib/realtime-client" export default function Page() { const [channels, setChannels] = useState(["lobby"]) useRealtime({ channels, events: ["chat.message"], onData({ event, data, channel }) { console.log(`Message from ${channel}:`, data) }, }) const joinRoom = (roomId: string) => { setChannels((prev) => [...prev, roomId]) } const leaveRoom = (roomId: string) => { setChannels((prev) => prev.filter((c) => c !== roomId)) } return (

<div>
  <p>Active channels: {channels.join(", ")}</p>
  <button onClick={() => joinRoom("room-1")}>Join room-1</button>
  <button onClick={() => leaveRoom("room-1")}>Leave room-1</button>
</div>

) } ``` ## Custom API Endpoint Configure a custom realtime endpoint in the provider: ```tsx providers.tsx theme={"system"} "use client" import { RealtimeProvider } from "@upstash/realtime/client" export function Providers({ children }: { children: React.ReactNode }) { return ( /* endpoint configuration shown as an illustrative example */ <RealtimeProvider api={{ url: "/api/realtime" }}> {children} </RealtimeProvider> ) } ``` ## Use Cases Show real-time notifications to users: ```tsx notifications.tsx theme={"system"} "use client" import { useRealtime } from "@/lib/realtime-client" import { toast } from "react-hot-toast" import { useUser } from "@/hooks/auth" export default function Notifications() { const { user } = useUser() useRealtime({ channels: [`user-${user.id}`], events: ["notification.alert"], onData({ data }) { toast(data) }, }) return

<div>Listening for notifications...</div>

} ```
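The notification example above calls `toast` for every incoming event. If the same alert can arrive more than once (for example, emitted again by a retried server handler), you may want to suppress duplicates before showing a toast. A small illustrative helper for this (an assumption on our part, not part of `@upstash/realtime`):

```typescript
// Sketch: suppress duplicate notifications that repeat within `windowMs`.
// The injectable clock (`now`) makes the helper easy to test.
function createNotificationGate(
  windowMs: number,
  now: () => number = Date.now
) {
  const lastShown = new Map<string, number>();
  return (message: string): boolean => {
    const t = now();
    const prev = lastShown.get(message);
    if (prev !== undefined && t - prev < windowMs) return false; // duplicate, skip
    lastShown.set(message, t);
    return true; // first occurrence in this window, show it
  };
}
```

In the component above you would then write `if (shouldShow(data)) toast(data)` inside `onData`, where `shouldShow = createNotificationGate(5000)`.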
Build a real-time chat: ```tsx chat.tsx theme={"system"} "use client" import { useState } from "react" import { useRealtime } from "@/lib/realtime-client" import z from "zod/v4" import type { RealtimeEvents } from "@/lib/realtime" type Message = z.infer<RealtimeEvents["chat"]["message"]> export default function Chat() { const [messages, setMessages] = useState<Message[]>([]) useRealtime({ channels: ["room-123"], events: ["chat.message"], onData({ data }) { setMessages((prev) => [...prev, data]) }, }) return (
<div>
  {messages.map((msg, i) => (
    <p key={i}>
      {msg.sender}: {msg.text}
    </p>
  ))}
</div>
) } ```
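Events are typed end to end, but data arriving over the wire can still be worth validating defensively, for example when a channel is shared by several producers. A minimal hand-rolled guard for the `{ sender, text }` payload shape used in the chat example (illustrative; in practice you could reuse your zod schema's `safeParse` instead):

```typescript
type ChatMessage = { sender: string; text: string };

// Type guard: narrows an unknown payload to the chat message shape.
function isChatMessage(value: unknown): value is ChatMessage {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.sender === "string" && typeof v.text === "string";
}
```

Inside `onData` you could then write `if (isChatMessage(data)) setMessages((prev) => [...prev, data])` to silently drop malformed payloads.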
Update metrics in real-time: ```tsx dashboard.tsx theme={"system"} "use client" import { useQuery, useQueryClient } from "@tanstack/react-query" import { useRealtime } from "@/lib/realtime-client" export default function Dashboard() { const queryClient = useQueryClient() const { data: metrics } = useQuery({ queryKey: ["metrics"], queryFn: async () => { const res = await fetch("/api/metrics?user=user-123") return res.json() }, }) useRealtime({ channels: ["user-123"], events: ["metrics.update"], onData() { queryClient.invalidateQueries({ queryKey: ["metrics"] }) }, }) return (

<div>
  <p>Active Users: {metrics?.users}</p>
  <p>Revenue: ${metrics?.revenue}</p>
</div>

) } ```
Sync changes across users: ```tsx editor.tsx theme={"system"} "use client" import { useState } from "react" import { useRealtime } from "@/lib/realtime-client" export default function Editor({ documentId }: { documentId: string }) { const [content, setContent] = useState("") useRealtime({ channels: [`doc-${documentId}`], events: ["document.update"], onData({ data }) { setContent(data.content) }, }) return