# Mintlify Starter Kit
Source: https://upstash.com/docs/README
Click on `Use this template` to copy the Mintlify starter kit. The starter kit
contains examples including
* Guide pages
* Navigation
* Customizations
* API Reference pages
* Use of popular components
### 👩‍💻 Development
Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview
the documentation changes locally. To install, use the following command:
```bash
npm i -g mintlify
```
Run the following command at the root of your documentation (where `mint.json` is):
```bash
mintlify dev
```
### 😎 Publishing Changes
Changes will be deployed to production automatically after pushing to the
default branch.
You can also preview changes using PRs, which generates a preview link of the
docs.
#### Troubleshooting
* Mintlify dev isn't running - Run `mintlify install` to re-install
dependencies.
* Page loads as a 404 - Make sure you are running in a folder with `mint.json`
# Get QStash
Source: https://upstash.com/docs/api-reference/qstash/get-qstash
devops/developer-api/openapi.yml get /qstash/user
Retrieves detailed information about the authenticated user's QStash account, including plan details, limits, and configuration.
# Get QStash Stats
Source: https://upstash.com/docs/api-reference/qstash/get-qstash-stats
devops/developer-api/openapi.yml get /qstash/stats
Retrieves detailed usage statistics for the QStash account including
daily requests, billing, bandwidth, and workflow metrics over time.
# Reset QStash Token
Source: https://upstash.com/docs/api-reference/qstash/reset-qstash-token
devops/developer-api/openapi.yml post /qstash/user/rotatetoken
Resets the authentication credentials for the QStash user account.
This invalidates the old password and token, and generates new ones.
Returns the updated user information with new credentials.
# Set QStash Plan
Source: https://upstash.com/docs/api-reference/qstash/set-qstash-plan
devops/developer-api/openapi.yml post /qstash-upgrade
Changes the QStash account to a different plan type.
This operation changes the plan and associated limits for the QStash account.
# Create Search Index
Source: https://upstash.com/docs/api-reference/search/create-search-index
devops/developer-api/openapi.yml post /search
Creates a new search index with the specified configuration
# Delete Search Index
Source: https://upstash.com/docs/api-reference/search/delete-search-index
devops/developer-api/openapi.yml delete /search/{id}
Permanently deletes a search index and all its data
# Get Index Stats
Source: https://upstash.com/docs/api-reference/search/get-index-stats
devops/developer-api/openapi.yml get /search/{id}/stats
Retrieves statistics and metrics for a specific search index
# Get Search Index
Source: https://upstash.com/docs/api-reference/search/get-search-index
devops/developer-api/openapi.yml get /search/{id}
Retrieves detailed information about a specific search index
# Get Search Stats
Source: https://upstash.com/docs/api-reference/search/get-search-stats
devops/developer-api/openapi.yml get /search/stats
Get search statistics for all the search indices associated with the authenticated user
# List Search Indexes
Source: https://upstash.com/docs/api-reference/search/list-search-indexes
devops/developer-api/openapi.yml get /search
Returns a list of all search indices belonging to the authenticated user.
# Rename Search Index
Source: https://upstash.com/docs/api-reference/search/rename-search-index
devops/developer-api/openapi.yml post /search/{id}/rename
Renames a search index.
# Reset Password
Source: https://upstash.com/docs/api-reference/search/reset-password
devops/developer-api/openapi.yml post /search/{id}/reset-password
This endpoint resets the regular and readonly tokens of a search index.
# Transfer Search Index
Source: https://upstash.com/docs/api-reference/search/transfer-search-index
devops/developer-api/openapi.yml post /search/{id}/transfer
Transfers ownership of a search index to another team.
Transferring to a personal account is not supported.
However, transferring from a personal account to a team is allowed.
# Get Index Stats
Source: https://upstash.com/docs/api-reference/vector/get-index-stats
devops/developer-api/openapi.yml get /vector/index/{id}/stats
Retrieves statistics and metrics for a specific vector index
# Get Vector Stats
Source: https://upstash.com/docs/api-reference/vector/get-vector-stats
devops/developer-api/openapi.yml get /vector/index/stats
Get vector statistics for all the vector indices associated with the authenticated user
# Add a Payment Method
Source: https://upstash.com/docs/common/account/addapaymentmethod
Upstash does not require a credit card for Free databases. However, for paid databases, you need to add at least one payment method. To add a payment method, follow these steps:
1. Click on your profile at the top right.
2. Select `Account` from the dropdown menu.
3. Navigate to the `Billing` tab.
4. On the screen, click the `Add Your Card` button.
5. Enter your name and credit card information in the following form:
You can enter multiple credit cards and set one of them as the default. Payments
will be charged to the default credit card.
## Payment Security
Upstash does not store users' credit card information on its servers. We use
Stripe, a payment processing company, to handle payments. You can read more
about payment security in Stripe's documentation
[here](https://stripe.com/docs/security/stripe).
# Audit Logs
Source: https://upstash.com/docs/common/account/auditlogs
Audit logs give you a chronological set of activity records that have affected
your databases and Upstash account. You can see the list of all activities on a
single page. You can access your audit logs under `Account > Audit Logs` in your
console:
Here, the `Source` column shows whether the action was performed via the console or via
an API key. The `Entity` column gives you the name of the resource that was
affected by the action. For example, when you delete a database, the name of the
database is shown here. You can also see the IP address that performed the
action.
## Security
You can track your audit logs to detect any unusual activity on your account and
databases. When you suspect any security breach, you should delete the API key
related to suspicious activity and inform us by emailing
[support@upstash.com](mailto:support@upstash.com)
## Retention period
After the retention period, the audit logs are deleted. The retention period for free databases is 7 days, for pay-as-you-go databases, it is 30 days, and for the Pro tier, it is one year.
# AWS Marketplace
Source: https://upstash.com/docs/common/account/awsmarketplace
**Prerequisite**
You need an Upstash account before subscribing on AWS, create one
[here](https://console.upstash.com).
Upstash is available on the AWS Marketplace, which is particularly beneficial for users who already get other services from AWS Marketplace and can consolidate Upstash under a single bill.
You can search "Upstash" on AWS Marketplace or just click [here](https://aws.amazon.com/marketplace/pp/prodview-fssqvkdcpycco).
Once you click subscribe, you will be prompted to select which personal or team account you wish to link with your AWS Subscription.
Once your account is linked, regardless of which Upstash product you use, all of your usage will be billed to your AWS Account. You can also upgrade or downgrade your subscription through Upstash console.
# Cost Explorer
Source: https://upstash.com/docs/common/account/costexplorer
The Cost Explorer pages allow you to view your current and previous months’ costs. To access the Cost Explorer, navigate to the left menu and select Account > Cost Explorer. Below is an example report:
You can select a specific month to view the cost breakdown for that period. Here's the explanation of the fields in the report:
**Request:** This represents the total number of requests sent to the database.
**Storage:** This indicates the average size of the total storage consumed. Upstash databases include a persistence layer for data durability. For example, if you have 1 GB of data in your database throughout the entire month, this value will be 1 GB. Even if your database is empty for the first 29 days of the month and then expands to 30 GB on the last day, this value will still be 1 GB.
**Cost:** This field represents the total cost of your database in US Dollars.
> The values for the current month are updated hourly, so they can be stale by up
> to one hour.
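The storage averaging described above can be sketched as follows. This is an illustrative calculation, not Upstash's actual billing code; the daily-sampling granularity is an assumption for the example.

```python
def average_storage_gb(daily_sizes_gb):
    """Average storage over a billing period, as described above:
    the billed storage is the mean of the sampled sizes, not the peak."""
    if not daily_sizes_gb:
        return 0.0
    return sum(daily_sizes_gb) / len(daily_sizes_gb)

# A database that is empty for 29 days and holds 30 GB on the last day
sizes = [0.0] * 29 + [30.0]
print(average_storage_gb(sizes))  # 1.0, matching the 1 GB example above
```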
# Create an Account
Source: https://upstash.com/docs/common/account/createaccount
You can sign up for Upstash using your Amazon, GitHub, or Google account. Alternatively, if you prefer not to use these authentication providers or want to sign up with a corporate email address, you can also sign up using email and password.
We do not access your information other than:
* Your email
* Your name
* Your profile picture
We never share your information with third parties.
# Developer API
Source: https://upstash.com/docs/common/account/developerapi
Using the Upstash API, you can develop applications that create and manage
Upstash databases and Upstash Vector Indexes. You can automate everything that
you can do in the console. To use the Developer API, you need to create an API
key in the console.
Note: The Developer API is only available to native Upstash accounts. Accounts created via third-party platforms like Vercel or Fly.io are not supported.
See [DevOps](/devops) for details.
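As a sketch of what a Developer API call looks like, the snippet below builds (but does not send) a basic-auth request using your account email and API key. The `/v2/redis/databases` endpoint shown here is one example from the Developer API; see [DevOps](/devops) for the complete reference.

```python
import base64
import urllib.request

API_BASE = "https://api.upstash.com"

def build_list_databases_request(email: str, api_key: str) -> urllib.request.Request:
    """Builds a basic-auth request to list Redis databases via the Developer API."""
    token = base64.b64encode(f"{email}:{api_key}".encode()).decode()
    return urllib.request.Request(
        f"{API_BASE}/v2/redis/databases",
        headers={"Authorization": f"Basic {token}"},
    )

req = build_list_databases_request("me@example.com", "my-api-key")
print(req.full_url)
```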
# Account and Billing FAQ
Source: https://upstash.com/docs/common/account/faq
## How can I delete my account?
You can delete your account from `Account` > `Settings` > `Delete Account`. You should first delete all your databases and clusters. After you delete your account, all your data and payment information will be deleted and you will not be able to recover it.
## How can I delete my credit card?
You can delete your credit card from `Account` > `Billing` page. However, you should first add a new credit card to be able to delete the existing one. If you want to delete all of your payment information, you should delete your account.
## How can I change my email address?
You can change your account email address on the `Account` > `Settings` page. To change your billing email address, see the `Account` > `Billing` page. If you encounter any issues, please contact us at [support@upstash.com](mailto:support@upstash.com).
## Can I set an upper spending limit, so I don't get surprises after an unexpected amount of high traffic?
On the Pay as You Go model, you can set a budget for your Redis instances. When your monthly cost reaches the maximum budget, we send an email to inform you and throttle your instance. You will not be charged beyond your set budget.
To set the budget, you can go to the "Usage" tab of your Redis instance and click "Change Budget" under the cost metric.
## What happens if my payment fails?
If a payment failure occurs, we will retry the payment three more times before suspending the account. During this time, you will receive email notifications about the payment failure. If the account is suspended, all resources in the account will be inaccessible. If you add a valid payment method after the account suspension, your account will be automatically unsuspended during the next payment attempt.
## What happens if I unsubscribe from AWS Marketplace but I don't have any other payment methods?
We send a warning email three times before suspending an account. If no valid payment method is added, we suspend the account. Once the account is suspended, all resources within the account will be inaccessible. If you add a valid payment method after the account suspension, your account will be automatically unsuspended during the next system check.
## I have a question about my bill, who should I contact?
Please contact us at [support@upstash.com](mailto:support@upstash.com).
# Payment History
Source: https://upstash.com/docs/common/account/paymenthistory
The Payment History page gives you information about your payments. You can open your
payment history in the left menu under Account > Payment History. Here is an example
report:
You can download receipts. If one of your payments failed, you can retry the
payment on this page.
# Teams and Users
Source: https://upstash.com/docs/common/account/teams
Team management enables collaboration with other users. You can create a team and invite people to join by using their email addresses. Team members will have access to databases created under the team based on their assigned roles.
## Create Team
You can create a team using the menu `Account > Teams`
> A user can create up to 5 teams. You can be part of even more teams but only
> be the owner of 5 teams. If you need to own more teams please email us at
> [support@upstash.com](mailto:support@upstash.com).
You can still continue using your personal account or switch to a team.
> The databases in your personal account are not shared with anyone. If you want
> your database to be accessible by other users, you need to create it under a
> team.
## Switch Team
You need to switch to the team to create databases shared with other team
members. You can switch to the team via the switch button in the team table. Or
you can click your profile pic in the top right and switch to any team listed
there.
## Add/Remove Team Member
After switching to a team, if you are the Owner or an Admin of the team, you can add team members by navigating to `Account > Teams`. Simply enter their email addresses. It's not an issue if the email addresses are not yet registered with Upstash. Once the user registers with that email, they will gain access to the team. We do not send invitations; when you add a member, they become a member directly. You can also remove members from the same page.
> Only Admins or the Owner can add/remove users.
## Roles
While adding a team member, you will need to select a role. Here are the access rights associated with each role:
* Admin: This role has full access, including the ability to add and remove members, manage databases, and payment methods.
* Dev: This role can create, manage, and delete databases but cannot manage users or payment methods.
* Finance: This role is limited to managing payment methods and cannot manage databases or users.
* Owner: The Owner role has all the access rights of an Admin and, in addition, can delete the team. This role is automatically assigned to the user who created the team, and you cannot assign it to other members.
> If you want to change a user's role, you will need to delete and re-add them with the desired access rights.
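The role matrix above can be summarized as a simple capability table. This is an illustrative sketch of the access model, not console code; the permission names are made up for the example.

```python
# Illustrative capability table for the roles described above.
ROLE_PERMISSIONS = {
    "owner":   {"manage_members", "manage_databases", "manage_payments", "delete_team"},
    "admin":   {"manage_members", "manage_databases", "manage_payments"},
    "dev":     {"manage_databases"},
    "finance": {"manage_payments"},
}

def can(role: str, action: str) -> bool:
    """Returns whether a role is allowed to perform an action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(can("dev", "manage_databases"))    # True
print(can("finance", "manage_members"))  # False
```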
## Delete Team
Only the original creator (owner) can delete a team. The team must not
have any active databases, so all databases under the team should be deleted
first. To delete your team, first switch to your personal account, then
delete the team from the team list under `Account > Teams`.
# Access Anywhere
Source: https://upstash.com/docs/common/concepts/access-anywhere
Upstash has integrated REST APIs into all its products to facilitate access from various runtime environments. This integration is particularly beneficial for edge runtimes like Cloudflare Workers and Vercel Edge, which do not permit TCP connections, and for serverless functions such as AWS Lambda, which are stateless and do not retain connection information between invocations.
### Rationale
The absence of TCP connection support in edge runtimes and the stateless nature of serverless functions necessitate a different approach for persistent connections typically used in traditional server setups. The stateless REST API provided by Upstash addresses this gap, enabling consistent and reliable communication with data stores from these platforms.
### REST API Design
The REST APIs for Upstash services are thoughtfully designed to align closely with the conventions of each product. This ensures that users who are already familiar with these services will find the interactions intuitive and familiar. Our API endpoints are self-explanatory, following standard REST practices to guarantee ease of use and seamless integration.
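For Upstash Redis, for example, the REST API maps a Redis command directly onto URL path segments (`SET foo bar` becomes `/set/foo/bar`, authenticated with a bearer token). A minimal sketch of that mapping:

```python
from urllib.parse import quote

def redis_command_to_rest_path(*command: str) -> str:
    """Maps a Redis command to an Upstash-style REST path: SET foo bar -> /set/foo/bar.
    Arguments are URL-encoded so values with spaces or slashes stay intact."""
    cmd, *args = command
    segments = [cmd.lower()] + [quote(arg, safe="") for arg in args]
    return "/" + "/".join(segments)

print(redis_command_to_rest_path("SET", "foo", "bar"))  # /set/foo/bar
print(redis_command_to_rest_path("GET", "foo"))         # /get/foo
```

See the [Redis REST API docs](/redis/features/restapi) for the full set of supported commands and response formats.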
### SDKs for Popular Languages
To enhance the developer experience, Upstash is developing SDKs in various popular programming languages. These SDKs simplify the process of integrating Upstash services with your applications by providing straightforward methods and functions that abstract the underlying REST API calls.
### Resources
[Redis REST API Docs](/redis/features/restapi)
[QStash REST API Docs](/qstash/api/authentication)
[Redis SDK - Typescript](https://github.com/upstash/upstash-redis)
[Redis SDK - Python](https://github.com/upstash/redis-python)
[QStash SDK - Typescript](https://github.com/upstash/sdk-qstash-ts)
# Global Replication
Source: https://upstash.com/docs/common/concepts/global-replication
Global Replication for Low Latency and High Availability
Upstash Redis automatically replicates your data to the regions you choose, so your application stays fast and responsive, no matter where your users are.
Add or remove regions from a database at any time with zero downtime. Each region acts as a replica, holding a copy of your data for low latency and high availability.
***
## Built for Modern Serverless Architectures
In serverless computing, performance isn't just about fast code—it's also about fast, reliable data access from anywhere in the world. Whether you're using Vercel Functions, Cloudflare Workers, Fastly Compute, or Deno Deploy, your data layer needs to be as distributed and flexible as your compute for best performance.
Upstash Global replicates your Redis data across multiple regions to:
* Minimize round-trip latency
* Guarantee high availability at scale
...even under heavy or dynamic workloads. Our HTTP-based Redis® client is optimized for serverless environments and delivers consistent performance under high concurrency or variable workloads.
As serverless platforms evolve with features like in-function concurrency (e.g. [Vercel's Fluid Compute](https://vercel.com/fluid)), you need a data layer that can keep up. Upstash Redis is a globally distributed, low-latency database that scales with your compute, wherever it runs.
***
## How Global Replication Works
To minimize latency for read operations, we use a replica model. Our tests show sub-millisecond latency for read commands in the same AWS region as the Upstash Redis® instance.
**Read commands are automatically served from the geographically closest replica**:
**Write commands go to the primary database** for consistency. After a successful write, they are replicated to all read replicas:
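Conceptually, the routing works like this. The snippet below is a toy model with made-up regions and latencies, not Upstash's actual implementation:

```python
# Toy model of global replication routing: reads go to the closest replica,
# writes go to the primary region and are then propagated to every replica.
# Latencies here are hypothetical, as measured from a client in Europe.
REPLICA_LATENCY_MS = {"us-east-1": 90, "eu-west-1": 8, "ap-northeast-1": 210}
PRIMARY = "us-east-1"

def route(command: str) -> str:
    reads = {"GET", "MGET", "HGET", "SMEMBERS"}
    if command.upper() in reads:
        # Serve from the geographically closest replica.
        return min(REPLICA_LATENCY_MS, key=REPLICA_LATENCY_MS.get)
    # Writes always go to the primary for consistency.
    return PRIMARY

print(route("GET"))  # eu-west-1 (closest replica to this client)
print(route("SET"))  # us-east-1 (primary)
```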
***
## Available Regions
To create a globally distributed database, select a primary region and the number of read regions:
* Select a primary region for most write operations for best performance.
* Select read regions close to your users for optimized read speeds.
Each request is then automatically served by the closest read replica for maximum performance and minimum latency:
**You can create read replicas in the following regions:**
* AWS US-East-1 (North Virginia)
* AWS US-East-2 (Ohio)
* AWS US-West-1 (North California)
* AWS US-West-2 (Oregon)
* AWS EU-West-1 (Ireland)
* AWS EU-West-2 (London)
* AWS EU-Central-1 (Frankfurt)
* AWS AP-South-1 (Mumbai)
* AWS AP-Northeast-1 (Tokyo)
* AWS AP-Southeast-1 (Singapore)
* AWS AP-Southeast-2 (Sydney)
* AWS SA-East-1 (São Paulo)
Check out [our blog post](https://upstash.com/blog/global-database) to learn more about our global replication philosophy. You can also explore our [live benchmark](https://latency.upstash.com/) to see Upstash Redis latency from different locations around the world.
# Scale to Zero
Source: https://upstash.com/docs/common/concepts/scale-to-zero
Only pay for what you really use.
Traditionally, cloud services required users to predict their resource needs and provision servers or instances based on those predictions. This often led to over-provisioning to handle potential peak loads, resulting in paying for unused resources during periods of low demand.
By *scaling to zero*, our pricing model aligns more closely with actual usage.
## Pay for usage
You're only charged for the resources you actively use. When your application experiences low activity or no incoming requests, the system automatically scales down resources to a minimal level. This means you're no longer paying for idle capacity, resulting in cost savings.
## Flexibility
"Scaling to zero" offers flexibility in scaling both up and down. As your application experiences traffic spikes, the system scales up resources to meet demand. Conversely, during quiet periods, resources scale down.
## Focus on Innovation
Developers can concentrate on building and improving the application without constantly worrying about resource optimization. Upstash handles the scaling, allowing developers to focus on creating features that enhance user experiences.
In essence, this aligns pricing with actual utilization, increases cost efficiency, and promotes a more sustainable approach to resource consumption. This model empowers businesses to leverage cloud resources without incurring unnecessary expenses, making cloud computing more accessible and attractive to a broader range of organizations.
# Serverless
Source: https://upstash.com/docs/common/concepts/serverless
What do we mean by serverless?
Upstash is a modern serverless data platform. But what do we mean by serverless?
## No Server Management
In a serverless setup, developers don't need to worry about configuring or managing servers. We take care of server provisioning, scaling, and maintenance.
## Automatic Scaling
As traffic or demand increases, Upstash automatically scales the required resources to handle the load. This means applications can handle sudden spikes in traffic without manual intervention.
## Granular Billing
We charge based on the actual usage of resources rather than pre-allocated capacity. This can lead to more cost-effective solutions, as users only pay for what they consume. [Read more](/common/concepts/scale-to-zero)
## Stateless Functions
In serverless architectures, functions are typically stateless. However, the traditional approach involves establishing long-lived connections to databases, which can lead to issues in serverless environments if connections aren't properly managed after use. Additionally, there are scenarios where TCP connections may not be feasible. Upstash addresses this issue by offering access via HTTP, a universally available protocol across all platforms.
## Rapid Deployment
Fast iteration is the key to success in today's competitive environment. You can create a new Upstash database in seconds, with minimal required configuration.
# Account & Teams
Source: https://upstash.com/docs/common/help/account
## Create an Account
You can sign up for Upstash using your Amazon, GitHub, or Google account. Alternatively, you can sign up using
email/password registration if you don't want to use these auth providers or
want to sign up with a corporate email address.
We do not access your information other than:
* Your email
* Your name
* Your profile picture
We never share your information with third parties.
Team management allows you to collaborate with other users. You can create a team
and invite people to the team by their email addresses. The team members will have
access to the databases created under the team depending on their roles.
## Teams
### Create Team
You can create a team using the menu `Account > Teams`
> A user can create up to 5 teams. You can be part of even more teams but only
> be the owner of 5 teams. If you need to own more teams please email us at
> [support@upstash.com](mailto:support@upstash.com).
You can still continue using your personal account or switch to a team.
> The databases in your personal account are not shared with anyone. If you want
> your database to be accessible by other users, you need to create it under a
> team.
### Switch Team
You need to switch to the team to create databases shared with other team
members. You can switch to the team via the switch button in the team table. Or
you can click your profile pic in the top right and switch to any team listed
there.
### Add/Remove Team Member
After switching to a team, you can add team members in `Account > Teams` if
you are the Owner or an Admin of the team. Entering an email address is enough;
it is not a problem if the email is not yet registered with Upstash. Once the
user registers with that email, they will be able to switch to the team. We do
not send invitations, so when you add a member, they become a member directly.
You can remove members from the same page.
> Only Admins or the Owner can add/remove users.
### Roles
While adding a team member you need to select a role. Here are the privileges of
each role:
* Admin: This role has full access, including adding and removing members, databases, and payment methods.
* Dev: This role can create, manage, and delete databases. It cannot manage users or payment methods.
* Finance: This role can only manage payment methods. It cannot manage databases or users.
* Owner: The Owner has all the privileges that an Admin has and is, in addition, the only person who can delete the team. This role is assigned to the user who created the team, so you cannot create a member with the Owner role.
> If you want to change the role of a user, you need to delete and re-add them with the desired role.
### Delete Team
Only the original creator (owner) can delete a team. The team must not
have any active databases, so all databases under the team should be deleted
first. To delete your team, first switch to your personal account, then
delete the team from the team list under `Account > Teams`.
# Announcements
Source: https://upstash.com/docs/common/help/announcements
Upstash Announcements!
#### Removal of GraphQL API and edge caching (Redis) (October 1, 2022)
These two features have already been deprecated. We are planning to deactivate them
completely on November 1st. We recommend using the REST API in place of the GraphQL API
and Global databases instead of Edge caching.
#### Removal of strong consistency (Redis) (October 1, 2022)
Upstash supported a Strong Consistency mode for single-region databases. We decided
to deprecate this feature because its effect on latency started to conflict with
the performance expectations of Redis use cases. Moreover, we improved the
consistency of replication to guarantee Read-Your-Writes consistency. Strong
consistency will be disabled on existing databases on November 1st.
#### Redis pay-as-you-go usage cap (October 1, 2022)
We are increasing the max usage cap to \$160 from \$120 as of October 1st. This
update is needed because of the increasing infrastructure cost due to
replicating all databases to multiple instances. After your database exceeds the
max usage cost, your database might be rate limited.
#### Replication is enabled (Sep 29, 2022)
All new and existing paid databases will be replicated to multiple replicas.
Replication enables high availability in case of system and infrastructure
failures. Starting from October 1st, we will gradually upgrade all databases
without downtime. Free databases will stay single replica.
#### QStash Price Decrease (Sep 15, 2022)
The price is \$1 per 100K requests.
#### [Pulumi Provider is available](https://upstash.com/blog/upstash-pulumi-provider) (August 4, 2022)
#### [QStash is released and announced](https://upstash.com/blog/qstash-announcement) (July 18, 2022)
#### [Announcing Upstash CLI](https://upstash.com/blog/upstash-cli) (May 16, 2022)
#### [Introducing Redis 6 Compatibility](https://upstash.com/blog/redis-6) (April 10, 2022)
#### Strong Consistency Deprecated (March 29, 2022)
We have deprecated Strong Consistency mode for Redis databases due to its
performance impact. This will not be available for new databases. We are
planning to disable it on existing databases before the end of 2023. The
database owners will be notified via email.
#### [Announcing Upstash Redis SDK v1.0.0](https://upstash.com/blog/upstash-redis-sdk-v1) (March 14, 2022)
#### Support for Google Cloud (June 8, 2021)
Google Cloud is available for Upstash Redis databases. We initially support
US-Central-1 (Iowa) region. Check the
[get started guide](https://docs.upstash.com/redis/howto/getstartedgooglecloudfunctions).
#### Support for AWS Japan (March 1, 2021)
こんにちは日本
Support for AWS Tokyo Region was the most requested feature by our users. Now
our users can create their database in AWS Asia Pacific (Tokyo) region
(ap-northeast-1). In addition to Japan, Upstash is available in the regions
us-west-1, us-east-1, eu-west-1.
Click [here](https://console.upstash.com) to start your database for free.
Click [here](https://roadmap.upstash.com) to request new regions to be
supported.
#### Vercel Integration (February 22, 2021)
The Upstash & Vercel integration has been released. Now you are able to integrate
Upstash into your project easily. We believe Upstash is the perfect database for
your applications thanks to its:
* Low latency data
* Per request pricing
* Durable storage
* Ease of use
Below are the resources about the integration:
See the [how-to guide](https://docs.upstash.com/redis/howto/vercelintegration).
See the [integration page](https://vercel.com/integrations/upstash).
See the [Roadmap Voting app](https://github.com/upstash/roadmap) as a showcase for the integration.
# Compliance
Source: https://upstash.com/docs/common/help/compliance
## Upstash Legal & Security Documents
* [Upstash Terms of Service](https://upstash.com/static/trust/terms.pdf)
* [Upstash Privacy Policy](https://upstash.com/static/trust/privacy.pdf)
* [Upstash Data Processing Agreement](https://upstash.com/static/trust/dpa.pdf)
* [Upstash Technical and Organizational Security Measures](https://upstash.com/static/trust/security-measures.pdf)
* [Upstash Subcontractors](https://upstash.com/static/trust/subprocessors.pdf)
## Is Upstash SOC2 Compliant?
Upstash Redis databases under Pro and Enterprise support plans are SOC2 compliant. Check our [trust page](https://trust.upstash.com/) for details.
## Is Upstash ISO-27001 Compliant?
We are in the process of obtaining this certification. Contact us
([support@upstash.com](mailto:support@upstash.com)) to learn about the expected
date.
## Is Upstash GDPR Compliant?
Yes. For more information, see our
[Privacy Policy](https://upstash.com/static/trust/privacy.pdf). We acquire DPAs
from each [subcontractor](https://upstash.com/static/trust/subprocessors.pdf)
that we work with.
## Is Upstash HIPAA Compliant?
Yes. Upstash Redis is HIPAA compliant, and we are in the process of obtaining this compliance for our other products. See [Managing Healthcare Data](https://upstash.com/docs/redis/help/managing-healthcare-data) for more details.
## Is Upstash PCI Compliant?
Upstash does not store personal credit card information. We use Stripe for
payment processing. Stripe is a certified PCI Service Provider Level 1, which is
the highest level of certification in the payments industry.
## Does Upstash conduct vulnerability scanning and penetration tests?
Yes, we use third party tools and work with pen testers. We share the results
with Enterprise customers. Contact us
([support@upstash.com](mailto:support@upstash.com)) for more information.
## Does Upstash take backups?
Yes, we take regular snapshots of the data cluster to the AWS S3 platform.
## Does Upstash encrypt data?
Customers can enable TLS when creating a database or cluster, and we recommend this for production environments. Additionally, we encrypt data at rest upon customer request.
# Legal
Source: https://upstash.com/docs/common/help/legal
## Upstash Legal Documents
* [Upstash Terms of Service](https://upstash.com/trust/terms.pdf)
* [Upstash Privacy Policy](https://upstash.com/trust/privacy.pdf)
* [Upstash Subcontractors](https://upstash.com/trust/subprocessors.pdf)
* [Context7 Addendum](https://upstash.com/trust/context7addendum.pdf)
* [Data Processing Addendum](https://upstash.com/static/trust/dpa.pdf)
# Production Checklist
Source: https://upstash.com/docs/common/help/production-checklist
This checklist provides essential recommendations for securing and optimizing your Upstash databases for production workloads.
## Security Features
### Enable Prod Pack
Prod Pack provides enterprise-grade security and monitoring features:
* 99.99% uptime SLA
* SOC-2 Type 2 report available
* Role-Based Access Control (RBAC)
* Encryption at Rest
* Advanced monitoring (Prometheus, Datadog)
* High availability for read regions
Prod Pack is available as a \$200/month add-on per database for all paid plans except Free tier.
### Enable Credential Protection
Protect your database credentials (Prod Pack feature):
* Credentials are never stored in Upstash infrastructure
* Credentials are displayed only once during enablement
* Console features requiring database access are disabled
Disabling this feature will permanently revoke current credentials and generate new ones.
### Configure IP Allowlist
Restrict database access to specific IP addresses:
* Available on all plans except Free tier
* Supports IPv4 addresses and CIDR blocks
* Multiple IP ranges can be configured
### Implement Redis ACL
Use Redis Access Control Lists to restrict user access:
* Create users with minimal required permissions
* Available for both TCP connections and REST API
* Use `ACL RESTTOKEN` command to generate REST tokens
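For illustration, a token generated with `ACL RESTTOKEN` is then sent to the Upstash REST API as a Bearer token. A minimal sketch of building such a request (the endpoint and token below are placeholders, not real credentials):

```python theme={"system"}
# Sketch: assemble a REST API call authenticated with a token produced by
# `ACL RESTTOKEN <username> <password>`. Commands are encoded in the URL path.

def rest_command(endpoint: str, token: str, *command: str) -> tuple[str, dict]:
    """Build the URL and headers for a REST API call such as GET or SET."""
    url = f"https://{endpoint}/" + "/".join(command)
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = rest_command(
    "my-db.upstash.io",          # placeholder endpoint
    "acl-generated-rest-token",  # placeholder token from ACL RESTTOKEN
    "GET", "mykey",
)
```

Because the token carries the ACL user's permissions, a request built this way can only run the commands that user is allowed to execute.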
### Enable Multi-Factor Authentication
Enable MFA on your Upstash account for enhanced security:
* Use your existing authentication provider (Google, GitHub, Amazon)
* Consider using a dedicated email/password account for production
* Force MFA for all team members to ensure consistent security
* Regularly review account access and team member permissions
### Secure Credential Management
Follow these best practices:
* Never hardcode credentials in your application code
* Use environment variables or secret management systems
* Reset passwords immediately if credentials are compromised
* Use Read-Only tokens for public-facing applications
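A minimal sketch of the environment-variable practice above, in Python. The variable names follow the convention used by Upstash client libraries, but any names work; the point is to fail fast when credentials are missing rather than hardcode them:

```python theme={"system"}
import os

def load_credentials() -> dict:
    """Read Upstash REST credentials from the environment, never from source code."""
    url = os.environ.get("UPSTASH_REDIS_REST_URL")
    token = os.environ.get("UPSTASH_REDIS_REST_TOKEN")
    if not url or not token:
        raise RuntimeError("Upstash credentials are not set in the environment")
    return {"url": url, "token": token}
```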
## Network Security
### TLS Encryption
TLS is always enabled on Upstash Redis databases.
### VPC Peering (Enterprise)
Connect databases to your VPCs using private IP:
* Database becomes inaccessible from public networks
* Minimizes data transfer costs
* Available for Enterprise customers
## Monitoring & Observability
### Enable Advanced Monitoring
Prod Pack includes comprehensive monitoring:
* Prometheus integration
* Datadog integration
* Extended console metrics (up to one month)
## High Availability & Backup
### Enable Daily Backups
Configure automated daily backups for data protection:
* Available on all paid plans
* Backup retention up to 3 days with Prod Pack
* Hourly backups with customizable retention (Enterprise)
### Global Replication
For global applications, consider using Global Database:
* Distribute data across multiple regions
* Minimize latency for users worldwide
* Enhanced disaster recovery capabilities
## Compliance & Governance
### SOC-2 Compliance
Prod Pack and Enterprise plans include SOC-2 Type 2 compliance:
* Request SOC-2 report from [trust.upstash.com](https://trust.upstash.com/)
* Available for production workloads
### Enterprise Features
For enterprise customers:
* HIPAA compliance available
* SAML SSO integration
* Access logs available
* Custom resource allocation
## Pre-Production Checklist
Before going live, ensure you have:
* [ ] Prod Pack enabled (recommended)
* [ ] Credential Protection enabled
* [ ] IP Allowlist configured
* [ ] MFA enabled on your account
* [ ] Daily backups enabled
* [ ] Monitoring and alerts configured
* [ ] Environment variables secured
* [ ] Error handling tested
## Additional Resources
* [Security Features](/redis/features/security)
* [Prod Pack & Enterprise](/redis/overall/enterprise)
* [Backup & Restore](/redis/features/backup)
* [Global Database](/redis/features/globaldatabase)
* [Monitoring & Metrics](/redis/howto/metricsandcharts)
* [Compliance Information](/common/help/compliance)
* [Professional Support](/common/help/prosupport)
For additional assistance with production deployment, contact our support team at [support@upstash.com](mailto:support@upstash.com).
# Professional Support
Source: https://upstash.com/docs/common/help/prosupport
For all Upstash products, we manage everything for you so you can focus on what matters. If you ever need further help, our dedicated Professional Support team is here to ensure you get the most out of our platform, whether you're just starting out or scaling to new heights.
Professional Support is strongly recommended, especially for customers who use Upstash as part of their production systems.
# Expert Guidance
Get direct access to our team of specialists, who can provide insights, troubleshooting, and best practices tailored to your unique use case. For any urgent incident, our Support team is standing by, ready to join you for troubleshooting.
The Professional Support package includes:
* **Guaranteed Response Time:** A rapid Response Time SLA for urgent support requests, with **24/7 coverage**, ensuring your concerns are addressed promptly.
* **Customer Onboarding:** A personalized session to guide you through utilizing our support services and reviewing your specific use case for a seamless start.
* **Quarterly Use Case Review & Health Check:** On-request sessions every quarter to review your use case and ensure optimal performance.
* **Dedicated Slack Channel:** Direct access to our team via a private Slack channel, so you can reach out whenever you need assistance.
* **Incident Support:** Video call support during critical incidents to provide immediate help and resolution.
* **Root Cause Analysis:** Comprehensive investigation and post-mortem analysis of critical incidents to identify and address the root cause.
# Response Time SLA
We understand that timely assistance is critical for production workloads, so your access to our Support team comes with 24/7 coverage and the SLA below:
| Severity | Response Time |
| ------------------------------- | ------------- |
| P1 - Production system down | 30 minutes |
| P2 - Production system impaired | 2 hours |
| P3 - Minor issue | 12 hours |
| P4 - General guidance | 24 hours |
## How to Reach Out?
As a Professional Support customer, you can reach the Upstash Support team through **two methods**:
#### Starting a Chat
You will see a chatbox at the bottom right of the Upstash console, docs, and website. Once you initiate a chat, Professional Support customers will be prompted to select a severity level:
To see these options in chat, remember to sign in to your Upstash account first.
If you select "P1 - Production down, no workaround" or "P2 - Production impaired with workaround", you will trigger an alert for our team to step in urgently.
#### Sending an Email
You can also submit a support request by sending an email with the details to [support@upstash.com](mailto:support@upstash.com). In case of an urgent issue, include the keyword "urgent" in the email subject to alert our team about a possible incident.
# Pricing
For pricing and further details about Professional Support, please contact us at [support@upstash.com](mailto:support@upstash.com).
# Uptime SLA
Source: https://upstash.com/docs/common/help/sla
This Service Level Agreement ("SLA") applies to Upstash resources with the Prod Pack add-on or Enterprise plans. It is clarified that this SLA is subject to the [terms of the Agreement](https://upstash.com/trust/terms.pdf), and does not derogate therefrom (capitalized terms, unless otherwise indicated herein, have the meaning specified in the Agreement).
To receive uptime SLA guarantees, you need to enable the Prod Pack add-on or be on an Enterprise plan for your resource. Learn more about [Prod Pack and Enterprise features for Redis](/redis/overall/enterprise) or [QStash](/qstash/overall/enterprise).
Upstash reserves the right to change the terms of this SLA by publishing updated
terms on its website, such change to be effective as of the date of publication.
### Uptime Guarantee
Upstash will use commercially reasonable efforts to make resources with Prod Pack add-on or Enterprise plans available with a Monthly Uptime Percentage of at least **99.99%**.
In the event any of the services do not meet the SLA, you will be eligible to
receive a Service Credit as described below.
| Monthly Uptime Percentage | Service Credit Percentage |
| --------------------------------------------------- | ------------------------- |
| Less than 99.99% but equal to or greater than 99.0% | 10% |
| Less than 99.0% but equal to or greater than 95.0% | 30% |
| Less than 95.0% | 60% |
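The credit tiers above can be expressed as a simple lookup. A minimal sketch, mirroring the table (the function name is illustrative, not part of any Upstash API):

```python theme={"system"}
def service_credit_percentage(monthly_uptime: float) -> int:
    """Return the Service Credit percentage for a given Monthly Uptime Percentage."""
    if monthly_uptime >= 99.99:
        return 0   # SLA met, no credit
    if monthly_uptime >= 99.0:
        return 10
    if monthly_uptime >= 95.0:
        return 30
    return 60
```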
### SLA Credits
Service Credits are calculated as a percentage of the monthly bill (excluding
one-time payments such as upfront payments) for the resource in the affected
region that did not meet the SLA.
Uptime percentages are recorded and published in the
[Upstash Status Page](https://status.upstash.com).
To receive a Service Credit, you should submit a claim by sending an email to
[support@upstash.com](mailto:support@upstash.com). Your credit request should be
received by us before the end of the second billing cycle after the incident
occurred.
We will apply any service credits against future payments for the applicable
services. At our discretion, we may issue the Service Credit to the credit card
you used. Service Credits will not entitle you to any refund or other payment. A
Service Credit will be applicable and issued only if the credit amount for the
applicable monthly billing cycle is greater than one dollar (\$1 USD). Service
Credits may not be transferred or applied to any other account.
### Getting Uptime SLA Coverage
To receive uptime SLA guarantees for your resources, you need to upgrade to either:
* **Prod Pack**: An add-on per resource available to both pay-as-you-go and fixed-price plans
* **Enterprise Plan**: A custom plan that can cover one or more of your resources
You can activate Prod Pack on the resource details page in the console. For Enterprise plans, contact [support@upstash.com](mailto:support@upstash.com).
Learn more about [Prod Pack and Enterprise features for Redis](/redis/overall/enterprise) or [QStash](/qstash/overall/enterprise).
# Support & Contact Us
Source: https://upstash.com/docs/common/help/support
## Community
[Upstash Discord Channel](https://upstash.com/discord) is the best way to
interact with the community.
## Team
Regardless of your subscription plan, you can contact the team
via [support@upstash.com](mailto:support@upstash.com) for technical support as
well as questions and feedback.
## Follow Us
Follow us on [X](https://x.com/upstash).
## Enterprise Support
Get [Enterprise Support](/common/help/prosupport) for your organization from the Upstash team.
# Uptime Monitor
Source: https://upstash.com/docs/common/help/uptime
## Status Page
You can track the uptime status of Upstash databases on the
[Upstash Status Page](https://status.upstash.com).
## Latency Monitor
You can see the average latencies for different regions on the
[Upstash Latency Monitoring](https://latency.upstash.com) page.
# Trials
Source: https://upstash.com/docs/common/trials
If you want to try Upstash's paid and pro plans, we can offer **Free
Trials**. Email us at [support@upstash.com](mailto:support@upstash.com).
# Overview
Source: https://upstash.com/docs/devops/cli/overview
Manage Upstash resources in your terminal or CI.
You can find the GitHub repository [here](https://github.com/upstash/cli).
# Installation
## npm
You can install the Upstash CLI directly from npm:
```bash theme={"system"}
npm i -g @upstash/cli
```
It will be added to your system's path as `upstash`.
## Compiled binaries
`upstash` is also available from the
[releases page](https://github.com/upstash/cli/releases/latest), compiled
for Windows, Linux, and macOS (both Intel and M1).
# Usage
```bash theme={"system"}
> upstash
Usage: upstash
Version: development
Description:
Official cli for Upstash products
Options:
-h, --help - Show this help.
-V, --version - Show the version number for this program.
-c, --config - Path to .upstash.json file
Commands:
auth - Login and logout
redis - Manage redis database instances
team - Manage your teams and their members
Environment variables:
UPSTASH_EMAIL - The email you use on upstash
UPSTASH_API_KEY - The api key from upstash
```
## Authentication
When running `upstash` for the first time, you should log in using
`upstash auth login`. Provide your email and an API key.
[See here for how to get a key.](https://docs.upstash.com/redis/howto/developerapi#api-development)
As an alternative to logging in, you can provide `UPSTASH_EMAIL` and
`UPSTASH_API_KEY` as environment variables.
## Usage
Let's create a new redis database:
```
> upstash redis create --name=my-db --region=eu-west-1
Database has been created
database_id a3e25299-132a-45b9-b026-c73f5a807859
database_name my-db
database_type Pay as You Go
region eu-west-1
type paid
port 37090
creation_time 1652687630
state active
password 88ae6392a1084d1186a3da37fb5f5a30
user_email andreas@upstash.com
endpoint eu1-magnetic-lacewing-37090.upstash.io
edge false
multizone false
rest_token AZDiASQgYTNlMjUyOTktMTMyYS00NWI5LWIwMjYtYzczZjVhODA3ODU5ODhhZTYzOTJhMTA4NGQxMTg2YTNkYTM3ZmI1ZjVhMzA=
read_only_rest_token ApDiASQgYTNlMjUyOTktMTMyYS00NWI5LWIwMjYtYzczZjVhODA3ODU5O_InFjRVX1XHsaSjq1wSerFCugZ8t8O1aTfbF6Jhq1I=
You can visit your database details page: https://console.upstash.com/redis/a3e25299-132a-45b9-b026-c73f5a807859
Connect to your database with redis-cli: redis-cli -u redis://88ae6392a1084d1186a3da37fb5f5a30@eu1-magnetic-lacewing-37090.upstash.io:37090
```
## Output
Most commands support the `--json` flag to return the raw API response as JSON,
which you can parse to automate your workflows.
```bash theme={"system"}
> upstash redis create --name=test2113 --region=us-central1 --json | jq '.endpoint'
"gusc1-clean-gelding-30208.upstash.io"
```
# List Audit Logs
Source: https://upstash.com/docs/devops/developer-api/account/list_audit_logs
devops/developer-api/openapi.yml get /auditlogs
This endpoint lists all audit logs of the user.
# Authentication
Source: https://upstash.com/docs/devops/developer-api/authentication
Authentication for the Upstash Developer API
The Upstash API requires API keys to authenticate requests. You can view and
manage API keys at the Upstash Console.
Upstash API uses HTTP Basic authentication. You should pass `EMAIL` and
`API_KEY` as basic authentication username and password respectively.
With a client such as `curl`, you can pass your credentials with the `-u`
option, as the following example shows:
```curl theme={"system"}
curl https://api.upstash.com/v2/redis/databases -u EMAIL:API_KEY
```
Replace `EMAIL` and `API_KEY` with your email and API key.
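The `-u` option above is shorthand for sending an HTTP Basic `Authorization` header. A minimal sketch of building that header yourself, for clients that don't offer a `curl`-style flag:

```python theme={"system"}
import base64

def basic_auth_header(email: str, api_key: str) -> dict:
    """Encode email:api_key as an HTTP Basic Authorization header."""
    credentials = base64.b64encode(f"{email}:{api_key}".encode()).decode()
    return {"Authorization": f"Basic {credentials}"}
```

Most HTTP libraries also accept the credentials directly (for example, the `auth=(email, api_key)` parameter in Python's `requests`), which produces the same header.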
# HTTP Status Codes
Source: https://upstash.com/docs/devops/developer-api/http_status_codes
The Upstash API uses the following HTTP Status codes:
| Code | Status | Description |
| ---- | ------ | ----------- |
| 200 | **OK** | Indicates that a request completed successfully and the response contains data. |
| 400 | **Bad Request** | Your request is invalid. |
| 401 | **Unauthorized** | Your API key is wrong. |
| 403 | **Forbidden** | You do not have permission to access the requested resource. |
| 404 | **Not Found** | The specified resource could not be found. |
| 405 | **Method Not Allowed** | You tried to access a resource with an invalid HTTP method. |
| 406 | **Not Acceptable** | You requested a format that isn't JSON. |
| 429 | **Too Many Requests** | You're sending too many requests. Slow down. |
| 500 | **Internal Server Error** | We had a problem with our server. Try again later. |
| 503 | **Service Unavailable** | We're temporarily offline for maintenance. Please try again later. |
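A client can fold the table above into a simple retry policy: retry transient statuses (429, 500, 503), ideally with backoff, and fail fast on the 4xx client errors. A minimal sketch (the function and constant names are illustrative):

```python theme={"system"}
# Statuses worth retrying per the table: rate limiting and server-side errors.
RETRYABLE = {429, 500, 503}

def should_retry(status_code: int, attempt: int, max_attempts: int = 3) -> bool:
    """Decide whether to retry a request, given its status and attempt count."""
    return status_code in RETRYABLE and attempt < max_attempts
```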
# Getting Started
Source: https://upstash.com/docs/devops/developer-api/introduction
Using the Upstash API, you can develop applications that create and manage
Upstash products and resources. You can automate everything that
you can do in the console. To use the Developer API, you need to create an API key
in the console.
The Developer API is only available to native Upstash accounts. Accounts created via third-party platforms like Vercel or Fly.io are not supported.
### Create an API key
1. Log in to the console then in the left menu click the
`Account > Management API` link.
2. Click the `Create API Key` button.
3. Enter a name for your key. You cannot use the same name for multiple keys.
You need to download or copy/save your API key when it is created. For security
reasons, Upstash does not store your API key, so if you lose it, you need to
create a new one.
You can create multiple keys. It is recommended to use different keys in
different applications. By default, one user can create up to 37 API keys. If you
need more than that, please send us an email at
[support@upstash.com](mailto:support@upstash.com).
### Deleting an API key
When an API key is exposed (e.g. accidentally shared in a public repository) or
no longer in use, you should delete it. You can delete API keys on the
`Account > API Keys` screen.
### Roadmap
**Role-based access:** You will be able to create API keys with specific
privileges. For example, you will be able to create a key with read-only access.
**Stats:** We will provide reports based on usage of your API keys.
# Create Backup
Source: https://upstash.com/docs/devops/developer-api/redis/backup/create_backup
devops/developer-api/openapi.yml post /redis/create-backup/{id}
This endpoint creates a backup for a Redis database.
# Delete Backup
Source: https://upstash.com/docs/devops/developer-api/redis/backup/delete_backup
devops/developer-api/openapi.yml delete /redis/delete-backup/{id}/{backup_id}
This endpoint deletes a backup of a Redis database.
# Disable Daily Backup
Source: https://upstash.com/docs/devops/developer-api/redis/backup/disable_dailybackup
devops/developer-api/openapi.yml patch /redis/disable-dailybackup/{id}
This endpoint disables daily backup for a Redis database.
# Enable Daily Backup
Source: https://upstash.com/docs/devops/developer-api/redis/backup/enable_dailybackup
devops/developer-api/openapi.yml patch /redis/enable-dailybackup/{id}
This endpoint enables daily backup for a Redis database.
# List Backup
Source: https://upstash.com/docs/devops/developer-api/redis/backup/list_backup
devops/developer-api/openapi.yml get /redis/list-backup/{id}
This endpoint lists all backups for a Redis database.
# Restore Backup
Source: https://upstash.com/docs/devops/developer-api/redis/backup/restore_backup
devops/developer-api/openapi.yml post /redis/restore-backup/{id}
This endpoint restores data from an existing backup.
# Change Database Plan
Source: https://upstash.com/docs/devops/developer-api/redis/change_plan
devops/developer-api/openapi.yml post /redis/change-plan/{id}
This endpoint changes the plan of a Redis database.
# Create Redis Database
Source: https://upstash.com/docs/devops/developer-api/redis/create_database_global
devops/developer-api/openapi.yml post /redis/database
This endpoint creates a new Redis database.
# Delete Database
Source: https://upstash.com/docs/devops/developer-api/redis/delete_database
devops/developer-api/openapi.yml delete /redis/database/{id}
This endpoint deletes a database.
# Disable Auto Upgrade
Source: https://upstash.com/docs/devops/developer-api/redis/disable_autoscaling
devops/developer-api/openapi.yml post /redis/disable-autoupgrade/{id}
This endpoint disables Auto Upgrade for the given database.
# Disable Eviction
Source: https://upstash.com/docs/devops/developer-api/redis/disable_eviction
devops/developer-api/openapi.yml post /redis/disable-eviction/{id}
This endpoint disables eviction for the given database.
# Enable Auto Upgrade
Source: https://upstash.com/docs/devops/developer-api/redis/enable_autoscaling
devops/developer-api/openapi.yml post /redis/enable-autoupgrade/{id}
This endpoint enables Auto Upgrade for the given database.
# Enable Eviction
Source: https://upstash.com/docs/devops/developer-api/redis/enable_eviction
devops/developer-api/openapi.yml post /redis/enable-eviction/{id}
This endpoint enables eviction for the given database.
# Enable TLS
Source: https://upstash.com/docs/devops/developer-api/redis/enable_tls
devops/developer-api/openapi.yml post /redis/enable-tls/{id}
This endpoint enables TLS on a database.
# Get Database
Source: https://upstash.com/docs/devops/developer-api/redis/get_database
devops/developer-api/openapi.yml get /redis/database/{id}
This endpoint gets details of a database.
# Get Database Stats
Source: https://upstash.com/docs/devops/developer-api/redis/get_database_stats
devops/developer-api/openapi.yml get /redis/stats/{id}
This endpoint gets detailed stats of a database.
# List Databases
Source: https://upstash.com/docs/devops/developer-api/redis/list_databases
devops/developer-api/openapi.yml get /redis/databases
This endpoint lists all databases of the user.
# Move To Team
Source: https://upstash.com/docs/devops/developer-api/redis/moveto_team
devops/developer-api/openapi.yml post /redis/move-to-team
This endpoint moves a database under a target team.
# Rename Database
Source: https://upstash.com/docs/devops/developer-api/redis/rename_database
devops/developer-api/openapi.yml post /redis/rename/{id}
This endpoint renames a database.
# Reset Password
Source: https://upstash.com/docs/devops/developer-api/redis/reset_password
devops/developer-api/openapi.yml post /redis/reset-password/{id}
This endpoint updates the password of a database.
# Update Database Budget
Source: https://upstash.com/docs/devops/developer-api/redis/update_budget
devops/developer-api/openapi.yml patch /redis/update-budget/{id}
This endpoint updates the monthly budget of a Redis database.
# Update Regions
Source: https://upstash.com/docs/devops/developer-api/redis/update_regions
devops/developer-api/openapi.yml post /redis/update-regions/{id}
This endpoint updates the regions of a database.
# Add Team Member
Source: https://upstash.com/docs/devops/developer-api/teams/add_team_member
devops/developer-api/openapi.yml post /teams/member
This endpoint adds a new team member to the specified team.
# Create Team
Source: https://upstash.com/docs/devops/developer-api/teams/create_team
devops/developer-api/openapi.yml post /team
This endpoint creates a new team.
# Delete Team
Source: https://upstash.com/docs/devops/developer-api/teams/delete_team
devops/developer-api/openapi.yml delete /team/{id}
This endpoint deletes a team.
# Delete Team Member
Source: https://upstash.com/docs/devops/developer-api/teams/delete_team_member
devops/developer-api/openapi.yml delete /teams/member
This endpoint deletes a team member from the specified team.
# Get Team Members
Source: https://upstash.com/docs/devops/developer-api/teams/get_team_members
devops/developer-api/openapi.yml get /teams/{team_id}
This endpoint lists all members of a team.
# List Teams
Source: https://upstash.com/docs/devops/developer-api/teams/list_teams
devops/developer-api/openapi.yml get /teams
This endpoint lists all teams of the user.
# Create Index
Source: https://upstash.com/docs/devops/developer-api/vector/create_index
devops/developer-api/openapi.yml post /vector/index
This endpoint creates an index.
# Delete Index
Source: https://upstash.com/docs/devops/developer-api/vector/delete_index
devops/developer-api/openapi.yml delete /vector/index/{id}
This endpoint deletes an index.
# Get Index
Source: https://upstash.com/docs/devops/developer-api/vector/get_index
devops/developer-api/openapi.yml get /vector/index/{id}
This endpoint returns the data associated with an index.
# List Indices
Source: https://upstash.com/docs/devops/developer-api/vector/list_indices
devops/developer-api/openapi.yml get /vector/index
This endpoint returns the data related to all indices of an account as a list.
# Rename Index
Source: https://upstash.com/docs/devops/developer-api/vector/rename_index
devops/developer-api/openapi.yml post /vector/index/{id}/rename
This endpoint is used to change the name of an index.
# Reset Index Passwords
Source: https://upstash.com/docs/devops/developer-api/vector/reset_index_passwords
devops/developer-api/openapi.yml post /vector/index/{id}/reset-password
This endpoint is used to reset regular and readonly tokens of an index.
# Set Index Plan
Source: https://upstash.com/docs/devops/developer-api/vector/set_index_plan
devops/developer-api/openapi.yml post /vector/index/{id}/setplan
This endpoint is used to change the plan of an index.
# Transfer Index
Source: https://upstash.com/docs/devops/developer-api/vector/transfer_index
devops/developer-api/openapi.yml post /vector/index/{id}/transfer
This endpoint is used to transfer an index to another team.
Transferring to a personal account is not supported. However, transferring an index from a personal account to a team is allowed.
# Overview
Source: https://upstash.com/docs/devops/pulumi/overview
The Upstash Pulumi Provider lets you manage [Upstash](https://upstash.com) Redis resources programmatically.
You can find the GitHub repository [here](https://github.com/upstash/pulumi-upstash).
## Installing
This package is available for several languages/platforms:
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash theme={"system"}
npm install @upstash/pulumi
```
or `yarn`:
```bash theme={"system"}
yarn add @upstash/pulumi
```
### Python
To use from Python, install using `pip`:
```bash theme={"system"}
pip install upstash_pulumi
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash theme={"system"}
go get github.com/upstash/pulumi-upstash/sdk/go/...
```
## Configuration
The following configuration points are available for the `upstash` provider:
* `upstash:apiKey` (environment: `UPSTASH_API_KEY`) - the API key for `upstash`. Can be obtained from the [console](https://console.upstash.com).
* `upstash:email` (environment: `UPSTASH_EMAIL`) - owner email of the resources
## Some Examples
### TypeScript:
```typescript theme={"system"}
import * as pulumi from "@pulumi/pulumi";
import * as upstash from "@upstash/pulumi";
// multiple redis databases in a single for loop
for (let i = 0; i < 5; i++) {
new upstash.RedisDatabase("mydb" + i, {
databaseName: "pulumi-ts-db" + i,
region: "eu-west-1",
tls: true,
});
}
```
### Go:
```go theme={"system"}
package main
import (
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
"github.com/upstash/pulumi-upstash/sdk/go/upstash"
)
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
createdTeam, err := upstash.NewTeam(ctx, "exampleTeam", &upstash.TeamArgs{
TeamName: pulumi.String("pulumi go team"),
CopyCc: pulumi.Bool(false),
TeamMembers: pulumi.StringMap{
"": pulumi.String("owner"),
"": pulumi.String("dev"),
},
})
if err != nil {
return err
}
return nil
})
}
```
# null
Source: https://upstash.com/docs/devops/terraform
# upstash_qstash_endpoint_data
Source: https://upstash.com/docs/devops/terraform/data_sources/upstash_qstash_endpoint_data
```hcl example.tf theme={"system"}
data "upstash_qstash_endpoint_data" "exampleQStashEndpointData" {
endpoint_id = resource.upstash_qstash_endpoint.exampleQStashEndpoint.endpoint_id
}
```
## Schema
### Required
Topic ID that the endpoint is added to
### Read-Only
Unique QStash Endpoint ID
The ID of this resource.
Unique QStash Topic Name for Endpoint
# upstash_qstash_schedule_data
Source: https://upstash.com/docs/devops/terraform/data_sources/upstash_qstash_schedule_data
```hcl example.tf theme={"system"}
data "upstash_qstash_schedule_data" "exampleQStashScheduleData" {
schedule_id = resource.upstash_qstash_schedule.exampleQStashSchedule.schedule_id
}
```
## Schema
### Required
Unique QStash Schedule ID for requested schedule
### Read-Only
Body to send with the POST request, as a string. Double quotes need to be
escaped (`\"`).
Creation time for QStash Schedule
Cron string for QStash Schedule
Destination for QStash Schedule. Either Topic ID or valid URL
Forward headers to your API
The ID of this resource.
Start time for QStash Scheduling.
Retries for QStash Schedule requests.
# upstash_qstash_topic_data
Source: https://upstash.com/docs/devops/terraform/data_sources/upstash_qstash_topic_data
```hcl example.tf theme={"system"}
data "upstash_qstash_topic_data" "exampleQstashTopicData" {
topic_id = resource.upstash_qstash_topic.exampleQstashTopic.topic_id
}
```
## Schema
### Required
Unique QStash Topic ID for requested topic
### Read-Only
Endpoints for the QStash Topic
The ID of this resource.
Name of the QStash Topic
# upstash_redis_database_data
Source: https://upstash.com/docs/devops/terraform/data_sources/upstash_redis_database_data
```hcl example.tf theme={"system"}
data "upstash_redis_database_data" "exampleDBData" {
database_id = resource.upstash_redis_database.exampleDB.database_id
}
```
## Schema
### Required
Unique Database ID for created database
### Read-Only
Upgrade to higher plans automatically when it hits quotas
Creation time of the database
Name of the database
Type of the database
Daily bandwidth limit for the database
Disk threshold for the database
Max clients for the database
Max commands per second for the database
Max entry size for the database
Max request size for the database
Memory threshold for the database
Database URL for connection
The ID of this resource.
Password of the database
Port of the endpoint
Primary region for the database (Only works if region='global'. Can be one of
\[us-east-1, us-west-1, us-west-2, eu-central-1, eu-west-1, sa-east-1,
ap-southeast-1, ap-southeast-2])
Read-only REST token for the database.
Read regions for the database (Only works if region='global' and
primary\_region is set. Can be any combination of \[us-east-1, us-west-1,
us-west-2, eu-central-1, eu-west-1, sa-east-1, ap-southeast-1,
ap-southeast-2], excluding the one given as primary.)
Region of the database. Possible values are: `global`, `eu-west-1`,
`us-east-1`, `us-west-1`, `ap-northeast-1`, `eu-central-1`
Rest Token for the database.
State of the database
When enabled, data is encrypted in transit. (If changed to false from true,
results in deletion and recreation of the resource)
User email for the database
# upstash_team_data
Source: https://upstash.com/docs/devops/terraform/data_sources/upstash_team_data
```hcl example.tf theme={"system"}
data "upstash_team_data" "teamData" {
team_id = resource.upstash_team.exampleTeam.team_id
}
```
## Schema
### Required
Unique Team ID for the requested team
### Read-Only
Whether Credit Card is copied
The ID of this resource.
Members of the team. (Owner must be specified, which is the owner of the api
key.)
Name of the team
# Overview
Source: https://upstash.com/docs/devops/terraform/overview
The Upstash Terraform Provider lets you manage Upstash Redis resources programmatically.
You can find the GitHub repository for the Terraform Provider [here](https://github.com/upstash/terraform-provider-upstash).
## Installation
```hcl theme={"system"}
terraform {
required_providers {
upstash = {
source = "upstash/upstash"
version = "x.x.x"
}
}
}
provider "upstash" {
email = var.email
api_key = var.api_key
}
```
`email` is your registered email in Upstash.
`api_key` can be generated from the Upstash Console. For more information, please check our [docs](https://docs.upstash.com/howto/developerapi).
## Create Database Using Terraform
Here is an example code snippet that creates a database:
```hcl theme={"system"}
resource "upstash_redis_database" "redis" {
database_name = "db-name"
region = "eu-west-1"
tls = "true"
multi_zone = "false"
}
```
## Import Resources From Outside of Terraform
To import resources created outside of the Terraform provider, first declare the resource in a `.tf` file as follows:
```hcl theme={"system"}
resource "upstash_redis_database" "redis" {}
```
After this, you can run the following command with the ID of the existing resource:
```
terraform import upstash_redis_database.redis <database-id>
```
The above example is for an Upstash Redis database. You can import any of the resources by changing the resource type and providing the resource ID.
You can check full spec and [doc from here](https://registry.terraform.io/providers/upstash/upstash/latest/docs).
## Support, Bugs Reports, Feature Requests
If you need support, you can ask your questions to the Upstash Team via the chat widget on [upstash.com](https://upstash.com).
There is also a Discord channel available for the community. [Please check here](https://docs.upstash.com/help/support) for more information.
# upstash_qstash_endpoint
Source: https://upstash.com/docs/devops/terraform/resources/upstash_qstash_endpoint
Create and manage QStash endpoints.
```hcl example.tf theme={"system"}
resource "upstash_qstash_endpoint" "exampleQStashEndpoint" {
url = "https://***.***"
topic_id = resource.upstash_qstash_topic.exampleQstashTopic.topic_id
}
```
## Schema
### Required
Topic ID that the endpoint is added to
URL of the endpoint
### Read-Only
Unique QStash endpoint ID
The ID of this resource.
Unique QStash topic name for endpoint
# upstash_qstash_schedule
Source: https://upstash.com/docs/devops/terraform/resources/upstash_qstash_schedule
Create and manage QStash schedules.
```hcl example.tf theme={"system"}
resource "upstash_qstash_schedule" "exampleQStashSchedule" {
destination = resource.upstash_qstash_topic.exampleQstashTopic.topic_id
cron = "* * * * */2"
# or simply provide a link
# destination = "https://***.***"
}
```
## Schema
### Required
Cron string for QStash Schedule
Destination for QStash Schedule. Either Topic ID or valid URL
### Optional
Body to send for the POST request, in string format. Double quotes need to be
escaped (`\"`).
Callback URL for QStash Schedule.
Content based deduplication for QStash Scheduling.
Content type for QStash Scheduling.
Deduplication ID for QStash Scheduling.
Delay for QStash Schedule.
Forward headers to your API
Start time for QStash Scheduling.
Retries for QStash Schedule requests.
### Read-Only
Creation time for QStash Schedule.
The ID of this resource.
Unique QStash Schedule ID for requested schedule
# upstash_qstash_topic
Source: https://upstash.com/docs/devops/terraform/resources/upstash_qstash_topic
Create and manage QStash topics
```hcl example.tf theme={"system"}
resource "upstash_qstash_topic" "exampleQStashTopic" {
name = "exampleQStashTopicName"
}
```
## Schema
### Required
Name of the QStash topic
### Read-Only
Endpoints for the QStash topic
The ID of this resource.
Unique QStash topic ID for requested topic
# upstash_redis_database
Source: https://upstash.com/docs/devops/terraform/resources/upstash_redis_database
Create and manage Upstash Redis databases.
```hcl example.tf theme={"system"}
resource "upstash_redis_database" "exampleDB" {
database_name = "Terraform DB6"
region = "eu-west-1"
tls = "true"
multizone = "true"
}
```
## Schema
### Required
Name of the database
Region of the database. Possible values are: `global`, `eu-west-1`,
`us-east-1`, `us-west-1`, `ap-northeast-1`, `eu-central-1`
### Optional
Upgrade to higher plans automatically when it hits quotas
Enable eviction, to evict keys when your database reaches the max size
Primary region for the database (Only works if region='global'. Can be one of
\[us-east-1, us-west-1, us-west-2, eu-central-1, eu-west-1, sa-east-1,
ap-southeast-1, ap-southeast-2])
Read regions for the database (Only works if region='global' and
primary\_region is set. Can be any combination of \[us-east-1, us-west-1,
us-west-2, eu-central-1, eu-west-1, sa-east-1, ap-southeast-1,
ap-southeast-2], excluding the one given as primary.)
When enabled, data is encrypted in transit. (If changed to false from true,
results in deletion and recreation of the resource)
### Read-Only
Creation time of the database
Unique Database ID for created database
Type of the database
Daily bandwidth limit for the database
Disk threshold for the database
Max clients for the database
Max commands per second for the database
Max entry size for the database
Max request size for the database
Memory threshold for the database
Database URL for connection
The ID of this resource.
Password of the database
Port of the endpoint
Rest Token for the database.
Rest Token for the database.
State of the database
User email for the database
# upstash_team
Source: https://upstash.com/docs/devops/terraform/resources/upstash_team
Create and manage teams on Upstash.
```hcl example.tf theme={"system"}
resource "upstash_team" "exampleTeam" {
team_name = "TerraformTeam"
copy_cc = false
team_members = {
# Owner is the owner of the api_key.
"X@Y.Z": "owner",
"A@B.C": "dev",
"E@E.F": "finance",
}
}
```
## Schema
### Required
Whether Credit Card is copied
Members of the team. (The owner of the API key must be included with the
`owner` role.)
Name of the team
### Read-Only
The ID of this resource.
Unique Cluster ID for created cluster
# null
Source: https://upstash.com/docs/img/bg-color-codes
Recommended Background Color Transition:
Primary: #34D399 (Emerald Green)
Secondary: #00E9A3 (Cyan Green)
# Get Started
Source: https://upstash.com/docs/introduction
Create a Redis Database within seconds
Create a Vector Database for AI & LLMs
Publish your first message
Write durable serverless functions
## Concepts
Upstash is serverless. You don't need to provision any infrastructure. Just
create a database and start using it.
Price scales to zero. You don't pay for idle or unused resources. You pay
only for what you use.
Upstash Redis replicates your data for the best latency all over the world.
Upstash REST APIs enable access from all types of runtimes.
## Get In touch
Follow us on X for the latest news and updates.
Join our Discord Community and ask your questions to the team and other
developers.
# API Rate Limit Response
Source: https://upstash.com/docs/qstash/api/api-ratelimiting
This page documents the rate limiting behavior of our API and explains how to handle different types of rate limit errors.
## Overview
There is no request-per-second limit for the operational APIs listed below:
* trigger, publish, enqueue, notify, wait, batch

Other endpoints (such as logs, listing flow-controls, queues, and schedules) have an RPS limit. This is a short-term limit **per second** to prevent rapid bursts of requests.
**Headers**:
* `Burst-RateLimit-Limit`: Maximum number of requests allowed in the burst window (1 second)
* `Burst-RateLimit-Remaining`: Remaining number of requests in the burst window (1 second)
* `Burst-RateLimit-Reset`: Time (in unix timestamp) when the burst limit will reset
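When you do hit the burst limit, these headers tell you how long to back off before retrying. A minimal sketch of the arithmetic, assuming the header names above (the helper name `burst_wait_seconds` is ours):

```python
import time

def burst_wait_seconds(headers, now=None):
    """Return how many seconds to wait before retrying, based on the
    Burst-RateLimit-* response headers; 0 if requests remain in the window."""
    now = time.time() if now is None else now
    remaining = int(headers.get("Burst-RateLimit-Remaining", "1"))
    reset = int(headers.get("Burst-RateLimit-Reset", "0"))
    if remaining > 0:
        return 0.0
    # Reset is a unix timestamp; clamp so we never wait a negative amount.
    return max(0.0, reset - now)
```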
### Example Rate Limit Error Handling
```typescript Handling Daily Rate Limit Error theme={"system"}
import { QstashDailyRatelimitError } from "@upstash/qstash";
try {
// Example of a publish request that could hit the daily rate limit
const result = await client.publishJSON({
url: "https://my-api...",
// or urlGroup: "the name or id of a url group"
body: {
hello: "world",
},
});
} catch (error) {
if (error instanceof QstashDailyRatelimitError) {
console.log("Daily rate limit exceeded. Retry after:", error.reset);
// Implement retry logic or notify the user
} else {
console.error("An unexpected error occurred:", error);
}
}
```
```typescript Handling Burst Rate Limit Error theme={"system"}
import { QstashRatelimitError } from "@upstash/qstash";
try {
// Example of a request that could hit the burst rate limit
const result = await client.publishJSON({
url: "https://my-api...",
// or urlGroup: "the name or id of a url group"
body: {
hello: "world",
},
});
} catch (error) {
if (error instanceof QstashRatelimitError) {
console.log("Burst rate limit exceeded. Retry after:", error.reset);
// Implement exponential backoff or delay before retrying
} else {
console.error("An unexpected error occurred:", error);
}
}
```
# Authentication
Source: https://upstash.com/docs/qstash/api/authentication
Authentication for the QStash API
You'll need to authenticate your requests to access any of the endpoints in the
QStash API. In this guide, we'll look at how authentication works.
## Bearer Token
When making requests to QStash, you will need your `QSTASH_TOKEN` — you will
find it in the [console](https://console.upstash.com/qstash). Here's how to add
the token to the request header using cURL:
```bash theme={"system"}
curl https://qstash.upstash.io/v2/publish/... \
-H "Authorization: Bearer "
```
## Query Parameter
In environments where setting the header is not possible, you can use the `qstash_token` query parameter instead.
```bash theme={"system"}
curl https://qstash.upstash.io/v2/publish/...?qstash_token=
```
Always keep your token safe and reset it if you suspect it has been compromised.
# Delete a message from the DLQ
Source: https://upstash.com/docs/qstash/api/dlq/deleteMessage
DELETE https://qstash.upstash.io/v2/dlq/{dlqId}
Manually remove a message
Delete a message from the DLQ.
## Request
The dlq id of the message you want to remove. You will see this id when
listing all messages in the dlq with the [/v2/dlq](/qstash/api/dlq/listMessages) endpoint.
## Response
The endpoint doesn't return a body; a status code of 200 means the message was removed from the DLQ.
If the message is not found in the DLQ (either it has been removed by you, or automatically), the endpoint returns a 404 status code.
```sh theme={"system"}
curl -X DELETE https://qstash.upstash.io/v2/dlq/my-dlq-id \
-H "Authorization: Bearer "
```
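The same call in Python, mapping the two documented status codes (200 removed, 404 already gone). This is a sketch using `requests` as in the other examples; the helper names are ours:

```python
import requests

def delete_outcome(status_code):
    # Per the response spec: 200 means removed, 404 means already gone.
    if status_code == 200:
        return "removed"
    if status_code == 404:
        return "not found"
    return f"unexpected status {status_code}"

def delete_dlq_message(dlq_id, token):
    resp = requests.delete(
        f"https://qstash.upstash.io/v2/dlq/{dlq_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    return delete_outcome(resp.status_code)
```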
# Delete multiple messages from the DLQ
Source: https://upstash.com/docs/qstash/api/dlq/deleteMessages
DELETE https://qstash.upstash.io/v2/dlq
Manually remove messages
Delete multiple messages from the DLQ.
You can get the `dlqId` from the [list DLQs endpoint](/qstash/api/dlq/listMessages).
## Request
The list of DLQ message IDs to remove.
## Response
A deleted object with the number of deleted messages.
```JSON theme={"system"}
{
"deleted": number
}
```
```json 200 OK theme={"system"}
{
"deleted": 3
}
```
```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/dlq \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d '{
"dlqIds": ["11111-0", "22222-0", "33333-0"]
}'
```
```js Node theme={"system"}
const response = await fetch("https://qstash.upstash.io/v2/dlq", {
  method: "DELETE",
  headers: {
    Authorization: "Bearer ",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    dlqIds: [
      "11111-0",
      "22222-0",
      "33333-0",
    ],
  }),
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
'Content-Type': 'application/json',
}
data = {
"dlqIds": [
"11111-0",
"22222-0",
"33333-0"
]
}
response = requests.delete(
'https://qstash.upstash.io/v2/dlq',
headers=headers,
json=data
)
```
```go Go theme={"system"}
var data = strings.NewReader(`{
"dlqIds": [
"11111-0",
"22222-0",
"33333-0"
]
}`)
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/dlq", data)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
# Get a message from the DLQ
Source: https://upstash.com/docs/qstash/api/dlq/getMessage
GET https://qstash.upstash.io/v2/dlq/{dlqId}
Get a message from the DLQ
Get a message from the DLQ.
## Request
The dlq id of the message you want to retrieve. You will see this id when
listing all messages in the dlq with the [/v2/dlq](/qstash/api/dlq/listMessages) endpoint,
as well as in the content of [the failure callback](https://docs.upstash.com/qstash/features/callbacks#what-is-a-failure-callback).
## Response
If the message is not found in the DLQ (either it has been removed by you, or automatically), the endpoint returns a 404 status code.
```sh theme={"system"}
curl -X GET https://qstash.upstash.io/v2/dlq/my-dlq-id \
-H "Authorization: Bearer "
```
# List messages in the DLQ
Source: https://upstash.com/docs/qstash/api/dlq/listMessages
GET https://qstash.upstash.io/v2/dlq
List and paginate through all messages currently inside the DLQ
List all messages currently inside the DLQ
## Request
By providing a cursor you can paginate through all of the messages in the DLQ
Filter DLQ messages by message id.
Filter DLQ messages by url.
Filter DLQ messages by url group.
Filter DLQ messages by schedule id.
Filter DLQ messages by queue name.
Filter DLQ messages by API name.
Filter DLQ messages by starting date, in milliseconds (Unix timestamp). This is inclusive.
Filter DLQ messages by ending date, in milliseconds (Unix timestamp). This is inclusive.
Filter DLQ messages by HTTP response status code.
Filter DLQ messages by IP address of the publisher.
The number of messages to return. Default and maximum is 100.
The sorting order of DLQ messages by timestamp. Valid values are "earliestFirst" and "latestFirst". The default is "earliestFirst".
Filter DLQ messages by the label of the message assigned by the user.
## Response
A cursor which you can use in subsequent requests to paginate through all
messages. If no cursor is returned, you have reached the end of the messages.
```sh theme={"system"}
curl https://qstash.upstash.io/v2/dlq \
-H "Authorization: Bearer "
```
```sh with cursor theme={"system"}
curl https://qstash.upstash.io/v2/dlq?cursor=xxx \
-H "Authorization: Bearer "
```
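The cursor loop described above can be sketched in Python; the pagination logic is separated from the HTTP call so it can be tested in isolation (the helper names are ours):

```python
import requests

def collect_pages(fetch_page):
    """Follow the cursor until the API stops returning one.
    fetch_page(cursor) -> dict like {"messages": [...], "cursor": "..."}."""
    messages, cursor = [], None
    while True:
        page = fetch_page(cursor)
        messages.extend(page.get("messages", []))
        cursor = page.get("cursor")
        if not cursor:  # no cursor means the last page was reached
            return messages

def fetch_dlq_page(token):
    def fetch(cursor):
        resp = requests.get(
            "https://qstash.upstash.io/v2/dlq",
            headers={"Authorization": f"Bearer {token}"},
            params={"cursor": cursor} if cursor else {},
        )
        resp.raise_for_status()
        return resp.json()
    return fetch

# all_messages = collect_pages(fetch_dlq_page("<QSTASH_TOKEN>"))
```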
```json 200 OK theme={"system"}
{
"messages": [
{
"messageId": "msg_123",
"topicId": "tpc_123",
"url":"https://example.com",
"method": "POST",
"header": {
"My-Header": ["my-value"]
},
"body": "{\"foo\":\"bar\"}",
"createdAt": 1620000000000,
"state": "failed"
}
]
}
```
# Enqueue a Message
Source: https://upstash.com/docs/qstash/api/enqueue
POST https://qstash.upstash.io/v2/enqueue/{queueName}/{destination}
Enqueue a message
## Request
The name of the queue that the message will be enqueued on.
If it doesn't exist, it will be created automatically.
Destination can either be a topic name or id that you configured in the
Upstash console, a valid url where the message gets sent to, or a valid
QStash API name like `api/llm`. If the destination is a URL, make sure
the URL is prefixed with a valid protocol (`http://` or `https://`)
Id to use while deduplicating messages, so that only one message with
the given deduplication id is published.
When set to true, automatically deduplicates messages based on their content,
so that only one message with the same content is published.
Content-based deduplication creates unique deduplication IDs based on the
following message fields:
* Destination
* Body
* Headers
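The effect of those three fields can be illustrated with a hash over a canonical encoding. This is an illustration only: QStash derives the deduplication ID server-side with its own scheme; the sketch below just shows why two messages with the same destination, body, and headers collapse into one.

```python
import hashlib
import json

def content_dedup_id(destination, body, headers):
    """Hypothetical stand-in for the server-side id: identical inputs
    always produce the same id, so the second publish is deduplicated."""
    canonical = json.dumps(
        {"destination": destination, "body": body,
         "headers": sorted(headers.items())},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()
```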
## Response
```sh curl theme={"system"}
curl -X POST "https://qstash.upstash.io/v2/enqueue/myQueue/https://www.example.com" \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-H "Upstash-Method: POST" \
-H "Upstash-Retries: 3" \
-H "Upstash-Forward-Custom-Header: custom-value" \
-d '{"message":"Hello, World!"}'
```
```js Node theme={"system"}
const response = await fetch(
"https://qstash.upstash.io/v2/enqueue/myQueue/https://www.example.com",
{
method: "POST",
headers: {
Authorization: "Bearer ",
"Content-Type": "application/json",
"Upstash-Method": "POST",
"Upstash-Retries": "3",
"Upstash-Forward-Custom-Header": "custom-value",
},
body: JSON.stringify({
message: "Hello, World!",
}),
}
);
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
'Content-Type': 'application/json',
'Upstash-Method': 'POST',
'Upstash-Retries': '3',
'Upstash-Forward-Custom-Header': 'custom-value',
}
json_data = {
'message': 'Hello, World!',
}
response = requests.post(
'https://qstash.upstash.io/v2/enqueue/myQueue/https://www.example.com',
headers=headers,
json=json_data
)
```
```go Go theme={"system"}
var data = strings.NewReader(`{"message":"Hello, World!"}`)
req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/enqueue/myQueue/https://www.example.com", data)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Upstash-Method", "POST")
req.Header.Set("Upstash-Retries", "3")
req.Header.Set("Upstash-Forward-Custom-Header", "custom-value")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json URL theme={"system"}
{
"messageId": "msd_1234",
"url": "https://www.example.com"
}
```
```json URL Group theme={"system"}
[
{
"messageId": "msd_1234",
"url": "https://www.example.com"
},
{
"messageId": "msd_5678",
"url": "https://www.somewhere-else.com",
"deduplicated": true
}
]
```
# List Events
Source: https://upstash.com/docs/qstash/api/events/list
GET https://qstash.upstash.io/v2/events
List all events that happened, such as message creation or delivery
QStash events are being renamed to [Logs](/qstash/api/logs/list) to better reflect their purpose and to not get confused with [Workflow Events](/workflow/howto/events).
## Request
By providing a cursor you can paginate through all of the events.
Filter events by message id.
Filter events by [state](/qstash/howto/debug-logs)
| Value | Description |
| ------------------ | ---------------------------------------------------------------------------------------- |
| `CREATED` | The message has been accepted and stored in QStash |
| `ACTIVE` | The task is currently being processed by a worker. |
| `RETRY` | The task has been scheduled to retry. |
| `ERROR` | The execution threw an error and the task is waiting to be retried or failed. |
| `IN_PROGRESS` | The task is in one of `ACTIVE`, `RETRY` or `ERROR` state. |
| `DELIVERED` | The message was successfully delivered. |
| `FAILED` | The task has errored too many times or encountered an error that it cannot recover from. |
| `CANCEL_REQUESTED` | The cancel request from the user is recorded. |
| `CANCELLED` | The cancel request from the user is honored. |
Filter events by url.
Filter events by URL Group (topic) name.
Filter events by schedule id.
Filter events by queue name.
Filter events by starting date, in milliseconds (Unix timestamp). This is inclusive.
Filter events by ending date, in milliseconds (Unix timestamp). This is inclusive.
The number of events to return. Default and max is 1000.
The sorting order of events by timestamp. Valid values are "earliestFirst" and "latestFirst". The default is "latestFirst".
## Response
A cursor which you can use in subsequent requests to paginate through all events.
If no cursor is returned, you have reached the end of the events.
Timestamp of this log entry, in milliseconds
The associated message id
The headers of the message.
Base64 encoded body of the message.
The current state of the message at this point in time.
| Value | Description |
| ------------------ | ---------------------------------------------------------------------------------------- |
| `CREATED` | The message has been accepted and stored in QStash |
| `ACTIVE` | The task is currently being processed by a worker. |
| `RETRY` | The task has been scheduled to retry. |
| `ERROR` | The execution threw an error and the task is waiting to be retried or failed. |
| `DELIVERED` | The message was successfully delivered. |
| `FAILED` | The task has errored too many times or encountered an error that it cannot recover from. |
| `CANCEL_REQUESTED` | The cancel request from the user is recorded. |
| `CANCELLED` | The cancel request from the user is honored. |
An explanation of what went wrong
The next scheduled time of the message.
(Unix timestamp in milliseconds)
The destination url
The name of the URL Group (topic) if this message was sent through a topic
The name of the endpoint if this message was sent through a URL Group
The scheduleId of the message if the message is triggered by a schedule
The name of the queue if this message is enqueued on a queue
The headers that are forwarded to the user's endpoint
Base64 encoded body of the message
The status code of the response. Only set if the state is `ERROR`
The base64 encoded body of the response. Only set if the state is `ERROR`
The headers of the response. Only set if the state is `ERROR`
The timeout (in milliseconds) of the outgoing HTTP request, after which QStash cancels it
The HTTP method of the outgoing request
Callback is the URL address where QStash sends the response of a publish
The headers that are passed to the callback url
Failure callback is the URL address where QStash sends the response when the message fails to be delivered
The headers that are passed to the failure callback url
The number of retries that should be attempted in case of delivery failure
The mathematical expression used to calculate delay between retry attempts. If not set, [the default backoff](/qstash/features/retry) is used.
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/events \
-H "Authorization: Bearer "
```
```javascript Node theme={"system"}
const response = await fetch("https://qstash.upstash.io/v2/events", {
headers: {
Authorization: "Bearer ",
},
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.get(
'https://qstash.upstash.io/v2/events',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/events", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
{
"cursor": "1686652644442-12",
"events":[
{
"time": "1686652644442",
"messageId": "msg_123",
"state": "delivered",
"url": "https://example.com",
"header": { "Content-Type": [ "application/x-www-form-urlencoded" ] },
"body": "bWVyaGFiYSBiZW5pbSBhZGltIHNhbmNhcg=="
}
]
}
```
# Get Flow-Control Keys
Source: https://upstash.com/docs/qstash/api/flow-control/get
GET https://qstash.upstash.io/v2/flowControl/{flowControlKey}
Get Information on Flow-Control
## Request
The key of the flow control. See the [flow control](/qstash/features/flowcontrol) for more details.
## Response
The key of the flow control.
The number of messages in the wait list waiting for the `parallelism`/`rate` set in the flow control.
```sh theme={"system"}
curl -X GET https://qstash.upstash.io/v2/flowControl/YOUR_FLOW_CONTROL_KEY -H "Authorization: Bearer "
```
# List Flow-Control Keys
Source: https://upstash.com/docs/qstash/api/flow-control/list
GET https://qstash.upstash.io/v2/flowControl/
List all Flow Control keys
## Response
The key of the flow control. See the [flow control](/qstash/features/flowcontrol) for more details.
The number of messages in the wait list waiting for the `parallelism`/`rate` set in the flow control.
```sh theme={"system"}
curl -X GET https://qstash.upstash.io/v2/flowControl/ -H "Authorization: Bearer "
```
# List Logs
Source: https://upstash.com/docs/qstash/api/logs/list
GET https://qstash.upstash.io/v2/logs
Paginate through logs of published messages
## Request
By providing a cursor you can paginate through all of the logs.
Filter logs by message id.
Filter logs by [state](/qstash/howto/debug-logs)
| Value | Description |
| ------------------ | ---------------------------------------------------------------------------------------- |
| `CREATED` | The message has been accepted and stored in QStash |
| `ACTIVE` | The task is currently being processed by a worker. |
| `RETRY` | The task has been scheduled to retry. |
| `ERROR` | The execution threw an error and the task is waiting to be retried or failed. |
| `IN_PROGRESS` | The task is in one of `ACTIVE`, `RETRY` or `ERROR` state. |
| `DELIVERED` | The message was successfully delivered. |
| `FAILED` | The task has errored too many times or encountered an error that it cannot recover from. |
| `CANCEL_REQUESTED` | The cancel request from the user is recorded. |
| `CANCELLED` | The cancel request from the user is honored. |
Filter logs by url.
Filter logs by URL Group (topic) name.
Filter logs by schedule id.
Filter logs by queue name.
Filter logs by starting date, in milliseconds (Unix timestamp). This is inclusive.
Filter logs by ending date, in milliseconds (Unix timestamp). This is inclusive.
The number of logs to return. Default and max is 1000.
The sorting order of logs by timestamp. Valid values are "earliestFirst" and "latestFirst". The default is "latestFirst".
Filter logs by the label of the message assigned by the user.
## Response
A cursor which you can use in subsequent requests to paginate through all logs.
If no cursor is returned, you have reached the end of the logs.
Timestamp of this log entry, in milliseconds
The associated message id
The headers of the message.
Base64 encoded body of the message.
The current state of the message at this point in time.
| Value | Description |
| ------------------ | ---------------------------------------------------------------------------------------- |
| `CREATED` | The message has been accepted and stored in QStash |
| `ACTIVE` | The task is currently being processed by a worker. |
| `RETRY` | The task has been scheduled to retry. |
| `ERROR` | The execution threw an error and the task is waiting to be retried or failed. |
| `DELIVERED` | The message was successfully delivered. |
| `FAILED` | The task has errored too many times or encountered an error that it cannot recover from. |
| `CANCEL_REQUESTED` | The cancel request from the user is recorded. |
| `CANCELLED` | The cancel request from the user is honored. |
An explanation of what went wrong
The next scheduled time of the message.
(Unix timestamp in milliseconds)
The destination url
The name of the URL Group (topic) if this message was sent through a topic
The name of the endpoint if this message was sent through a URL Group
The scheduleId of the message if the message is triggered by a schedule
The name of the queue if this message is enqueued on a queue
The headers that are forwarded to the user's endpoint
Base64 encoded body of the message
The status code of the response. Only set if the state is `ERROR`
The base64 encoded body of the response. Only set if the state is `ERROR`
The headers of the response. Only set if the state is `ERROR`
The timeout (in milliseconds) of the outgoing HTTP request, after which QStash cancels it
The HTTP method of the outgoing request
Callback is the URL address where QStash sends the response of a publish
The headers that are passed to the callback url
Failure callback is the URL address where QStash sends the response when the message fails to be delivered
The headers that are passed to the failure callback url
The number of retries that should be attempted in case of delivery failure
The mathematical expression used to calculate delay between retry attempts. If not set, [the default backoff](/qstash/features/retry) is used.
The label of the message assigned by the user.
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/logs \
-H "Authorization: Bearer "
```
```javascript Node theme={"system"}
const response = await fetch("https://qstash.upstash.io/v2/logs", {
headers: {
Authorization: "Bearer ",
},
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.get(
'https://qstash.upstash.io/v2/logs',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/logs", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
{
"cursor": "1686652644442-12",
"events":[
{
"time": "1686652644442",
"messageId": "msg_123",
"state": "delivered",
"url": "https://example.com",
"header": { "Content-Type": [ "application/x-www-form-urlencoded" ] },
"body": "bWVyaGFiYSBiZW5pbSBhZGltIHNhbmNhcg=="
}
]
}
```
# Batch Messages
Source: https://upstash.com/docs/qstash/api/messages/batch
POST https://qstash.upstash.io/v2/batch
Send multiple messages in a single request
You can learn more about batching in the [batching section](/qstash/features/batch).
The API playground is not available for this endpoint. You can use the cURL example below.
You can publish to a destination, URL Group, or queue in the same batch request.
## Request
The endpoint is `POST https://qstash.upstash.io/v2/batch` and the body is an array of
messages. Each message has the following fields:
```
destination: string
headers: headers object
body: string
```
The headers are identical to the headers in the [create](/qstash/api/publish#request) endpoint.
```shell cURL theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/batch -H "Authorization: Bearer XXX" \
-H "Content-Type: application/json" \
-d '
[
{
"destination": "myUrlGroup",
"headers":{
"Upstash-Delay":"5s",
"Upstash-Forward-Hello":"123456"
},
"body": "Hello World"
},
{
"queue": "test",
"destination": "https://example.com/destination",
"headers":{
"Upstash-Forward-Hello":"789"
}
},
{
"destination": "https://example.com/destination1",
"headers":{
"Upstash-Delay":"7s",
"Upstash-Forward-Hello":"789"
}
},
{
"destination": "https://example.com/destination2",
"headers":{
"Upstash-Delay":"9s",
"Upstash-Forward-Hello":"again"
}
}
]'
```
## Response
```json theme={"system"}
[
[
{
"messageId": "msg_...",
"url": "https://myUrlGroup-endpoint1.com"
},
{
"messageId": "msg_...",
"url": "https://myUrlGroup-endpoint2.com"
}
],
{
"messageId": "msg_...",
},
{
"messageId": "msg_..."
},
{
"messageId": "msg_..."
}
]
```
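For runtimes other than cURL, the same request can be sketched in Python. The payload fields match the request spec above; the helper names are ours:

```python
import requests

def batch_entry(destination, body=None, headers=None, queue=None):
    """Build one element of the /v2/batch payload."""
    entry = {"destination": destination}
    if body is not None:
        entry["body"] = body
    if headers:
        entry["headers"] = headers
    if queue:
        entry["queue"] = queue
    return entry

def send_batch(token, entries):
    resp = requests.post(
        "https://qstash.upstash.io/v2/batch",
        headers={"Authorization": f"Bearer {token}"},
        json=entries,  # requests sets Content-Type: application/json
    )
    resp.raise_for_status()
    return resp.json()
```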
# Bulk Cancel Messages
Source: https://upstash.com/docs/qstash/api/messages/bulk-cancel
DELETE https://qstash.upstash.io/v2/messages
Stop delivery of multiple messages at once
Bulk cancel allows you to cancel multiple messages at once.
Cancelling a message will remove it from QStash and stop it from being delivered
in the future. If a message is in flight to your API, it might be too late to
cancel.
If you provide a set of message IDs in the body of the request, only those messages will be cancelled.
If you include filter parameters in the request body, only the messages that match the filters will be cancelled.
If the `messageIds` array is empty or no body is sent at all, QStash will cancel all of your messages.
This operation scans all your messages and attempts to cancel them.
If an individual message cannot be cancelled, the operation stops and returns an error message,
so some messages may remain uncancelled.
In such cases, you can run the bulk cancel operation multiple times.
You can filter the messages to cancel by including filter parameters in the request body.
## Request
The list of message IDs to cancel.
Filter messages to cancel by queue name.
Filter messages to cancel by destination URL.
Filter messages to cancel by URL Group (topic) name.
Filter messages to cancel by starting date, in milliseconds (Unix timestamp). This is inclusive.
Filter messages to cancel by ending date, specified in milliseconds (Unix timestamp). This is inclusive.
Filter messages to cancel by schedule ID.
Filter messages to cancel by IP address of publisher.
## Response
A cancelled object with the number of cancelled messages.
```JSON theme={"system"}
{
"cancelled": number
}
```
```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/messages \
-H "Content-Type: application/json" \
-H "Authorization: Bearer " \
-d '{"messageIds": ["msg_id_1", "msg_id_2", "msg_id_3"]}'
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/messages', {
  method: 'DELETE',
  headers: {
    'Authorization': 'Bearer ',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    messageIds: [
      "msg_id_1",
      "msg_id_2",
      "msg_id_3",
    ],
  }),
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
'Content-Type': 'application/json',
}
data = {
"messageIds": [
"msg_id_1",
"msg_id_2",
"msg_id_3"
]
}
response = requests.delete(
    'https://qstash.upstash.io/v2/messages',
    headers=headers,
    json=data
)
```
```go Go theme={"system"}
var data = strings.NewReader(`{
"messageIds": [
"msg_id_1",
"msg_id_2",
"msg_id_3"
]
}`)
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/messages", data)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 202 Accepted theme={"system"}
{
"cancelled": 10
}
```
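The same endpoint also accepts the filter parameters listed above instead of explicit message IDs. A minimal Python sketch of building such a body; the field names (`queueName`, `url`, `fromDate`, `toDate`) are illustrative assumptions, so verify them against the request schema before use:

```python
import json

# Hypothetical filter body: cancel only messages enqueued for a given queue and
# destination URL inside an inclusive time window (Unix milliseconds).
# Field names are assumptions for illustration, not the confirmed schema.
filters = {
    "queueName": "my-queue",
    "url": "https://example.com",
    "fromDate": 1700000000000,
    "toDate": 1700086400000,
}

payload = json.dumps(filters)
# The payload would then be sent with, e.g.:
# requests.delete("https://qstash.upstash.io/v2/messages/", headers=auth_headers, data=payload)
```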
# Cancel Message
Source: https://upstash.com/docs/qstash/api/messages/cancel
DELETE https://qstash.upstash.io/v2/messages/{messageId}
Stop delivery of an existing message
Cancelling a message will remove it from QStash and stop it from being delivered
in the future. If a message is in flight to your API, it might be too late to
cancel.
## Request
The id of the message to cancel.
## Response
This endpoint only returns `202 Accepted`.
```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/messages/msg_123 \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/messages/msg_123', {
method: 'DELETE',
headers: {
'Authorization': 'Bearer '
}
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.delete(
'https://qstash.upstash.io/v2/messages/msg_123',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/messages/msg_123", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```text 202 Accepted theme={"system"}
OK
```
# Get Message
Source: https://upstash.com/docs/qstash/api/messages/get
GET https://qstash.upstash.io/v2/messages/{messageId}
Retrieve a message by its id
## Request
The id of the message to retrieve.
Messages are removed from the database shortly after they're delivered, so you
will not be able to retrieve a message once it has been delivered. This endpoint
is intended for accessing messages that are in the process of being delivered or retried.
## Response
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/messages/msg_123 \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
const response = await fetch("https://qstash.upstash.io/v2/messages/msg_123", {
headers: {
Authorization: "Bearer ",
},
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.get(
'https://qstash.upstash.io/v2/messages/msg_123',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/messages/msg_123", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
{
"messageId": "msg_123",
"topicName": "myTopic",
"url":"https://example.com",
"method": "POST",
"header": {
"My-Header": ["my-value"]
},
"body": "{\"foo\":\"bar\"}",
"createdAt": 1620000000000
}
```
# Publish a Message
Source: https://upstash.com/docs/qstash/api/publish
POST https://qstash.upstash.io/v2/publish/{destination}
Publish a message
## Request
Destination can either be a topic name or id that you configured in the
Upstash console, a valid url where the message gets sent to, or a valid
QStash API name like `api/llm`. If the destination is a URL, make sure
the URL is prefixed with a valid protocol (`http://` or `https://`).
Delay the message delivery.
Format for this header is a number followed by duration abbreviation, like
`10s`. Available durations are `s` (seconds), `m` (minutes), `h` (hours), `d`
(days).
example: "50s" | "3m" | "10h" | "1d"
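The duration format can be validated client-side before publishing; a minimal sketch, assuming only the four documented units:

```python
import re

# Convert a QStash-style duration ("50s", "3m", "10h", "1d") into seconds.
_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def duration_to_seconds(value: str) -> int:
    match = re.fullmatch(r"(\d+)([smhd])", value)
    if not match:
        raise ValueError(f"invalid duration: {value!r}")
    amount, unit = match.groups()
    return int(amount) * _UNITS[unit]
```

For example, `duration_to_seconds("3m")` returns `180`.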
Delay the message delivery until a certain time in the future.
The format is a unix timestamp in seconds, based on the UTC timezone.
When both `Upstash-Not-Before` and `Upstash-Delay` headers are provided,
`Upstash-Not-Before` will be used.
Id to use while deduplicating messages, so that only one message with
the given deduplication id is published.
When set to true, automatically deduplicates messages based on their content,
so that only one message with the same content is published.
Content based deduplication creates unique deduplication ids based on the
following message fields:
* Destination
* Body
* Headers
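QStash computes content-based deduplication ids internally; the sketch below only illustrates the idea that the id is a function of those three fields, and is not the actual algorithm:

```python
import hashlib
import json

# Illustration only: derive a stable id from destination, body, and headers.
# QStash's real content-based deduplication algorithm is internal and may differ.
def content_dedup_id(destination: str, body: str, headers: dict) -> str:
    canonical = json.dumps(
        {"destination": destination, "body": body, "headers": headers},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Two publishes with identical destination, body, and headers map to the same id, so only the first one is delivered.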
## Response
```sh curl theme={"system"}
curl -X POST "https://qstash.upstash.io/v2/publish/https://www.example.com" \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-H "Upstash-Method: POST" \
-H "Upstash-Delay: 10s" \
-H "Upstash-Retries: 3" \
-H "Upstash-Retry-Delay: pow(2, retried) * 1000" \
-H "Upstash-Forward-Custom-Header: custom-value" \
-d '{"message":"Hello, World!"}'
```
```js Node theme={"system"}
const response = await fetch(
"https://qstash.upstash.io/v2/publish/https://www.example.com",
{
method: "POST",
headers: {
Authorization: "Bearer ",
"Content-Type": "application/json",
"Upstash-Method": "POST",
"Upstash-Delay": "10s",
"Upstash-Retries": "3",
"Upstash-Retry-Delay": "pow(2, retried) * 1000",
"Upstash-Forward-Custom-Header": "custom-value",
},
body: JSON.stringify({
message: "Hello, World!",
}),
}
);
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
'Content-Type': 'application/json',
'Upstash-Method': 'POST',
'Upstash-Delay': '10s',
'Upstash-Retries': '3',
'Upstash-Retry-Delay': 'pow(2, retried) * 1000',
'Upstash-Forward-Custom-Header': 'custom-value',
}
json_data = {
'message': 'Hello, World!',
}
response = requests.post(
'https://qstash.upstash.io/v2/publish/https://www.example.com',
headers=headers,
json=json_data
)
```
```go Go theme={"system"}
var data = strings.NewReader(`{"message":"Hello, World!"}`)
req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/publish/https://www.example.com", data)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Upstash-Method", "POST")
req.Header.Set("Upstash-Delay", "10s")
req.Header.Set("Upstash-Retries", "3")
req.Header.Set("Upstash-Retry-Delay", "pow(2, retried) * 1000")
req.Header.Set("Upstash-Forward-Custom-Header", "custom-value")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json URL theme={"system"}
{
"messageId": "msd_1234",
"url": "https://www.example.com"
}
```
```json URL Group theme={"system"}
[
{
"messageId": "msd_1234",
"url": "https://www.example.com"
},
{
"messageId": "msd_5678",
"url": "https://www.somewhere-else.com",
"deduplicated": true
}
]
```
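The `Upstash-Retry-Delay` header in the examples above takes an expression in which `retried` is bound to the number of completed retries. Assuming `retried` starts at 0 and `pow` behaves as in Python, the delays produced by `pow(2, retried) * 1000` (milliseconds) can be previewed locally:

```python
# Preview the per-attempt delays (in milliseconds) that the expression
# "pow(2, retried) * 1000" yields for each retry.
def retry_delays_ms(retries: int) -> list:
    return [pow(2, retried) * 1000 for retried in range(retries)]
```

With `Upstash-Retries: 3`, this gives delays of 1000, 2000, and 4000 ms.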
# Get a Queue
Source: https://upstash.com/docs/qstash/api/queues/get
GET https://qstash.upstash.io/v2/queues/{queueName}
Retrieves a queue
## Request
The name of the queue to retrieve.
## Response
The creation time of the queue. UnixMilli
The update time of the queue. UnixMilli
The name of the queue.
The number of parallel consumers consuming from [the queue](/qstash/features/queues).
The number of unprocessed messages that exist in [the queue](/qstash/features/queues).
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/queues/my-queue \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/queues/my-queue', {
headers: {
'Authorization': 'Bearer '
}
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.get(
'https://qstash.upstash.io/v2/queues/my-queue',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/queues/my-queue", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
{
"createdAt": 1623345678001,
"updatedAt": 1623345678001,
"name": "my-queue",
"parallelism": 5,
"lag": 100
}
```
# List Queues
Source: https://upstash.com/docs/qstash/api/queues/list
GET https://qstash.upstash.io/v2/queues
List all your queues
## Request
No parameters
## Response
The creation time of the queue. UnixMilli
The update time of the queue. UnixMilli
The name of the queue.
The number of parallel consumers consuming from [the queue](/qstash/features/queues).
The number of unprocessed messages that exist in [the queue](/qstash/features/queues).
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/queues \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
const response = await fetch("https://qstash.upstash.io/v2/queues", {
headers: {
Authorization: "Bearer ",
},
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.get(
'https://qstash.upstash.io/v2/queues',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/queues", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
[
{
"createdAt": 1623345678001,
"updatedAt": 1623345678001,
"name": "my-queue",
"parallelism": 5,
"lag": 100
},
// ...
]
```
# Pause Queue
Source: https://upstash.com/docs/qstash/api/queues/pause
POST https://qstash.upstash.io/v2/queues/{queueName}/pause
Pause an active queue
Pausing a queue stops the delivery of enqueued messages.
The queue will still accept new messages, but they will not be delivered until the queue is resumed.
If the queue is already paused, this action has no effect.
Resuming or creating a queue may take up to a minute,
so pausing or deleting a queue during critical operations is not recommended.
## Request
The name of the queue to pause.
## Response
This endpoint simply returns 200 OK if the queue is paused successfully.
```sh curl theme={"system"}
curl -X POST https://qstash.upstash.io/v2/queues/queue_1234/pause \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
import { Client } from "@upstash/qstash";
/**
* Import a fetch polyfill only if you are using node prior to v18.
* This is not necessary for nextjs, deno or cloudflare workers.
*/
import "isomorphic-fetch";
const c = new Client({
token: "",
});
c.queue({ queueName: "" }).pause()
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.queue.pause("")
```
```go Go theme={"system"}
package main
import (
"github.com/upstash/qstash-go"
)
func main() {
client := qstash.NewClient("")
// error checking is omitted for brevity
err := client.Queues().Pause("")
}
```
# Remove a Queue
Source: https://upstash.com/docs/qstash/api/queues/remove
DELETE https://qstash.upstash.io/v2/queues/{queueName}
Removes a queue
Resuming or creating a queue may take up to a minute,
so pausing or deleting a queue during critical operations is not recommended.
## Request
The name of the queue to remove.
## Response
This endpoint returns 200 if the queue is removed successfully,
or if it does not exist.
```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/queues/my-queue \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/queues/my-queue', {
method: "DELETE",
headers: {
'Authorization': 'Bearer '
}
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.delete(
'https://qstash.upstash.io/v2/queues/my-queue',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/queues/my-queue", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
# Resume Queue
Source: https://upstash.com/docs/qstash/api/queues/resume
POST https://qstash.upstash.io/v2/queues/{queueName}/resume
Resume a paused queue
Resuming a queue starts the delivery of enqueued messages from the earliest undelivered message.
If the queue is already active, this action has no effect.
## Request
The name of the queue to resume.
## Response
This endpoint simply returns 200 OK if the queue is resumed successfully.
```sh curl theme={"system"}
curl -X POST https://qstash.upstash.io/v2/queues/queue_1234/resume \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
import { Client } from "@upstash/qstash";
/**
* Import a fetch polyfill only if you are using node prior to v18.
* This is not necessary for nextjs, deno or cloudflare workers.
*/
import "isomorphic-fetch";
const c = new Client({
token: "",
});
c.queue({ queueName: "" }).resume()
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.queue.resume("")
```
```go Go theme={"system"}
package main
import (
"github.com/upstash/qstash-go"
)
func main() {
client := qstash.NewClient("")
// error checking is omitted for brevity
err := client.Queues().Resume("")
}
```
# Upsert a Queue
Source: https://upstash.com/docs/qstash/api/queues/upsert
POST https://qstash.upstash.io/v2/queues/
Updates or creates a queue
## Request
The name of the queue.
The number of parallel consumers consuming from [the queue](/qstash/features/queues).
For limiting parallelism, we now offer a more flexible publish-time API.
See the [Flow Control](/qstash/features/flowcontrol) page for details.
Setting parallelism on queues will be deprecated at some point.
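As a sketch of the publish-time alternative, flow control is configured with headers on the publish request. The header names below are assumptions based on the Flow Control feature and should be verified on that page before use:

```python
# Assumed header names for publish-time flow control; verify them against the
# Flow Control documentation before relying on them.
headers = {
    "Authorization": "Bearer <QSTASH_TOKEN>",
    "Upstash-Flow-Control-Key": "my-flow-key",
    "Upstash-Flow-Control-Value": "parallelism=5",
}
```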
## Response
This endpoint returns
* 200 if the queue is added successfully.
* 412 if it fails because the allowed number of queues has been reached.
```sh curl theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/queues/ \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d '{
  "queueName": "my-queue",
  "parallelism": 5
}'
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/queues/', {
method: 'POST',
headers: {
'Authorization': 'Bearer ',
'Content-Type': 'application/json'
},
body: JSON.stringify({
  "queueName": "my-queue",
  "parallelism": 5
})
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
'Content-Type': 'application/json',
}
json_data = {
  "queueName": "my-queue",
  "parallelism": 5
}
response = requests.post(
'https://qstash.upstash.io/v2/queues/',
headers=headers,
json=json_data
)
```
```go Go theme={"system"}
var data = strings.NewReader(`{
  "queueName": "my-queue",
  "parallelism": 5
}`)
req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/queues/", data)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
# Create Schedule
Source: https://upstash.com/docs/qstash/api/schedules/create
POST https://qstash.upstash.io/v2/schedules/{destination}
Create a schedule to send messages periodically
## Request
Destination can either be a topic name or id that you configured in the
Upstash console or a valid url where the message gets sent to.
If the destination is a URL, make sure
the URL is prefixed with a valid protocol (`http://` or `https://`).
Cron allows you to send this message periodically on a schedule.
Add a Cron expression and we will requeue this message automatically whenever
the Cron expression triggers. We offer an easy-to-use UI for creating Cron
expressions in our [console](https://console.upstash.com/qstash), or you can
check out [Crontab.guru](https://crontab.guru).
Note: it can take up to 60 seconds until the schedule is registered on an
available QStash node.
Example: `*/5 * * * *`
Timezones are also supported. You can specify timezone together with cron expression
as follows:
Example: `CRON_TZ=America/New_York 0 4 * * *`
Delay the message delivery.
Delay applies to the delivery of the scheduled messages.
For example, with the delay set to 10 minutes for a schedule
that runs everyday at 00:00, the scheduled message will be
created at 00:00 and it will be delivered at 00:10.
Format for this header is a number followed by duration abbreviation, like
`10s`. Available durations are `s` (seconds), `m` (minutes), `h` (hours), `d`
(days).
example: "50s" | "3m" | "10h" | "1d"
Assign a schedule id to the created schedule.
This header allows you to set the schedule id yourself instead of QStash assigning
a random id.
If a schedule with the provided id exists, the settings of the existing schedule
will be updated with the new settings.
## Response
The unique id of this schedule. You can use it to delete the schedule later.
```sh curl theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/schedules/https://www.example.com/endpoint \
-H "Authorization: Bearer " \
-H "Upstash-Cron: */5 * * * *"
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/schedules/https://www.example.com/endpoint', {
method: 'POST',
headers: {
'Authorization': 'Bearer ',
'Upstash-Cron': '*/5 * * * *'
}
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
'Upstash-Cron': '*/5 * * * *'
}
response = requests.post(
'https://qstash.upstash.io/v2/schedules/https://www.example.com/endpoint',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/schedules/https://www.example.com/endpoint", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Upstash-Cron", "*/5 * * * *")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
{
"scheduleId": "scd_1234"
}
```
# Get Schedule
Source: https://upstash.com/docs/qstash/api/schedules/get
GET https://qstash.upstash.io/v2/schedules/{scheduleId}
Retrieves a schedule by id.
## Request
The id of the schedule to retrieve.
## Response
The id of the schedule.
The cron expression used to schedule the message.
The creation time of the object. UnixMilli
URL or URL Group name
The HTTP method to use for the message.
The headers of the message.
The body of the message.
The base64 encoded body of the message.
The number of retries that should be attempted in case of delivery failure.
The delay in seconds before the message is delivered.
The URL where we send a callback to after the message is delivered.
The URL where we send a callback to after the message delivery fails.
IP address where this schedule was created from.
Whether the schedule is paused or not.
The flow control key for rate limiting.
The maximum number of parallel executions.
The rate limit for this schedule.
The time interval during which the specified rate of requests can be activated using the same flow control key. In seconds.
The retry delay expression for this schedule, if retry\_delay was set when creating the schedule.
The label assigned to the schedule for filtering purposes.
The timestamp of the last scheduled execution.
The timestamp of the next scheduled execution.
The states of the last scheduled messages. Maps message id to state (IN\_PROGRESS, SUCCESS, FAIL).
The IP address of the caller who created the schedule.
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/schedules/scd_1234 \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/schedules/scd_1234', {
headers: {
'Authorization': 'Bearer '
}
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.get(
'https://qstash.upstash.io/v2/schedules/scd_1234',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/schedules/scd_1234", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
{
  "createdAt": 1754565618803,
  "scheduleId": "schedule-id",
  "cron": "* * * * *",
  "destination": "https://your-website/api",
  "method": "GET",
  "header": {
    "Content-Type": ["application/json"]
  },
  "retries": 3,
  "delay": 25,
  "lastScheduleTime": 1755095280020,
  "nextScheduleTime": 1759909800000,
  "lastScheduleStates": {
    "msg_7YoJxFpwk": "SUCCESS"
  },
  "callerIP": "127.43.12.54",
  "isPaused": true,
  "parallelism": 0
}
```
# List Schedules
Source: https://upstash.com/docs/qstash/api/schedules/list
GET https://qstash.upstash.io/v2/schedules
List all your schedules
## Response
The id of the schedule.
The cron expression used to schedule the message.
The creation time of the object. UnixMilli
URL or URL Group (topic) name
The HTTP method to use for the message.
The headers of the message.
The body of the message.
The number of retries that should be attempted in case of delivery failure.
The delay in seconds before the message is delivered.
The URL where we send a callback to after the message is delivered.
The URL where we send a callback to after the message delivery fails.
IP address where this schedule was created from.
Whether the schedule is paused or not.
The flow control key for rate limiting.
The maximum number of parallel executions.
The rate limit for this schedule.
The time interval during which the specified rate of requests can be activated using the same flow control key. In seconds.
The retry delay expression for this schedule, if retry\_delay was set when creating the schedule.
The label assigned to the schedule for filtering purposes.
The timestamp of the last scheduled execution.
The timestamp of the next scheduled execution.
The states of the last scheduled messages. Maps message id to state (IN\_PROGRESS, SUCCESS, FAIL).
The IP address of the caller who created the schedule.
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/schedules \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/schedules', {
headers: {
'Authorization': 'Bearer '
}
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.get(
'https://qstash.upstash.io/v2/schedules',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/schedules", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
[
{
  "createdAt": 1754565618803,
  "scheduleId": "schedule-id",
  "cron": "* * * * *",
  "destination": "https://your-website/api",
  "method": "GET",
  "header": {
    "Content-Type": ["application/json"]
  },
  "retries": 3,
  "delay": 25,
  "lastScheduleTime": 1755095280020,
  "nextScheduleTime": 1759909800000,
  "lastScheduleStates": {
    "msg_7YoJxFpwk": "SUCCESS"
  },
  "callerIP": "127.43.12.54",
  "isPaused": true,
  "parallelism": 0
}
]
```
# Pause Schedule
Source: https://upstash.com/docs/qstash/api/schedules/pause
POST https://qstash.upstash.io/v2/schedules/{scheduleId}/pause
Pause an active schedule
Pausing a schedule does not change the next delivery time; deliveries are simply skipped while the schedule is paused.
If the schedule is already paused, this action has no effect.
## Request
The id of the schedule to pause.
## Response
This endpoint simply returns 200 OK if the schedule is paused successfully.
```sh curl theme={"system"}
curl -X POST https://qstash.upstash.io/v2/schedules/scd_1234/pause \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
import { Client } from "@upstash/qstash";
/**
* Import a fetch polyfill only if you are using node prior to v18.
* This is not necessary for nextjs, deno or cloudflare workers.
*/
import "isomorphic-fetch";
const c = new Client({
token: "",
});
c.schedules.pause({
schedule: ""
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.schedule.pause("")
```
```go Go theme={"system"}
package main
import "github.com/upstash/qstash-go"
func main() {
client := qstash.NewClient("")
// error checking is omitted for brevity
err := client.Schedules().Pause("")
}
```
# Remove Schedule
Source: https://upstash.com/docs/qstash/api/schedules/remove
DELETE https://qstash.upstash.io/v2/schedules/{scheduleId}
Remove a schedule
## Request
The schedule id to remove
## Response
This endpoint simply returns 200 OK if the schedule is removed successfully.
```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/schedules/scd_123 \
-H "Authorization: Bearer "
```
```javascript Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/schedules/scd_123', {
method: 'DELETE',
headers: {
'Authorization': 'Bearer '
}
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.delete(
'https://qstash.upstash.io/v2/schedules/scd_123',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/schedules/scd_123", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
# Resume Schedule
Source: https://upstash.com/docs/qstash/api/schedules/resume
POST https://qstash.upstash.io/v2/schedules/{scheduleId}/resume
Resume a paused schedule
Resuming a schedule marks it as active,
so upcoming messages will be delivered instead of being skipped.
If the schedule is already active, this action has no effect.
## Request
The id of the schedule to resume.
## Response
This endpoint simply returns 200 OK if the schedule is resumed successfully.
```sh curl theme={"system"}
curl -X POST https://qstash.upstash.io/v2/schedules/scd_1234/resume \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
import { Client } from "@upstash/qstash";
/**
* Import a fetch polyfill only if you are using node prior to v18.
* This is not necessary for nextjs, deno or cloudflare workers.
*/
import "isomorphic-fetch";
const c = new Client({
token: "",
});
c.schedules.resume({
schedule: ""
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.schedule.resume("")
```
```go Go theme={"system"}
package main
import "github.com/upstash/qstash-go"
func main() {
client := qstash.NewClient("")
// error checking is omitted for brevity
err := client.Schedules().Resume("")
}
```
# Get Signing Keys
Source: https://upstash.com/docs/qstash/api/signingKeys/get
GET https://qstash.upstash.io/v2/keys
Retrieve your signing keys
## Response
Your current signing key.
The next signing key.
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/keys \
-H "Authorization: Bearer "
```
```javascript Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/keys', {
headers: {
'Authorization': 'Bearer '
}
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.get(
'https://qstash.upstash.io/v2/keys',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/keys", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
{ "current": "sig_123", "next": "sig_456" }
```
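These keys are used to verify the `Upstash-Signature` header that QStash attaches to deliveries. As a minimal sketch of the key-rotation fallback only (full verification also checks the JWT claims and the body hash, which is omitted here), an HS256 signature can be checked against the current key and then the next key:

```python
import base64
import hashlib
import hmac

def _b64url_decode(segment: str) -> bytes:
    # Restore the padding that base64url encoding strips.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def verify_signature(jwt: str, current_key: str, next_key: str) -> bool:
    # Accept the token if it verifies against either key; during rotation,
    # requests may still be signed with the previous ("next") key.
    header_b64, payload_b64, sig_b64 = jwt.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    signature = _b64url_decode(sig_b64)
    for key in (current_key, next_key):
        expected = hmac.new(key.encode(), signing_input, hashlib.sha256).digest()
        if hmac.compare_digest(expected, signature):
            return True
    return False
```

Checking against both keys lets your endpoint keep accepting deliveries while a rotation is in progress.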
# Rotate Signing Keys
Source: https://upstash.com/docs/qstash/api/signingKeys/rotate
POST https://qstash.upstash.io/v2/keys/rotate
Rotate your signing keys
## Response
Your current signing key.
The next signing key.
```sh curl theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/keys/rotate \
-H "Authorization: Bearer "
```
```javascript Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/keys/rotate', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer '
  }
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.post(
'https://qstash.upstash.io/v2/keys/rotate',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/keys/rotate", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
{ "current": "sig_123", "next": "sig_456" }
```
# Upsert URL Group and Endpoint
Source: https://upstash.com/docs/qstash/api/url-groups/add-endpoint
POST https://qstash.upstash.io/v2/topics/{urlGroupName}/endpoints
Add an endpoint to a URL Group
If the URL Group or an endpoint does not exist, it will be created.
## Request
The name of your URL Group (topic). If it doesn't exist yet, it will be created.
The endpoints to add to the URL Group.
The name of the endpoint
The URL of the endpoint
## Response
This endpoint returns 200 if the endpoints are added successfully.
```sh curl theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d '{
"endpoints": [
{
"name": "endpoint1",
"url": "https://example.com"
},
{
"name": "endpoint2",
"url": "https://somewhere-else.com"
}
]
}'
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints', {
method: 'POST',
headers: {
'Authorization': 'Bearer ',
'Content-Type': 'application/json'
},
body: JSON.stringify({
'endpoints': [
{
'name': 'endpoint1',
'url': 'https://example.com'
},
{
'name': 'endpoint2',
'url': 'https://somewhere-else.com'
}
]
})
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
'Content-Type': 'application/json',
}
json_data = {
'endpoints': [
{
'name': 'endpoint1',
'url': 'https://example.com',
},
{
'name': 'endpoint2',
'url': 'https://somewhere-else.com',
},
],
}
response = requests.post(
'https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints',
headers=headers,
json=json_data
)
```
```go Go theme={"system"}
var data = strings.NewReader(`{
"endpoints": [
{
"name": "endpoint1",
"url": "https://example.com"
},
{
"name": "endpoint2",
"url": "https://somewhere-else.com"
}
]
}`)
req, err := http.NewRequest("POST", "https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints", data)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
# Get a URL Group
Source: https://upstash.com/docs/qstash/api/url-groups/get
GET https://qstash.upstash.io/v2/topics/{urlGroupName}
Retrieves a URL Group
## Request
The name of the URL Group (topic) to retrieve.
## Response
The creation time of the URL Group. UnixMilli
The update time of the URL Group. UnixMilli
The name of the URL Group.
The name of the endpoint
The URL of the endpoint
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/topics/my-url-group \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/topics/my-url-group', {
headers: {
'Authorization': 'Bearer '
}
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.get(
'https://qstash.upstash.io/v2/topics/my-url-group',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/topics/my-url-group", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
{
"createdAt": 1623345678001,
"updatedAt": 1623345678001,
"name": "my-url-group",
"endpoints": [
{
"name": "my-endpoint",
"url": "https://my-endpoint.com"
}
]
}
```
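The `createdAt` and `updatedAt` values are UnixMilli timestamps. In Python, for instance, converting one to a timezone-aware datetime only requires dividing by 1000:

```python
from datetime import datetime, timezone

# createdAt / updatedAt are unix timestamps in milliseconds (UnixMilli),
# so divide by 1000 before converting to a datetime.
created_at_ms = 1623345678001  # value from the example response above
created_at = datetime.fromtimestamp(created_at_ms / 1000, tz=timezone.utc)
print(created_at)  # 2021-06-10 17:21:18 UTC, plus the millisecond fraction
```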
# List URL Groups
Source: https://upstash.com/docs/qstash/api/url-groups/list
GET https://qstash.upstash.io/v2/topics
List all your URL Groups
## Request
No parameters
## Response
The creation time of the URL Group. UnixMilli
The update time of the URL Group. UnixMilli
The name of the URL Group.
The name of the endpoint.
The URL of the endpoint.
```sh curl theme={"system"}
curl https://qstash.upstash.io/v2/topics \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
const response = await fetch("https://qstash.upstash.io/v2/topics", {
headers: {
Authorization: "Bearer ",
},
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.get(
'https://qstash.upstash.io/v2/topics',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("GET", "https://qstash.upstash.io/v2/topics", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
```json 200 OK theme={"system"}
[
{
"createdAt": 1623345678001,
"updatedAt": 1623345678001,
"name": "my-url-group",
"endpoints": [
{
"name": "my-endpoint",
"url": "https://my-endpoint.com"
}
]
},
// ...
]
```
# Remove URL Group
Source: https://upstash.com/docs/qstash/api/url-groups/remove
DELETE https://qstash.upstash.io/v2/topics/{urlGroupName}
Remove a URL group and all its endpoints
The URL Group and all its endpoints are removed. In-flight messages in the URL Group are not removed, but you will no longer be able to send messages to the topic.
## Request
The name of the URL Group to remove.
## Response
This endpoint returns 200 if the URL Group is removed successfully.
```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/topics/my-url-group \
-H "Authorization: Bearer "
```
```js Node theme={"system"}
const response = await fetch('https://qstash.upstash.io/v2/topics/my-url-group', {
method: 'DELETE',
headers: {
'Authorization': 'Bearer '
}
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
}
response = requests.delete(
'https://qstash.upstash.io/v2/topics/my-url-group',
headers=headers
)
```
```go Go theme={"system"}
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/topics/my-url-group", nil)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
# Remove Endpoints
Source: https://upstash.com/docs/qstash/api/url-groups/remove-endpoint
DELETE https://qstash.upstash.io/v2/topics/{urlGroupName}/endpoints
Remove one or more endpoints
Remove one or multiple endpoints from a URL Group. If all endpoints have been removed, the URL Group will be deleted.
## Request
The name of your URL Group. If it doesn't exist, we return an error.
The endpoints to be removed from the URL Group.
Either `name` or `url` must be provided
The name of the endpoint.
The URL of the endpoint.
## Response
This endpoint simply returns 200 OK if the endpoints have been removed successfully.
```sh curl theme={"system"}
curl -XDELETE https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d '{
"endpoints": [
{
"name": "endpoint1",
},
{
"url": "https://somewhere-else.com"
}
]
}'
```
```js Node theme={"system"}
const response = await fetch("https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints", {
  method: "DELETE",
  headers: {
    Authorization: "Bearer ",
    "Content-Type": "application/json",
  },
  // fetch requires a string body, so the payload must be serialized
  body: JSON.stringify({
    endpoints: [
      {
        name: "endpoint1",
      },
      {
        url: "https://somewhere-else.com",
      },
    ],
  }),
});
```
```python Python theme={"system"}
import requests
headers = {
'Authorization': 'Bearer ',
'Content-Type': 'application/json',
}
data = {
"endpoints": [
{
"name": "endpoint1",
},
{
"url": "https://somewhere-else.com"
}
]
}
response = requests.delete(
    'https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints',
    headers=headers,
    json=data,  # send as JSON; `data=` would form-encode the payload
)
```
```go Go theme={"system"}
var data = strings.NewReader(`{
"endpoints": [
{
"name": "endpoint1",
},
{
"url": "https://somewhere-else.com"
}
]
}`)
req, err := http.NewRequest("DELETE", "https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints", data)
if err != nil {
log.Fatal(err)
}
req.Header.Set("Authorization", "Bearer ")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
```
# Background Jobs
Source: https://upstash.com/docs/qstash/features/background-jobs
## When do you need background jobs?
Background jobs are essential for executing tasks that are too time-consuming to run in the
main execution thread without affecting the user experience.
These tasks might include data processing, sending batch emails, performing scheduled maintenance,
or any other operations that are not immediately required to respond to user requests.
Utilizing background jobs allows your application to remain responsive and scalable, handling more requests simultaneously by offloading
heavy lifting to background processes.
In Serverless frameworks, your hosting provider will likely have a limit for how long each task can last. Try searching
for the maximum execution time for your hosting provider to find out more.
## How to use QStash for background jobs
QStash provides a simple and efficient way to run background jobs. You can think of it as a two-step process:
1. **Public API** Create a public API endpoint within your application. The endpoint should contain the logic for the background job.
QStash requires a public endpoint to trigger background jobs, which means it cannot directly access localhost APIs.
To get around this, you have two options:
* Run QStash [development server](/qstash/howto/local-development) locally
* Set up a [local tunnel](/qstash/howto/local-tunnel) for your API
2. **QStash Request** Invoke QStash to start/schedule the execution of the API endpoint.
Here's what this looks like in a simple Next.js application:
```tsx app/page.tsx theme={"system"}
"use client"
export default function Home() {
async function handleClick() {
// Send a request to our server to start the background job.
// For proper error handling, refer to the quick start.
// Note: This can also be a server action instead of a route handler
await fetch("/api/start-email-job", {
method: "POST",
body: JSON.stringify({
users: ["a@gmail.com", "b@gmail.com", "c.gmail.com"]
}),
})
}
return (
// The page renders a button that starts the job; a minimal, hypothetical version:
<button onClick={handleClick}>Start background job</button>
);
}
```
```typescript app/api/start-email-job/route.ts theme={"system"}
import { Client } from "@upstash/qstash";
const qstashClient = new Client({
token: "YOUR_TOKEN",
});
export async function POST(request: Request) {
const body = await request.json();
const users: string[] = body.users;
// If you know the public URL of the email API, you can use it directly
const rootDomain = request.url.split('/').slice(0, 3).join('/');
const emailAPIURL = `${rootDomain}/api/send-email`; // ie: https://yourapp.com/api/send-email
// Tell QStash to start the background job.
// For proper error handling, refer to the quick start.
await qstashClient.publishJSON({
url: emailAPIURL,
body: {
users
}
});
return new Response("Job started", { status: 200 });
}
```
```typescript app/api/send-email/route.ts theme={"system"}
// This is a public API endpoint that will be invoked by QStash.
// It contains the logic for the background job and may take a long time to execute.
import { sendEmail } from "your-email-library";
export async function POST(request: Request) {
const body = await request.json();
const users: string[] = body.users;
// Send emails to the users
for (const user of users) {
await sendEmail(user);
}
return new Response("Job started", { status: 200 });
}
```
To better understand the application, let's break it down:
1. **Client**: The client application contains a button that, when clicked, sends a request to the server to start the background job.
2. **Next.js server**: The first endpoint, `/api/start-email-job`, is invoked by the client to start the background job.
3. **QStash**: The QStash client is used to invoke the `/api/send-email` endpoint, which contains the logic for the background job.
To view a more detailed Next.js quick start guide for setting up QStash, refer to the [quick start](/qstash/quickstarts/vercel-nextjs) guide.
It's also possible to schedule a background job to run at a later time using [schedules](/qstash/features/schedules).
If you'd like to invoke another endpoint when the background job is complete, you can use [callbacks](/qstash/features/callbacks).
# Batching
Source: https://upstash.com/docs/qstash/features/batch
[Publishing](/qstash/howto/publishing) is great for sending one message
at a time, but sometimes you want to send a batch of messages at once.
This can be useful to send messages to a single or multiple destinations.
QStash provides the `batch` endpoint to help
you with this.
If the format of the messages is valid, the response will be an array of
responses, one for each message in the batch. When batching to URL Groups, the
entry for a URL Group will itself be an array of responses, one for each
destination in the URL Group. If one message fails to be sent, that message
will have an error response, but the other messages will still be sent.
You can publish to a destination, a URL Group, or a queue in the same batch request.
## Batching messages with destinations
You can batch messages to different destinations, and you can also send several messages to the same destination!
```shell cURL theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/batch \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-d '
[
{
"destination": "https://example.com/destination1"
},
{
"destination": "https://example.com/destination2"
}
]'
```
```typescript TypeScript theme={"system"}
import { Client } from "@upstash/qstash";
// Each message is the same as the one you would send with the publish endpoint
const client = new Client({ token: "" });
const res = await client.batchJSON([
{
url: "https://example.com/destination1",
},
{
url: "https://example.com/destination2",
},
]);
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.batch_json(
[
{"url": "https://example.com/destination1"},
{"url": "https://example.com/destination2"},
]
)
```
## Batching messages with URL Groups
If you have a [URL Group](/qstash/howto/url-group-endpoint), you can batch send with
the URL Group as well.
```shell cURL theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/batch \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-d '
[
{
"destination": "myUrlGroup"
},
{
"destination": "https://example.com/destination2"
}
]'
```
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
// Each message is the same as the one you would send with the publish endpoint
const res = await client.batchJSON([
{
urlGroup: "myUrlGroup",
},
{
url: "https://example.com/destination2",
},
]);
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.batch_json(
[
{"url_group": "my-url-group"},
{"url": "https://example.com/destination2"},
]
)
```
## Batching messages with queue
If you have a [queue](/qstash/features/queues), you can batch send with
the queue. It is the same as publishing to a destination, but you need to set the queue name.
```shell cURL theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/batch \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-d '
[
{
"queue": "my-queue",
"destination": "https://example.com/destination1"
},
{
"queue": "my-second-queue",
"destination": "https://example.com/destination2"
}
]'
```
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
const res = await client.batchJSON([
{
queueName: "my-queue",
url: "https://example.com/destination1",
},
{
queueName: "my-second-queue",
url: "https://example.com/destination2",
},
]);
```
```python Python theme={"system"}
from upstash_qstash import QStash
from upstash_qstash.message import BatchRequest
qstash = QStash("")
messages = [
BatchRequest(
queue="my-queue",
url="https://httpstat.us/200",
body=f"hi 1",
retries=0
),
BatchRequest(
queue="my-second-queue",
url="https://httpstat.us/200",
body=f"hi 2",
retries=0
),
]
qstash.message.batch(messages)
```
## Batching messages with headers and body
You can provide custom headers and a body for each message in the batch.
```shell cURL theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/batch -H "Authorization: Bearer XXX" \
-H "Content-Type: application/json" \
-d '
[
{
"destination": "myUrlGroup",
"headers":{
"Upstash-Delay":"5s",
"Upstash-Forward-Hello":"123456"
},
"body": "Hello World"
},
{
"destination": "https://example.com/destination1",
"headers":{
"Upstash-Delay":"7s",
"Upstash-Forward-Hello":"789"
}
},
{
"destination": "https://example.com/destination2",
"headers":{
"Upstash-Delay":"9s",
"Upstash-Forward-Hello":"again"
}
}
]'
```
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
// Each message is the same as the one you would send with the publish endpoint
const msgs = [
{
urlGroup: "myUrlGroup",
delay: 5,
body: "Hello World",
headers: {
hello: "123456",
},
},
{
url: "https://example.com/destination1",
delay: 7,
headers: {
hello: "789",
},
},
{
url: "https://example.com/destination2",
delay: 9,
headers: {
hello: "again",
},
body: {
Some: "Data",
},
},
];
const res = await client.batchJSON(msgs);
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.batch_json(
[
{
"url_group": "my-url-group",
"delay": "5s",
"body": {"hello": "world"},
"headers": {"random": "header"},
},
{
"url": "https://example.com/destination1",
"delay": "1m",
},
{
"url": "https://example.com/destination2",
"body": {"hello": "again"},
},
]
)
```
#### The response will look like this
```json theme={"system"}
[
[
{
"messageId": "msg_...",
"url": "https://myUrlGroup-endpoint1.com"
},
{
"messageId": "msg_...",
"url": "https://myUrlGroup-endpoint2.com"
}
],
{
"messageId": "msg_..."
},
{
"messageId": "msg_..."
}
]
```
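Since URL Group entries expand into nested arrays while single destinations return plain objects, a client that only needs the message IDs may want to flatten the result. A small Python sketch (the `msg_` IDs below are placeholders, not real responses):

```python
# A batch response mixes nested arrays (one item per URL Group endpoint)
# with plain objects (single destinations). Flatten it to a uniform list.
response = [
    [{"messageId": "msg_1"}, {"messageId": "msg_2"}],  # URL Group entry
    {"messageId": "msg_3"},
    {"messageId": "msg_4"},
]

flat = []
for entry in response:
    flat.extend(entry if isinstance(entry, list) else [entry])

message_ids = [m["messageId"] for m in flat]
print(message_ids)  # ['msg_1', 'msg_2', 'msg_3', 'msg_4']
```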
# Callbacks
Source: https://upstash.com/docs/qstash/features/callbacks
All serverless function providers have a maximum execution time for each
function. Usually you can extend this time by paying more, but it's still
limited. QStash provides a way to work around this problem by using callbacks.
## What is a callback?
A callback allows you to call a long-running function without having to wait for
its response. Instead of waiting for the request to finish, you can add a
callback URL to your published message, and when the request finishes, we will
call your callback URL with the response.
1. You publish a message to QStash using the `/v2/publish` endpoint
2. QStash will enqueue the message and deliver it to the destination
3. QStash waits for the response from the destination
4. When the response is ready, QStash calls your callback URL with the response
Callbacks publish a new message with the response to the callback URL. Messages
created by callbacks are charged like any other message.
## How do I use Callbacks?
You can add a callback url in the `Upstash-Callback` header when publishing a
message. The value must be a valid URL.
```bash cURL theme={"system"}
curl -X POST \
https://qstash.upstash.io/v2/publish/https://my-api... \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer ' \
-H 'Upstash-Callback: ' \
-d '{ "hello": "world" }'
```
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
callback: "https://my-callback...",
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url="https://my-api...",
body={
"hello": "world",
},
callback="https://my-callback...",
)
```
The callback body sent to you will be a JSON object with the following fields:
```json theme={"system"}
{
"status": 200,
"header": { "key": ["value"] }, // Response header
"body": "YmFzZTY0IGVuY29kZWQgcm9keQ==", // base64 encoded response body
"retried": 2, // How many times we retried to deliver the original message
"maxRetries": 3, // Number of retries before the message assumed to be failed to delivered.
"sourceMessageId": "msg_xxx", // The ID of the message that triggered the callback
"topicName": "myTopic", // The name of the URL Group (topic) if the request was part of a URL Group
"endpointName": "myEndpoint", // The endpoint name if the endpoint is given a name within a topic
"url": "http://myurl.com", // The destination url of the message that triggered the callback
"method": "GET", // The http method of the message that triggered the callback
"sourceHeader": { "key": "value" }, // The http header of the message that triggered the callback
"sourceBody": "YmFzZTY0kZWQgcm9keQ==", // The base64 encoded body of the message that triggered the callback
"notBefore": "1701198458025", // The unix timestamp of the message that triggered the callback is/will be delivered in milliseconds
"createdAt": "1701198447054", // The unix timestamp of the message that triggered the callback is created in milliseconds
"scheduleId": "scd_xxx", // The scheduleId of the message if the message is triggered by a schedule
"callerIP": "178.247.74.179" // The IP address where the message that triggered the callback is published from
}
```
In Next.js you could use the following code to handle the callback:
```js theme={"system"}
// pages/api/callback.js
import { verifySignature } from "@upstash/qstash/nextjs";
function handler(req, res) {
// responses from qstash are base64-encoded
const decoded = atob(req.body.body);
console.log(decoded);
return res.status(200).end();
}
export default verifySignature(handler);
export const config = {
api: {
bodyParser: false,
},
};
```
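If you handle the callback outside Next.js, the same base64 decoding applies in any language. For example, a minimal Python sketch (the payload below is a stand-in for a real callback body):

```python
import base64
import json

# Stand-in for the JSON payload QStash POSTs to your callback URL
callback = {
    "status": 200,
    "body": base64.b64encode(b'{"ok": true}').decode(),
}

# The "body" field is the base64-encoded response body of the original request
decoded = base64.b64decode(callback["body"])
result = json.loads(decoded)
print(result)  # {'ok': True}
```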
We may truncate the response body if it exceeds your plan limits. You can check
your `Max Message Size` in the
[console](https://console.upstash.com/qstash?tab=details).
Make sure you verify the authenticity of the callback request made to your API
by
[verifying the signature](/qstash/features/security/#request-signing-optional).
## What is a Failure-Callback?
Failure callbacks are similar to callbacks, but they are called only when all retries are exhausted and
the message still cannot be delivered to the given endpoint.
This is designed to be a serverless alternative to [List messages to DLQ](/qstash/api/dlq/listMessages).
You can add a failure callback URL in the `Upstash-Failure-Callback` header when publishing a
message. The value must be a valid URL.
```bash cURL theme={"system"}
curl -X POST \
https://qstash.upstash.io/v2/publish/ \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer ' \
-H 'Upstash-Failure-Callback: ' \
-d '{ "hello": "world" }'
```
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
failureCallback: "https://my-callback...",
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url="https://my-api...",
body={
"hello": "world",
},
failure_callback="https://my-callback...",
)
```
The callback body sent to you will be a JSON object with the following fields:
```json theme={"system"}
{
"status": 400,
"header": { "key": ["value"] }, // Response header
"body": "YmFzZTY0IGVuY29kZWQgcm9keQ==", // base64 encoded response body
"retried": 3, // How many times we retried to deliver the original message
"maxRetries": 3, // Number of retries before the message assumed to be failed to delivered.
"dlqId": "1725323658779-0", // Dead Letter Queue id. This can be used to retrieve/remove the related message from DLQ.
"sourceMessageId": "msg_xxx", // The ID of the message that triggered the callback
"topicName": "myTopic", // The name of the URL Group (topic) if the request was part of a topic
"endpointName": "myEndpoint", // The endpoint name if the endpoint is given a name within a topic
"url": "http://myurl.com", // The destination url of the message that triggered the callback
"method": "GET", // The http method of the message that triggered the callback
"sourceHeader": { "key": "value" }, // The http header of the message that triggered the callback
"sourceBody": "YmFzZTY0kZWQgcm9keQ==", // The base64 encoded body of the message that triggered the callback
"notBefore": "1701198458025", // The unix timestamp of the message that triggered the callback is/will be delivered in milliseconds
"createdAt": "1701198447054", // The unix timestamp of the message that triggered the callback is created in milliseconds
"scheduleId": "scd_xxx", // The scheduleId of the message if the message is triggered by a schedule
"callerIP": "178.247.74.179" // The IP address where the message that triggered the callback is published from
}
```
You can also use a callback and failureCallback together!
## Configuring Callbacks
Publishes/enqueues for callbacks can also be configured with the same HTTP headers that are used to configure direct publishes/enqueues.
You can refer to headers that are used to configure `publishes` [here](/qstash/api/publish) and for `enqueues`
[here](/qstash/api/enqueue)
Instead of the `Upstash` prefix for headers, the `Upstash-Callback`/`Upstash-Failure-Callback` prefix can be used to configure callbacks as follows:
```
Upstash-Callback-Timeout
Upstash-Callback-Retries
Upstash-Callback-Delay
Upstash-Callback-Method
Upstash-Failure-Callback-Timeout
Upstash-Failure-Callback-Retries
Upstash-Failure-Callback-Delay
Upstash-Failure-Callback-Method
```
You can also forward headers to your callback endpoints as follows:
```
Upstash-Callback-Forward-MyCustomHeader
Upstash-Failure-Callback-Forward-MyCustomHeader
```
# Deduplication
Source: https://upstash.com/docs/qstash/features/deduplication
Messages can be deduplicated to prevent duplicate messages from being sent. When
a duplicate message is detected, it is accepted by QStash but not enqueued. This
can be useful when the connection between your service and QStash fails, and you
never receive the acknowledgement. You can simply retry publishing and be
sure that the message will be enqueued only once.
In case a message is a duplicate, we will accept the request and return the
messageID of the existing message. The only difference will be the response
status code: we send an HTTP `202 Accepted` code for a duplicate message.
## Deduplication ID
To deduplicate a message, you can send the `Upstash-Deduplication-Id` header
when publishing the message.
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Deduplication-Id: abcdef" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://my-api...'
```
```typescript TypeScript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
deduplicationId: "abcdef",
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url="https://my-api...",
body={
"hello": "world",
},
deduplication_id="abcdef",
)
```
## Content Based Deduplication
If you want to deduplicate messages automatically, you can set the
`Upstash-Content-Based-Deduplication` header to `true`.
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Content-Based-Deduplication: true" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/...'
```
```typescript TypeScript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
contentBasedDeduplication: true,
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url="https://my-api...",
body={
"hello": "world",
},
content_based_deduplication=True,
)
```
Content-based deduplication creates a unique deduplication ID for the message
based on the following fields:
* **Destination**: The URL Group or endpoint you are publishing the message to.
* **Body**: The body of the message.
* **Header**: This includes the `Content-Type` header and all headers that you
forwarded with the `Upstash-Forward-` prefix. See
[custom HTTP headers section](/qstash/howto/publishing#sending-custom-http-headers).
The deduplication window is 10 minutes. After that, messages with the same ID or content can be sent again.
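To build intuition, you can think of content-based deduplication as deriving an ID from those fields, so identical publishes map to the same ID. The sketch below only illustrates the idea; it is not QStash's actual algorithm:

```python
import hashlib

def illustrative_dedup_id(destination: str, body: bytes, forwarded: dict) -> str:
    """Hash destination, body, and forwarded headers into a stable ID."""
    h = hashlib.sha256()
    h.update(destination.encode())
    h.update(body)
    for key in sorted(forwarded):  # header order must not matter
        h.update(f"{key}:{forwarded[key]}".encode())
    return h.hexdigest()

headers = {"Content-Type": "application/json"}
a = illustrative_dedup_id("https://my-api...", b'{"hello": "world"}', headers)
b = illustrative_dedup_id("https://my-api...", b'{"hello": "world"}', headers)
c = illustrative_dedup_id("https://my-api...", b'{"hello": "mars"}', headers)

print(a == b)  # True: identical fields -> same ID -> deduplicated
print(a == c)  # False: different body -> different ID -> delivered separately
```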
# Delay
Source: https://upstash.com/docs/qstash/features/delay
When publishing a message, you can delay it for a certain amount of time before
it will be delivered to your API. See the [pricing table](https://upstash.com/pricing/qstash) for more information
On the free plan, the maximum allowed delay is **7 days**.
On the pay-as-you-go plan, the maximum allowed delay is **1 year**.
On fixed plans, the maximum allowed delay is **custom** (you may delay as long as needed).
## Relative Delay
Delay a message by a certain amount of time relative to the time the message was
published.
The format for the duration is a number followed by a unit suffix: `s` (seconds), `m` (minutes), `h` (hours), or `d` (days). Here are some examples:
* `10s` = 10 seconds
* `1m` = 1 minute
* `30m` = half an hour
* `2h` = 2 hours
* `7d` = 7 days
You can send this duration inside the `Upstash-Delay` header.
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Delay: 1m" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://my-api...'
```
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
delay: 60,
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url="https://my-api...",
body={
"hello": "world",
},
headers={
"test-header": "test-value",
},
delay="60s",
)
```
`Upstash-Delay` will get overridden by `Upstash-Not-Before` header when both are
used together.
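The duration strings follow a simple number-plus-unit pattern. As an illustration only (QStash parses the header value itself; this helper is not part of any SDK), here is how they map to seconds:

```python
import re

# Seconds per unit for the s/m/h/d suffixes used by Upstash-Delay
_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def duration_to_seconds(duration: str) -> int:
    match = re.fullmatch(r"(\d+)([smhd])", duration)
    if not match:
        raise ValueError(f"invalid duration: {duration!r}")
    value, unit = match.groups()
    return int(value) * _UNITS[unit]

print(duration_to_seconds("30m"))  # 1800
print(duration_to_seconds("7d"))   # 604800
```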
## Absolute Delay
Delay a message until a certain time in the future. The format is a unix
timestamp in seconds, based on the UTC timezone.
You can send the timestamp inside the `Upstash-Not-Before` header.
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Not-Before: 1657104947" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://my-api...'
```
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
notBefore: 1657104947,
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url="https://my-api...",
body={
"hello": "world",
},
headers={
"test-header": "test-value",
},
not_before=1657104947,
)
```
`Upstash-Not-Before` will override the `Upstash-Delay` header when both are used
together.
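Because the header expects seconds (not milliseconds), compute the timestamp accordingly; for example, in Python:

```python
import time

# Upstash-Not-Before takes a unix timestamp in seconds, UTC.
# To target delivery roughly one hour from now:
not_before = int(time.time()) + 60 * 60
print(not_before)
```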
## Delays in Schedules
Adding a delay in schedules is only possible via `Upstash-Delay`. The
delay will affect the messages that will be created by the schedule and not the
schedule itself.
For example, when you create a new schedule with a delay of `30s`, the messages
will be created when the schedule triggers but only delivered after 30 seconds.
# Dead Letter Queues
Source: https://upstash.com/docs/qstash/features/dlq
At times, your API may fail to process a request. This could be due to a bug in your code, a temporary issue with a third-party service, or even network issues.
QStash automatically retries messages that fail due to a temporary issue but eventually stops and moves the message to a dead letter queue to be handled manually.
Read more about retries [here](/qstash/features/retry).
## How to Use the Dead Letter Queue
You can manually republish messages from the dead letter queue in the console.
1. **Retry** - Republish the message and remove it from the dead letter queue. Republished messages are just like any other message and will be retried automatically if they fail.
2. **Delete** - Delete the message from the dead letter queue.
## Limitations
Dead letter queues are subject only to a retention period that depends on your plan. Messages are deleted when their retention period expires. See the “Max DLQ Retention” row on the [QStash Pricing](https://upstash.com/pricing/qstash) page.
# Flow Control
Source: https://upstash.com/docs/qstash/features/flowcontrol
FlowControl enables you to limit the number of messages sent to your endpoint by delaying delivery.
There are two limits that you can set with the FlowControl feature: [Rate](#rate-limit) and [Parallelism](#parallelism-limit).
If needed, both parameters can be [combined](#rate-and-parallelism-together).
For the `FlowControl`, you need to choose a key first. This key is used to count the number of calls made to your endpoint.
There is no limit to the number of keys you can use.
The rate/parallelism limits are not applied per `url`; they are applied per `Flow-Control-Key`.
Keep in mind that the rate/period and parallelism settings are stored separately with each publish. That means
if you change the rate/period or parallelism on a new publish, previously published messages are not affected; they keep their original flow-control configuration.
While older publishes are still pending alongside publishes with the new settings, QStash effectively allows
the highest rate/period or the highest parallelism among them. Eventually, once the older publishes are delivered, the new rate/period and parallelism take effect.
## Rate and Period Parameters
The `rate` parameter specifies the maximum number of calls allowed within a given period. The `period` parameter allows you to specify the time window over which the rate limit is enforced. By default, the period is set to 1 second, but you can adjust it to control how frequently calls are allowed. For example, you can set a rate of 10 calls per minute as follows:
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
await client.publishJSON({
url: "https://example.com",
body: { hello: "world" },
flowControl: { key: "USER_GIVEN_KEY", rate: 10, period: "1m" },
});
```
```bash cURL theme={"system"}
curl -XPOST -H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Flow-Control-Key:USER_GIVEN_KEY" \
-H "Upstash-Flow-Control-Value:rate=10,period=1m" \
'https://qstash.upstash.io/v2/publish/https://example.com' \
-d '{"message":"Hello, World!"}'
```
## Parallelism Limit
The parallelism limit is the number of calls that can be active at the same time.
Active means that a call has been made to your endpoint and the response has not been received yet.
You can set the parallelism limit to 10 calls active at the same time as follows:
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
await client.publishJSON({
url: "https://example.com",
body: { hello: "world" },
flowControl: { key: "USER_GIVEN_KEY", parallelism: 10 },
});
```
```bash cURL theme={"system"}
curl -XPOST -H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Flow-Control-Key:USER_GIVEN_KEY" \
-H "Upstash-Flow-Control-Value:parallelism=10" \
'https://qstash.upstash.io/v2/publish/https://example.com' \
-d '{"message":"Hello, World!"}'
```
You can also use the REST API to check how many messages are waiting due to the parallelism limit.
See the [API documentation](/qstash/api/flow-control/get) for more details.
## Rate, Parallelism, and Period Together
All three parameters can be combined. For example, with a rate of 10 per minute (period of 1 minute) and a parallelism of 20, QStash will trigger 10 calls in the first minute and another 10 in the next. If none of them have finished by then, the parallelism limit of 20 is reached, and QStash will wait until a call completes before triggering another.
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
await client.publishJSON({
url: "https://example.com",
body: { hello: "world" },
flowControl: { key: "USER_GIVEN_KEY", rate: 10, parallelism: 20, period: "1m" },
});
```
```bash cURL theme={"system"}
curl -XPOST -H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Flow-Control-Key:USER_GIVEN_KEY" \
-H "Upstash-Flow-Control-Value:rate=10,parallelism=20,period=1m" \
'https://qstash.upstash.io/v2/publish/https://example.com' \
-d '{"message":"Hello, World!"}'
```
## Monitor
You can monitor the wait list size for each of your flow control keys from the `FlowControl` tab in the console.
You can also get the same information using the REST API:
* [List All Flow Control Keys](/qstash/api/flow-control/list).
* [Single Flow Control Key](/qstash/api/flow-control/get).
# Queues
Source: https://upstash.com/docs/qstash/features/queues
The queue concept in QStash allows ordered delivery (FIFO).
See the [API doc](/qstash/api/queues/upsert) for the full list of related REST APIs.
Here we list common use cases for queues and how to use them.
## Ordered Delivery
With Queues, the ordered delivery is guaranteed by default.
This means:
* Your messages will be queued without blocking the REST API and sent one by one in FIFO order. Queued means a [CREATED](/qstash/howto/debug-logs) event will be logged.
* The next message will wait for retries of the current one if it cannot be delivered because your endpoint returns a non-2xx status code.
In other words, the next message will become [ACTIVE](/qstash/howto/debug-logs) only after the previous message is either [DELIVERED](/qstash/howto/debug-logs) or
[FAILED](/qstash/howto/debug-logs).
* Next message will wait for [callbacks](/qstash/features/callbacks#what-is-a-callback) or [failure callbacks](/qstash/features/callbacks#what-is-a-failure-callback) to finish.
```bash cURL theme={"system"}
curl -XPOST -H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
'https://qstash.upstash.io/v2/enqueue/my-queue/https://example.com' -d '{"message":"Hello, World!"}'
```
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
const queue = client.queue({
queueName: "my-queue"
})
await queue.enqueueJSON({
url: "https://example.com",
body: {
"Hello": "World"
}
})
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.enqueue_json(
queue="my-queue",
url="https://example.com",
body={
"Hello": "World",
},
)
```
## Controlled Parallelism
For parallelism limits, we introduced an easier and less restricted API with publish.
Please check the [Flow Control](/qstash/features/flowcontrol) page for detailed information.
Setting parallelism with queues will be deprecated at some point.
If you want to make sure your endpoint is not overwhelmed, but still want more than one-by-one delivery for better throughput, you can use controlled parallelism with queues.
By default, queues have a parallelism of 1.
Depending on your [plan](https://upstash.com/pricing/qstash), you can configure the parallelism of your queues as follows:
```bash cURL theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/queues/ \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d '{
"queueName": "my-queue",
"parallelism": 5,
}'
```
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
const queue = client.queue({
queueName: "my-queue"
})
await queue.upsert({
parallelism: 5,
})
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.queue.upsert("my-queue", parallelism=5)
```
After that, you can use the `enqueue` path to send your messages.
```bash cURL theme={"system"}
curl -XPOST -H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
'https://qstash.upstash.io/v2/enqueue/my-queue/https://example.com' -d '{"message":"Hello, World!"}'
```
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
const queue = client.queue({
queueName: "my-queue"
})
await queue.enqueueJSON({
url: "https://example.com",
body: {
"Hello": "World"
}
})
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.enqueue_json(
queue="my-queue",
url="https://example.com",
body={
"Hello": "World",
},
)
```
You can check the parallelism of your queues with the following API:
```bash cURL theme={"system"}
curl https://qstash.upstash.io/v2/queues/my-queue \
-H "Authorization: Bearer "
```
```typescript TypeScript theme={"system"}
const client = new Client({ token: "" });
const queue = client.queue({
queueName: "my-queue"
})
const res = await queue.get()
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.queue.get("my-queue")
```
# Retry
Source: https://upstash.com/docs/qstash/features/retry
QStash will abort a delivery attempt if **the HTTP call to your endpoint does not return within the plan-specific Max HTTP Response Duration**.\
See the current limits on the QStash pricing page.
Many things can go wrong in a serverless environment. If your API does not
respond with a success status code (2XX), we retry the request to ensure every
message will be delivered.
The maximum number of retries depends on your current plan. By default, we retry
the maximum amount of times, but you can set it lower by sending the
`Upstash-Retries` header:
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Retries: 2" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://my-api...'
```
```typescript TypeScript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
retries: 2,
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url="https://my-api...",
body={
"hello": "world",
},
retries=2,
)
```
The backoff algorithm calculates the retry delay based on the number of retries.
Each delay is capped at 1 day.
```
n = how many times this request has been retried
delay = min(86400, e ** (2.5*n)) // in seconds
```
| n | delay |
| - | ------ |
| 1 | 12s |
| 2 | 2m28s |
| 3 | 30m8s |
| 4 | 6h7m6s |
| 5 | 24h |
| 6 | 24h |
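As a sanity check, the delays in the table can be reproduced with a few lines of TypeScript (a standalone sketch; `backoffDelay` is not part of the QStash SDK):

```typescript theme={"system"}
// Default backoff: delay = min(86400, e^(2.5 * n)) seconds, capped at one day.
function backoffDelay(n: number): number {
  return Math.min(86400, Math.exp(2.5 * n));
}

for (let n = 1; n <= 6; n++) {
  console.log(`n=${n}: ${Math.round(backoffDelay(n))}s`);
}
```

For `n = 1` this yields about 12 seconds, and from `n = 5` onward the one-day cap applies, matching the table.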
## Custom Retry Delay
You can customize the delay between retry attempts by using the `Upstash-Retry-Delay` header when publishing a message. This allows you to override the default exponential backoff with your own mathematical expressions.
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Retries: 3" \
-H "Upstash-Retry-Delay: pow(2, retried) * 1000" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://my-api...'
```
```typescript TypeScript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
retries: 3,
retryDelay: "pow(2, retried) * 1000", // 2^retried * 1000ms
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url="https://my-api...",
body={
"hello": "world",
},
retries=3,
retry_delay="pow(2, retried) * 1000", # 2^retried * 1000ms
)
```
The `retryDelay` expression can use mathematical functions and the special variable `retried` (current retry attempt count starting from 0).
**Supported functions:**
* `pow` - Power function
* `sqrt` - Square root
* `abs` - Absolute value
* `exp` - Exponential
* `floor` - Floor function
* `ceil` - Ceiling function
* `round` - Rounding function
* `min` - Minimum of values
* `max` - Maximum of values
**Examples:**
* `1000` - Fixed 1 second delay
* `1000 * (1 + retried)` - Linear backoff: 1s, 2s, 3s, 4s...
* `pow(2, retried) * 1000` - Exponential backoff: 1s, 2s, 4s, 8s...
* `max(1000, pow(2, retried) * 100)` - Exponential with minimum 1s delay
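The example expressions above can be mirrored in plain TypeScript to preview the resulting delays (the actual evaluation happens on the QStash side; these helpers are only illustrative):

```typescript theme={"system"}
// `retried` is the current retry attempt count, starting from 0.
const linear = (retried: number) => 1000 * (1 + retried);             // 1s, 2s, 3s, ...
const exponential = (retried: number) => Math.pow(2, retried) * 1000; // 1s, 2s, 4s, 8s, ...
const clamped = (retried: number) => Math.max(1000, Math.pow(2, retried) * 100);

console.log([0, 1, 2, 3].map(exponential)); // delays in milliseconds
```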
## Retry-After Headers
Instead of using the default backoff algorithm, you can specify when QStash should retry your message.
To do this, include one of the following headers in your response to the QStash request.
* Retry-After
* X-RateLimit-Reset
* X-RateLimit-Reset-Requests
* X-RateLimit-Reset-Tokens
These headers can be set to a value in seconds, the RFC1123 date format, or a duration format (e.g., 6m5s).
For the duration format, valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Note that you can only delay retries up to the maximum value of the default backoff algorithm, which is one day.
If you specify a value beyond this limit, the backoff algorithm will be applied.
This feature is particularly useful if your application has rate limits, ensuring retries are scheduled appropriately without wasting attempts during restricted periods.
```
Retry-After: 0 // Next retry will be scheduled immediately without any delay.
Retry-After: 10 // Next retry will be scheduled after a 10-second delay.
Retry-After: 6m5s // Next retry will be scheduled after 6 minutes 5 seconds delay.
Retry-After: Sun, 27 Jun 2024 12:16:24 GMT // Next retry will be scheduled for the specified date, within the allowable limits.
```
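For example, an endpoint backed by its own rate limiter might respond like this when it is throttled (a sketch; the plain response object and the `resetInSeconds` value are illustrative — adapt them to your framework):

```typescript theme={"system"}
// Tell QStash when to retry instead of relying on the default backoff.
function rateLimitedResponse(resetInSeconds: number) {
  return {
    status: 429, // any non-2xx status triggers a retry
    headers: { "Retry-After": String(resetInSeconds) },
    body: "rate limited, retry later",
  };
}

const res = rateLimitedResponse(10); // next retry is scheduled ~10 seconds out
```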
## Upstash-Retried Header
QStash adds the `Upstash-Retried` header to requests sent to your API. This
indicates how many times the request has been retried.
```
Upstash-Retried: 0 // This is the first attempt
Upstash-Retried: 1 // This request has been sent once before and now is the second attempt
Upstash-Retried: 2 // This request has been sent twice before and now is the third attempt
```
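You can read this header in your handler, for example to log or alert only on later attempts (a sketch; `headers` stands in for your framework's request headers):

```typescript theme={"system"}
// Upstash-Retried starts at 0 on the first attempt.
function attemptNumber(headers: Record<string, string>): number {
  return Number(headers["upstash-retried"] ?? "0") + 1;
}

attemptNumber({ "upstash-retried": "2" }); // third attempt
```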
## Non-Retryable Error
By default, QStash retries requests for any response that does not return a successful 2XX status code.
To explicitly disable retries for a given message, respond with a 489 status code and include the header `Upstash-NonRetryable-Error: true`.
When this header is present, QStash will immediately mark the message as failed and skip any further retry attempts. The message will then be forwarded to the Dead Letter Queue (DLQ) for manual review and resolution.
This mechanism is particularly useful in scenarios where retries are generally enabled but should be bypassed for specific known errors—such as invalid payloads or non-recoverable conditions.
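A handler that detects a permanently invalid payload could respond like this (a sketch; the plain response object is illustrative — adapt it to your framework):

```typescript theme={"system"}
// Respond with 489 and the Upstash-NonRetryable-Error header so QStash
// skips further retries and moves the message straight to the DLQ.
function nonRetryableResponse(reason: string) {
  return {
    status: 489,
    headers: { "Upstash-NonRetryable-Error": "true" },
    body: reason,
  };
}

const res = nonRetryableResponse("invalid payload");
```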
# Schedules
Source: https://upstash.com/docs/qstash/features/schedules
In addition to sending a message once, you can create a schedule, and we will
publish the message repeatedly at the given interval. To create a schedule, you
simply need to add the `Upstash-Cron` header to your request.
Schedules can be configured using `cron` expressions.
[crontab.guru](https://crontab.guru/) is a great tool for understanding and
creating cron expressions.
By default, we evaluate cron expressions in `UTC`.\
If you want to run your schedule in a specific timezone, see the section on
[Timezones](#timezones).
The following request would create a schedule that will automatically publish
the message every minute:
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.create({
destination: "https://example.com",
cron: "* * * * *",
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.schedule.create(
destination="https://example.com",
cron="* * * * *",
)
```
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Cron: * * * * *" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/schedules/https://example.com'
```
All of the [other config options](/qstash/howto/publishing#optional-parameters-and-configuration)
can still be used.
It can take up to 60 seconds for the schedule to be loaded on an active node and
triggered for the first time.
You can see and manage your schedules in the
[Upstash Console](https://console.upstash.com/qstash).
### Scheduling to a URL Group
Instead of scheduling a message to a specific URL, you can also create a
schedule that publishes to a URL Group. Simply use either the URL Group name or its ID:
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.create({
destination: "urlGroupName",
cron: "* * * * *",
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.schedule.create(
destination="url-group-name",
cron="* * * * *",
)
```
```bash cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Cron: * * * * *" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/schedules/'
```
### Scheduling to a Queue
You can schedule an item to be added to a queue at a specified time.
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.create({
destination: "https://example.com",
cron: "* * * * *",
queueName: "yourQueueName",
});
```
```bash cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Cron: * * * * *" \
-H "Upstash-Queue-Name: yourQueueName" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/schedules/https://example.com'
```
### Overwriting an existing schedule
You can pass `scheduleId` explicitly to overwrite an existing schedule, or simply to create a schedule with the given schedule ID.
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.create({
destination: "https://example.com",
scheduleId: "existingScheduleId",
cron: "* * * * *",
});
```
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Cron: * * * * *" \
-H "Upstash-Schedule-Id: existingScheduleId" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/schedules/https://example.com'
```
### Timezones
By default, cron expressions are evaluated in `UTC`.\
You can specify a different timezone using the `CRON_TZ` prefix directly inside
the cron expression. All [IANA timezones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
are supported.
For example, this schedule runs every day at `04:00 AM` in New York time:
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.create({
destination: "https://example.com",
cron: "CRON_TZ=America/New_York 0 4 * * *",
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.schedule.create(
destination="https://example.com",
cron="CRON_TZ=America/New_York 0 4 * * *",
)
```
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Cron: CRON_TZ=America/New_York 0 4 * * *" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/schedules/https://example.com'
```
# Security
Source: https://upstash.com/docs/qstash/features/security
### Request Authorization
When interacting with the QStash API, you will need an authorization token. You
can get your token from the [Console](https://console.upstash.com/qstash).
Send this token along with every request made to `QStash` inside the
`Authorization` header like this:
```
"Authorization": "Bearer "
```
### Request Signing (optional)
Because your endpoint needs to be publicly available, we recommend you verify
the authenticity of each incoming request.
#### The `Upstash-Signature` header
With each request, we send a JWT inside the `Upstash-Signature` header.
You can learn more about JWTs [here](https://jwt.io).
An example token would be:
**Header**
```json theme={"system"}
{
"alg": "HS256",
"typ": "JWT"
}
```
**Payload**
```json theme={"system"}
{
"iss": "Upstash",
"sub": "https://qstash-remote.requestcatcher.com/test",
"exp": 1656580612,
"nbf": 1656580312,
"iat": 1656580312,
"jti": "jwt_67kxXD6UBAk7DqU6hzuHMDdXFXfP",
"body": "qK78N0k3pNKI8zN62Fq2Gm-_LtWkJk1z9ykio3zZvY4="
}
```
The JWT is signed using the `HMAC SHA256` algorithm with your current signing key
and includes the following claims:
#### Claims
##### `iss`
The issuer field is always `Upstash`.
##### `sub`
The URL of your endpoint that this request is sent to.
For example, when you are using a Next.js app on Vercel, this would look something
like `https://my-app.vercel.app/api/endpoint`.
##### `exp`
A unix timestamp in seconds after which you should no longer accept this
request. Our JWTs have a lifetime of 5 minutes by default.
##### `iat`
A unix timestamp in seconds when this JWT was created.
##### `nbf`
A unix timestamp in seconds before which you should not accept this request.
##### `jti`
A unique id for this token.
##### `body`
The body field is a base64-encoded SHA-256 hash of the request body. We use the
URL-safe encoding specified in
[RFC 4648](https://datatracker.ietf.org/doc/html/rfc4648#section-5).
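As a minimal sketch, the `body` claim can be recomputed from the raw request body with Node's built-in `crypto` module and compared against the claim inside the verified JWT (`bodyHash` is an illustrative name):

```typescript theme={"system"}
import { createHash } from "node:crypto";

// SHA-256 of the raw body, base64-encoded with the URL-safe alphabet
// (RFC 4648 section 5); padding is kept, as in the example payload above.
function bodyHash(rawBody: string | Buffer): string {
  return createHash("sha256")
    .update(rawBody)
    .digest("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_");
}

// Compare bodyHash(rawBody) with the `body` claim of the verified JWT.
```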
#### Verifying the signature
See [how to verify the signature](/qstash/howto/signature).
# URL Groups
Source: https://upstash.com/docs/qstash/features/url-groups
Sending messages to a single endpoint and not having to worry about retries is
already quite useful, but we also added the concept of URL Groups to QStash.
In short, a URL Group is just a namespace you can publish messages to, the same
way you publish a message to an endpoint directly.
After creating a URL Group, you can add one or multiple endpoints to it. An endpoint
is defined by a publicly available URL to which the request will be sent.
When you publish a message to a URL Group, it will be fanned out and sent to all the
subscribed endpoints.
## When should I use URL Groups?
URL Groups decouple your message producers from consumers by grouping one or more
endpoints into a single namespace.
Here's an example: You have a serverless function which is invoked with each
purchase in your e-commerce site. You want to send an email to the customer after
the purchase. Inside the function, you submit the URL `api/sendEmail` to
QStash. Later, if you want to send a Slack notification, you need to update the
serverless function, adding another call to QStash to submit
`api/sendNotification`. In this example, you need to update and redeploy the
serverless function each time you change (or add) an endpoint.
If you create a URL Group `product-purchase` and produce messages to that URL Group in
the function, then you can add or remove endpoints by only updating the URL Group.
URL Groups give you freedom to modify endpoints without touching the backend
implementation.
Check [here](/qstash/howto/publishing#publish-to-url-group) to learn how to publish
to URL Groups.
## How URL Groups work
When you publish a message to a URL Group, we will enqueue a unique task for each
subscribed endpoint and guarantee successful delivery to each one of them.
[](https://mermaid.live/edit#pako:eNp1kl1rgzAUhv9KyOWoddXNtrkYVNdf0F0U5ijRHDVMjctHoRT_-2KtaztUQeS8j28e8JxxKhhggpWmGt45zSWtnKMX13GN7PX59IUc5w19iIanBDUmKbkq-qwfXuKdSVQqeQLssK1ZI3itVQ9dekdzdO6Ja9ntKKq-DxtEoP4xYGCIr-OOGCoOG4IYlPwIcqBu0V0XQRK0PE0w9lyCvP1-iB1n1CgcNwofjcJpo_Cua8ooHDWadIrGnaJHp2jaKbrrmnKK_jl1d9s98AxXICvKmd2fy8-MsS6gghgT-5oJCUrH2NKWNA2zi7BlXAuJSUZLBTNMjRa7U51ioqWBAbpu4R9VCsrAfnTG-tR0u5pzpW1lKuqM593cyNKOC60bRVy3i-c514VJ5qmoXMVZQaUujuvADbxgRT0fgqVPX32fpclivcq8l0XGls8Lj-K2bX8Bx2nzPg)
Consider this scenario: You have a URL Group with 3 subscribed endpoints.
When you publish a message to the URL Group, we internally create a task for
each subscribed endpoint and handle each task's retry mechanism in isolation
from the others.
## How to create a URL Group
Please refer to the howto [here](/qstash/howto/url-group-endpoint).
# Debug Logs
Source: https://upstash.com/docs/qstash/howto/debug-logs
To debug the logs, first you need to understand the different states a message can
be in.
Only the last 10,000 logs are kept; older logs are removed automatically.
## Lifecycle of a Message
To understand the lifecycle of each message, we'll look at the following chart:
[comment]: # "https://mermaid.live/edit#pako:eNptU9uO2jAQ_RXLjxVXhyTED5UQpBUSZdtAK7VNtfLGTmIpsZHjrEoR_17HBgLdztPMmXPm4ssJZpIyiGGjiWYrTgpF6uErSgUw9vPdLzAcvgfLJF7s45UDL4FNbEnN6FLWB9lwzVz-EbO0xXK__hb_L43Bevv8OXn6mMS7nSPYSf6tcgIXc5zOkniffH9TvrM4SZ4Sm3GcXne-rLDYLuPNcxJ_-Rrvrrs4cGMiRxLS9K1YroHM3yowqFnTkIKBjIiMVYA3xqsqRp3azWQLu3EwaFUFFNOtEg3ICa9uU91xV_HGuIltcM9v2iwz_fpN-u0_LNYbyzdcdQQVr7k2PsnK6yx90Y5vLtXBF-ED1h_CA5wKOICF4hRirVo2gDVTNelCeOoYKdQlq1kKsXEpy0lb6RSm4mxkByJ-SFlflUq2RQlxTqrGRO2B9u_uhpJWy91RZFeNY8WUa6lupEoSykx4gvp46J5wwRtt-mVS5LzocHOABi61PjR4PO7So4Lrsn0ZZbIeN5yWROnyNQrGAQrmBHksCD3iex7NXqbRPEezaU7DyRQReD4PILP9P7n_Yr-N2YYJM8RStkJDHHqRXbfr_RviaDbyQg9NJz7yg9ksCAfwCHGARn6AfC9CKJqiiT83lf_Y85mM5uEsurfzX7VrENs"
Either you or a previously set up schedule will create a message.
When a message is ready for execution, it becomes `ACTIVE` and a delivery to
your API is attempted.
If your API responds with a status code between `200 - 299`, the task is
considered successful and will be marked as `DELIVERED`.
Otherwise, if there are retries left, the message moves to `RETRY` and is retried. If all retries are exhausted, the task has `FAILED` and the message is moved to the DLQ.
During all this, a message can be cancelled via [DELETE /v2/messages/:messageId](https://docs.upstash.com/qstash/api/messages/cancel). When the request is received, `CANCEL_REQUESTED` is logged first.
If retries are not exhausted yet, at the next delivery time the message will be marked as `CANCELLED` and completely removed from the system.
## Console
Head over to the [Upstash Console](https://console.upstash.com/qstash) and go to
the `Logs` tab, where you can see the latest status of your messages.
# Delete Schedules
Source: https://upstash.com/docs/qstash/howto/delete-schedule
Deleting schedules can be done using the [schedules api](/qstash/api/schedules/remove).
```shell cURL theme={"system"}
curl -XDELETE \
-H 'Authorization: Bearer XXX' \
'https://qstash.upstash.io/v2/schedules/'
```
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.delete("");
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.schedule.delete("")
```
Deleting a schedule does not stop existing messages from being delivered. It
only stops the schedule from creating new messages.
## Schedule ID
If you don't know the schedule ID, you can get a list of all of your schedules
from [here](/qstash/api/schedules/list).
```shell cURL theme={"system"}
curl \
-H 'Authorization: Bearer XXX' \
'https://qstash.upstash.io/v2/schedules'
```
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const allSchedules = await client.schedules.list();
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.schedule.list()
```
# Handling Failures
Source: https://upstash.com/docs/qstash/howto/handling-failures
Sometimes endpoints fail for various reasons, such as network or server issues.
In such cases, QStash offers a few options to handle these failures.
## Failure Callbacks
When publishing a message, you can provide a failure callback that will be called if the message cannot be delivered.
You can read more about callbacks [here](/qstash/features/callbacks).
With the failure callback, you can add custom logic such as logging the failure or sending an alert to the team.
Once you handle the failure, you can [delete it from the dead letter queue](/qstash/api/dlq/deleteMessage).
```bash cURL theme={"system"}
curl -X POST \
https://qstash.upstash.io/v2/publish/ \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer ' \
-H 'Upstash-Failure-Callback: ' \
-d '{ "hello": "world" }'
```
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
failureCallback: "https://my-callback...",
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url="https://my-api...",
body={
"hello": "world",
},
failure_callback="https://my-callback...",
)
```
## Dead Letter Queue
If you don't want to handle the failure immediately, you can use the dead letter queue (DLQ) to store the failed messages.
You can read more about the dead letter queue [here](/qstash/features/dlq).
Failed messages are automatically moved to the dead letter queue upon failure, and can be retried from the console or
the API by [retrieving the message](/qstash/api/dlq/getMessage) then [publishing it](/qstash/api/publish).
# Local Development
Source: https://upstash.com/docs/qstash/howto/local-development
QStash requires a publicly available API to send messages to.
During development when applications are not yet deployed, developers typically need to expose their local API by creating a public tunnel.
While local tunneling works seamlessly, it requires code changes between development and production environments and increases friction for developers.
To simplify the development process, Upstash provides the QStash CLI, which allows you to run a development server locally for testing and development.
The development server fully supports all QStash features, including Schedules, URL Groups, Workflows, and Event Logs. Since the development server operates entirely in-memory, all data is reset when the server restarts.
You can download and run the QStash CLI executable binary in several ways:
## NPX (Node Package Executable)
Install the binary via the `@upstash/qstash-cli` NPM package:
```shell theme={"system"}
npx @upstash/qstash-cli dev

# Start on a different port
npx @upstash/qstash-cli dev -port=8081
```
Once you start the local server, you can go to the QStash tab in the Upstash Console and enable local mode, which will allow you to publish requests and monitor messages with the local server.
## Docker
QStash CLI is available as a Docker image through our public AWS ECR repository:
```shell theme={"system"}
# Pull the image
docker pull public.ecr.aws/upstash/qstash:latest

# Run the image
docker run -p 8080:8080 public.ecr.aws/upstash/qstash:latest qstash dev
```
## Artifact Repository
You can download the binary directly from our artifact repository without using a package manager:
[https://artifacts.upstash.com/#qstash/versions/](https://artifacts.upstash.com/#qstash/versions/)
Select the appropriate version, architecture, and operating system for your platform.
After extracting the archive file, run the executable:
```
$ ./qstash dev
```
## QStash CLI
Currently, the only available command for QStash CLI is `dev`, which starts a development server instance.
```
$ ./qstash dev --help
Usage of dev:
-port int
The port to start HTTP server at [env QSTASH_DEV_PORT] (default 8080)
-quota string
The quota of users [env QSTASH_DEV_QUOTA] (default "payg")
```
There are predefined test users available. You can configure the quota type of users using the `-quota` option, with available options being `payg` and `pro`.
These quotas don't affect performance but allow you to simulate different server limits based on the subscription tier.
After starting the development server using any of the methods above, it will display the necessary environment variables.
Select and copy the credentials from one of the following test users:
```shell User 1 theme={"system"}
QSTASH_URL="http://localhost:8080"
QSTASH_TOKEN="eyJVc2VySUQiOiJkZWZhdWx0VXNlciIsIlBhc3N3b3JkIjoiZGVmYXVsdFBhc3N3b3JkIn0="
QSTASH_CURRENT_SIGNING_KEY="sig_7kYjw48mhY7kAjqNGcy6cr29RJ6r"
QSTASH_NEXT_SIGNING_KEY="sig_5ZB6DVzB1wjE8S6rZ7eenA8Pdnhs"
```
```shell User 2 theme={"system"}
QSTASH_URL="http://localhost:8080"
QSTASH_TOKEN="eyJVc2VySUQiOiJ0ZXN0VXNlcjEiLCJQYXNzd29yZCI6InRlc3RQYXNzd29yZCJ9"
QSTASH_CURRENT_SIGNING_KEY="sig_7GVPjvuwsfqF65iC8fSrs1dfYruM"
QSTASH_NEXT_SIGNING_KEY="sig_5NoELc3EFnZn4DVS5bDs2Nk4b7Ua"
```
```shell User 3 theme={"system"}
QSTASH_URL="http://localhost:8080"
QSTASH_TOKEN="eyJVc2VySUQiOiJ0ZXN0VXNlcjIiLCJQYXNzd29yZCI6InRlc3RQYXNzd29yZCJ9"
QSTASH_CURRENT_SIGNING_KEY="sig_6jWGaWRxHsw4vMSPJprXadyvrybF"
QSTASH_NEXT_SIGNING_KEY="sig_7qHbvhmahe5GwfePDiS5Lg3pi6Qx"
```
```shell User 4 theme={"system"}
QSTASH_URL="http://localhost:8080"
QSTASH_TOKEN="eyJVc2VySUQiOiJ0ZXN0VXNlcjMiLCJQYXNzd29yZCI6InRlc3RQYXNzd29yZCJ9"
QSTASH_CURRENT_SIGNING_KEY="sig_5T8FcSsynBjn9mMLBsXhpacRovJf"
QSTASH_NEXT_SIGNING_KEY="sig_7GFR4YaDshFcqsxWRZpRB161jguD"
```
Currently, there is no GUI client available for the development server. You can use QStash SDKs to fetch resources like event logs.
## License
The QStash development server is licensed under the [Development Server License](/qstash/misc/license), which restricts its use to development and testing purposes only.
It is not permitted to use it in production environments. Please refer to the full license text for details.
# Local Tunnel
Source: https://upstash.com/docs/qstash/howto/local-tunnel
QStash requires a publicly available API to send messages to.
The recommended approach is to run a [development server](/qstash/howto/local-development) locally and use it for development purposes.
Alternatively, you can set up a local tunnel to expose your API, enabling QStash to send requests directly to your application during development.
## localtunnel.me
[localtunnel.me](https://github.com/localtunnel/localtunnel) is a free service that provides
a public endpoint for your local development.
It's as simple as running
```
npx localtunnel --port 3000
```
replacing `3000` with the port your application is running on.
This will give you a public URL like `https://good-months-leave.loca.lt` which can be used
as your QStash URL.
If you run into issues, you may need to set the `Upstash-Forward-bypass-tunnel-reminder` header to
any value to bypass the reminder message.
## ngrok
[ngrok](https://ngrok.com) is a free service that provides you with a public
endpoint and forwards all traffic to your localhost.
### Sign up
Create a new account on
[dashboard.ngrok.com/signup](https://dashboard.ngrok.com/signup) and follow the
[instructions](https://dashboard.ngrok.com/get-started/setup) to download the
ngrok CLI and connect your account:
```bash theme={"system"}
ngrok config add-authtoken XXX
```
### Start the tunnel
Choose the port where your application is running. Here I'm forwarding to port
3000, because Next.js is using it.
```bash theme={"system"}
$ ngrok http 3000
Session Status online
Account Andreas Thomas (Plan: Free)
Version 3.1.0
Region Europe (eu)
Latency -
Web Interface http://127.0.0.1:4040
Forwarding https://e02f-2a02-810d-af40-5284-b139-58cc-89df-b740.eu.ngrok.io -> http://localhost:3000
Connections ttl opn rt1 rt5 p50 p90
0 0 0.00 0.00 0.00 0.00
```
### Publish a message
Now copy the `Forwarding` URL and use it as the destination in QStash. Make sure to
append the path of your API (`/api/webhooks` in this case).
```
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://e02f-2a02-810d-af40-5284-b139-58cc-89df-b740.eu.ngrok.io/api/webhooks'
```
### Debug
In case messages are not delivered or something else doesn't work as expected,
you can go to [http://127.0.0.1:4040](http://127.0.0.1:4040) to see what ngrok
is doing.
# Publish Messages
Source: https://upstash.com/docs/qstash/howto/publishing
Publishing a message is as easy as sending an HTTP request to the `/publish`
endpoint. All you need is a valid URL for your destination.
Destination URLs must always include the protocol (`http://` or `https://`)
## The message
The message you want to send is passed in the request body. Upstash does not
use, parse, or validate the body, so you can send any kind of data you want. We
suggest you add a `Content-Type` header to your request to make sure your
destination API knows what kind of data you are sending.
## Sending custom HTTP headers
In addition to sending the message itself, you can also forward HTTP headers.
Simply add them prefixed with `Upstash-Forward-` and we will include them in the
message.
#### Here's an example
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H 'Upstash-Forward-My-Header: my-value' \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://example.com'
```
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://example.com",
body: { "hello": "world" },
headers: { "my-header": "my-value" },
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url="https://my-api...",
body={
"hello": "world",
},
headers={
"my-header": "my-value",
},
)
```
In this case, we would deliver a `POST` request to `https://example.com` with
the following body and headers:
```json theme={"system"}
// body
{ "hello": "world" }
// headers
My-Header: my-value
Content-Type: application/json
```
#### What happens after publishing?
When you publish a message, it will be durably stored in an
[Upstash Redis database](https://upstash.com/redis). Then we try to deliver the
message to your chosen destination API. If your API is down or does not respond
with a success status code (200-299), the message will be retried and delivered
when it comes back online. You do not need to worry about retrying messages or
ensuring that they are delivered.
By default, multiple messages published to QStash are delivered to your API in parallel.
## Publish to URL Group
URL Groups allow you to publish a single message to multiple API endpoints. To
learn more about URL Groups, see the [URL Groups section](/qstash/features/url-groups).
Publishing to a URL Group is very similar to publishing to a single destination. All
you need to do is replace the URL in the `/publish` endpoint with the URL Group
name.
```
https://qstash.upstash.io/v2/publish/https://example.com
https://qstash.upstash.io/v2/publish/my-url-group
```
```shell cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/my-url-group'
```
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
urlGroup: "my-url-group",
body: { "hello": "world" },
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(
url_group="my-url-group",
body={
"hello": "world",
},
)
```
## Optional parameters and configuration
QStash supports a number of optional parameters and configuration that you can
use to customize the delivery of your message. All configuration is done using
HTTP headers.
# Receiving Messages
Source: https://upstash.com/docs/qstash/howto/receiving
What do we send to your API?
When you publish a message, QStash will deliver it to your chosen destination. This is a brief overview of what a request to your API looks like.
## Headers
We are forwarding all headers that have been prefixed with `Upstash-Forward-` to your API. [Learn more](/qstash/howto/publishing#sending-custom-http-headers)
In addition to your custom headers, we're sending these headers as well:
| Header | Description |
| --------------------- | -------------------------------------------------------------------- |
| `User-Agent` | Will be set to `Upstash-QStash` |
| `Content-Type` | The original `Content-Type` header |
| `Upstash-Topic-Name` | The URL Group (topic) name if sent to a URL Group |
| `Upstash-Signature` | The signature you need to verify [See here](/qstash/howto/signature) |
| `Upstash-Retried` | How many times the message has been retried so far. Starts at 0. |
| `Upstash-Message-Id` | The message id of the message. |
| `Upstash-Schedule-Id` | The schedule id of the message if it is related to a schedule. |
| `Upstash-Caller-Ip` | The IP address of the publisher of this message. |
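For example, a handler might pull the delivery metadata out of these headers like so (an illustrative sketch; how you access request headers depends on your web framework):

```python theme={"system"}
def parse_qstash_headers(headers: dict) -> dict:
    """Extract QStash delivery metadata from an incoming request's headers."""
    return {
        "message_id": headers.get("Upstash-Message-Id"),
        # How many times the message has been retried so far; starts at 0
        "retried": int(headers.get("Upstash-Retried", "0")),
        # Only present when the message was produced by a schedule
        "schedule_id": headers.get("Upstash-Schedule-Id"),
        # Name of the URL Group (topic), if the message was sent to one
        "url_group": headers.get("Upstash-Topic-Name"),
    }
```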
## Body
The body is passed as is; we do not modify it at all. If you send a JSON body, you will receive a JSON body. If you send a string, you will receive a string.
## Verifying the signature
[See here](/qstash/howto/signature)
# Reset Token
Source: https://upstash.com/docs/qstash/howto/reset-token
Your token is used to interact with the QStash API. You need it to publish
messages as well as create, read, update or delete other resources, such as
URL Groups and endpoints.
Resetting your token will invalidate your current token and all future requests
with the old token will be rejected.
To reset your token, simply click the "Reset token" button at the bottom of
the [QStash UI](https://console.upstash.com/qstash) and confirm the dialog.
Afterwards, you should immediately update the token in all your applications.
# Roll Your Signing Keys
Source: https://upstash.com/docs/qstash/howto/roll-signing-keys
Because your API needs to be publicly accessible from the internet, you should
make sure to verify the authenticity of each request.
Upstash provides a JWT with each request. This JWT is signed by your individual
secret signing keys. [Read more](/qstash/howto/signature).
We use two signing keys:
* current: This key is used to sign the JWT.
* next: This key will be used to sign after you roll your keys.
If we used only a single key, there would be a window between rolling your keys
and updating them in your applications during which no request could be
verified. To minimize this downtime, we use two keys, and you should always try
to verify with both.
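In code, the fallback looks roughly like this (a sketch; `verify` stands in for whatever single-key JWT verification your library provides and is assumed to raise on failure):

```python theme={"system"}
def verify_with_both_keys(verify, token: str, current_key: str, next_key: str) -> bool:
    """Accept a request if the JWT verifies against either signing key.

    Trying both keys means requests signed with the new key are accepted
    even before you have updated the keys in your application.
    """
    for key in (current_key, next_key):
        try:
            verify(token, key)
            return True
        except Exception:
            continue  # signature didn't match; try the other key
    return False
```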
## What happens when I roll my keys?
When you roll your keys, the current key will be replaced with the next key and
a new next key will be generated.
```
currentKey = nextKey
nextKey = generateNewKey()
```
Rolling your keys twice without updating your applications will cause your apps
to reject all requests, because both the current and next keys will have been
replaced.
## How to roll your keys
Rolling your keys can be done by going to the
[QStash UI](https://console.upstash.com/qstash) and clicking on the "Roll keys"
button.
# Verify Signatures
Source: https://upstash.com/docs/qstash/howto/signature
We send a JWT with each request. This JWT is signed by your individual secret
signing key and sent in the `Upstash-Signature` HTTP header.
You can use this signature to verify the request is coming from QStash.
You need to keep your signing keys in a secure location.
Otherwise, a malicious actor could use them to send requests to your API as if they came from QStash.
## Verifying
You can use the official QStash SDKs or implement a custom verifier either by using [an open source library](https://jwt.io/libraries) or by processing the JWT manually.
### Via SDK (Recommended)
QStash SDKs provide a `Receiver` type that simplifies signature verification.
```typescript Typescript theme={"system"}
import { Receiver } from "@upstash/qstash";
const receiver = new Receiver({
currentSigningKey: "YOUR_CURRENT_SIGNING_KEY",
nextSigningKey: "YOUR_NEXT_SIGNING_KEY",
});
// ... in your request handler
const signature = req.headers["Upstash-Signature"];
const body = req.body;
const isValid = await receiver.verify({
body,
signature,
url: "YOUR-SITE-URL",
});
```
```python Python theme={"system"}
from qstash import Receiver
receiver = Receiver(
current_signing_key="YOUR_CURRENT_SIGNING_KEY",
next_signing_key="YOUR_NEXT_SIGNING_KEY",
)
# ... in your request handler
signature, body = req.headers["Upstash-Signature"], req.body
receiver.verify(
body=body,
signature=signature,
url="YOUR-SITE-URL",
)
```
```go Golang theme={"system"}
import "github.com/qstash/qstash-go"
receiver := qstash.NewReceiver("CURRENT_SIGNING_KEY", "NEXT_SIGNING_KEY")
// ... in your request handler
signature := req.Header.Get("Upstash-Signature")
body, err := io.ReadAll(req.Body)
// handle err
err := receiver.Verify(qstash.VerifyOptions{
Signature: signature,
Body: string(body),
Url: "YOUR-SITE-URL", // optional
})
// handle err
```
Depending on the environment, the body might be parsed into an object by the HTTP handler if it is JSON.
Ensure you use the raw body string as is. For example, converting the parsed object back to a string (e.g., JSON.stringify(object)) may cause inconsistencies and result in verification failure.
### Manual verification
If you don't want to use the SDKs, you can implement your own verifier, either with an open-source library or by processing the JWT manually.
The exact implementation depends on your language of choice and on the library, if you use one.
Here are the steps you need to follow:
1. Split the JWT into its header, payload and signature
2. Verify the signature
3. Decode the payload and verify the claims
* `iss`: The issuer must be `Upstash`.
* `sub`: The subject must be the URL of your API.
* `exp`: Verify the token has not expired yet.
* `nbf`: Verify the token is already valid.
* `body`: Hash the raw request body using `SHA-256` and compare it with the
`body` claim.
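A minimal sketch of those steps in Python, using only the standard library. It assumes the JWT is signed with HMAC-SHA256 and that the `body` claim is the base64url-encoded SHA-256 of the raw body; for production, prefer the SDKs or a maintained JWT library:

```python theme={"system"}
import base64
import hashlib
import hmac
import json
import time


def b64url_decode(s: str) -> bytes:
    # Restore stripped base64url padding before decoding
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def verify_jwt(token: str, signing_key: str, body: bytes, url: str) -> dict:
    """Verify an `Upstash-Signature` JWT; return its claims or raise ValueError."""
    # 1. Split the JWT into header, payload and signature
    header_b64, payload_b64, sig_b64 = token.split(".")

    # 2. Verify the HS256 signature over "<header>.<payload>"
    expected = hmac.new(
        signing_key.encode(),
        f"{header_b64}.{payload_b64}".encode(),
        hashlib.sha256,
    ).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")

    # 3. Decode the payload and verify the claims
    claims = json.loads(b64url_decode(payload_b64))
    now = time.time()
    if claims["iss"] != "Upstash":
        raise ValueError("invalid issuer")
    if claims["sub"] != url:
        raise ValueError("invalid subject")
    if now > claims["exp"]:
        raise ValueError("token expired")
    if now < claims["nbf"]:
        raise ValueError("token not yet valid")

    # Hash the raw request body and compare it with the body claim
    body_hash = base64.urlsafe_b64encode(hashlib.sha256(body).digest()).decode()
    if claims["body"].rstrip("=") != body_hash.rstrip("="):
        raise ValueError("body hash mismatch")
    return claims
```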
You can also reference the implementation in our
[Typescript SDK](https://github.com/upstash/sdk-qstash-ts/blob/main/src/receiver.ts#L82).
After you have verified the signature and the claims, you can be sure the
request came from Upstash and process it accordingly.
## Claims
All claims in the JWT are listed [here](/qstash/features/security#claims)
# Create URL Groups and Endpoints
Source: https://upstash.com/docs/qstash/howto/url-group-endpoint
QStash allows you to group multiple APIs together into a single namespace,
called a `URL Group` (previously known as `Topics`).
Read more about URL Groups [here](/qstash/features/url-groups).
There are two ways to create endpoints and URL Groups: The UI and the REST API.
## UI
Go to [console.upstash.com/qstash](https://console.upstash.com/qstash) and click
on the `URL Groups` tab. Afterwards, you can create a new URL Group by giving it a name.
Keep in mind that URL Group names are restricted to alphanumeric characters, underscores,
hyphens, and dots.
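Those naming rules can be checked client-side with a pattern like this (an illustrative helper, not part of the SDK):

```python theme={"system"}
import re

# Alphanumeric, underscore, hyphen and dot characters only
URL_GROUP_NAME = re.compile(r"^[A-Za-z0-9_.-]+$")


def is_valid_url_group_name(name: str) -> bool:
    return URL_GROUP_NAME.match(name) is not None
```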
After creating the URL Group, you can add endpoints to it:
## API
You can create a URL Group and endpoint using the [console](https://console.upstash.com/qstash) or [REST API](/qstash/api/url-groups/add-endpoint).
```bash cURL theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/topics/:urlGroupName/endpoints \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d '{
"endpoints": [
{
"name": "endpoint1",
"url": "https://example.com"
},
{
"name": "endpoint2",
"url": "https://somewhere-else.com"
}
]
}'
```
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const urlGroups = client.urlGroups;
await urlGroups.addEndpoints({
name: "urlGroupName",
endpoints: [
{ name: "endpoint1", url: "https://example.com" },
{ name: "endpoint2", url: "https://somewhere-else.com" },
],
});
```
```python Python theme={"system"}
from qstash import QStash
client = QStash("")
client.url_group.upsert_endpoints(
url_group="url-group-name",
endpoints=[
{"name": "endpoint1", "url": "https://example.com"},
{"name": "endpoint2", "url": "https://somewhere-else.com"},
],
)
```
# Use as Webhook Receiver
Source: https://upstash.com/docs/qstash/howto/webhook
You can configure QStash to receive and process your webhook calls.
Instead of having the webhook service call your endpoint directly, QStash acts as an intermediary, receiving the request and forwarding it to your endpoint.
QStash provides additional control over webhook requests, allowing you to configure properties such as delay, retries, timeouts, callbacks, and flow control.
There are multiple ways to configure QStash to receive webhook requests.
## 1. Publish
You can configure your webhook URL as a QStash publish request.
For example, if your webhook endpoint is:
`https://example.com/api/webhook`
Instead of using this URL directly as the webhook address, use:
`https://qstash.upstash.io/v2/publish/https://example.com/api/webhook?qstash_token=`
Request configurations such as custom retries, timeouts, and other settings can be specified using HTTP headers in the publish request.
Refer to the [REST API documentation](/qstash/api/publish) for a full list of available configuration headers.
It's also possible to pass configuration via query parameters, using the lowercase form of the header name as the key, e.g. `?upstash-retries=3&upstash-delay=100s`. This makes it easier to configure webhook messages.
By default, any headers in the publish request that are prefixed with `Upstash-Forward-` will be forwarded to your endpoint.
However, since most webhook services do not allow header prefixing, we introduced a configuration option to enable forwarding all incoming request headers.
To enable this, set `Upstash-Header-Forward: true` in the publish request or append the query parameter `?upstash-header-forward=true` to the request URL. This ensures that all headers are forwarded to your endpoint without requiring the `Upstash-Forward-` prefix.
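Putting the pieces together, a webhook URL with the token and configuration in query parameters can be composed like this (an illustrative helper; the parameter names follow the lowercase header convention described above):

```python theme={"system"}
from urllib.parse import urlencode


def build_webhook_url(destination: str, token: str, **config: str) -> str:
    """Compose a QStash publish URL to hand to a webhook provider."""
    query = urlencode({"qstash_token": token, **config})
    return f"https://qstash.upstash.io/v2/publish/{destination}?{query}"


url = build_webhook_url(
    "https://example.com/api/webhook",
    "<QSTASH_TOKEN>",
    **{"upstash-retries": "3", "upstash-header-forward": "true"},
)
```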
## 2. URL Group
URL Groups allow you to define server-side templates for publishing messages. You can create a URL Group either through the UI or programmatically.
For example, if your webhook endpoint is:
`https://example.com/api/webhook`
Instead of using this URL directly, you can create a URL Group and add this URL as an endpoint.
`https://qstash.upstash.io/v2/publish/?qstash_token=`
You can define default headers for a URL Group, which will automatically apply to all requests sent to that group.
```
curl -X PATCH https://qstash.upstash.io/v2/topics/ \
-H "Authorization: Bearer " \
-d '{
"headers": {
"Upstash-Header-Forward": ["true"],
"Upstash-Retries": ["3"]
}
}'
```
When you save this header for your URL Group, it ensures that all headers are forwarded as needed for your webhook processing.
A URL Group also enables you to define multiple endpoints within a group.
When a publish request is made to a URL Group, all associated endpoints will be triggered, allowing you to fan-out a single webhook call to multiple destinations.
# LLM with Anthropic
Source: https://upstash.com/docs/qstash/integrations/anthropic
QStash integrates smoothly with Anthropic's API, allowing you to send LLM requests and leverage QStash features like retries, callbacks, and batching. This is especially useful when working in serverless environments where LLM response times vary and traditional timeouts may be limiting. QStash provides an HTTP timeout of up to 2 hours, which is ideal for most LLM cases.
### Example: Publishing and Enqueueing Requests
Specify the `api` as `llm` with the provider set to `anthropic()` when publishing requests. Use the `Upstash-Callback` header to handle responses asynchronously, as streaming completions aren’t supported for this integration.
#### Publishing a Request
```typescript theme={"system"}
import { anthropic, Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.publishJSON({
api: { name: "llm", provider: anthropic({ token: "" }) },
body: {
model: "claude-3-5-sonnet-20241022",
messages: [{ role: "user", content: "Summarize recent tech trends." }],
},
callback: "https://example.com/callback",
});
```
### Enqueueing a Chat Completion Request
Use `enqueueJSON` with Anthropic as the provider to enqueue requests for asynchronous processing.
```typescript theme={"system"}
import { anthropic, Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const result = await client.queue({ queueName: "your-queue-name" }).enqueueJSON({
api: { name: "llm", provider: anthropic({ token: "" }) },
body: {
model: "claude-3-5-sonnet-20241022",
messages: [
{
role: "user",
content: "Generate ideas for a marketing campaign.",
},
],
},
callback: "https://example.com/callback",
});
console.log(result);
```
### Sending Chat Completion Requests in Batches
Use `batchJSON` to send multiple requests at once. Each request in the batch specifies the same Anthropic provider and includes a callback URL.
```typescript theme={"system"}
import { anthropic, Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const result = await client.batchJSON([
{
api: { name: "llm", provider: anthropic({ token: "" }) },
body: {
model: "claude-3-5-sonnet-20241022",
messages: [
{
role: "user",
content: "Describe the latest in AI research.",
},
],
},
callback: "https://example.com/callback1",
},
{
api: { name: "llm", provider: anthropic({ token: "" }) },
body: {
model: "claude-3-5-sonnet-20241022",
messages: [
{
role: "user",
content: "Outline the future of remote work.",
},
],
},
callback: "https://example.com/callback2",
},
// Add more requests as needed
]);
console.log(result);
```
#### Analytics with Helicone
To monitor usage, include Helicone analytics by passing your Helicone API key under `analytics`:
```typescript theme={"system"}
await client.publishJSON({
api: {
name: "llm",
provider: anthropic({ token: "" }),
analytics: { name: "helicone", token: process.env.HELICONE_API_KEY! },
},
body: { model: "claude-3-5-sonnet-20241022", messages: [{ role: "user", content: "Hello!" }] },
callback: "https://example.com/callback",
});
```
With this setup, Anthropic can be used seamlessly in any QStash LLM workflow.
# Datadog - Upstash QStash Integration
Source: https://upstash.com/docs/qstash/integrations/datadog
This guide walks you through connecting your Datadog account with Upstash QStash for monitoring and analytics of your message delivery, retries, DLQ, and schedules.
**Integration Scope**
The Upstash Datadog integration is covered by the Prod Pack.
## **Step 1: Log in to Your Datadog Account**
1. Go to [Datadog](https://www.datadoghq.com/) and sign in.
## **Step 2: Install Upstash Application**
1. In Datadog, open the Integrations page.
2. Search for "Upstash" and open the integration.
Click "Install" to add Upstash to your Datadog account.
## **Step 3: Connect Accounts**
After installing Upstash, click "Connect Accounts". Datadog will redirect you to Upstash to complete account linking.
## **Step 4: Select Account to Integrate**
1. On Upstash, select the Datadog account to integrate.
2. Personal and team accounts are supported.
**Caveats**
* Only one integration can be established at a time. To change the account scope (e.g., add or remove teams), re-establish the integration from scratch.
## **Step 5: Wait for Metrics Availability**
Once the integration is completed, metrics from QStash (publish counts, success/error rates, retries, DLQ, schedule executions) will start appearing in Datadog dashboards shortly.
## **Step 6: Datadog Integration Removal Process**
From Datadog → Integrations → Upstash, press "Remove" to break the connection.
### Confirm Removal
Upstash will stop publishing metrics after removal. Ensure any Datadog API keys/configurations for this integration are also removed on the Datadog side.
## **Conclusion**
You’ve connected Datadog with Upstash QStash. Explore Datadog dashboards to monitor message delivery performance and reliability.
If you need help, contact support.
# LLM - OpenAI
Source: https://upstash.com/docs/qstash/integrations/llm
QStash has built-in support for calling LLM APIs. This allows you to take advantage of QStash features such as retries, callbacks, and batching while using LLM APIs.
QStash is especially useful for LLM processing because LLM response times are often highly variable. When accessing LLM APIs from serverless runtimes, invocation timeouts are a common issue. QStash offers an HTTP timeout of up to 2 hours, which is sufficient for most LLM use cases. By using callbacks and workflows, you can easily manage the asynchronous nature of LLM APIs.
## QStash LLM API
You can publish (or enqueue) a single LLM request or a batch of LLM requests using all existing QStash features natively. To do this, specify the destination `api` as `llm` with a valid provider. The body of the published or enqueued message should contain a valid chat completion request. For these integrations, you must specify the `Upstash-Callback` header so that you can process the response asynchronously. Note that streaming chat completions cannot be used with them. Use [the chat API](#chat-api) for streaming completions.
All the examples below can be used with **OpenAI-compatible LLM providers**.
### Publishing a Chat Completion Request
```js JavaScript theme={"system"}
import { Client, openai } from "@upstash/qstash";
const client = new Client({
token: "",
});
const result = await client.publishJSON({
api: { name: "llm", provider: openai({ token: "_OPEN_AI_TOKEN_"}) },
body: {
model: "gpt-3.5-turbo",
messages: [
{
role: "user",
content: "Write a hello world program in Rust.",
},
],
},
callback: "https://abc.requestcatcher.com/",
});
console.log(result);
```
```python Python theme={"system"}
from qstash import QStash
from qstash.chat import openai
q = QStash("")
result = q.message.publish_json(
api={"name": "llm", "provider": openai("")},
body={
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "Write a hello world program in Rust.",
}
],
},
callback="https://abc.requestcatcher.com/",
)
print(result)
```
### Enqueueing a Chat Completion Request
```js JavaScript theme={"system"}
import { Client, openai } from "@upstash/qstash";
const client = new Client({
token: "",
});
const result = await client.queue({ queueName: "queue-name" }).enqueueJSON({
api: { name: "llm", provider: openai({ token: "_OPEN_AI_TOKEN_"}) },
body: {
"model": "gpt-3.5-turbo",
messages: [
{
role: "user",
content: "Write a hello world program in Rust.",
},
],
},
callback: "https://abc.requestcatcher.com",
});
console.log(result);
```
```python Python theme={"system"}
from qstash import QStash
from qstash.chat import openai
q = QStash("")
result = q.message.enqueue_json(
queue="queue-name",
api={"name": "llm", "provider": openai("")},
body={
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "Write a hello world program in Rust.",
}
],
},
callback="https://abc.requestcatcher.com",
)
print(result)
```
### Sending Chat Completion Requests in Batches
```js JavaScript theme={"system"}
import { Client, openai } from "@upstash/qstash";
const client = new Client({
token: "",
});
const result = await client.batchJSON([
{
api: { name: "llm", provider: openai({ token: "_OPEN_AI_TOKEN_" }) },
body: { ... },
callback: "https://abc.requestcatcher.com",
},
...
]);
console.log(result);
```
```python Python theme={"system"}
from qstash import QStash
from qstash.chat import openai
q = QStash("")
result = q.message.batch_json(
[
{
"api":{"name": "llm", "provider": openai("")},
"body": {...},
"callback": "https://abc.requestcatcher.com",
},
...
]
)
print(result)
```
```shell curl theme={"system"}
curl "https://qstash.upstash.io/v2/batch" \
-X POST \
-H "Authorization: Bearer QSTASH_TOKEN" \
-H "Content-Type: application/json" \
-d '[
{
"destination": "api/llm",
"body": {...},
"callback": "https://abc.requestcatcher.com"
},
...
]'
```
### Retrying After Rate Limit Resets
When rate limits are exceeded, QStash automatically schedules the retry of
published or enqueued chat completion tasks based on the reset time of the
rate limit. This avoids premature retries that are guaranteed to fail while
the rate limit is still in effect.
## Analytics via Helicone
Helicone is a powerful observability platform that provides valuable insights into your LLM usage. Integrating Helicone with QStash is straightforward.
To enable Helicone observability in QStash, simply pass your Helicone API key when initializing your model. Here's how to do it with a custom OpenAI-compatible provider:
```ts theme={"system"}
import { Client, custom } from "@upstash/qstash";
const client = new Client({
token: "",
});
await client.publishJSON({
api: {
name: "llm",
provider: custom({
token: "XXX",
baseUrl: "https://api.together.xyz",
}),
analytics: { name: "helicone", token: process.env.HELICONE_API_KEY! },
},
body: {
model: "meta-llama/Llama-3-8b-chat-hf",
messages: [
{
role: "user",
content: "hello",
},
],
},
callback: "https://oz.requestcatcher.com/",
});
```
# n8n with QStash
Source: https://upstash.com/docs/qstash/integrations/n8n
Leverage your n8n workflows with Upstash QStash. Here is how to make those requests using the HTTP Request node.
### Step 1: Set Up an n8n Project
1. Go to [https://n8n.io](https://n8n.io) and create a new project
2. Create a Webhook trigger with default settings; this will be our entry point.
3. Create an HTTP Request node
***
### Step 2: Import QStash Configurations to HTTP Node
1. Go to Upstash Console and open QStash Request Builder Tab.
2. Fill out the fields to create a QStash request (Publish, Enqueue, or Schedule).
3. Copy the cURL snippet created for you, representing your request.
4. Back in n8n, in the HTTP Request node's Parameters tab, use Import cURL.
5. Paste the cURL snippet you copied from the console and let n8n fill out the form for you.
***
### Step 3: Test the Workflow
1. Execute workflow.
2. Visit the Webhook URL.
3. That's it! You can check the logs in the QStash Console to confirm your QStash request is working.
# Pipedream
Source: https://upstash.com/docs/qstash/integrations/pipedream
Build and run workflows with 1000s of open source triggers and actions across 900+ apps.
[Pipedream](https://pipedream.com) allows you to build and run workflows with
1000s of open source triggers and actions across 900+ apps.
Check out the [official integration](https://pipedream.com/apps/qstash).
## Trigger a Pipedream workflow from a QStash topic message
This is a step by step guide on how to trigger a Pipedream workflow from a
QStash topic message.
Alternatively [click here](https://pipedream.com/new?h=tch_3egfAX) to create a
new workflow with this QStash topic trigger added.
### 1. Create a Topic in QStash
If you haven't yet already, create a **Topic** in the
[QStash dashboard](https://console.upstash.com/qstash?tab=topics).
### 2. Create a new Pipedream workflow
Sign into [Pipedream](https://pipedream.com) and create a new workflow.
### 3. Add QStash Topic Message as a trigger
In the workflow **Trigger** search for QStash and select the **Create Topic
Endpoint** trigger.

Then, connect your QStash account by clicking the QStash prop and retrieving
your token from the
[QStash dashboard](https://console.upstash.com/qstash?tab=details).
After connecting your QStash account, click the **Topic** prop, a dropdown will
appear containing the QStash topics on your account.
Then *click* on a specific topic to listen for new messages on.

Finally, *click* **Continue**. Pipedream will create a unique HTTP endpoint and
add it to your QStash topic.
### 4. Test with a sample message
Use the *Request Builder* in the
[QStash dashboard](https://console.upstash.com/qstash?tab=details) to publish a
test message to your topic.
Alternatively, you can use the **Create topic message** action in a Pipedream
workflow to send a message to your topic.
*Don't forget* to use this action in a separate workflow, otherwise you might
cause an infinite loop of messages between QStash and Pipedream.
### 5. Add additional steps
Add additional steps to the workflow by clicking the plus icon beneath the
Trigger step.
Build a workflow with the 1,000+ pre-built components available in Pipedream,
including [Airtable](https://pipedream.com/apps/airtable),
[Google Sheets](https://pipedream.com/apps/google-sheets),
[Slack](https://pipedream.com/apps/slack) and many more.
Alternatively, use [Node.js](https://pipedream.com/docs/code/nodejs) or
[Python](https://pipedream.com/docs/code/python) code steps to retrieve,
transform, or send data to other services.
### 6. Deploy your Pipedream workflow
After you're satisfied with your changes, click the **Deploy** button in the
top right of your Pipedream workflow. Your deployed workflow will now
automatically process new messages published to your QStash topic.
### Video tutorial
If you prefer video, you can check out this tutorial by
[pipedream](https://pipedream.com).
[](https://www.youtube.com/watch?v=-oXlWuxNG5A)
## Trigger a Pipedream workflow from a QStash endpoint message
This is a step by step guide on how to trigger a Pipedream workflow from a
QStash endpoint message.
Alternatively [click here](https://pipedream.com/new?h=tch_m5ofX6) to create a
pre-configured workflow with the HTTP trigger and QStash webhook verification
step already added.
### 1. Create a new Pipedream workflow
Sign into [Pipedream](https://pipedream.com) and create a new workflow.
### 2. Configure the workflow with an HTTP trigger
In the workflow **Trigger** select the **New HTTP / Webhook Requests** option.

Pipedream will create a unique HTTP endpoint for your workflow.
Then configure the HTTP trigger to *return a custom response*. By default,
Pipedream always returns a 200 response; a custom response lets us return a
non-200 status to QStash so that the message is retried if there's an error
during the execution of the workflow.

Lastly, set the **Event Body** to be a **Raw request**. This will make sure the
QStash verify webhook action receives the data in the correct format.

### 3. Test with a sample message
Use the *Request Builder* in the
[QStash dashboard](https://console.upstash.com/qstash?tab=details) to publish a
test message to your topic.
Alternatively, you can use the **Create topic message** action in a Pipedream
workflow to send a message to your topic.
*Don't forget* to use this action in a separate workflow, otherwise you might
cause an infinite loop of messages between QStash and Pipedream.
### 4. Verify the QStash webhook
Pipedream has a pre-built QStash action that will verify the content of incoming
webhooks from QStash.
First, search for **QStash** in the step search bar, then select the QStash app.
Of the available actions, select the **Verify Webhook** action.
Then connect your QStash account and select the **HTTP request** prop. In the
dropdown, click **Enter custom expression** and then paste in
`{{ steps.trigger.event }}`.
This step will automatically verify the incoming HTTP requests and exit the
workflow early if requests are not from QStash.
### 5. Add additional steps
Add additional steps to the workflow by clicking the plus icon beneath the
Trigger step.
Build a workflow with the 1,000+ pre-built components available in Pipedream,
including [Airtable](https://pipedream.com/apps/airtable),
[Google Sheets](https://pipedream.com/apps/google-sheets),
[Slack](https://pipedream.com/apps/slack) and many more.
Alternatively, use [Node.js](https://pipedream.com/docs/code/nodejs) or
[Python](https://pipedream.com/docs/code/python) code steps to retrieve,
transform, or send data to other services.
### 6. Return a 200 response
In the final step of your workflow, return a 200 response by adding a new step
and selecting **Return an HTTP Response**.

This will generate Node.js code to return an HTTP response to QStash using the
`$.respond` helper in Pipedream.
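The generated step looks roughly like the following. This is a sketch: `defineComponent` and the `$` context are injected by the Pipedream runtime, so this code only runs inside a Pipedream workflow.

```javascript
// Pipedream "Return HTTP Response" step (sketch).
// defineComponent and $ are provided by the Pipedream runtime.
export default defineComponent({
  async run({ steps, $ }) {
    // Tell QStash the message was processed successfully.
    await $.respond({
      status: 200,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ ok: true }),
    });
  },
});
```

If a later step throws instead of reaching this response, QStash sees a non-200 result and retries the message according to your retry settings.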
### 7. Deploy your Pipedream workflow
After you're satisfied with your changes, click the **Deploy** button in the
top right of your Pipedream workflow. Your deployed workflow will now
automatically process new messages published to your QStash topic.
### Video tutorial
If you prefer video, check out this tutorial by
[Pipedream](https://pipedream.com):
[Watch on YouTube](https://youtu.be/uG8eO7BNok4)
# Prometheus - Upstash QStash Integration
Source: https://upstash.com/docs/qstash/integrations/prometheus
To monitor your QStash metrics in Prometheus and visualize in Grafana, follow these steps:
**Integration Scope**
The Upstash Prometheus integration is available with the Prod Pack add-on.
## **Step 1: Enable Prometheus in Upstash Console**
1. Open the Upstash Console and navigate to QStash.
2. Go to Settings → Monitoring.
3. Enable Prometheus to allow scraping QStash metrics.
## **Step 2: Copy Monitoring Token**
1. After enabling, a monitoring token is generated and displayed.
2. Copy the token. It will be used to authenticate Prometheus requests.
**Header Format**
Send the token as `Authorization: Bearer <MONITORING_TOKEN>`.
## **Step 3: Configure Prometheus (via Grafana Data Source)**
1. In Grafana, add a Prometheus data source.
2. Set the address to `https://api.upstash.com/monitoring/prometheus`.
3. In HTTP headers, add an `Authorization` header with the monitoring token as the value, in the `Bearer` format above.
4. Click **Save & Test**.
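If you run your own Prometheus server instead of scraping through Grafana, a scrape config along these lines should work. This is a sketch assuming the endpoint address and bearer-token header described in the steps above; replace `<MONITORING_TOKEN>` with your actual token.

```yaml
scrape_configs:
  - job_name: "upstash-qstash"
    scheme: https
    metrics_path: "/monitoring/prometheus"
    # Sends "Authorization: Bearer <MONITORING_TOKEN>" on each scrape.
    authorization:
      type: Bearer
      credentials: "<MONITORING_TOKEN>"
    static_configs:
      - targets: ["api.upstash.com"]
```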
## **Step 4: Import Dashboard**
You can use the Upstash Grafana dashboard to visualize QStash metrics.
In Grafana, open the dashboard import dialog and import the **Upstash QStash Dashboard**.
## **Conclusion**
You’ve integrated QStash with Prometheus. Use Grafana to explore message throughput, retries, DLQ, schedules, and Upstash Workflows.
If you encounter issues, contact support.
# Email - Resend
Source: https://upstash.com/docs/qstash/integrations/resend
The `qstash-js` SDK offers an integration to easily send emails using [Resend](https://resend.com/), streamlining email delivery in your applications.
## Basic Email Sending
To send a single email, use the `publishJSON` method with the `resend` provider. Ensure your `QSTASH_TOKEN` and `RESEND_TOKEN` are set for authentication.
```typescript theme={"system"}
import { Client, resend } from "@upstash/qstash";

const client = new Client({ token: "<QSTASH_TOKEN>" });

await client.publishJSON({
  api: {
    name: "email",
    provider: resend({ token: "<RESEND_TOKEN>" }),
  },
  body: {
    from: "Acme <onboarding@resend.dev>",
    to: ["delivered@resend.dev"],
    subject: "Hello World",
    html: "<p>It works!</p>",
  },
});
```
In the `body` field, specify any parameters supported by [the Resend Send Email API](https://resend.com/docs/api-reference/emails/send-email), such as `from`, `to`, `subject`, and `html`.
## Sending Batch Emails
To send multiple emails at once, use Resend’s [Batch Email API](https://resend.com/docs/api-reference/emails/send-batch-emails). Set the `batch` option to `true` to enable batch sending. Each email configuration is defined as an object within the `body` array.
```typescript theme={"system"}
await client.publishJSON({
  api: {
    name: "email",
    provider: resend({ token: "<RESEND_TOKEN>", batch: true }),
  },
  body: [
    {
      from: "Acme <onboarding@resend.dev>",
      to: ["foo@gmail.com"],
      subject: "Hello World",
      html: "<p>It works!</p>",
    },
  ],
});
```
Each entry in the `body` array represents an individual email, allowing customization of `from`, `to`, `subject`, `html`, and any other Resend-supported fields.
# Development Server License Agreement
Source: https://upstash.com/docs/qstash/misc/license
## 1. Purpose and Scope
This software is a development server implementation of QStash API ("Development Server") provided for testing and development purposes only. It is not intended for production use, commercial deployment, or as a replacement for the official QStash service.
## 2. Usage Restrictions
By using this Development Server, you agree to the following restrictions:
a) The Development Server may only be used for:
* Local development and testing
* Continuous Integration (CI) testing
* Educational purposes
* API integration development
b) The Development Server may NOT be used for:
* Production environments
* Commercial service offerings
* Public-facing applications
* Operating as a Software-as-a-Service (SaaS)
* Reselling or redistributing as a service
## 3. Restrictions on Modification and Reverse Engineering
You may not:
* Decompile, reverse engineer, disassemble, or attempt to derive the source code of the Development Server
* Modify, adapt, translate, or create derivative works based upon the Development Server
* Remove, obscure, or alter any proprietary rights notices within the Development Server
* Attempt to bypass or circumvent any technical limitations or security measures in the Development Server
## 4. Technical Limitations
Users acknowledge that the Development Server:
* Operates entirely in-memory without persistence
* Provides limited functionality compared to the official service
* Offers no data backup or recovery mechanisms
* Has no security guarantees
* May have performance limitations
* Does not implement all features of the official service
## 5. Warranty Disclaimer
THE DEVELOPMENT SERVER IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. THE AUTHORS OR COPYRIGHT HOLDERS SHALL NOT BE LIABLE FOR ANY CLAIMS, DAMAGES, OR OTHER LIABILITY ARISING FROM THE USE OF THE SOFTWARE IN VIOLATION OF THIS LICENSE.
## 6. Termination
Your rights under this license will terminate automatically if you fail to comply with any of its terms. Upon termination, you must cease all use of the Development Server.
## 7. Acknowledgment
By using the Development Server, you acknowledge that you have read this license, understand it, and agree to be bound by its terms.
# API Examples
Source: https://upstash.com/docs/qstash/overall/apiexamples
### Use QStash via:
* cURL
* [Typescript SDK](https://github.com/upstash/sdk-qstash-ts)
* [Python SDK](https://github.com/upstash/qstash-python)
Below are some examples to get you started. You can also check the [how to](/qstash/howto/publishing) section for
more technical details or the [API reference](/qstash/api/messages) to test the API.
### Publish a message to an endpoint
Simple example to [publish](/qstash/howto/publishing) a message to an endpoint.
```shell theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://example.com'
```
```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });

await client.publishJSON({
  url: "https://example.com",
  body: {
    hello: "world",
  },
});
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.message.publish_json(
    url="https://example.com",
    body={
        "hello": "world",
    },
)

# Async version is also available
```
### Publish a message to a URL Group
The [URL Group](/qstash/features/url-groups) is a way to publish a message to multiple endpoints in a
fan out pattern.
```shell theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/myUrlGroup'
```
```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });

await client.publishJSON({
  urlGroup: "myUrlGroup",
  body: {
    hello: "world",
  },
});
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.message.publish_json(
    url_group="my-url-group",
    body={
        "hello": "world",
    },
)

# Async version is also available
```
### Publish a message with 5 minutes delay
Add a delay to the message to be published. After QStash receives the message,
it will wait for the specified time (5 minutes in this example) before sending the message to the endpoint.
```shell theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Delay: 5m" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://example.com'
```
```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });

await client.publishJSON({
  url: "https://example.com",
  body: {
    hello: "world",
  },
  delay: 300, // seconds
});
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.message.publish_json(
    url="https://example.com",
    body={
        "hello": "world",
    },
    delay="5m",
)

# Async version is also available
```
### Send a custom header
Add a custom header to the message to be published.
```shell theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H 'Upstash-Forward-My-Header: my-value' \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://example.com'
```
```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });

await client.publishJSON({
  url: "https://example.com",
  body: {
    hello: "world",
  },
  headers: {
    "My-Header": "my-value",
  },
});
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.message.publish_json(
    url="https://example.com",
    body={
        "hello": "world",
    },
    headers={
        "My-Header": "my-value",
    },
)

# Async version is also available
```
### Schedule to run once a day
```shell theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Upstash-Cron: 0 0 * * *" \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/schedules/https://example.com'
```
```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });

await client.schedules.create({
  destination: "https://example.com",
  cron: "0 0 * * *",
});
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.schedule.create(
    destination="https://example.com",
    cron="0 0 * * *",
)

# Async version is also available
```
### Publish messages to a FIFO queue
By default, messages are published concurrently. With a [queue](/qstash/features/queues), you can enqueue messages in FIFO order.
```shell theme={"system"}
curl -XPOST \
  -H 'Authorization: Bearer XXX' \
  -H "Content-type: application/json" \
  -d '{"message":"Hello, World!"}' \
  'https://qstash.upstash.io/v2/enqueue/my-queue/https://example.com'
```

```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });
const queue = client.queue({
  queueName: "my-queue",
});

await queue.enqueueJSON({
  url: "https://example.com",
  body: {
    Hello: "World",
  },
});
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.message.enqueue_json(
    queue="my-queue",
    url="https://example.com",
    body={
        "Hello": "World",
    },
)

# Async version is also available
```
### Publish messages in a [batch](/qstash/features/batch)
Publish multiple messages in a single request.
```shell theme={"system"}
curl -XPOST https://qstash.upstash.io/v2/batch \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-d '
[
{
"destination": "https://example.com/destination1"
},
{
"destination": "https://example.com/destination2"
}
]'
```
```typescript theme={"system"}
import { Client } from "@upstash/qstash";

const client = new Client({ token: "<QSTASH_TOKEN>" });

const res = await client.batchJSON([
  {
    url: "https://example.com/destination1",
  },
  {
    url: "https://example.com/destination2",
  },
]);
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.message.batch_json(
    [
        {
            "url": "https://example.com/destination1",
        },
        {
            "url": "https://example.com/destination2",
        },
    ]
)

# Async version is also available
```
### Set max retry count to 3
Configure how many times QStash should retry to send the message to the endpoint before
sending it to the [dead letter queue](/qstash/features/dlq).
```shell theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Upstash-Retries: 3" \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://example.com'
```
```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });

await client.publishJSON({
  url: "https://example.com",
  body: {
    hello: "world",
  },
  retries: 3,
});
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.message.publish_json(
    url="https://example.com",
    body={
        "hello": "world",
    },
    retries=3,
)

# Async version is also available
```
### Set custom retry delay
Configure the delay between retry attempts when message delivery fails. [By default, QStash uses exponential backoff](/qstash/features/retry). You can customize this using mathematical expressions with the special variable `retried` (current retry attempt count starting from 0).
```shell theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Upstash-Retries: 3" \
-H "Upstash-Retry-Delay: pow(2, retried) * 1000" \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://example.com'
```
```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });

await client.publishJSON({
  url: "https://example.com",
  body: {
    hello: "world",
  },
  retries: 3,
  retryDelay: "pow(2, retried) * 1000", // 2^retried * 1000ms
});
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.message.publish_json(
    url="https://example.com",
    body={
        "hello": "world",
    },
    retries=3,
    retry_delay="pow(2, retried) * 1000",  # 2^retried * 1000ms
)

# Async version is also available
```
**Supported functions for retry delay expressions:**
* `pow` - Power function
* `sqrt` - Square root
* `abs` - Absolute value
* `exp` - Exponential
* `floor` - Floor function
* `ceil` - Ceiling function
* `round` - Rounding function
* `min` - Minimum of values
* `max` - Maximum of values
**Examples:**
* `1000` - Fixed 1 second delay
* `1000 * (1 + retried)` - Linear backoff: 1s, 2s, 3s, 4s...
* `pow(2, retried) * 1000` - Exponential backoff: 1s, 2s, 4s, 8s...
* `max(1000, pow(2, retried) * 100)` - Exponential with minimum 1s delay
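To sanity-check an expression before using it, you can evaluate the same math locally. A quick sketch in Python: the `retry_delay_ms` helper is hypothetical and only mirrors the `pow(2, retried) * 1000` expression above; it is not the QStash evaluator itself.

```python
import math

def retry_delay_ms(retried: int) -> float:
    # Mirrors the retry-delay expression "pow(2, retried) * 1000",
    # where `retried` is the retry attempt count starting from 0.
    return math.pow(2, retried) * 1000

# First four retry attempts: 1s, 2s, 4s, 8s
delays = [retry_delay_ms(r) for r in range(4)]
print(delays)  # [1000.0, 2000.0, 4000.0, 8000.0]
```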
### Set callback url
QStash can deliver the endpoint's response to the specified callback URL.
If delivery to the endpoint fails, QStash calls the failure callback URL instead.
```shell theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer XXX' \
-H "Content-type: application/json" \
-H "Upstash-Callback: https://example.com/callback" \
-H "Upstash-Failure-Callback: https://example.com/failure" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://example.com'
```
```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });

await client.publishJSON({
  url: "https://example.com",
  body: {
    hello: "world",
  },
  callback: "https://example.com/callback",
  failureCallback: "https://example.com/failure",
});
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.message.publish_json(
    url="https://example.com",
    body={
        "hello": "world",
    },
    callback="https://example.com/callback",
    failure_callback="https://example.com/failure",
)

# Async version is also available
```
### Get message logs
Retrieve logs for all messages that have been published (filtering is also available).
```shell theme={"system"}
curl https://qstash.upstash.io/v2/logs \
-H "Authorization: Bearer XXX"
```
```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });
const logs = await client.logs();
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.event.list()

# Async version is also available
```
### List all schedules
```shell theme={"system"}
curl https://qstash.upstash.io/v2/schedules \
-H "Authorization: Bearer XXX"
```
```typescript theme={"system"}
const client = new Client({ token: "<QSTASH_TOKEN>" });
const scheds = await client.schedules.list();
```

```python theme={"system"}
from qstash import QStash

client = QStash("<QSTASH_TOKEN>")
client.schedule.list()

# Async version is also available
```
# Changelog
Source: https://upstash.com/docs/qstash/overall/changelog
We have moved the roadmap and the changelog to [GitHub Discussions](https://github.com/orgs/upstash/discussions) as of October 2025. There you can follow `In Progress` features and see that your `Feature Requests` are recorded. You can vote for them and comment with your specific use cases to shape each feature to your needs.
* **TypeScript SDK (`qstash-js`):**
* `Label` feature is added. This enables users to label their publishes so that:
* Logs can be filtered by the user-given label.
* The DLQ can be filtered by the user-given label.
* **Console:**
* `Flat view` on the `Logs` tab is removed, to simplify the `Logs` tab.
All the information is already available in the default (grouped) view. Let us know if anything is missing
via Discord/Support so that we can fill in the gaps.
* **Console:**
* Added ability to hide/show columns on the Schedules tab.
* Local mode is added to enable using the console with your local development environment. See [docs](/qstash/howto/local-development) for details.
* **TypeScript SDK (`qstash-js`):**
* Added `retryDelay` option to dynamically program the retry delay of a failed message.
The new parameter is available in publish/batch/enqueue/schedules. See [here](/qstash/features/retry#custom-retry-delay).
* Full changelog, including all fixes, is available [here](https://github.com/upstash/qstash-js/compare/v2.8.1...v2.8.2).
* No new features for QStash this month. We are mostly focused on stability and performance.
* **TypeScript SDK (`qstash-js`):**
* Added `flow control period` and deprecated `ratePerSecond`. See [here](https://github.com/upstash/qstash-js/pull/237).
* Added `IN_PROGRESS` state filter. See [here](https://github.com/upstash/qstash-js/pull/236).
* Full changelog, including all fixes, is available [here](https://github.com/upstash/qstash-js/compare/v2.7.23...v2.8.1).
* **Python SDK (`qstash-py`):**
* Added `IN_PROGRESS` state filter. See [here](https://github.com/upstash/qstash-js/pull/236).
* Added various missing features: Callback Headers, Schedule with Queue, Overwrite Schedule ID, Flow Control Period. See [here](https://github.com/upstash/qstash-py/pull/41).
* Full changelog, including all fixes, is available [here](https://github.com/upstash/qstash-py/compare/v2.0.5...v3.0.0).
* **Console:**
* Improved logs tab behavior to prevent collapsing or unnecessary refreshes, increasing usability.
* **QStash Server:**
* Added support for filtering messages by `FlowControlKey` (Console and SDK support in progress).
* Applied performance improvements for bulk cancel operations.
* Applied performance improvements for bulk publish operations.
* Fixed an issue where scheduled publishes with queues would reset queue parallelism to 1.
* Added support for updating existing queue parallelisms even when the max queue limit is reached.
* Applied several additional performance optimizations.
* **QStash Server:**
* Added support for `flow-control period`, allowing users to define a period for a given rate—up to 1 week.\
Previously, the period was fixed at 1 second.\
For example, `rate: 3 period: 1d` means publishes will be throttled to 3 per day.
* Applied several performance optimizations.
* **Console:**
* Added `IN_PROGRESS` as a filter option when grouping by message ID, making it easier to query in-flight messages.\
See [here](/qstash/howto/debug-logs#lifecycle-of-a-message) for an explanation of message states.
* **TypeScript SDK (`qstash-js`):**
* Renamed `events` to `logs` for clarity when referring to QStash features. `client.events()` is now deprecated, and `client.logs()` has been introduced. See [details here](https://github.com/upstash/qstash-js/pull/225).
* For all fixes, see the full changelog [here](https://github.com/upstash/qstash-js/compare/v2.7.22...v2.7.23).
* **QStash Server:**
* Fixed an issue where messages with delayed callbacks were silently failing. Now, such messages are explicitly rejected during insertion.
* **Python SDK (`qstash-py`):**
* Flow Control Parallelism and Rate. See [here](https://github.com/upstash/qstash-py/pull/36)
* Addressed a few minor bugs. See the full changelog [here](https://github.com/upstash/qstash-py/compare/v2.0.3...v2.0.5)
* **QStash Server:**
* Introduced RateLimit and Parallelism controls to manage the rate and concurrency of message processing. Learn more [here](/qstash/features/flowcontrol).
* Improved connection timeout detection mechanism to enhance scalability.
* Added several new features to better support webhook use cases:
* Support for saving headers in a URL group. See [here](/qstash/howto/webhook#2-url-group).
* Ability to pass configuration parameters via query strings instead of headers. See [here](/qstash/howto/webhook#1-publish).
* Introduced a new `Upstash-Header-Forward` header to forward all headers from the incoming request. See [here](/qstash/howto/webhook#1-publish).
* **Python SDK (`qstash-py`):**
* Addressed a few minor bugs. See the full changelog [here](https://github.com/upstash/qstash-py/compare/v2.0.2...v2.0.3).
* **Local Development Server:**
* The local development server is now publicly available. This server allows you to test your Qstash setup locally. Learn more about the local development server [here](/qstash/howto/local-development).
* **Console:**
* Separated the Workflow and QStash consoles for an improved user experience.
* Separated their DLQ messages as well.
* **QStash Server:**
* The core team focused on RateLimit and Parallelism features. These features are ready on the server and will be announced next month after the documentation and SDKs are completed.
* **TypeScript SDK (`qstash-js`):**
* Added global headers to the client, which are automatically included in every publish request.
* Resolved issues related to the Anthropics and Resend integrations.
* Full changelog, including all fixes, is available [here](https://github.com/upstash/qstash-js/compare/v2.7.17...v2.7.20).
* **Python SDK (`qstash-py`):**
* Introduced support for custom `schedule_id` values.
* Enabled passing headers to callbacks using the `Upstash-Callback-Forward-...` prefix.
* Full changelog, including all fixes, is available [here](https://github.com/upstash/qstash-py/compare/v2.0.0...v2.0.1).
* **Qstash Server:**
* Finalized the local development server, now almost ready for public release.
* Improved error reporting by including the field name in cases of invalid input.
* Increased the maximum response body size for batch use cases to 100 MB per REST call.
* Extended event retention to up to 14 days, instead of limiting to the most recent 10,000 events. Learn more on the [Pricing page](https://upstash.com/pricing/qstash).
* **TypeScript SDK (qstash-js):**
* Added support for the Anthropics provider and refactored the `api` field of `publishJSON`. See the documentation [here](/qstash/integrations/anthropic).
* Full changelog, including fixes, is available [here](https://github.com/upstash/qstash-js/compare/v2.7.14...v2.7.17).
* **Qstash Server:**
* Fixed a bug in schedule reporting. The Upstash-Caller-IP header now correctly reports the user’s IP address instead of an internal IP for schedules.
* Validated the scheduleId parameter. The scheduleId must now be alphanumeric or include hyphens, underscores, or periods.
* Added filtering support to bulk message cancellation. Users can now delete messages matching specific filters. See Rest API [here](/qstash/api/messages/bulk-cancel).
* Resolved a bug that caused the DLQ Console to become unusable when data was too large.
* Fixed an issue with queues that caused them to stop during temporary network communication problems with the storage layer.
* **TypeScript SDK (qstash-js):**
* Fixed a bug on qstash-js where we skipped using the next signing key when the current signing key fails to verify the `upstash-signature`. Released with qstash-js v2.7.14.
* Added resend API. See [here](/qstash/integrations/resend). Released with qstash-js v2.7.14.
* Added `schedule to queues` feature to the qstash-js. See [here](/qstash/features/schedules#scheduling-to-a-queue). Released with qstash-js v2.7.14.
* **Console:**
* Optimized the console by trimming event bodies, reducing resource usage and enabling efficient querying of events with large payloads.
* **Qstash Server:**
* Began development on a new architecture to deliver faster event processing on the server.
* Added more fields to events in the [REST API](/qstash/api/events/list), including `Timeout`, `Method`, `Callback`, `CallbackHeaders`, `FailureCallback`, `FailureCallbackHeaders`, and `MaxRetries`.
* Enhanced retry backoff logic by supporting additional headers for retry timing. Along with `Retry-After`, Qstash now recognizes `X-RateLimit-Reset`, `X-RateLimit-Reset-Requests`, and `X-RateLimit-Reset-Tokens` as backoff time indicators. See [here](/qstash/features/retry#retry-after-headers) for more details.
* Improved performance, resulting in reduced latency for average publish times.
* Set the `nbf` (not before) claim on Signing Keys to 0. This claim specifies the time before which the JWT must not be processed. Previously, this was incorrectly used, causing validation issues when there were minor clock discrepancies between systems.
* Fixed queue name validation. Queue names must now be alphanumeric or include hyphens, underscores, or periods, consistent with other API resources.
* Resolved bugs related to [overwriting a schedule](/qstash/features/schedules#overwriting-an-existing-schedule).
* Released [Upstash Workflow](/qstash/workflow).
* Fixed a bug where paused schedules were mistakenly resumed after a process restart (typically occurring during new version releases).
* Big update on the UI: all the REST functionality is now exposed in the Console.
* Added an order query parameter to the [/v2/events](/qstash/api/events/list) and [/v2/dlq](/qstash/api/dlq/listMessages) endpoints.
* Added the [ability to configure](/qstash/features/callbacks#configuring-callbacks) callbacks (and failure callbacks).
* A critical fix for schedule pause and resume Rest APIs where the endpoints were not working at all before the fix.
* Pause and resume for scheduled messages
* Pause and resume for queues
* [Bulk cancel](/qstash/api/messages/bulk-cancel) messages
* Body and headers on [events](/qstash/api/events/list)
* Fixed inaccurate queue lag
* [Retry-After](/qstash/features/retry#retry-after-header) support for rate-limited endpoints
* [Upstash-Timeout](/qstash/api/publish) header
* [Queues and parallelism](/qstash/features/queues)
* [Event filtering](/qstash/api/events/list)
* [Batch publish messages](/qstash/api/messages/batch)
* [Bulk delete](/qstash/api/dlq/deleteMessages) for DLQ
* Added [failure callback support](/qstash/api/schedules/create) to scheduled messages
* Added Upstash-Caller-IP header to outgoing messages. See [here](/qstash/howto/receiving) for all headers
* Added Schedule ID to [events](/qstash/api/events/list) and [messages](/qstash/api/messages/get)
* Put last response in DLQ
* DLQ [get message](/qstash/api/dlq/getMessage)
* Pass schedule ID to the header when calling the user's endpoint
* Added more information to [callbacks](/qstash/features/callbacks)
* Added [Upstash-Failure-Callback](/qstash/features/callbacks#what-is-a-failure-callback)
# Compare
Source: https://upstash.com/docs/qstash/overall/compare
In this section, we will compare QStash with alternative solutions.
### BullMQ
BullMQ is a message queue for Node.js based on Redis. BullMQ is an open-source
project; you can run it yourself.
* Using BullMQ in serverless environments is problematic due to the stateless
nature of serverless. QStash is designed for serverless environments.
* With BullMQ, you need to run a stateful application to consume messages.
QStash calls your API endpoints, so your application does not need to consume
messages continuously.
* You need to run and maintain BullMQ and Redis yourself. QStash is completely
serverless; you maintain nothing and pay only for what you use.
### Zeplo
Zeplo is a message queue targeting serverless. Just like QStash, it allows users
to queue and schedule HTTP requests.
While Zeplo targets serverless, its paid plans have a fixed monthly price of
\$39/month. QStash's price scales to zero; you do not pay if you are not
using it.
With Zeplo, you can send messages to a single endpoint. With QStash, in addition
to a single endpoint, you can submit messages to a URL Group, which groups one or
more endpoints into a single namespace. Zeplo has no URL Group functionality.
### Quirrel
Quirrel is a job queueing service for serverless with functionality similar to
QStash.
Quirrel was acquired by Netlify, and some of its functionality is available as
Netlify scheduled functions. QStash is platform independent; you can use it
anywhere.
# Prod Pack & Enterprise
Source: https://upstash.com/docs/qstash/overall/enterprise
Upstash has Prod Pack and Enterprise plans for customers with critical production workloads. Prod Pack and Enterprise plans include additional monitoring and security features in addition to higher capacity limits and more powerful resources.
Prod Pack add-on is available for both pay-as-you-go and fixed-price plans. Enterprise plans are custom plans with additional features and higher limits.
All features of Prod Pack and Enterprise plan for Upstash QStash are detailed below.
## How to Upgrade
You can activate Prod Pack in the QStash settings page in the [Upstash Console](https://upstash.com/dashboard/qstash). For the Enterprise plan, please create a request through the Upstash Console or contact [support@upstash.com](mailto:support@upstash.com).
## Prod Pack Features
The following QStash features are enabled with Prod Pack.
### Uptime SLA
All Prod Pack accounts come with an SLA guaranteeing 99.99% uptime. For mission-critical messaging where uptime is crucial, we recommend Prod Pack plans. Learn more about [Uptime SLA](/common/help/sla).
### SOC-2 Type 2 Compliance & Report
Upstash QStash is SOC-2 Type 2 compliant with Prod Pack. Once you enable Prod Pack, you can request access to the report by going to [Upstash Trust Center](https://trust.upstash.com/) or contacting [support@upstash.com](mailto:support@upstash.com).
### Encryption at Rest
Encrypts the storage where your QStash message data is persisted and stored.
### Prometheus Metrics
Prometheus is an open-source monitoring system widely used for monitoring and alerting in cloud-native and containerized environments.
Upstash Prod Pack and Enterprise plans offer Prometheus metrics collection, enabling you to monitor your QStash messages with Prometheus in addition to console metrics. Learn more about [Prometheus integration](/qstash/integrations/prometheus).
### Datadog Integration
Upstash Prod Pack and Enterprise plans include integration with Datadog, allowing you to monitor your QStash messages with Datadog in addition to console metrics. Learn more about [Datadog integration](/qstash/integrations/datadog).
## Enterprise Features
All Prod Pack features are included in the Enterprise plan. Additionally, Enterprise plans include:
### 100M+ Messages Daily
Enterprise plans support 100 million or more messages per day, suitable for high-volume production workloads.
### Unlimited Bandwidth
Enterprise plans include unlimited bandwidth, ensuring no data transfer limits for your messaging needs.
### SAML SSO
Single Sign-On (SSO) allows you to use your existing identity provider to authenticate users for your Upstash account. This feature is available upon request for Enterprise customers.
### Professional Support with SLA
Enterprise plans include access to our professional support with response time SLAs and priority access to our support team. Check out the [support page](/common/help/prosupport) for more details.
### Dedicated Resources for Isolation
Enterprise customers receive dedicated resources to ensure isolation and consistent performance for their messaging workloads.
# Getting Started
Source: https://upstash.com/docs/qstash/overall/getstarted
QStash is a **serverless messaging and scheduling solution**. It fits easily into your existing workflow and allows you to build reliable systems without managing infrastructure.
Instead of calling an endpoint directly, QStash acts as a middleman between you and an API to guarantee delivery, perform automatic retries on failure, and more.
We have a new SDK called [Upstash Workflow](/workflow/getstarted).
**Upstash Workflow SDK** is **QStash**, simplified for your complex applications:
* Skip the details of preparing complex, interdependent endpoints.
* Focus on the essential parts.
* Enjoy automatic retries and delivery guarantees.
* Avoid platform-specific timeouts.
Check out [Upstash Workflow Getting Started](/workflow/getstarted) for more.
## Quick Start
Check out these Quick Start guides to get started with QStash in your application.
Build a Next.js application that uses QStash to start a long-running job on your platform
Build a Python application that uses QStash to schedule a daily job that cleans up a database
Or continue reading to learn how to send your first message!
## Send your first message
**Prerequisite**
You need an Upstash account before publishing messages. Create one
[here](https://console.upstash.com).
### Public API
Make sure you have a publicly available HTTP API that you want to send your
messages to. If you don't, you can use something like
[requestcatcher.com](https://requestcatcher.com/), [webhook.site](https://webhook.site/) or
[webhook-test.com](https://webhook-test.com/) to try it out.
For example, you can use this URL to test your messages: [https://firstqstashmessage.requestcatcher.com](https://firstqstashmessage.requestcatcher.com)
### Get your token
Go to the [Upstash Console](https://console.upstash.com/qstash) and copy the
`QSTASH_TOKEN`.
### Publish a message
A message can take any shape or form: JSON, XML, binary, or anything else that
can be transmitted in the HTTP request body. We do not impose any restrictions
other than a size limit of 1 MB (which can be customized at your request).
In addition to the request body itself, you can also send HTTP headers. Learn
more about this in the [message publishing section](/qstash/howto/publishing).
```bash cURL theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer ' \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://'
```
```bash cURL RequestCatcher theme={"system"}
curl -XPOST \
-H 'Authorization: Bearer ' \
-H "Content-type: application/json" \
-d '{ "hello": "world" }' \
'https://qstash.upstash.io/v2/publish/https://firstqstashmessage.requestcatcher.com/test'
```
Don't worry, we have SDKs for different languages so you don't
have to make these requests manually.
### Check Response
You should receive a response with a unique message ID.
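The response body is a small JSON object containing that ID. An illustrative shape (the actual `messageId` value will differ):

```json theme={"system"}
{
  "messageId": "msg_..."
}
```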
### Check Message Status
Head over to [Upstash Console](https://console.upstash.com/qstash) and go to the
`Logs` tab where you can see your message activities.
Learn more about different states [here](/qstash/howto/debug-logs).
## Features and Use Cases
Run long-running tasks in the background, without blocking your application
Schedule messages to be delivered at a time in the future
Publish messages to multiple endpoints, in parallel, using URL Groups
Enqueue messages to be delivered one by one, in the order they were enqueued.
Set custom per-second rate and parallelism limits to avoid overwhelming your endpoint.
Get a response delivered to your API when a message is delivered
Use a Dead Letter Queue to have full control over failed messages
Prevent duplicate messages from being delivered
Publish, enqueue, or batch chat completion requests using large language models with QStash features.
# llms.txt
Source: https://upstash.com/docs/qstash/overall/llms-txt
# Pricing & Limits
Source: https://upstash.com/docs/qstash/overall/pricing
Please check our [pricing page](https://upstash.com/pricing/qstash) for the most up-to-date information on pricing and limits.
# Roadmap
Source: https://upstash.com/docs/qstash/overall/roadmap
We moved the roadmap and the changelog to [Github Discussions](https://github.com/orgs/upstash/discussions) in October 2025. You can now follow `In Progress` features, see that your `Feature Requests` are recorded, vote for them, and comment with your specific use cases to shape each feature to your needs.
# Use Cases
Source: https://upstash.com/docs/qstash/overall/usecases
This section is still a work in progress.
We will be adding detailed tutorials for each use case soon.
Tell us on [Discord](https://discord.gg/w9SenAtbme) or
[X](https://x.com/upstash) what you would like to see here.
### Triggering Nextjs Functions on a schedule
Create a schedule in QStash that runs every hour and calls a Next.js serverless
function hosted on Vercel.
### Reset Billing Cycle in your Database
Once a month, reset database entries to start a new billing cycle.
### Fanning out alerts to Slack, email, Opsgenie, etc.
Create a QStash URL Group that receives alerts from a single source and delivers them
to multiple destinations.
### Send delayed message when a new user signs up
Publish delayed messages whenever a new user signs up in your app. After a
certain delay (e.g. 10 minutes), QStash will send a request to your API,
allowing you to email the user a welcome message.
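As a sketch, such a delayed publish can be expressed as a plain HTTP request. The destination URL and payload below are hypothetical placeholders; `Upstash-Delay` is the request header QStash reads for the delay:

```ts theme={"system"}
// Build the publish request for a delayed welcome email.
// The destination URL and payload are placeholders for illustration.
const destination = "https://example.com/api/welcome-email";
const publishUrl = `https://qstash.upstash.io/v2/publish/${destination}`;

const init = {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.QSTASH_TOKEN ?? "<QSTASH_TOKEN>"}`,
    "Content-Type": "application/json",
    // QStash holds the message and delivers it 10 minutes after publishing
    "Upstash-Delay": "10m",
  },
  body: JSON.stringify({ userId: "new-user-id" }),
};

// `await fetch(publishUrl, init)` would enqueue the message;
// QStash then calls your API once the delay has elapsed.
```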
# AWS Lambda (Node)
Source: https://upstash.com/docs/qstash/quickstarts/aws-lambda/nodejs
## Setting up a Lambda
The [AWS CDK](https://aws.amazon.com/cdk/) is the most convenient way to create a new project on AWS Lambda. For example, it lets you define integrations such as API Gateway, which makes our Lambda publicly available as an API, directly in your code.
```bash Terminal theme={"system"}
mkdir my-app
cd my-app
cdk init app -l typescript
npm i esbuild @upstash/qstash
mkdir lambda
touch lambda/index.ts
```
## Webhook verification
### Using the SDK (recommended)
Edit `lambda/index.ts`, the file containing our core lambda logic:
```ts lambda/index.ts theme={"system"}
import { Receiver } from "@upstash/qstash"
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda"
const receiver = new Receiver({
currentSigningKey: process.env.QSTASH_CURRENT_SIGNING_KEY ?? "",
nextSigningKey: process.env.QSTASH_NEXT_SIGNING_KEY ?? "",
})
export const handler = async (
event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
const signature = event.headers["upstash-signature"]
const lambdaFunctionUrl = `https://${event.requestContext.domainName}`
if (!signature) {
return {
statusCode: 401,
body: JSON.stringify({ message: "Missing signature" }),
}
}
try {
await receiver.verify({
signature: signature,
body: event.body ?? "",
url: lambdaFunctionUrl,
})
} catch (err) {
return {
statusCode: 401,
body: JSON.stringify({ message: "Invalid signature" }),
}
}
// Request is valid, perform business logic
return {
statusCode: 200,
body: JSON.stringify({ message: "Request processed successfully" }),
}
}
```
We'll set the `QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY` environment variables together when deploying our Lambda.
### Manual Verification
In this section, we'll manually verify our incoming QStash requests without additional packages. Also see our [manual verification example](https://github.com/upstash/qstash-examples/tree/main/aws-lambda).
1. Implement the handler function
```ts lambda/index.ts theme={"system"}
import type { APIGatewayEvent, APIGatewayProxyResult } from "aws-lambda"
import { createHash, createHmac } from "node:crypto"
export const handler = async (
event: APIGatewayEvent,
): Promise<APIGatewayProxyResult> => {
const signature = event.headers["upstash-signature"] ?? ""
const currentSigningKey = process.env.QSTASH_CURRENT_SIGNING_KEY ?? ""
const nextSigningKey = process.env.QSTASH_NEXT_SIGNING_KEY ?? ""
const url = `https://${event.requestContext.domainName}`
try {
// Try to verify the signature with the current signing key and if that fails, try the next signing key
// This allows you to roll your signing keys once without downtime
await verify(signature, currentSigningKey, event.body, url).catch((err) => {
console.error(
`Failed to verify signature with current signing key: ${err}`
)
return verify(signature, nextSigningKey, event.body, url)
})
} catch (err) {
const message = err instanceof Error ? err.toString() : err
return {
statusCode: 400,
body: JSON.stringify({ error: message }),
}
}
// Add your business logic here
return {
statusCode: 200,
body: JSON.stringify({ message: "Request processed successfully" }),
}
}
```
2. Implement the `verify` function:
```ts lambda/index.ts theme={"system"}
/**
* @param jwt - The content of the `upstash-signature` header (JWT)
* @param signingKey - The signing key to use to verify the signature (Get it from Upstash Console)
* @param body - The raw body of the request
* @param url - The public URL of the lambda function
*/
async function verify(
jwt: string,
signingKey: string,
body: string | null,
url: string
): Promise<void> {
const split = jwt.split(".")
if (split.length != 3) {
throw new Error("Invalid JWT")
}
const [header, payload, signature] = split
if (
signature !=
createHmac("sha256", signingKey)
.update(`${header}.${payload}`)
.digest("base64url")
) {
throw new Error("Invalid JWT signature")
}
// JWT is verified, start looking at payload claims
const p: {
sub: string
iss: string
exp: number
nbf: number
body: string
} = JSON.parse(Buffer.from(payload, "base64url").toString())
if (p.iss !== "Upstash") {
throw new Error(`invalid issuer: ${p.iss}, expected "Upstash"`)
}
if (p.sub !== url) {
throw new Error(`invalid subject: ${p.sub}, expected "${url}"`)
}
const now = Math.floor(Date.now() / 1000)
if (now > p.exp) {
throw new Error("token has expired")
}
if (now < p.nbf) {
throw new Error("token is not yet valid")
}
if (body != null) {
if (
p.body.replace(/=+$/, "") !=
createHash("sha256").update(body).digest("base64url")
) {
throw new Error("body hash does not match")
}
}
}
```
You can find the complete example
[here](https://github.com/upstash/qstash-examples/blob/main/aws-lambda/typescript-example/index.ts).
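To sanity-check this verification logic locally without involving QStash, you can forge a token with a made-up key and run it through the same checks. Everything below (the key, URL, and body) is a test fixture; only the claim names match what QStash actually sends:

```ts theme={"system"}
import { createHash, createHmac } from "node:crypto"

// Hypothetical test inputs; a real signing key comes from the Upstash Console
const signingKey = "sig_test_key_123"
const url = "https://example.com/webhook"
const body = JSON.stringify({ hello: "world" })

const b64url = (s: string) => Buffer.from(s).toString("base64url")
const now = Math.floor(Date.now() / 1000)

// Forge a JWT with the same claim structure QStash uses
const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }))
const payload = b64url(
  JSON.stringify({
    iss: "Upstash",
    sub: url,
    exp: now + 300,
    nbf: now - 300,
    body: createHash("sha256").update(body).digest("base64url"),
  })
)
const signature = createHmac("sha256", signingKey)
  .update(`${header}.${payload}`)
  .digest("base64url")
const jwt = `${header}.${payload}.${signature}`

// Re-verify: recompute the signature and the body hash, then check the claims
const [h, p, s] = jwt.split(".")
const expectedSig = createHmac("sha256", signingKey)
  .update(`${h}.${p}`)
  .digest("base64url")
const claims = JSON.parse(Buffer.from(p, "base64url").toString())
const bodyHash = createHash("sha256").update(body).digest("base64url")

const valid =
  s === expectedSig &&
  claims.iss === "Upstash" &&
  claims.sub === url &&
  claims.body === bodyHash
```

A forged token signed with the same key passes every check; changing the body or the key flips `valid` to `false`.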
## Deploying a Lambda
### Using the AWS CDK (recommended)
Because we used the AWS CDK to initialize our project, deployment is straightforward. Edit the `lib/.ts` file the CDK created when bootstrapping the project. For example, if our lambda webhook does video processing, it could look like this:
```ts lib/.ts theme={"system"}
import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import { Construct } from "constructs";
import path from "path";
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
export class VideoProcessingStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props)
// Create the Lambda function
const videoProcessingLambda = new NodejsFunction(this, 'VideoProcessingLambda', {
runtime: lambda.Runtime.NODEJS_20_X,
handler: 'handler',
entry: path.join(__dirname, '../lambda/index.ts'),
});
// Create the API Gateway
const api = new apigateway.RestApi(this, 'VideoProcessingApi', {
restApiName: 'Video Processing Service',
description: 'This service handles video processing.',
defaultMethodOptions: {
authorizationType: apigateway.AuthorizationType.NONE,
},
});
api.root.addMethod('POST', new apigateway.LambdaIntegration(videoProcessingLambda));
}
}
```
Every time we run the following deployment command in our terminal, our changes are deployed to a publicly available API, protected by the QStash webhook verification from before.
```bash Terminal theme={"system"}
cdk deploy
```
You may be prompted to confirm the necessary AWS permissions during this process, for example allowing APIGateway to invoke your lambda function.
Once your code has been deployed to Lambda, you'll receive a live URL to your endpoint via the CLI and can see the new APIGateway connection in your AWS dashboard:
The URL you use to invoke your function typically follows this format, especially if you follow the same stack configuration as shown above:
`https://.execute-api..amazonaws.com/prod/`
To provide our `QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY` environment variables, navigate to your QStash dashboard:
and make these two variables available to your Lambda in your function configuration:
Tada, we just deployed a live Lambda with the AWS CDK! 🎉
### Manual Deployment
1. Create a new Lambda function by going to the [AWS dashboard](https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1#/create/function) for your desired lambda region. Give your new function a name and select `Node.js 20.x` as runtime, then create the function.
2. To make this Lambda available under a public URL, navigate to the `Configuration` tab and click `Function URL`:
3. In the following dialog, you'll be asked to select one of two authentication types. Select `NONE`, because we are handling authentication ourselves. Then, click `Save`.
You'll see the function URL on the right side of your function overview:
4. Get your current and next signing key from the
[Upstash Console](https://console.upstash.com/qstash).
5. Still under the `Configuration` tab, set the `QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY`
environment variables:
6. Add the following script to your `package.json` file to build and zip your code:
```json package.json theme={"system"}
{
"scripts": {
"build": "rm -rf ./dist; esbuild index.ts --bundle --minify --sourcemap --platform=node --target=es2020 --outfile=dist/index.js && cd dist && zip -r index.zip index.js*"
}
}
```
7. Click the `Upload from` button for your Lambda and
deploy the code to AWS. Select `./dist/index.zip` as the upload file.
Tada, you've manually deployed a zip file to AWS Lambda! 🎉
## Testing the Integration
To make sure everything works as expected, navigate to your QStash request builder and send a request to your freshly deployed Lambda function:
Alternatively, you can also send a request via cURL:
```bash Terminal theme={"system"}
curl --request POST "https://qstash.upstash.io/v2/publish/" \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d "{ \"hello\": \"world\"}"
```
# AWS Lambda (Python)
Source: https://upstash.com/docs/qstash/quickstarts/aws-lambda/python
[Source Code](https://github.com/upstash/qstash-examples/tree/main/aws-lambda/python-example)
This is a step by step guide on how to receive webhooks from QStash in your
Lambda function on AWS.
### 1. Create a new project
Let's create a new folder called `aws-lambda` and initialize a new project by
creating `lambda_function.py`. This example uses a Makefile, but the scripts can
also be written for `Pipenv`.
```bash theme={"system"}
mkdir aws-lambda
cd aws-lambda
touch lambda_function.py
```
### 2. Dependencies
We are using `PyJWT` for decoding the JWT token in our code. We will install the
package in the zipping stage.
### 3. Creating the handler function
In this example we will show how to receive a webhook from QStash and verify the
signature.
First, let's import everything we need:
```python theme={"system"}
import json
import os
import hmac
import hashlib
import base64
import time
import jwt
```
Now, we create the handler function. In the handler we prepare all the variables
needed for verification: the signature, the signing keys, and the URL of the
lambda function. Then we try to verify the request using the current signing
key, and if that fails, we try the next one. If the signature is verified, we
can start processing the request.
```python theme={"system"}
def lambda_handler(event, context):
# parse the inputs
current_signing_key = os.environ['QSTASH_CURRENT_SIGNING_KEY']
next_signing_key = os.environ['QSTASH_NEXT_SIGNING_KEY']
headers = event['headers']
signature = headers['upstash-signature']
url = "https://{}{}".format(event["requestContext"]["domainName"], event["rawPath"])
body = None
if 'body' in event:
body = event['body']
# check verification now
try:
verify(signature, current_signing_key, body, url)
except Exception as e:
print("Failed to verify signature with current signing key:", e)
try:
verify(signature, next_signing_key, body, url)
except Exception as e2:
return {
"statusCode": 400,
"body": json.dumps({
"error": str(e2),
}),
}
# Your logic here...
return {
"statusCode": 200,
"body": json.dumps({
"message": "ok",
}),
}
```
The `verify` function will handle the actual verification of the signature. The
signature itself is actually a [JWT](https://jwt.io) and includes claims about
the request. See [here](/qstash/features/security#claims).
```python theme={"system"}
# @param jwt_token - The content of the `upstash-signature` header
# @param signing_key - The signing key to use to verify the signature (Get it from Upstash Console)
# @param body - The raw body of the request
# @param url - The public URL of the lambda function
def verify(jwt_token, signing_key, body, url):
split = jwt_token.split(".")
if len(split) != 3:
raise Exception("Invalid JWT.")
header, payload, signature = split
message = header + '.' + payload
generated_signature = base64.urlsafe_b64encode(hmac.new(bytes(signing_key, 'utf-8'), bytes(message, 'utf-8'), digestmod=hashlib.sha256).digest()).decode()
    if generated_signature != signature and signature + "=" != generated_signature:
raise Exception("Invalid JWT signature.")
decoded = jwt.decode(jwt_token, options={"verify_signature": False})
sub = decoded['sub']
iss = decoded['iss']
exp = decoded['exp']
nbf = decoded['nbf']
decoded_body = decoded['body']
if iss != "Upstash":
raise Exception("Invalid issuer: {}".format(iss))
if sub.rstrip("/") != url.rstrip("/"):
raise Exception("Invalid subject: {}".format(sub))
now = time.time()
if now > exp:
raise Exception("Token has expired.")
if now < nbf:
raise Exception("Token is not yet valid.")
    if body is not None:
while decoded_body[-1] == "=":
decoded_body = decoded_body[:-1]
m = hashlib.sha256()
m.update(bytes(body, 'utf-8'))
m = m.digest()
generated_hash = base64.urlsafe_b64encode(m).decode()
        if generated_hash != decoded_body and generated_hash != decoded_body + "=":
raise Exception("Body hash doesn't match.")
```
You can find the complete file
[here](https://github.com/upstash/qstash-examples/tree/main/aws-lambda/python-example/lambda_function.py).
That's it, now we can create the function on AWS and test it.
### 4. Create a Lambda function on AWS
Create a new Lambda function from scratch by going to the
[AWS console](https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1#/create/function).
(Make sure you select your desired region)
Give it a name and select `Python 3.8` as runtime, then create the function.
Afterwards we will add a public URL to this lambda by going to the
`Configuration` tab:
Select `Auth Type = NONE` because we are handling authentication ourselves.
After creating the url, you should see it on the right side of the overview of
your function:
### 5. Set Environment Variables
Get your current and next signing key from the
[Upstash Console](https://console.upstash.com/qstash)
On the same `Configuration` tab from earlier, we will now set the required
environment variables:
### 6. Deploy your Lambda function
We need to bundle our code and zip it to deploy it to AWS.
Add the following script to your `Makefile` file (or corresponding pipenv
script):
```make Makefile theme={"system"}
zip:
rm -rf dist
pip3 install --target ./dist pyjwt
cp lambda_function.py ./dist/lambda_function.py
cd dist && zip -r lambda.zip .
mv ./dist/lambda.zip ./
```
Calling `make zip` installs PyJWT and zips the code.
Afterwards we can click the `Upload from` button in the lower right corner and
deploy the code to AWS. Select `lambda.zip` as upload file.
### 7. Publish a message
Open a different terminal and publish a message to QStash. Note the destination
url is the URL from step 4.
```bash theme={"system"}
curl --request POST "https://qstash.upstash.io/v2/publish/https://urzdbfn4et56vzeasu3fpcynym0zerme.lambda-url.eu-west-1.on.aws" \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d "{ \"hello\": \"world\"}"
```
## Next Steps
That's it, you have successfully created a secure AWS Lambda function that
receives and verifies incoming webhooks from QStash.
Learn more about publishing a message to QStash [here](/qstash/howto/publishing).
# Cloudflare Workers
Source: https://upstash.com/docs/qstash/quickstarts/cloudflare-workers
This is a step by step guide on how to receive webhooks from QStash in your
Cloudflare Worker.
### Project Setup
We will use the **C3 (create-cloudflare-cli)** command-line tool to create our functions. You can open a new terminal window and run C3 using the prompt below.
```shell npm theme={"system"}
npm create cloudflare@latest
```
```shell yarn theme={"system"}
yarn create cloudflare@latest
```
This will install the `create-cloudflare` package and lead you through setup. C3 will also install Wrangler in projects by default, which helps us test and deploy the projects.
```text theme={"system"}
➜ npm create cloudflare@latest
Need to install the following packages:
create-cloudflare@2.52.3
Ok to proceed? (y) y
using create-cloudflare version 2.52.3
╭ Create an application with Cloudflare Step 1 of 3
│
├ In which directory do you want to create your application?
│ dir ./cloudflare_starter
│
├ What would you like to start with?
│ category Hello World example
│
├ Which template would you like to use?
│ type Worker only
│
├ Which language do you want to use?
│ lang TypeScript
│
├ Do you want to use git for version control?
│ yes git
│
╰ Application created
```
We will also install the **Upstash QStash library**.
```bash theme={"system"}
npm install @upstash/qstash
```
### Use QStash in your handler
First we import the library:
```ts src/index.ts theme={"system"}
import { Receiver } from "@upstash/qstash";
```
Then we adjust the `Env` interface to include the `QSTASH_CURRENT_SIGNING_KEY`
and `QSTASH_NEXT_SIGNING_KEY` environment variables.
```ts src/index.ts theme={"system"}
export interface Env {
QSTASH_CURRENT_SIGNING_KEY: string;
QSTASH_NEXT_SIGNING_KEY: string;
}
```
And then we validate the signature in the `handler` function.
First we create a new receiver and provide it with the signing keys.
```ts src/index.ts theme={"system"}
const receiver = new Receiver({
currentSigningKey: env.QSTASH_CURRENT_SIGNING_KEY,
nextSigningKey: env.QSTASH_NEXT_SIGNING_KEY,
});
```
Then we verify the signature.
```ts src/index.ts theme={"system"}
const body = await request.text();
const isValid = await receiver.verify({
signature: request.headers.get("Upstash-Signature")!,
body,
});
```
The entire file looks like this now:
```ts src/index.ts theme={"system"}
import { Receiver } from "@upstash/qstash";
export interface Env {
QSTASH_CURRENT_SIGNING_KEY: string;
QSTASH_NEXT_SIGNING_KEY: string;
}
export default {
async fetch(request, env, ctx): Promise<Response> {
const receiver = new Receiver({
currentSigningKey: env.QSTASH_CURRENT_SIGNING_KEY,
nextSigningKey: env.QSTASH_NEXT_SIGNING_KEY,
});
const body = await request.text();
const isValid = await receiver.verify({
signature: request.headers.get("Upstash-Signature")!,
body,
});
if (!isValid) {
return new Response("Invalid signature", { status: 401 });
}
// signature is valid
return new Response("Hello World!");
},
} satisfies ExportedHandler;
```
### Configure Credentials
There are two methods for setting up the credentials for QStash: one at the worker level, the other at the account level.
#### Using Cloudflare Secrets (Worker Level Secrets)
This is the common way of creating secrets for your worker, see [Worker Secrets](https://developers.cloudflare.com/workers/configuration/secrets/)
* Navigate to [Upstash Console](https://console.upstash.com) and get your QStash credentials.
* In [Cloudflare Dashboard](https://dash.cloudflare.com/), Go to **Compute (Workers)** > **Workers & Pages**.
* Select your worker and go to **Settings** > **Variables and Secrets**.
* Add your QStash credentials as secrets here:
#### Using Cloudflare Secrets Store (Account Level Secrets)
This method requires a few modifications in the worker code, see [Access to Secret on Env Object](https://developers.cloudflare.com/secrets-store/integrations/workers/#3-access-the-secret-on-the-env-object)
```ts src/index.ts theme={"system"}
import { Receiver } from "@upstash/qstash";
export interface Env {
QSTASH_CURRENT_SIGNING_KEY: SecretsStoreSecret;
QSTASH_NEXT_SIGNING_KEY: SecretsStoreSecret;
}
export default {
async fetch(request, env, ctx): Promise<Response> {
const c = new Receiver({
currentSigningKey: await env.QSTASH_CURRENT_SIGNING_KEY.get(),
nextSigningKey: await env.QSTASH_NEXT_SIGNING_KEY.get(),
});
// Rest of the code
},
};
```
After making these modifications, you can deploy the worker to Cloudflare with `npx wrangler deploy`, and
follow the steps below to define the secrets:
* Navigate to [Upstash Console](https://console.upstash.com) and get your QStash credentials.
* In [Cloudflare Dashboard](https://dash.cloudflare.com/), Go to **Secrets Store** and add QStash credentials as secrets.
* Under **Compute (Workers)** > **Workers & Pages**, find your worker and add these secrets as bindings.
### Deployment
New deployments may revert the configuration you did in the dashboard.
While worker-level secrets persist, the Secrets Store bindings will be gone!
Deploy your function to Cloudflare with `npx wrangler deploy`.
The endpoint of the function will be provided to you once the deployment is done.
### Publish a message
Open a different terminal and publish a message to QStash. Note that the destination
URL is the one printed in the previous deploy step.
```bash theme={"system"}
curl --request POST "https://qstash.upstash.io/v2/publish/https://..workers.dev" \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d "{ \"hello\": \"world\"}"
```
In the logs you should see something like this:
```bash theme={"system"}
$ npx wrangler tail
⛅️ wrangler 4.43.0
--------------------
Successfully created tail, expires at 2025-10-16T00:25:17Z
Connected to , waiting for logs...
POST https://..workers.dev/ - Ok @ 10/15/2025, 10:34:55 PM
```
## Next Steps
That's it, you have successfully created a secure Cloudflare Worker that
receives and verifies incoming webhooks from QStash.
Learn more about publishing a message to QStash [here](/qstash/howto/publishing).
You can find the source code [here](https://github.com/upstash/qstash-examples/tree/main/cloudflare-workers).
# Deno Deploy
Source: https://upstash.com/docs/qstash/quickstarts/deno-deploy
[Source Code](https://github.com/upstash/qstash-examples/tree/main/deno-deploy)
This is a step by step guide on how to receive webhooks from QStash in your Deno
Deploy project.
### 1. Create a new project
Go to [https://dash.deno.com/projects](https://dash.deno.com/projects) and
create a new playground project.
### 2. Edit the handler function
Then paste the following code into the browser editor:
```ts theme={"system"}
import { serve } from "https://deno.land/std@0.142.0/http/server.ts";
import { Receiver } from "https://deno.land/x/upstash_qstash@v0.1.4/mod.ts";
serve(async (req: Request) => {
const r = new Receiver({
currentSigningKey: Deno.env.get("QSTASH_CURRENT_SIGNING_KEY")!,
nextSigningKey: Deno.env.get("QSTASH_NEXT_SIGNING_KEY")!,
});
const isValid = await r
.verify({
signature: req.headers.get("Upstash-Signature")!,
body: await req.text(),
})
.catch((err: Error) => {
console.error(err);
return false;
});
if (!isValid) {
return new Response("Invalid signature", { status: 401 });
}
console.log("The signature was valid");
// do work
return new Response("OK", { status: 200 });
});
```
### 3. Add your signing keys
Click on the `settings` button at the top of the screen and then click
`+ Add Variable`
Get your current and next signing key from
[Upstash](https://console.upstash.com/qstash) and then set them in Deno Deploy.
### 4. Deploy
Simply click on `Save & Deploy` at the top of the screen.
### 5. Publish a message
Make note of the URL displayed in the top right. This is the public URL of your
project.
```bash theme={"system"}
curl --request POST "https://qstash.upstash.io/v2/publish/https://early-frog-33.deno.dev" \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d "{ \"hello\": \"world\"}"
```
In the logs you should see something like this:
```bash theme={"system"}
europe-west3
isolate start time: 2.21 ms
Listening on http://localhost:8000/
The signature was valid
```
## Next Steps
That's it, you have successfully created a secure Deno API that receives and
verifies incoming webhooks from QStash.
Learn more about publishing a message to QStash [here](/qstash/howto/publishing).
# Golang
Source: https://upstash.com/docs/qstash/quickstarts/fly-io/go
[Source Code](https://github.com/upstash/qstash-examples/tree/main/fly.io/go)
This is a step by step guide on how to receive webhooks from QStash in your
Golang application running on [fly.io](https://fly.io).
## 0. Prerequisites
* [flyctl](https://fly.io/docs/getting-started/installing-flyctl/) - The fly.io
CLI
## 1. Create a new project
Let's create a new folder called `flyio-go` and initialize a new project.
```bash theme={"system"}
mkdir flyio-go
cd flyio-go
go mod init flyio-go
```
## 2. Creating the main function
In this example we will show how to receive a webhook from QStash and verify the
signature using the popular [golang-jwt/jwt](https://github.com/golang-jwt/jwt)
library.
First, let's import everything we need:
```go theme={"system"}
package main
import (
"crypto/sha256"
"encoding/base64"
"fmt"
"github.com/golang-jwt/jwt/v4"
"io"
"net/http"
"os"
"time"
)
```
Next we create `main.go`. Ignore the `verify` function for now; we will add it
next. In the handler we prepare all the variables needed for verification: the
signature and the signing keys. Then we try to verify the request using the
current signing key, and if that fails, we try the next one. If the signature
is verified, we can start processing the request.
```go theme={"system"}
func main() {
port := os.Getenv("PORT")
if port == "" {
port = "8080"
}
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
currentSigningKey := os.Getenv("QSTASH_CURRENT_SIGNING_KEY")
nextSigningKey := os.Getenv("QSTASH_NEXT_SIGNING_KEY")
tokenString := r.Header.Get("Upstash-Signature")
body, err := io.ReadAll(r.Body)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
err = verify(body, tokenString, currentSigningKey)
if err != nil {
fmt.Printf("Unable to verify signature with current signing key: %v", err)
err = verify(body, tokenString, nextSigningKey)
}
if err != nil {
http.Error(w, err.Error(), http.StatusUnauthorized)
return
}
// handle your business logic here
w.WriteHeader(http.StatusOK)
})
fmt.Println("listening on", port)
err := http.ListenAndServe(":"+port, nil)
if err != nil {
panic(err)
}
}
```
The `verify` function will handle verification of the [JWT](https://jwt.io),
that includes claims about the request. See
[here](/qstash/features/security#claims).
```go theme={"system"}
func verify(body []byte, tokenString, signingKey string) error {
  token, err := jwt.Parse(
    tokenString,
    func(token *jwt.Token) (interface{}, error) {
      if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
        return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
      }
      return []byte(signingKey), nil
    })
  if err != nil {
    return err
  }

  claims, ok := token.Claims.(jwt.MapClaims)
  if !ok || !token.Valid {
    return fmt.Errorf("invalid token")
  }

  if !claims.VerifyIssuer("Upstash", true) {
    return fmt.Errorf("invalid issuer")
  }
  if !claims.VerifyExpiresAt(time.Now().Unix(), true) {
    return fmt.Errorf("token has expired")
  }
  if !claims.VerifyNotBefore(time.Now().Unix(), true) {
    return fmt.Errorf("token is not valid yet")
  }

  bodyHash := sha256.Sum256(body)
  if claims["body"] != base64.URLEncoding.EncodeToString(bodyHash[:]) {
    return fmt.Errorf("body hash does not match")
  }

  return nil
}
```
You can find the complete file
[here](https://github.com/upstash/qstash-examples/blob/main/fly.io/go/main.go).
That's it, now we can deploy our API and test it.
## 3. Create app on fly.io
[Login](https://fly.io/docs/getting-started/log-in-to-fly/) with `flyctl` and
then `flyctl launch` the new app. This will create the necessary `fly.toml` for
us. It will ask you a few questions; I chose the defaults for everything except
the last question, since we do not want to deploy just yet.
```bash theme={"system"}
$ flyctl launch
Creating app in /Users/andreasthomas/github/upstash/qstash-examples/fly.io/go
Scanning source code
Detected a Go app
Using the following build configuration:
Builder: paketobuildpacks/builder:base
Buildpacks: gcr.io/paketo-buildpacks/go
? App Name (leave blank to use an auto-generated name):
Automatically selected personal organization: Andreas Thomas
? Select region: fra (Frankfurt, Germany)
Created app winter-cherry-9545 in organization personal
Wrote config file fly.toml
? Would you like to setup a Postgresql database now? No
? Would you like to deploy now? No
Your app is ready. Deploy with `flyctl deploy`
```
## 4. Set Environment Variables
Get your current and next signing key from the
[Upstash Console](https://console.upstash.com/qstash)
Then set them using `flyctl secrets set ...`
```bash theme={"system"}
flyctl secrets set QSTASH_CURRENT_SIGNING_KEY=...
flyctl secrets set QSTASH_NEXT_SIGNING_KEY=...
```
## 5. Deploy the app
Fly.io made this step really simple. Just `flyctl deploy` and enjoy.
```bash theme={"system"}
flyctl deploy
```
## 6. Publish a message
Now you can publish a message to QStash. Note that the destination URL is
derived from your app name; if you are not sure what it is, you can look it up
in the [fly.io dashboard](https://fly.io/dashboard). In my case the app is
named "winter-cherry-9545" and the public URL is
"[https://winter-cherry-9545.fly.dev](https://winter-cherry-9545.fly.dev)".
```bash theme={"system"}
curl --request POST "https://qstash.upstash.io/v2/publish/https://winter-cherry-9545.fly.dev" \
-H "Authorization: Bearer " \
-H "Content-Type: application/json" \
-d "{ \"hello\": \"world\"}"
```
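If the request is accepted, QStash responds with the ID of the enqueued message, similar to:

```json theme={"system"}
{
  "messageId": "msg_..."
}
```

You can use this ID later to look the message up in the console logs.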
## Next Steps
That's it, you have successfully created a Go API hosted on fly.io that
receives and verifies incoming webhooks from QStash.
Learn more about publishing a message to QStash [here](/qstash/howto/publishing)
# Python on Vercel
Source: https://upstash.com/docs/qstash/quickstarts/python-vercel
## Introduction
This quickstart will guide you through setting up QStash to run a daily script
to clean up your database. This is useful for testing and development environments
where you want to reset the database every day.
## Prerequisites
* Create an Upstash account and get your [QStash token](https://console.upstash.com/qstash)
First, we'll create a new directory for our Python app. We'll call it `clean-db-cron`.
The database we'll be using is Redis, so we'll need to install the `upstash_redis` package.
```bash theme={"system"}
mkdir clean-db-cron
```
```bash theme={"system"}
cd clean-db-cron
```
```bash theme={"system"}
pip install upstash-redis
```
Let's write the Python code to clean up the database. We'll use the `upstash_redis`
package to connect to the database and delete all keys.
```python index.py theme={"system"}
from upstash_redis import Redis

redis = Redis(url="https://YOUR_REDIS_URL", token="YOUR_TOKEN")


def delete_all_entries():
    keys = redis.keys("*")  # Match all keys
    if keys:  # DEL requires at least one key
        redis.delete(*keys)


delete_all_entries()
```
Try running the code to see if it works. Your database keys should be deleted!
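One caveat: if the database holds many keys, deleting everything with a single `DEL` command produces one very large request. A small batching helper (a generic sketch, independent of the `upstash_redis` SDK) keeps each delete call bounded:

```python theme={"system"}
def chunked(items, size=1000):
    # Yield successive batches of at most `size` items, so each
    # redis.delete(*batch) call stays reasonably small.
    for i in range(0, len(items), size):
        yield items[i : i + size]


# Hypothetical usage with the client from above:
#   for batch in chunked(redis.keys("*")):
#       redis.delete(*batch)
```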
In order to use QStash, we need to make the Python code into a public endpoint. There
are many ways to do this such as using Flask, FastAPI, or Django. In this example, we'll
use the Python `http.server` module to create a simple HTTP server.
```python api/index.py theme={"system"}
from http.server import BaseHTTPRequestHandler

from upstash_redis import Redis

redis = Redis(url="https://YOUR_REDIS_URL", token="YOUR_TOKEN")


def delete_all_entries():
    keys = redis.keys("*")  # Match all keys
    if keys:  # DEL requires at least one key
        redis.delete(*keys)


class handler(BaseHTTPRequestHandler):
    def do_POST(self):
        delete_all_entries()
        self.send_response(200)
        self.end_headers()
```
For the purpose of this tutorial, I'll deploy the application to Vercel using the
[Python Runtime](https://vercel.com/docs/functions/runtimes/python), but feel free to
use any other hosting provider.
There are many ways to [deploy to Vercel](https://vercel.com/docs/deployments/overview), but
I'm going to use the Vercel CLI.
```bash theme={"system"}
npm install -g vercel
```
```bash theme={"system"}
vercel
```
Once deployed, you can find the public URL in the dashboard.
There are two ways we can go about configuring QStash: the QStash dashboard
or the QStash API. In this example, it makes more sense to use the dashboard since we
only need to set up a single cron job.
However, you can imagine a scenario where you have a large number of cronjobs and you'd
want to automate the process. In that case, you'd want to use the QStash Python SDK.
To create the schedule, go to the [QStash dashboard](https://console.upstash.com/qstash) and enter
the URL of the public endpoint you created. Then, set the type to schedule and change the
`Upstash-Cron` header to run daily at a time of your choosing.
```
URL: https://your-vercel-app.vercel.app/api
Type: Schedule
Every: every day at midnight (feel free to customize)
```
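For reference, the `Upstash-Cron` value is a standard five-field cron expression; "every day at midnight" is `0 0 * * *` (minute, hour, day of month, month, day of week):

```python theme={"system"}
cron = "0 0 * * *"  # every day at 00:00 (UTC)
minute, hour, day_of_month, month, day_of_week = cron.split()
# Fires when minute == 0 and hour == 0, on any day of any month.
```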
Once you start the schedule, QStash will invoke the endpoint at the specified time. You can
scroll down and verify the job has been created!
If you have a use case where you need to automate the creation of jobs, you can use the SDK instead.
```python theme={"system"}
from qstash import QStash

client = QStash("")
client.schedule.create(
    destination="https://YOUR_URL.vercel.app/api",
    cron="0 12 * * *",
)
```
Now, go ahead and try it out for yourself! Try using some of the other features of QStash, such as
[callbacks](/qstash/features/callbacks) and [URL Groups](/qstash/features/url-groups).
# Next.js
Source: https://upstash.com/docs/qstash/quickstarts/vercel-nextjs
QStash is a robust message queue and task-scheduling service that integrates perfectly with Next.js. This guide will show you how to use QStash in your Next.js projects, including a quickstart and a complete example.
## Quickstart
At its core, each QStash message contains two pieces of information:
* URL (which endpoint to call)
* Request body (e.g. IDs of items you want to process)
The following endpoint could be used to upload an image and then asynchronously queue a processing task to optimize the image in the background.
```tsx upload-image/route.ts theme={"system"}
import { Client } from "@upstash/qstash"
import { NextResponse } from "next/server"

const client = new Client({ token: process.env.QSTASH_TOKEN! })

export const POST = async (req: Request) => {
  // Image uploading logic

  // 👇 Once uploading is done, queue an image processing task
  const result = await client.publishJSON({
    url: "https://your-api-endpoint.com/process-image",
    body: { imageId: "123" },
  })

  return NextResponse.json({
    message: "Image queued for processing!",
    qstashMessageId: result.messageId,
  })
}
```
Note that the URL needs to be publicly available for QStash to call, either as a deployed project or by [developing with QStash locally](/qstash/howto/local-tunnel).
Because QStash calls our image processing task, we get automatic retries whenever the API throws an error. These retries make our function very reliable. We also let the user know immediately that their image has been successfully queued.
Now, let's **receive the QStash message** in our image processing endpoint:
```tsx process-image/route.ts theme={"system"}
import { verifySignatureAppRouter } from "@upstash/qstash/nextjs"

// 👇 Verify that this message comes from QStash
export const POST = verifySignatureAppRouter(async (req: Request) => {
  const body = await req.json()
  const { imageId } = body as { imageId: string }

  // Image processing logic, i.e. using sharp

  return new Response(`Image with id "${imageId}" processed successfully.`)
})
```
```bash .env theme={"system"}
# Copy all three from your QStash dashboard
QSTASH_TOKEN=
QSTASH_CURRENT_SIGNING_KEY=
QSTASH_NEXT_SIGNING_KEY=
```
Just like that, we set up a reliable and asynchronous image processing system in Next.js. The same logic works for email queues, reliable webhook processing, long-running report generations and many more.
## Example project
* Create an Upstash account and get your [QStash token](https://console.upstash.com/qstash)
* Node.js installed
```bash theme={"system"}
npx create-next-app@latest qstash-bg-job
```
```bash theme={"system"}
cd qstash-bg-job
```
```bash theme={"system"}
npm install @upstash/qstash
```
```bash theme={"system"}
npm run dev
```
After removing the default content in `src/app/page.tsx`, let's create a simple UI to trigger the background job
using a button.
```tsx src/app/page.tsx theme={"system"}
"use client"

export default function Home() {
  return (
    <main>
      <button>Start Background Job</button>
    </main>
  )
}
```
We can use QStash to start a background job by calling the `publishJSON` method.
In this example, we're using Next.js server actions, but you can also use route handlers.
Since we don't have our public API endpoint yet, we can use [Request Catcher](https://requestcatcher.com/) to test the background job.
This will eventually be replaced with our own API endpoint.
```ts src/app/actions.ts theme={"system"}
"use server"

import { Client } from "@upstash/qstash"

const qstashClient = new Client({
  // Add your token to a .env file
  token: process.env.QSTASH_TOKEN!,
})

export async function startBackgroundJob() {
  await qstashClient.publishJSON({
    url: "https://firstqstashmessage.requestcatcher.com/test",
    body: {
      hello: "world",
    },
  })
}
```
Now let's invoke the `startBackgroundJob` function when the button is clicked.
```tsx src/app/page.tsx theme={"system"}
"use client"

import { startBackgroundJob } from "@/app/actions"

export default function Home() {
  async function handleClick() {
    await startBackgroundJob()
  }

  return (
    <main>
      <button onClick={handleClick}>Start Background Job</button>
    </main>
  )
}
```
To test the background job, click the button and check the Request Catcher for the incoming request.
You can also head over to [Upstash Console](https://console.upstash.com/qstash) and go to the
`Logs` tab where you can see your message activities.
Now that we know QStash is working, let's create our own endpoint to handle a background job. This
is the endpoint that will be invoked by QStash.
This job will be responsible for sending 10 requests, each with a 500ms delay. Since we're deploying
to Vercel, we have to be cautious of the [time limit for serverless functions](https://vercel.com/docs/functions/runtimes#max-duration).
```ts src/app/api/long-task/route.ts theme={"system"}
export async function POST(request: Request) {
  const data = await request.json()

  for (let i = 0; i < 10; i++) {
    await fetch("https://firstqstashmessage.requestcatcher.com/test", {
      method: "POST",
      body: JSON.stringify(data),
      headers: { "Content-Type": "application/json" },
    })
    await new Promise((resolve) => setTimeout(resolve, 500))
  }

  return Response.json({ success: true })
}
```
Now let's update our `startBackgroundJob` function to use our new endpoint.
There's one problem: our endpoint is not public. We need to make it public so that QStash can call it.
We have two options:
1. Deploy our application to a platform like Vercel and use the public URL.
2. Create a [local tunnel](/qstash/howto/local-tunnel) to test the endpoint locally.
For the purposes of this tutorial, I'll deploy the application to Vercel, but
feel free to use a local tunnel if you prefer.
There are many ways to [deploy to Vercel](https://vercel.com/docs/deployments/overview), but
I'm going to use the Vercel CLI.
```bash theme={"system"}
npm install -g vercel
```
```bash theme={"system"}
vercel
```
Once deployed, you can find the public URL in the Vercel dashboard.
Now that we have a public URL, we can update the `url` in our action.
```ts src/app/actions.ts theme={"system"}
"use server"

import { Client } from "@upstash/qstash"

const qstashClient = new Client({
  token: process.env.QSTASH_TOKEN!,
})

export async function startBackgroundJob() {
  await qstashClient.publishJSON({
    // Replace with your public URL
    url: "https://qstash-bg-job.vercel.app/api/long-task",
    body: {
      hello: "world",
    },
  })
}
```
And voila! You've created a Next.js app that calls a long-running background job using QStash.
QStash is a great way to handle background jobs, but it's important to remember that it's a public
API. This means that anyone can call your endpoint. Make sure to add security measures to your endpoint
to ensure that QStash is the sender of the request.
Luckily, our SDK provides a way to verify the sender of the request. Make sure to get your signing keys
from the QStash console and add them to your environment variables. The `verifySignatureAppRouter` will try to
load `QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY` from the environment. If one of them is missing,
an error is thrown.
```ts src/app/api/long-task/route.ts theme={"system"}
import { verifySignatureAppRouter } from "@upstash/qstash/nextjs"

async function handler(request: Request) {
  const data = await request.json()

  for (let i = 0; i < 10; i++) {
    await fetch("https://firstqstashmessage.requestcatcher.com/test", {
      method: "POST",
      body: JSON.stringify(data),
      headers: { "Content-Type": "application/json" },
    })
    await new Promise((resolve) => setTimeout(resolve, 500))
  }

  return Response.json({ success: true })
}

export const POST = verifySignatureAppRouter(handler)
```
Let's also add error catching to our action and a loading state to our UI.
```ts src/app/actions.ts theme={"system"}
"use server"

import { Client } from "@upstash/qstash";

const qstashClient = new Client({
  token: process.env.QSTASH_TOKEN!,
});

export async function startBackgroundJob() {
  try {
    const response = await qstashClient.publishJSON({
      url: "https://qstash-bg-job.vercel.app/api/long-task",
      body: {
        hello: "world",
      },
    });
    return response.messageId;
  } catch (error) {
    console.error(error);
    return null;
  }
}
```
```tsx src/app/page.tsx theme={"system"}
"use client"

import { startBackgroundJob } from "@/app/actions";
import { useState } from "react";

export default function Home() {
  const [loading, setLoading] = useState(false);
  const [msg, setMsg] = useState("");

  async function handleClick() {
    setLoading(true);
    const messageId = await startBackgroundJob();
    if (messageId) {
      setMsg(`Started job with ID ${messageId}`);
    } else {
      setMsg("Failed to start background job");
    }
    setLoading(false);
  }

  return (
    <main>
      <button onClick={handleClick} disabled={loading}>
        Start Background Job
      </button>
      {loading && <p>Loading...</p>}
      {msg && <p>{msg}</p>}
    </main>
  );
}
```
## Result
We have now created a Next.js app that calls a long-running background job using QStash!
You can view the logs in both the Vercel and QStash dashboards.
And here is the code for the 3 files we created:
```tsx src/app/page.tsx theme={"system"}
"use client"

import { startBackgroundJob } from "@/app/actions";
import { useState } from "react";

export default function Home() {
  const [loading, setLoading] = useState(false);
  const [msg, setMsg] = useState("");

  async function handleClick() {
    setLoading(true);
    const messageId = await startBackgroundJob();
    if (messageId) {
      setMsg(`Started job with ID ${messageId}`);
    } else {
      setMsg("Failed to start background job");
    }
    setLoading(false);
  }

  return (
    <main>
      <button onClick={handleClick} disabled={loading}>
        Start Background Job
      </button>
      {loading && <p>Loading...</p>}
      {msg && <p>{msg}</p>}
    </main>
  );
}
```
```ts src/app/actions.ts theme={"system"}
"use server"

import { Client } from "@upstash/qstash";

const qstashClient = new Client({
  token: process.env.QSTASH_TOKEN!,
});

export async function startBackgroundJob() {
  try {
    const response = await qstashClient.publishJSON({
      url: "https://qstash-bg-job.vercel.app/api/long-task",
      body: {
        hello: "world",
      },
    });
    return response.messageId;
  } catch (error) {
    console.error(error);
    return null;
  }
}
```
```ts src/app/api/long-task/route.ts theme={"system"}
import { verifySignatureAppRouter } from "@upstash/qstash/nextjs"

async function handler(request: Request) {
  const data = await request.json()

  for (let i = 0; i < 10; i++) {
    await fetch("https://firstqstashmessage.requestcatcher.com/test", {
      method: "POST",
      body: JSON.stringify(data),
      headers: { "Content-Type": "application/json" },
    })
    await new Promise((resolve) => setTimeout(resolve, 500))
  }

  return Response.json({ success: true })
}

export const POST = verifySignatureAppRouter(handler)
```
Now, go ahead and try it out for yourself! Try using some of the other features of QStash, like
[schedules](/qstash/features/schedules), [callbacks](/qstash/features/callbacks), and [URL Groups](/qstash/features/url-groups).
# Periodic Data Updates
Source: https://upstash.com/docs/qstash/recipes/periodic-data-updates
* Code:
[Repository](https://github.com/upstash/qstash-examples/tree/main/periodic-data-updates)
* App:
[qstash-examples-periodic-data-updates.vercel.app](https://qstash-examples-periodic-data-updates.vercel.app)
This recipe shows how to use QStash as a trigger for a Next.js API route that
fetches data from somewhere and stores it in your database.
For the database we will use Redis, because it is very simple to set up and not
the main focus of this recipe.
## What will we build?
Let's assume there is a third-party API that provides some data. One approach
would be to query that API whenever you or your users need it. However, that
might not work well if the API is slow, unavailable, or rate limited.
A better approach is to continuously fetch fresh data from the API and
store it in your database.
Traditionally this would require a long-running process that continuously calls
the API. With QStash you can do this inside your Next.js app, without having to
maintain anything yourself.
For the purpose of this recipe we will build a simple app that scrapes the
current Bitcoin price from a public API, stores it in Redis and then displays a
chart in the browser.
## Setup
If you don't have one already, create a new Next.js project with
`npx create-next-app@latest --ts`.
Then install the required packages
```bash theme={"system"}
npm install @upstash/qstash @upstash/redis
```
You can replace `@upstash/redis` with any kind of database client you want.
## Scraping the API
Create a new serverless function in `/pages/api/cron.ts`
````ts theme={"system"}
import { NextApiRequest, NextApiResponse } from "next";
import { Redis } from "@upstash/redis";
import { verifySignature } from "@upstash/qstash/nextjs";

/**
 * You can use any database you want, in this case we use Redis
 */
const redis = Redis.fromEnv();

/**
 * Load the current bitcoin price in USD and store it in our database at the
 * current timestamp
 */
async function handler(_req: NextApiRequest, res: NextApiResponse) {
  try {
    /**
     * The API returns something like this:
     * ```json
     * {
     *   "USD": {
     *     "last": 123
     *   },
     *   ...
     * }
     * ```
     */
    const raw = await fetch("https://blockchain.info/ticker");
    const prices = await raw.json();
    const bitcoinPrice = prices["USD"]["last"] as number;

    /**
     * After we have loaded the current bitcoin price, we can store it in the
     * database together with the current time
     */
    await redis.zadd("bitcoin-prices", {
      score: Date.now(),
      member: bitcoinPrice,
    });

    res.send("OK");
  } catch (err) {
    res.status(500).send(err);
  } finally {
    res.end();
  }
}

/**
 * Wrap your handler with `verifySignature` to automatically reject all
 * requests that are not coming from Upstash.
 */
export default verifySignature(handler);

/**
 * To verify the authenticity of the incoming request in the `verifySignature`
 * function, we need access to the raw request body.
 */
export const config = {
  api: {
    bodyParser: false,
  },
};
````
## Deploy to Vercel
That's all we need to fetch fresh data. Let's deploy our app to Vercel.
You can either push your code to a git repository and deploy it to Vercel, or
you can deploy it directly from your local machine using the
[vercel cli](https://vercel.com/docs/cli).
For a more in-depth tutorial on how to deploy to Vercel, check out this
[quickstart](/qstash/quickstarts/vercel-nextjs#4-deploy-to-vercel).
After you have deployed your app, it is time to add your secrets to your
environment variables.
## Secrets
Head over to [QStash](https://console.upstash.com/qstash) and copy the
`QSTASH_CURRENT_SIGNING_KEY` and `QSTASH_NEXT_SIGNING_KEY` to Vercel's
environment variables.
If you are not using a custom database, you can quickly create a new
[Redis database](https://console.upstash.com/redis). Afterwards copy the
`UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to Vercel as well.
In the near future we will update our
[Vercel integration](https://vercel.com/integrations/upstash) to do this for
you.
## Redeploy
To use the environment variables, you need to redeploy your app. Either with
`npx vercel --prod` or in the UI.
## Create cron trigger in QStash
The last part is to add the trigger in QStash. Go to
[QStash](https://console.upstash.com/qstash) and create a new schedule.
QStash will now call your API route whenever the schedule is triggered.
## Adding frontend UI
This part is probably the least interesting and would require more dependencies
for styling etc. Check out the
[index.tsx](https://github.com/upstash/qstash-examples/blob/main/periodic-data-updates/pages/index.tsx)
file, where we load the data from the database and display it in a chart.
## Hosted example
You can find a running example of this recipe
[here](https://qstash-examples-periodic-data-updates.vercel.app/).
# DLQ
Source: https://upstash.com/docs/qstash/sdks/py/examples/dlq
You can run the async code by importing `AsyncQStash` from `qstash`
and awaiting the methods.
#### Get all messages with pagination using cursor
Since the DLQ can have a large number of messages, they are paginated.
You can go through the results using the `cursor`.
```python theme={"system"}
from qstash import QStash

client = QStash("")

all_messages = []
cursor = None
while True:
    res = client.dlq.list(cursor=cursor)
    all_messages.extend(res.messages)
    cursor = res.cursor
    if cursor is None:
        break
```
#### Get a message from the DLQ
```python theme={"system"}
from qstash import QStash
client = QStash("")
msg = client.dlq.get("")
```
#### Delete a message from the DLQ
```python theme={"system"}
from qstash import QStash
client = QStash("")
client.dlq.delete("")
```
# Events
Source: https://upstash.com/docs/qstash/sdks/py/examples/events
You can run the async code by importing `AsyncQStash` from `qstash`
and awaiting the methods.
#### Get all events with pagination using cursor
Since there can be a large number of events, they are paginated.
You can go through the results using the `cursor`.
```python theme={"system"}
from qstash import QStash

client = QStash("")

all_events = []
cursor = None
while True:
    res = client.event.list(cursor=cursor)
    all_events.extend(res.events)
    cursor = res.cursor
    if cursor is None:
        break
```
# Keys
Source: https://upstash.com/docs/qstash/sdks/py/examples/keys
You can run the async code by importing `AsyncQStash` from `qstash`
and awaiting the methods.
#### Retrieve your signing Keys
```python theme={"system"}
from qstash import QStash
client = QStash("")
signing_key = client.signing_key.get()
print(signing_key.current, signing_key.next)
```
#### Rotate your signing Keys
```python theme={"system"}
from qstash import QStash
client = QStash("")
new_signing_key = client.signing_key.rotate()
print(new_signing_key.current, new_signing_key.next)
```
# Messages
Source: https://upstash.com/docs/qstash/sdks/py/examples/messages
You can run the async code by importing `AsyncQStash` from `qstash`
and awaiting the methods.
Messages are removed from the database shortly after they're delivered, so you
will not be able to retrieve a message afterwards. This endpoint is intended for
accessing messages that are in the process of being delivered or retried.
#### Retrieve a message
```python theme={"system"}
from qstash import QStash
client = QStash("")
msg = client.message.get("")
```
#### Cancel/delete a message
```python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.cancel("")
```
#### Cancel messages in bulk
Cancel many messages at once or cancel all messages
```python theme={"system"}
from qstash import QStash
client = QStash("")
# cancel more than one message
client.message.cancel_many(["", ""])
# cancel all messages
client.message.cancel_all()
```
# Overview
Source: https://upstash.com/docs/qstash/sdks/py/examples/overview
These are example usages of each method in the QStash SDK. You can also reference the
[examples repo](https://github.com/upstash/qstash-py/tree/main/examples) and [API examples](/qstash/overall/apiexamples) for more.
# Publish
Source: https://upstash.com/docs/qstash/sdks/py/examples/publish
You can run the async code by importing `AsyncQStash` from `qstash`
and awaiting the methods.
#### Publish to a URL with a 3 second delay and headers/body
```python theme={"system"}
from qstash import QStash

client = QStash("")
res = client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    headers={
        "test-header": "test-value",
    },
    delay="3s",
)
print(res.message_id)
```
#### Publish to a URL group with a 3 second delay and headers/body
You can make a URL group on the QStash console or using the [URL group API](/qstash/sdks/py/examples/url-groups)
```python theme={"system"}
from qstash import QStash

client = QStash("")
res = client.message.publish_json(
    url_group="my-url-group",
    body={
        "hello": "world",
    },
    headers={
        "test-header": "test-value",
    },
    delay="3s",
)

# When publishing to a URL group, the response is an array of messages for each URL in the group
print(res[0].message_id)
```
#### Publish a message with a callback URL
[Callbacks](/qstash/features/callbacks) are useful for long-running functions. Here, QStash will deliver the response
of the request to `url` to the callback URL.
We also change the `method` to `GET` in this use case so QStash will make a `GET` request to the `url`. The default
is `POST`.
```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    callback="https://my-callback...",
    failure_callback="https://my-failure-callback...",
    method="GET",
)
```
#### Configure the number of retries
The max number of retries is based on your [QStash plan](https://upstash.com/pricing/qstash)
```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    retries=1,
)
```
By default, the delay between retries is calculated using an exponential backoff algorithm. You can customize this using the `retryDelay` parameter. Check out [the retries page to learn more about custom retry delay values](/qstash/features/retry#custom-retry-delay).
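To get a feel for how exponential backoff behaves, here is a generic sketch (not QStash's exact formula) of how such a delay schedule grows and is typically capped:

```python theme={"system"}
import math


def backoff_delay(retry_count: int, cap: float = 86400.0) -> float:
    # Illustrative exponential backoff: the delay grows as e^(2 * n)
    # seconds and is capped at one day. QStash's actual schedule may differ.
    return min(cap, math.exp(2 * retry_count))


for n in range(4):
    print(n, backoff_delay(n))
```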
#### Publish HTML content instead of JSON
```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish(
    url="https://my-api...",
    body="<h1>Hello World</h1>",
    content_type="text/html",
)
```
#### Publish a message with [content-based-deduplication](/qstash/features/deduplication)
```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    content_based_deduplication=True,
)
```
#### Publish a message with timeout
Timeout value to use when calling a url ([See `Upstash-Timeout` in Publish Message page](/qstash/api/publish#request))
```python theme={"system"}
from qstash import QStash

client = QStash("")
client.message.publish_json(
    url="https://my-api...",
    body={
        "hello": "world",
    },
    timeout="30s",
)
```
# Queues
Source: https://upstash.com/docs/qstash/sdks/py/examples/queues
#### Create a queue with parallelism
```python theme={"system"}
from qstash import QStash
client = QStash("")
queue_name = "upstash-queue"
client.queue.upsert(queue_name, parallelism=2)
print(client.queue.get(queue_name))
```
#### Delete a queue
```python theme={"system"}
from qstash import QStash
client = QStash("")
queue_name = "upstash-queue"
client.queue.delete(queue_name)
```
Resuming or creating a queue may take up to a minute.
Therefore, it is not recommended to pause or delete a queue during critical operations.
#### Pause/Resume a queue
```python theme={"system"}
from qstash import QStash
client = QStash("")
queue_name = "upstash-queue"
client.queue.upsert(queue_name, parallelism=1)
client.queue.pause(queue_name)
queue = client.queue.get(queue_name)
print(queue.paused) # prints True
client.queue.resume(queue_name)
```
# Receiver
Source: https://upstash.com/docs/qstash/sdks/py/examples/receiver
When receiving a message from QStash, you should [verify the signature](/qstash/howto/signature).
The QStash Python SDK provides a helper function for this.
```python theme={"system"}
from qstash import Receiver

receiver = Receiver(
    current_signing_key="YOUR_CURRENT_SIGNING_KEY",
    next_signing_key="YOUR_NEXT_SIGNING_KEY",
)

# ... in your request handler

signature, body = req.headers["Upstash-Signature"], req.body

receiver.verify(
    body=body,
    signature=signature,
    url="YOUR-SITE-URL",
)
```
# Schedules
Source: https://upstash.com/docs/qstash/sdks/py/examples/schedules
You can run the async code by importing `AsyncQStash` from `qstash`
and awaiting the methods.
#### Create a schedule that runs every 5 minutes
```python theme={"system"}
from qstash import QStash

client = QStash("")
schedule_id = client.schedule.create(
    destination="https://my-api...",
    cron="*/5 * * * *",
)
print(schedule_id)
```
#### Create a schedule that runs every hour and sends the result to a [callback URL](/qstash/features/callbacks)
```python theme={"system"}
from qstash import QStash

client = QStash("")
client.schedule.create(
    destination="https://my-api...",
    cron="0 * * * *",
    callback="https://my-callback...",
    failure_callback="https://my-failure-callback...",
)
```
#### Create a schedule to a URL group that runs every minute
```python theme={"system"}
from qstash import QStash

client = QStash("")
client.schedule.create(
    destination="my-url-group",
    cron="* * * * *",
)
```
#### Get a schedule by schedule id
```python theme={"system"}
from qstash import QStash
client = QStash("")
schedule = client.schedule.get("")
print(schedule.cron)
```
#### List all schedules
```python theme={"system"}
from qstash import QStash
client = QStash("")
all_schedules = client.schedule.list()
print(all_schedules)
```
#### Delete a schedule
```python theme={"system"}
from qstash import QStash
client = QStash("")
client.schedule.delete("")
```
#### Create a schedule with timeout
Timeout value to use when calling a schedule URL ([See `Upstash-Timeout` in Create Schedule page](/qstash/api/schedules/create)).
```python theme={"system"}
from qstash import QStash

client = QStash("")
schedule_id = client.schedule.create(
    destination="https://my-api...",
    cron="*/5 * * * *",
    timeout="30s",
)
print(schedule_id)
```
#### Pause/Resume a schedule
```python theme={"system"}
from qstash import QStash
client = QStash("")
schedule_id = "scd_1234"
client.schedule.pause(schedule_id)
schedule = client.schedule.get(schedule_id)
print(schedule.paused) # prints True
client.schedule.resume(schedule_id)
```
# URL Groups
Source: https://upstash.com/docs/qstash/sdks/py/examples/url-groups
You can run the async code by importing `AsyncQStash` from `qstash`
and awaiting the methods.
#### Create a URL group and add 2 endpoints
```python theme={"system"}
from qstash import QStash
client = QStash("")
client.url_group.upsert_endpoints(
url_group="my-url-group",
endpoints=[
{"url": "https://my-endpoint-1"},
{"url": "https://my-endpoint-2"},
],
)
```
#### Get URL group by name
```python theme={"system"}
from qstash import QStash
client = QStash("")
url_group = client.url_group.get("my-url-group")
print(url_group.name, url_group.endpoints)
```
#### List URL groups
```python theme={"system"}
from qstash import QStash
client = QStash("")
all_url_groups = client.url_group.list()
for url_group in all_url_groups:
print(url_group.name, url_group.endpoints)
```
#### Remove an endpoint from a URL group
```python theme={"system"}
from qstash import QStash
client = QStash("")
client.url_group.remove_endpoints(
url_group="my-url-group",
endpoints=[
{"url": "https://my-endpoint-1"},
],
)
```
#### Delete a URL group
```python theme={"system"}
from qstash import QStash
client = QStash("")
client.url_group.delete("my-url-group")
```
# Getting Started
Source: https://upstash.com/docs/qstash/sdks/py/gettingstarted
## Install
### PyPI
```bash theme={"system"}
pip install qstash
```
## Get QStash token
Follow the instructions [here](/qstash/overall/getstarted) to get your QStash token and signing keys.
## Usage
#### Synchronous Client
```python theme={"system"}
from qstash import QStash
client = QStash("")
client.message.publish_json(...)
```
#### Asynchronous Client
```python theme={"system"}
import asyncio
from qstash import AsyncQStash
async def main():
client = AsyncQStash("")
await client.message.publish_json(...)
asyncio.run(main())
```
#### RetryConfig
You can configure the retry policy of the client by passing the configuration to the client constructor.
Note: This is for retrying the request sent to QStash, not for the retry policy QStash applies when delivering messages.
The default number of retries is **5** and the default backoff function is `lambda retry_count: math.exp(retry_count) * 50`.
You can also pass in `False` to disable retrying.
```python theme={"system"}
from qstash import QStash
client = QStash(
"",
retry={
"retries": 3,
"backoff": lambda retry_count: (2**retry_count) * 20,
},
)
```
# Overview
Source: https://upstash.com/docs/qstash/sdks/py/overview
`qstash` is a Python SDK for QStash, allowing for easy access to the QStash API.
Using `qstash` you can:
* Publish a message to a URL/URL group/API
* Publish a message with a delay
* Schedule a message to be published
* Access logs for the messages that have been published
* Create, read, update, or delete URL groups.
* Read or remove messages from the [DLQ](/qstash/features/dlq)
* Read or cancel messages
* Verify the signature of a message
You can find the Github Repository [here](https://github.com/upstash/qstash-py).
# DLQ
Source: https://upstash.com/docs/qstash/sdks/ts/examples/dlq
#### Get all messages with pagination using cursor
Since the DLQ can have a large number of messages, they are paginated.
You can go through the results using the `cursor`.
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const dlq = client.dlq;
const all_messages = [];
let cursor = null;
while (true) {
const res = await dlq.listMessages({ cursor });
all_messages.push(...res.messages);
cursor = res.cursor;
if (!cursor) {
break;
}
}
```
#### Delete a message from the DLQ
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const dlq = client.dlq;
await dlq.delete("dlqId");
```
# Logs
Source: https://upstash.com/docs/qstash/sdks/ts/examples/logs
#### Get all logs with pagination using cursor
Since there can be a large number of logs, they are paginated.
You can go through the results using the `cursor`.
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const logs = [];
let cursor = null;
while (true) {
const res = await client.logs({ cursor });
logs.push(...res.logs);
cursor = res.cursor;
if (!cursor) {
break;
}
}
```
#### Filter logs by state and only return the first 50.
More filters can be found in the [API Reference](/qstash/api/events/list).
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.logs({
filter: {
state: "DELIVERED",
count: 50
}
});
```
# Messages
Source: https://upstash.com/docs/qstash/sdks/ts/examples/messages
Messages are removed from the database shortly after they're delivered, so you
will not be able to retrieve a message once delivery completes. This endpoint is
intended for accessing messages that are in the process of being delivered or retried.
#### Retrieve a message
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const messages = client.messages;
const msg = await messages.get("msgId");
```
#### Cancel/delete a message
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const messages = client.messages;
const msg = await messages.delete("msgId");
```
#### Cancel messages in bulk
Cancel many messages at once or cancel all messages
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
// deleting two messages at once
await client.messages.deleteMany([
"message-id-1",
"message-id-2",
])
// deleting all messages
await client.messages.deleteAll()
```
# Overview
Source: https://upstash.com/docs/qstash/sdks/ts/examples/overview
These are example usages of each method in the QStash SDK. You can also reference the
[examples repo](https://github.com/upstash/sdk-qstash-ts/tree/main/examples) and [API examples](/qstash/overall/apiexamples) for more.
# Publish
Source: https://upstash.com/docs/qstash/sdks/ts/examples/publish
#### Publish to a URL with a 3 second delay and headers/body
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
headers: { "test-header": "test-value" },
delay: "3s",
});
```
#### Publish to a URL group with a 3 second delay and headers/body
You can create a URL group in the QStash console or using the [URL Group API](/qstash/sdks/ts/examples/url-groups#create-a-url-group-and-add-2-endpoints).
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
urlGroup: "my-url-group",
body: { hello: "world" },
headers: { "test-header": "test-value" },
delay: "3s",
});
// When publishing to a URL Group, the response is an array of messages for each URL in the URL Group
console.log(res[0].messageId);
```
#### Publish a message with a callback URL
[Callbacks](/qstash/features/callbacks) are useful for long running functions. Here, QStash will return the response
of the publish request to the callback URL.
We also change the `method` to `GET` in this use case so QStash will make a `GET` request to the `url`. The default
is `POST`.
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
callback: "https://my-callback...",
failureCallback: "https://my-failure-callback...",
method: "GET",
});
```
#### Configure the number of retries
The max number of retries is based on your [QStash plan](https://upstash.com/pricing/qstash)
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
retries: 1,
});
```
By default, the delay between retries is calculated using an exponential backoff algorithm. You can customize this using the `retry_delay` parameter. Check out [the retries documentation to learn more about custom retry delay values](/qstash/features/retry#custom-retry-delay).
#### Publish HTML content instead of JSON
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publish({
url: "https://my-api...",
  body: "<h1>Hello World</h1>",
headers: {
"Content-Type": "text/html",
},
});
```
#### Publish a message with [content-based-deduplication](/qstash/features/deduplication)
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
contentBasedDeduplication: true,
});
```
#### Publish a message with timeout
Timeout value in seconds to use when calling a url ([See `Upstash-Timeout` in Publish Message page](/qstash/api/publish#request))
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.publishJSON({
url: "https://my-api...",
body: { hello: "world" },
timeout: "30s" // 30 seconds timeout
});
```
# Queues
Source: https://upstash.com/docs/qstash/sdks/ts/examples/queues
#### Create a queue with parallelism 2
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const queueName = "upstash-queue";
await client.queue({ queueName }).upsert({ parallelism: 2 });
const queueDetails = await client.queue({ queueName }).get();
```
#### Delete Queue
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const queueName = "upstash-queue";
await client.queue({ queueName: queueName }).delete();
```
Resuming or creating a queue may take up to a minute.
Therefore, it is not recommended to pause or delete a queue during critical operations.
#### Pause/Resume a queue
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const name = "upstash-pause-resume-queue";
const queue = client.queue({ queueName: name });
await queue.upsert({ parallelism: 1 });
// pause queue
await queue.pause();
const queueInfo = await queue.get();
console.log(queueInfo.paused); // prints true
// resume queue
await queue.resume();
```
Resuming or creating a queue may take up to a minute.
Therefore, it is not recommended to pause or delete a queue during critical operations.
# Receiver
Source: https://upstash.com/docs/qstash/sdks/ts/examples/receiver
When receiving a message from QStash, you should [verify the signature](/qstash/howto/signature).
The QStash Typescript SDK provides a helper function for this.
```typescript theme={"system"}
import { Receiver } from "@upstash/qstash";
const receiver = new Receiver({
currentSigningKey: "YOUR_CURRENT_SIGNING_KEY",
nextSigningKey: "YOUR_NEXT_SIGNING_KEY",
});
// ... in your request handler
const signature = req.headers["Upstash-Signature"];
const body = req.body;
const isValid = await receiver.verify({
body,
signature,
url: "YOUR-SITE-URL",
});
```
# Schedules
Source: https://upstash.com/docs/qstash/sdks/ts/examples/schedules
#### Create a schedule that runs every 5 minutes
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.create({
destination: "https://my-api...",
cron: "*/5 * * * *",
});
```
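As a quick refresher, cron fields read minute, hour, day-of-month, month, day-of-week. The following standalone snippet (illustration only, not part of the SDK) expands the `*/5` step over the minute field to show when the schedule above fires:

```typescript
// Expand a cron step expression like "*/5" over a field of the given size.
// For the minute field (0-59), "*/5" fires at minutes 0, 5, 10, ..., 55.
function expandStep(expr: string, size: number): number[] {
  const step = Number(expr.split("/")[1] ?? "1");
  const out: number[] = [];
  for (let i = 0; i < size; i += step) out.push(i);
  return out;
}

const minutes = expandStep("*/5", 60);
console.log(minutes.length); // 12 firings per hour
console.log(minutes.slice(0, 3)); // [ 0, 5, 10 ]
```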
#### Create a schedule that runs every hour and sends the result to a [callback URL](/qstash/features/callbacks)
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.create({
destination: "https://my-api...",
cron: "0 * * * *",
callback: "https://my-callback...",
failureCallback: "https://my-failure-callback...",
});
```
#### Create a schedule to a URL Group that runs every minute
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.create({
destination: "my-url-group",
cron: "* * * * *",
});
```
#### Get a schedule by schedule id
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const res = await client.schedules.get("scheduleId");
console.log(res.cron);
```
#### List all schedules
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const allSchedules = await client.schedules.list();
console.log(allSchedules);
```
#### Create/overwrite a schedule with a user chosen schedule id
Note that if a schedule exists with the same id, the old one will be discarded
and the new schedule will be used.
```typescript Typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.create({
destination: "https://example.com",
scheduleId: "USER_PROVIDED_SCHEDULE_ID",
cron: "* * * * *",
});
```
#### Delete a schedule
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.delete("scheduleId");
```
#### Create a schedule with timeout
Timeout value in seconds to use when calling a schedule URL ([See `Upstash-Timeout` in Create Schedule page](/qstash/api/schedules/create)).
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
await client.schedules.create({
  destination: "https://my-api...",
  cron: "* * * * *",
  timeout: "30s", // 30 seconds timeout
});
```
#### Pause/Resume a schedule
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const scheduleId = "my-schedule"
// pause schedule
await client.schedules.pause({ schedule: scheduleId });
// check if paused
const result = await client.schedules.get(scheduleId);
console.log(result.isPaused); // prints true
// resume schedule
await client.schedules.resume({ schedule: scheduleId });
```
# URL Groups
Source: https://upstash.com/docs/qstash/sdks/ts/examples/url-groups
#### Create a URL Group and add 2 endpoints
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const urlGroups = client.urlGroups;
await urlGroups.addEndpoints({
name: "url_group_name",
endpoints: [
{ url: "https://my-endpoint-1" },
{ url: "https://my-endpoint-2" },
],
});
```
#### Get URL Group by name
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const urlGroups = client.urlGroups;
const urlGroup = await urlGroups.get("urlGroupName");
console.log(urlGroup.name, urlGroup.endpoints);
```
#### List URL Groups
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const allUrlGroups = await client.urlGroups.list();
for (const urlGroup of allUrlGroups) {
console.log(urlGroup.name, urlGroup.endpoints);
}
```
#### Remove an endpoint from a URL Group
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const urlGroups = client.urlGroups;
await urlGroups.removeEndpoints({
name: "urlGroupName",
endpoints: [{ url: "https://my-endpoint-1" }],
});
```
#### Delete a URL Group
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({ token: "" });
const urlGroups = client.urlGroups;
await urlGroups.delete("urlGroupName");
```
# Getting Started
Source: https://upstash.com/docs/qstash/sdks/ts/gettingstarted
## Install
### NPM
```bash theme={"system"}
npm install @upstash/qstash
```
## Get QStash token
Follow the instructions [here](/qstash/overall/getstarted) to get your QStash token and signing keys.
## Usage
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({
token: "",
});
```
#### RetryConfig
You can configure the retry policy of the client by passing the configuration to the client constructor.
Note: This is for sending the request to QStash, not for the retry policy of QStash.
The default number of attempts is **6** and the default backoff function is `(retry_count) => (Math.exp(retry_count) * 50)`.
You can also pass in `false` to disable retrying.
```typescript theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({
token: "",
retry: {
retries: 3,
backoff: retry_count => 2 ** retry_count * 20,
},
});
```
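To get a feel for the default backoff schedule, here is a standalone calculation (illustration only) of the first few client-side retry delays in milliseconds, using the formula above:

```typescript
// Default client-side backoff: Math.exp(retryCount) * 50 milliseconds.
const defaultBackoff = (retryCount: number): number => Math.exp(retryCount) * 50;

// Delays for the first five retries, rounded to whole milliseconds.
const delays = [0, 1, 2, 3, 4].map((i) => Math.round(defaultBackoff(i)));
console.log(delays); // [ 50, 136, 369, 1004, 2730 ]
```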
## Telemetry
This SDK sends anonymous telemetry headers to help us improve your experience.
We collect the following:
* SDK version
* Platform (Cloudflare, AWS or Vercel)
* Runtime version (e.g. `node@18.x`)
You can opt out by setting the `UPSTASH_DISABLE_TELEMETRY` environment variable
to any truthy value. Or setting `enableTelemetry: false` in the client options.
```ts theme={"system"}
const client = new Client({
token: "",
enableTelemetry: false,
});
```
# Overview
Source: https://upstash.com/docs/qstash/sdks/ts/overview
`@upstash/qstash` is a Typescript SDK for QStash, allowing for easy access to the QStash API.
Using `@upstash/qstash` you can:
* Publish a message to a URL/URL Group
* Publish a message with a delay
* Schedule a message to be published
* Access logs for the messages that have been published
* Create, read, update, or delete URL groups.
* Read or remove messages from the [DLQ](/qstash/features/dlq)
* Read or cancel messages
* Verify the signature of a message
You can find the Github Repository [here](https://github.com/upstash/sdk-qstash-ts).
# Channels
Source: https://upstash.com/docs/realtime/features/channels
Channels allow you to scope events to specific people or rooms. For example:
* Chat rooms
* Emitting events to a specific user
## Default Channel
By default, events are sent to the `default` channel. If we emit an event without specifying a channel like so:
```typescript theme={"system"}
await realtime.emit("notification.alert", "hello world!")
```
it can automatically be read using the default channel:
```typescript theme={"system"}
useRealtime({
events: ["notification.alert"],
onData({ event, data, channel }) {
console.log(data)
},
})
```
***
## Custom Channels
Emit events to a specific channel:
```typescript route.ts theme={"system"}
const channel = realtime.channel("user-123")
await channel.emit("notification.alert", "hello world!")
```
Subscribe to one or more channels:
```tsx page.tsx theme={"system"}
"use client"
import { useRealtime } from "@/lib/realtime-client"
export default function Page() {
useRealtime({
channels: ["user-123"],
events: ["notification.alert"],
onData({ event, data, channel }) {
console.log(data)
},
})
  return <>...</>
}
```
## Channel Patterns
Send notifications to individual users:
```typescript route.ts theme={"system"}
const channel = realtime.channel(`user-${userId}`)
await channel.emit("notification.alert", "hello world!")
```
```typescript page.tsx theme={"system"}
useRealtime({
channels: [`user-${user.id}`],
events: ["notification.alert"],
onData({ data }) {},
})
```
Broadcast to all users in a room:
```typescript route.ts theme={"system"}
await realtime.channel(`room-${roomId}`).emit("room.message", {
text: "Hello everyone!",
sender: "Alice",
})
```
Scope events to team workspaces:
```typescript route.ts theme={"system"}
await realtime.channel(`team-${teamId}`).emit("project.update", {
project: "Website Redesign",
status: "In Progress",
})
```
## Dynamic Channels
Subscribe to multiple channels at the same time:
```tsx page.tsx theme={"system"}
"use client"
import { useState } from "react"
import { useRealtime } from "@/lib/realtime-client"
export default function Page() {
const [channels, setChannels] = useState(["lobby"])
useRealtime({
channels,
events: ["chat.message"],
onData({ event, data, channel }) {
console.log(`Message from ${channel}:`, data)
},
})
const joinRoom = (roomId: string) => {
setChannels((prev) => [...prev, roomId])
}
const leaveRoom = (roomId: string) => {
setChannels((prev) => prev.filter((c) => c !== roomId))
}
  return (
    <>
      <p>Active channels: {channels.join(", ")}</p>
      <button onClick={() => joinRoom("room-1")}>Join room-1</button>
      <button onClick={() => leaveRoom("room-1")}>Leave room-1</button>
    </>
  )
}
```
## Broadcasting to Multiple Channels
Emit to multiple channels at the same time:
```typescript route.ts theme={"system"}
const rooms = ["lobby", "room-1", "room-2"]
await Promise.all(
rooms.map((room) => {
const channel = realtime.channel(room)
return channel.emit("chat.message", `Hi channel ${room}!`)
})
)
```
## Channel Security
Combine channels with [middleware](/realtime/features/middleware) for secure access control:
```typescript title="app/api/realtime/route.ts" theme={"system"}
import { handle } from "@upstash/realtime"
import { realtime } from "@/lib/realtime"
import { currentUser } from "@/auth"
export const GET = handle({
realtime,
middleware: async ({ request, channels }) => {
const user = await currentUser(request)
for (const channel of channels) {
if (!user.canAccessChannel(channel)) {
return new Response("Unauthorized", { status: 401 })
}
}
},
})
```
See the [middleware documentation](/realtime/features/middleware) for authentication examples.
# Client-Side Usage
Source: https://upstash.com/docs/realtime/features/client-side
The `useRealtime` hook connects your React components to realtime events with full type safety.
## Setup
### 1. Add the Provider
Wrap your app in the `RealtimeProvider`:
```tsx providers.tsx theme={"system"}
"use client"
import { RealtimeProvider } from "@upstash/realtime/client"
export function Providers({ children }: { children: React.ReactNode }) {
  return <RealtimeProvider>{children}</RealtimeProvider>
}
```
```tsx layout.tsx theme={"system"}
import { Providers } from "./providers"
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        <Providers>{children}</Providers>
      </body>
    </html>
  )
}
```
### 2. Create Typed Hook
Create a typed `useRealtime` hook using `createRealtime`:
```typescript lib/realtime-client.ts theme={"system"}
"use client"
import { createRealtime } from "@upstash/realtime/client"
import type { RealtimeEvents } from "./realtime"
export const { useRealtime } = createRealtime<RealtimeEvents>()
```
## Basic Usage
Subscribe to events in any client component:
```tsx page.tsx theme={"system"}
"use client"
import { useRealtime } from "@/lib/realtime-client"
export default function Page() {
useRealtime({
events: ["notification.alert"],
onData({ event, data, channel }) {
console.log(`Received ${event}:`, data)
},
})
  return <p>Listening for events...</p>
}
```
## Provider Options
The provider accepts API configuration (`url`: the realtime endpoint URL, `withCredentials`: whether to send cookies with requests) and a maximum number of reconnection attempts before giving up.
```tsx providers.tsx theme={"system"}
"use client"
import { RealtimeProvider } from "@upstash/realtime/client"
export function Providers({ children }: { children: React.ReactNode }) {
  return (
    <RealtimeProvider /* configure API options and reconnection behavior here */>
      {children}
    </RealtimeProvider>
  )
}
```
## Hook Options
* `events` - Array of event names to subscribe to (e.g. `["notification.alert", "chat.message"]`)
* `onData` - Callback invoked when an event is received; receives an object with `event`, `data`, and `channel`
* `channels` - Array of channel names to subscribe to
* `enabled` - Whether the subscription is active; set to `false` to disconnect
## Return Value
The hook returns an object with:

* `status` - Current connection state: `"connecting"`, `"connected"`, `"disconnected"`, or `"error"`
```tsx page.tsx theme={"system"}
import { useRealtime } from "@/lib/realtime-client"
const { status } = useRealtime({
events: ["notification.alert"],
onData({ event, data, channel }) {},
})
console.log(status)
```
## Connection Control
Enable or disable connections dynamically:
```tsx page.tsx theme={"system"}
"use client"
import { useState } from "react"
import { useRealtime } from "@/lib/realtime-client"
export default function Page() {
const [enabled, setEnabled] = useState(true)
const { status } = useRealtime({
enabled,
events: ["notification.alert"],
onData({ event, data, channel }) {
console.log(event, data, channel)
},
})
  return (
    <>
      <button onClick={() => setEnabled((prev) => !prev)}>
        {enabled ? "Disconnect" : "Connect"}
      </button>
      <p>Status: {status}</p>
    </>
  )
}
```
### Conditional Connections
Connect only when certain conditions are met:
```tsx page.tsx theme={"system"}
"use client"
import { useRealtime } from "@/lib/realtime-client"
import { useUser } from "@/hooks/auth"
export default function Page() {
const { user } = useUser()
useRealtime({
enabled: Boolean(user),
channels: [`user-${user?.id}`],
events: ["notification.alert"],
onData({ event, data, channel }) {
console.log(data)
},
})
  return <p>Notifications {user ? "enabled" : "disabled"}</p>
}
```
## Multiple Events
Subscribe to multiple events at once:
```tsx page.tsx theme={"system"}
"use client"
import { useRealtime } from "@/lib/realtime-client"
export default function Page() {
useRealtime({
events: ["chat.message", "chat.reaction", "user.joined"],
onData({ event, data, channel }) {
// 👇 data is automatically typed based on the event
if (event === "chat.message") console.log("New message:", data)
if (event === "chat.reaction") console.log("New reaction:", data)
if (event === "user.joined") console.log("User joined:", data)
},
})
  return <p>Listening to multiple events</p>
}
```
## Multiple Channels
Subscribe to multiple channels at once:
```tsx page.tsx theme={"system"}
"use client"
import { useRealtime } from "@/lib/realtime-client"
export default function Page() {
useRealtime({
channels: ["global", "announcements", "user-123"],
events: ["notification.alert"],
onData({ event, data, channel }) {
console.log(`Message from ${channel}:`, data)
},
})
  return <>...</>
}
```
Sync changes across users:
```tsx editor.tsx theme={"system"}
"use client"
import { useState } from "react"
import { useRealtime } from "@/lib/realtime-client"
export default function Editor({ documentId }: { documentId: string }) {
const [content, setContent] = useState("")
useRealtime({
channels: [`doc-${documentId}`],
events: ["document.update"],
onData({ data }) {
setContent(data.content)
},
})
  return (
    <textarea
      value={content}
      onChange={(e) => setContent(e.target.value)}
    />
  )
}
```
## Next Steps
* [Channels](/realtime/features/channels): Scope events to specific rooms or users
* [History](/realtime/features/history): Configure message retention and replay
# History
Source: https://upstash.com/docs/realtime/features/history
Message history allows you to retrieve past events and replay them to clients on connection. This is useful for making sure clients always have the latest state.
## Overview
All Upstash Realtime messages are automatically stored in Redis Streams. This way, messages are always delivered correctly, even after reconnects or network interruptions.
Clients can fetch past events and optionally subscribe to new events.
## Configuration
```typescript lib/realtime.ts theme={"system"}
import { Realtime } from "@upstash/realtime"
import { redis } from "./redis"
import z from "zod/v4"
const schema = {
chat: {
message: z.object({
text: z.string(),
sender: z.string(),
}),
},
}
export const realtime = new Realtime({
schema,
redis,
history: {
maxLength: 100,
expireAfterSecs: 86400,
},
})
```
* `maxLength` - Maximum number of messages to retain per channel. For example, `maxLength: 100` keeps the last 100 messages in the stream and automatically removes older messages as new ones arrive.
* `expireAfterSecs` - How long to keep messages per channel before deleting them (in seconds). The timer resets every time a message is emitted to the channel.
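As a mental model (plain TypeScript, not the SDK API), `maxLength` trimming behaves like keeping only the newest N entries of a list:

```typescript
// Illustration only: maxLength trimming keeps the newest N entries of a stream.
function trimToMaxLength<T>(stream: T[], maxLength: number): T[] {
  return stream.slice(Math.max(0, stream.length - maxLength))
}

const stream = Array.from({ length: 150 }, (_, i) => i) // message ids 0..149
const kept = trimToMaxLength(stream, 100)
console.log(kept.length) // 100
console.log(kept[0]) // 50 (the oldest message still retained)
```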
## Server-Side History
Retrieve and process history on the server:
```typescript route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
export const GET = async () => {
const messages = await realtime.channel("room-123").history()
return new Response(JSON.stringify(messages))
}
```
### History Options
* `limit` - Maximum number of messages to retrieve (capped at 1000)
* `start` - Fetch messages after this Unix timestamp (in milliseconds)
* `end` - Fetch messages before this Unix timestamp (in milliseconds)
```typescript route.ts theme={"system"}
const messages = await realtime.channel("room-123").history({
limit: 50,
start: Date.now() - 86400000,
})
```
### History Response
Each history message contains:
```typescript theme={"system"}
type HistoryMessage = {
id: string
event: string
channel: string
data: unknown
}
```
### Subscribe with History
You can automatically replay past messages when subscribing to a channel:
```typescript route.ts theme={"system"}
await realtime.channel("room-123").subscribe({
events: ["chat.message"],
history: true,
onData({ event, data, channel }) {
console.log("Message from room-123:", data)
},
})
```
Pass history options for more control:
```typescript route.ts theme={"system"}
await realtime.channel("room-123").subscribe({
events: ["chat.message"],
history: {
limit: 50,
start: Date.now() - 3600000,
},
onData({ data }) {
console.log("Message:", data)
},
})
```
## Use Cases
Load recent messages when a user joins a room:
We recommend keeping long chat histories in a database (e.g. Redis) and only fetching the latest messages from Upstash Realtime.
```tsx page.tsx theme={"system"}
"use client"
import { useRealtime } from "@/lib/realtime-client"
import { useState, useEffect } from "react"
import z from "zod/v4"
import type { RealtimeEvents } from "@/lib/realtime"
type Message = z.infer<RealtimeEvents["chat"]["message"]>
export default function ChatRoom({ roomId }: { roomId: string }) {
  const [messages, setMessages] = useState<Message[]>([])
useEffect(() => {
fetch(`/api/history?channel=${roomId}`)
.then((res) => res.json())
.then((history) => setMessages(history.map((m: any) => m.data)))
}, [roomId])
useRealtime({
channels: [roomId],
events: ["chat.message"],
onData({ data }) {
setMessages((prev) => [...prev, data])
},
})
  return (
    <ul>
      {messages.map((msg, i) => (
        <li key={i}>
          {msg.sender}: {msg.text}
        </li>
      ))}
    </ul>
  )
}
```
Show unread notifications with history:
```tsx notifications.tsx theme={"system"}
"use client"
import { useRealtime } from "@/lib/realtime-client"
import { useUser } from "@/hooks/auth"
import { useState, useEffect } from "react"
import z from "zod/v4"
import type { RealtimeEvents } from "@/lib/realtime"
type Notification = z.infer<RealtimeEvents["notification"]["alert"]>
export default function Notifications() {
  const user = useUser()
  const [notifications, setNotifications] = useState<Notification[]>([])
useEffect(() => {
fetch(`/api/history?channel=user-${user.id}`)
.then((res) => res.json())
.then((history) => {
const unread = history.filter((m: any) => m.data.status === "unread")
setNotifications(unread.map((m: any) => m.data))
})
}, [user.id])
useRealtime({
channels: [`user-${user.id}`],
events: ["notification.alert"],
onData({ data }) {
if (data.status === "unread") {
setNotifications((prev) => [...prev, data])
}
},
})
  return (
    <ul>
      {notifications.map((notif, i) => (
        <li key={i}>{notif.message}</li>
      ))}
    </ul>
  )
}
```
Replay recent activity when users visit:
```tsx activity-feed.tsx theme={"system"}
"use client"
import { useRealtime } from "@/lib/realtime-client"
import { useTeam } from "@/hooks/team"
import { useState, useEffect } from "react"
import z from "zod/v4"
import type { RealtimeEvents } from "@/lib/realtime"
type Activity = z.infer<RealtimeEvents["activity"]["update"]>
export default function ActivityFeed() {
  const team = useTeam()
  const [activities, setActivities] = useState<Activity[]>([])
useEffect(() => {
fetch(`/api/history?channel=team-${team.id}&limit=100`)
.then((res) => res.json())
.then((history) => setActivities(history.map((m: any) => m.data)))
}, [team.id])
useRealtime({
channels: [`team-${team.id}`],
events: ["activity.update"],
onData({ data }) {
setActivities((prev) => [data, ...prev])
},
})
  return (
    <ul>
      {activities.map((activity, i) => (
        <li key={i}>{activity.message}</li>
      ))}
    </ul>
  )
}
```
## How It Works
1. When you emit an event, it's stored in a Redis Stream with a unique stream ID
2. The stream is trimmed to `maxLength` if configured
3. The stream expires after `expireAfterSecs` if configured
4. History can be fetched via `channel.history()` on the server
5. History is replayed in chronological order (oldest to newest)
6. New events continue streaming right after history replay, no messages lost
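The replay-then-stream ordering in the steps above can be pictured with plain arrays (illustration only):

```typescript
// History is replayed oldest-to-newest; live events follow with no gap.
const history = ["msg-1", "msg-2"] // read back from the Redis Stream
const live = ["msg-3"] // emitted after the client connected
const delivered = [...history, ...live]
console.log(delivered.join(",")) // "msg-1,msg-2,msg-3"
```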
## Performance Considerations
Upstash Realtime can handle very large histories without problems. The practical bottleneck is the client, which must process every replayed event on connection.
If replaying the full history becomes too heavy, fetch it once from a database like Redis or Postgres, then stream only new events to the client with Upstash Realtime.
For high-volume channels, limit history to prevent large initial payloads.
```typescript lib/realtime.ts theme={"system"}
export const realtime = new Realtime({
schema,
redis,
history: {
maxLength: 1000,
},
})
```
Expire old messages to reduce storage:
```typescript lib/realtime.ts theme={"system"}
export const realtime = new Realtime({
schema,
redis,
history: {
expireAfterSecs: 3600,
},
})
```
## Next Steps
Stream history and subscribe to events on the server
Scope history to specific rooms or users
# Authentication
Source: https://upstash.com/docs/realtime/features/middleware
Protect your realtime endpoints with custom auth logic.
## Basic Middleware
```typescript api/realtime/route.ts theme={"system"}
import { handle } from "@upstash/realtime"
import { realtime } from "@/lib/realtime"
import { currentUser } from "@/auth"
export const GET = handle({
realtime,
middleware: async ({ request, channels }) => {
const user = await currentUser(request)
if (!user) {
return new Response("Unauthorized", { status: 401 })
}
},
})
```
## Middleware API
The middleware function receives:

* `request`: the incoming HTTP `Request` object
* `channels`: the channels the user is attempting to connect to
* Return `undefined` or nothing to allow the connection
* Return a `Response` object to block the connection with a custom error
## Authentication Patterns
Verify users can access specific channels:
```typescript api/realtime/route.ts theme={"system"}
export const GET = handle({
realtime,
middleware: async ({ request, channels }) => {
const user = await currentUser(request)
for (const channel of channels) {
if (channel === "default") {
continue
}
if (!channel.startsWith(user.id)) {
return new Response("You can only access your own channels", { status: 403 })
}
}
},
})
```
Verify user sessions before allowing connections:
```typescript api/realtime/route.ts theme={"system"}
import { getSession } from "@/auth"
export const GET = handle({
realtime,
middleware: async ({ request }) => {
const session = await getSession(request)
if (!session?.user) {
return new Response("Please sign in", { status: 401 })
}
},
})
```
Control access based on user roles:
```typescript api/realtime/route.ts theme={"system"}
export const GET = handle({
realtime,
middleware: async ({ request, channels }) => {
const user = await currentUser(request)
for (const channel of channels) {
if (channel === "default") {
continue
}
if (channel.startsWith("admin-") && user.role !== "admin") {
return new Response("Admin access required", { status: 403 })
}
if (channel.startsWith("team-")) {
const teamId = channel.replace("team-", "")
const isMember = await checkTeamMembership(user.id, teamId)
if (!isMember) {
return new Response("Not a team member", { status: 403 })
}
}
}
},
})
```
# Server-Side Usage
Source: https://upstash.com/docs/realtime/features/server-side
Use Upstash Realtime on the server to emit events, subscribe to channels, and retrieve message history.
## Emit Events
Emit events from any server context:
```typescript route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
export const POST = async () => {
await realtime.emit("notification.alert", "hello world!")
return new Response("OK")
}
```
Emit to specific channels:
```typescript route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
export const POST = async () => {
const channel = realtime.channel("user-123")
await channel.emit("notification.alert", "hello world!")
return new Response("OK")
}
```
## Subscribe to Events
Subscribe to events on a channel:
```typescript route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
const unsubscribe = await realtime.channel("notifications").subscribe({
events: ["notification.alert"],
onData({ event, data, channel }) {
console.log("New notification:", data)
},
})
```
Subscribe to multiple events:
```typescript route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
const unsubscribe = await realtime.channel("room-123").subscribe({
events: ["chat.message", "user.joined", "user.left"],
onData({ event, data, channel }) {
// 👇 data is automatically typed based on the event
if (event === "chat.message") console.log("New message:", data)
if (event === "user.joined") console.log("User joined:", data)
if (event === "user.left") console.log("User left:", data)
},
})
```
### Unsubscribe
Clean up subscriptions when done:
```typescript route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
const channel = realtime.channel("room-123")
const unsubscribe = await channel.subscribe({
events: ["chat.message"],
onData({ data }) {
console.log("Message:", data)
},
})
unsubscribe()
// or: channel.unsubscribe()
```
## Retrieve History
Fetch past messages from a channel:
```typescript route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
export const GET = async () => {
const messages = await realtime.channel("room-123").history()
return new Response(JSON.stringify(messages))
}
```
### History Options
* `limit`: maximum number of messages to retrieve (capped at 1000)
* `start`: fetch messages after this Unix timestamp (in milliseconds)
* Fetch messages before this Unix timestamp (in milliseconds)
```typescript route.ts theme={"system"}
const messages = await realtime.channel("room-123").history({
limit: 100,
start: Date.now() - 86400000,
})
```
### History Response
Each history message contains:
```typescript theme={"system"}
type HistoryMessage = {
id: string
event: string
channel: string
data: unknown
}
```
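Assuming stream IDs follow the Redis `<millisecond-timestamp>-<sequence>` format, the option semantics can be sketched as a plain filter over a chronologically ordered list (the `end` option name, the ID format, and whether `limit` keeps the oldest messages are assumptions for illustration):

```typescript theme={"system"}
type HistoryMessage = { id: string; event: string; channel: string; data: unknown }
type HistoryOptions = { limit?: number; start?: number; end?: number }

// Extract the millisecond timestamp from a "<timestamp>-<sequence>" stream ID.
const timestampOf = (id: string) => Number(id.split("-")[0])

// Apply limit/start/end to a chronologically ordered message list.
function applyHistoryOptions(
  messages: HistoryMessage[],
  opts: HistoryOptions = {}
): HistoryMessage[] {
  const limit = Math.min(opts.limit ?? 1000, 1000) // limit is capped at 1000
  return messages
    .filter((m) => opts.start === undefined || timestampOf(m.id) >= opts.start)
    .filter((m) => opts.end === undefined || timestampOf(m.id) <= opts.end)
    .slice(0, limit)
}
```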
## Subscribe with History
Replay past messages and continue subscribing to new ones:
```typescript route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
const channel = realtime.channel("room-123")
await channel.subscribe({
events: ["chat.message"],
history: true,
onData({ event, data, channel }) {
console.log("Message:", data)
},
})
```
Pass history options:
```typescript route.ts theme={"system"}
await channel.subscribe({
events: ["chat.message"],
history: {
limit: 50,
start: Date.now() - 3600000,
},
onData({ data }) {
console.log("Message:", data)
},
})
```
This pattern:
1. Fetches messages matching the history criteria
2. Replays them in chronological order
3. Continues to listen for new messages
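A minimal sketch of this replay-then-live flow, deduplicating by message ID so an event that arrives while history is still replaying is not delivered twice (hypothetical helper, not part of the library API):

```typescript theme={"system"}
type Msg = { id: string; data: unknown }

// Replays history first, then forwards live messages, skipping any live
// message whose ID was already seen during replay.
function createReplayThenLive(onData: (m: Msg) => void) {
  const seen = new Set<string>()
  return {
    replay(history: Msg[]) {
      for (const m of history) {
        seen.add(m.id)
        onData(m)
      }
    },
    live(m: Msg) {
      if (seen.has(m.id)) return // already delivered via replay
      seen.add(m.id)
      onData(m)
    },
  }
}
```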
## Use Cases
Stream progress updates from long-running tasks:
```typescript app/api/job/route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
export const POST = async (req: Request) => {
const { jobId } = await req.json()
const channel = realtime.channel(jobId)
await channel.emit("job.started", { progress: 0 })
for (let i = 0; i <= 100; i += 10) {
await processChunk()
await channel.emit("job.progress", { progress: i })
}
await channel.emit("job.completed", { progress: 100 })
return new Response("OK")
}
```
Process events with server-side logic:
```typescript route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
import { sendEmail } from "@/lib/email"
await realtime.channel("notifications").subscribe({
events: ["notification.alert"],
onData: async ({ data }) => {
if (data.priority === "high") {
await sendEmail({
to: data.userId,
subject: "Urgent Notification",
body: data.message,
})
}
},
})
```
Emit events to multiple channels:
```typescript route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
export const POST = async (req: Request) => {
const { teamIds, message } = await req.json()
await Promise.all(
teamIds.map((teamId: string) =>
realtime.channel(`team-${teamId}`).emit("announcement", message)
)
)
return new Response("Broadcast sent")
}
```
Forward webhook events to realtime channels:
```typescript app/api/webhook/route.ts theme={"system"}
import { realtime } from "@/lib/realtime"
export const POST = async (req: Request) => {
const payload = await req.json()
const channel = realtime.channel(`user-${payload.userId}`)
await channel.emit("webhook.received", payload)
return new Response("OK")
}
```
## Next Steps
Configure message retention and expiration
Scope events to specific rooms or users
# Deployment
Source: https://upstash.com/docs/realtime/features/serverless
Deploy Upstash Realtime to providers that bill **based on active CPU time**. Great places to
deploy are
* Vercel with Fluid Compute enabled
* Cloudflare
* Railway
* A personal VPS
* any other service that does not bill based on connection duration.
## Deploying to Vercel
To deploy Upstash Realtime to Vercel, [enable Fluid Compute](https://vercel.com/docs/fluid-compute#enable-for-entire-project) for your project. For new projects, this is enabled by default.
Fluid Compute reduces cold starts, allows much higher function timeouts than traditional serverless functions, and most importantly **only bills for active CPU time**.
That way, you're only billed for actual message processing time, not connection duration.
## Optional: Configure Max Duration
You can configure the maximum duration for your realtime connections:
```typescript lib/realtime.ts theme={"system"}
import { Realtime } from "@upstash/realtime"
import { redis } from "./redis"
export const realtime = new Realtime({
schema,
redis,
maxDurationSecs: 300,
})
```
The default is 300 seconds (5 minutes), which works well with Vercel's Fluid Compute. After this interval, the client will automatically reconnect. Redis auto-replays all messages sent during reconnect.
## Billing Example
Traditional serverless connection billing:
```plaintext Serverless Billing theme={"system"}
Connection duration: 5 minutes
Billing: 5 minutes = $$$
```
Upstash Realtime with fluid compute:
```plaintext Fluid Compute Billing theme={"system"}
Connection duration: 5 minutes
Active processing: 2 seconds
Billing: 2 seconds x CPU cost = $
```
## Automatic Reconnection
The client automatically reconnects before your function timeout:
```tsx page.tsx theme={"system"}
"use client"
import { useRealtime } from "@/lib/realtime-client"
export default function Component() {
const { status } = useRealtime({
events: ["notification.alert"],
onData({ event, data, channel }) {},
})
return <p>Status: {status}</p>
}
```
## Message Delivery Guarantee
Upstash Realtime is powered by Redis Streams, so no message is ever lost or delivered twice: every message is delivered exactly once.
1. Client establishes connection and subscribes to stream
2. Client initiates reconnection before function timeout (default every 5 mins)
3. Redis auto-replays all messages sent during reconnect
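The resume step can be sketched as follows: the client tracks the last stream ID it processed, and after a reconnect only messages with a later ID are replayed (hypothetical helper; the library handles this for you automatically):

```typescript theme={"system"}
// Sketch of resume-after-reconnect: remember the last processed stream ID
// and replay only messages that came after it. For simplicity, IDs here are
// compared as strings; real Redis stream IDs have a richer ordering.
function createResumableConsumer(onData: (id: string, data: unknown) => void) {
  let lastId: string | null = null
  return {
    // Called for every message delivered over the live connection.
    deliver(id: string, data: unknown) {
      lastId = id
      onData(id, data)
    },
    // Given the stream contents after reconnecting, return only the
    // messages that were missed while disconnected.
    resumeFrom(buffered: { id: string; data: unknown }[]) {
      return buffered.filter((m) => lastId === null || m.id > lastId)
    },
  }
}
```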
# Pricing
Source: https://upstash.com/docs/realtime/overall/pricing
**Upstash Realtime is designed to be extremely cost-efficient.** With minimal Redis commands per operation and smart connection management, you can build real-time features at scale without worrying about costs.
Upstash Realtime is built on Redis Streams and Pub/Sub. Every operation translates to one or more Redis commands, detailed below.
## Command Overview
### Client-Side Operations
When using [`useRealtime`](/realtime/features/client-side#basic-usage) in your React components:
| Operation | Commands | Count |
| ---------------------------------------------- | ------------------------------ | ----- |
| Initial connection | SUBSCRIBE, XRANGE | 2 |
| Reconnection every 300 seconds | UNSUBSCRIBE, XRANGE, SUBSCRIBE | 3 |
| Ping to keep connection alive every 60 seconds | PUBLISH | 1 |
### Server-Side Operations
When using the [server-side API](/realtime/features/server-side):
| Operation | Commands | Count |
| --------------------------------------------------------------------------------- | --------------------- | ----- |
| [Emit event](/realtime/features/server-side#emit-events) | PUBLISH, XADD | 2 |
| Emit with [`expireAfterSecs`](/realtime/features/history#param-expire-after-secs) | PUBLISH, XADD, EXPIRE | 3 |
| [Read history](/realtime/features/history#server-side-history) | XRANGE | 1 |
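Putting the client-side table together, a rough estimate of the Redis commands a single continuously connected client generates (assuming the default 300-second reconnect and 60-second ping intervals) might look like:

```typescript theme={"system"}
// Estimated Redis command count for one continuously connected client.
function dailyClientCommands(connectedSecs: number): number {
  const connect = 2 // SUBSCRIBE + XRANGE on initial connection
  const reconnects = Math.floor(connectedSecs / 300) * 3 // UNSUBSCRIBE + XRANGE + SUBSCRIBE
  const pings = Math.floor(connectedSecs / 60) * 1 // PUBLISH keep-alive
  return connect + reconnects + pings
}
```

Under these assumptions, a client connected for a full day (`dailyClientCommands(86400)`) generates roughly 2306 commands.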
## Next Steps
Learn how to use the useRealtime hook in React
Learn how to read event history
# Quickstart
Source: https://upstash.com/docs/realtime/overall/quickstart
Upstash Realtime is the easiest way to add realtime features to any Next.js project.
## Why Upstash Realtime?
* 🧨 Clean APIs & first-class TypeScript support
* ⚡ Extremely fast, zero dependencies, 2.6kB gzipped
* 💻 Deploy anywhere: Vercel, Netlify, etc.
* 💎 100% type-safe with zod 4 or zod mini
* ⏱️ Built-in message histories
* 🔌 Automatic connection management w/ delivery guarantee
* 🔋 Built-in middleware and authentication helpers
* 📶 100% HTTP-based: Redis streams & SSE
***
## Quickstart
### 1. Installation
```bash npm theme={"system"}
npm install @upstash/realtime @upstash/redis zod
```
```bash yarn theme={"system"}
yarn add @upstash/realtime @upstash/redis zod
```
```bash pnpm theme={"system"}
pnpm add @upstash/realtime @upstash/redis zod
```
```bash bun theme={"system"}
bun add @upstash/realtime @upstash/redis zod
```
### 2. Configure Upstash Redis
Upstash Realtime is powered by Redis Streams. Grab your credentials from the [Upstash Console](https://console.upstash.com).
Add them to your environment variables:
```bash title=".env" theme={"system"}
UPSTASH_REDIS_REST_URL=https://striking-osprey-20681.upstash.io
UPSTASH_REDIS_REST_TOKEN=AVDJAAIjcDEyZ...
```
And lastly, create a Redis instance:
```typescript title="lib/redis.ts" theme={"system"}
import { Redis } from "@upstash/redis"
export const redis = new Redis({
url: process.env.UPSTASH_REDIS_REST_URL!,
token: process.env.UPSTASH_REDIS_REST_TOKEN!,
})
```
### 3. Define Event Schema
Define the structure of realtime events in your app:
```typescript title="lib/realtime.ts" theme={"system"}
import { Realtime, InferRealtimeEvents } from "@upstash/realtime"
import { redis } from "./redis"
import z from "zod/v4"
const schema = {
notification: {
alert: z.string(),
},
}
export const realtime = new Realtime({ schema, redis })
export type RealtimeEvents = InferRealtimeEvents<typeof realtime>
```
### 4. Create Realtime Route Handler
Create a route handler at `api/realtime/route.ts`:
```typescript title="app/api/realtime/route.ts" theme={"system"}
import { handle } from "@upstash/realtime"
import { realtime } from "@/lib/realtime"
export const GET = handle({ realtime })
```
### 5. Add the Provider
Wrap your application in `RealtimeProvider`:
```tsx title="app/providers.tsx" theme={"system"}
"use client"
import { RealtimeProvider } from "@upstash/realtime/client"
export function Providers({ children }: { children: React.ReactNode }) {
return <RealtimeProvider>{children}</RealtimeProvider>
}
```
```tsx title="app/layout.tsx" theme={"system"}
import { Providers } from "./providers"
export default function RootLayout({ children }: { children: React.ReactNode }) {
return (
  <html lang="en">
    <body>
      <Providers>{children}</Providers>
    </body>
  </html>
)
}
```
### 6. Create Typed Client Hook
Create a typed `useRealtime` hook for your client components:
```typescript title="lib/realtime-client.ts" theme={"system"}
"use client"
import { createRealtime } from "@upstash/realtime/client"
import type { RealtimeEvents } from "./realtime"
export const { useRealtime } = createRealtime<RealtimeEvents>()
```
### 7. Emit Events
From any server component, server action, or API route:
```typescript title="app/api/notify/route.ts" theme={"system"}
import { realtime } from "@/lib/realtime"
export const POST = async () => {
await realtime.emit("notification.alert", "hello world!")
return new Response("OK")
}
```
### 8. Subscribe to Events
In any client component:
```tsx title="app/components/notifications.tsx" theme={"system"}
"use client"
import { useRealtime } from "@/lib/realtime-client"
export default function Notifications() {
useRealtime({
events: ["notification.alert"],
onData({ event, data, channel }) {
console.log(`Received ${event}:`, data)
},
})
return <p>Listening for events...</p>
}
```
That's it! Your app is now listening for realtime events with full type safety. 🎉
## Next Steps
Complete guide to the useRealtime hook
Subscribe to events and stream updates on the server
Scope events to specific rooms or channels
Fetch and replay past messages
# Examples Index
Source: https://upstash.com/docs/redis/examples
List of all Upstash Examples
SvelteKit TODO App with Redis
Serverless Redis Caching for Strapi
To-Do List with Blitz.js & Redis
Slackbot with AWS Chalice and Upstash Redis
Using Render with Redis
Slackbot with Vercel and Upstash Redis
Remix on Cloudflare with Upstash Redis
Remix TODO App with Redis
Global Cache for Netlify Graph with Upstash Redis
Next.js Authentication with NextAuth and Serverless Redis
Building a Survey App with Upstash Redis and Next.js
Building React Native Apps Backed by AWS Lambda and Serverless Redis
Using Upstash Redis with Remix
Using Upstash Redis as a Session Store for Remix
Building a Serverless Notification API for Your Web Application with Redis
Build Stateful Applications with AWS App Runner and Serverless Redis
Session Management on Google Cloud Run with Serverless Redis
Use Redis in Cloudflare Workers
Use Redis in Fastly Compute
Build a Leaderboard API at Edge Using Cloudflare Workers and Redis
Job Processing and Event Queue with Serverless Redis
AWS Lambda Rate Limiting with Serverless Redis
Build a Serverless Histogram API with Redis
Autocomplete API with Serverless Redis
Roadmap Voting App with Serverless Redis
Building a Survey App with Upstash Redis only
Serverless Redis on Google Cloud Functions
Using Serverless Framework
Using AWS SAM
Deploy a Serverless API with AWS CDK and AWS Lambda
Express Session with Serverless Redis
Next.js with Redis
Nuxt.js with Redis
Serverless API with Java and Redis
Serverless Golang API with Redis
Serverless Python API with Redis
Serverless Redisson
Building SvelteKit Applications with Serverless Redis
Build Your Own Waiting Room for Your Website with Cloudflare Workers and
Serverless Redis
Fullstack Serverless App with Flutter, Serverless Framework and
Upstash(REDIS) - PART 1
Getting Started with Next.js Edge Functions
Waiting Room for Your Next.js App Using Edge Functions
Serverless Battleground - DynamoDB vs Firestore vs MongoDB vs Cassandra vs
Redis vs FaunaDB
Stateful AWS Lambda with Redis REST
Pipeline REST API on Serverless Redis
The Most Minimalist Next.js TODO App
Implement IP Allow/Deny List at Edge with Cloudflare Workers and Upstash
Redis
Redis @ Edge with Cloudflare Workers
Using Serverless Redis with Next.js
Building a Cache with Upstash Redis in Next.js
Vercel Edge Function URL Shortener with Upstash Redis
Adding Feature Flags to Next.js (Upstash Redis, SWR, Hooks)
Rate Limiting Your Serverless Functions with Upstash Redis
Create a React Scoreboard with Upstash Redis
Upstash on AWS Lambda Using Golang
IP Address Allow/Deny with Cloudflare Workers and Upstash Redis
Edge Functions Explained with Kelsey Hightower and Lee Robinson - (Next.js
Conf 2021)
Elixir with Redis
# Auto Upgrade
Source: https://upstash.com/docs/redis/features/auto-upgrade
By default, Upstash will apply usage limits based on your current plan. When you reach these limits, behavior depends on the specific limit type - bandwidth limits will throttle your traffic, while storage limits will reject new write operations. However, Upstash offers an Auto Upgrade feature that automatically upgrades your database to the next higher plan when you reach your usage limits, ensuring uninterrupted service.
Auto Upgrade is particularly useful for applications with fluctuating or growing workloads, as it prevents service disruptions during high-traffic periods or when your data storage needs expand beyond your current plan. This feature allows your database to automatically scale with your application's demands without requiring manual intervention.
## How Auto Upgrade Works
When enabled:
* For **bandwidth limits**: Instead of throttling your traffic when you reach the bandwidth limit, your database will automatically upgrade to the next plan to accommodate the increased traffic.
* For **storage limits**:
* **When eviction is off**: Instead of rejecting write operations when you reach maximum data size, your database will upgrade to a plan with larger storage capacity.
* **When eviction is on**: Your data will be evicted and operations will resume. Auto Upgrade will not be triggered; the system relies on the eviction mechanism in this case.
## Managing Auto Upgrade
* You can enable Auto Upgrade by checking the Auto Upgrade checkbox while creating a new database:
* Or for an existing database by clicking Enable in the Configuration/Auto Upgrade box in the database details page:
# Backup/Restore
Source: https://upstash.com/docs/redis/features/backup
You can create a manual backup of your database and restore that backup to any of your databases.
Additionally, you can utilize the daily backup feature to automatically create backups of your database on a daily basis.
### Create A Manual Backup
To create a manual backup of your database:
* Go to the database details page and navigate to the `Backups` tab
* Click on the `Backup` button and fill in a name for your backup. **Your backup name must be unique.**
* Then click on the `Create` button.
While a backup is being created, your database is temporarily locked, and the following operations are unavailable:
* Create Database Backup
* Enable TLS
* Move Database to Team
* Restore Database Backup
* Update Eviction
* Update Password
* Delete Database
### Restore A Backup
To restore a backup that was created from your current database, follow the steps below:
* Go to the database details page and navigate to the `Backups` tab
* Click on the `Restore` button next to the backup record listed.
* Click on `Restore`. **Be aware of the fact that your target database will be flushed with this operation.**
### Restore A Backup From Another Database
To restore a backup that was created from one of your databases other than the current one, follow the steps below:
* Go to the database details page and navigate to the `Backups` tab
* Click on the `Restore...` button
* Select the source database, referring to the database from which the backup was generated.
* Select the backup record that you want to restore to the current database.
* Click on `Start Restore`. **Be aware of the fact that your target database will be flushed with this operation.**
### Enable Daily Automated Backup
To enable daily automated backup for your database:
* Go to the database details page and navigate to the `Backups` tab
* Enable the switch next to the `Daily Backup`
* Click on `Enable`
### Disable Daily Automated Backup
To disable the daily automated backup for your database, please follow the steps below:
* Go to the database details page and navigate to the `Backups` tab
* Disable the switch next to the `Daily Backup`
* Click on `Disable`
# Consistency
Source: https://upstash.com/docs/redis/features/consistency
Upstash utilizes a leader-based replication mechanism. Under this mechanism,
each key is assigned to a leader replica, which is responsible for handling
write operations on that key. The remaining replicas serve as backups to the
leader. When a write operation is performed on a key, it is initially processed
by the leader replica and then asynchronously propagated to the backup replicas.
This ensures that data consistency is maintained across the replicas. Reads can
be performed from any replica.
Each replica employs a failure detector to track the liveness of the leader
replica. When the leader replica fails for any reason, the remaining replicas
start a new leader election round and elect a new leader. This is the only
unavailability window for the cluster, where your *write* requests can be
blocked for a short period of time. In case of cluster-wide failures like
network partitioning (split brain), periodically running anti-entropy jobs
resolve the conflicts using the `Last-Writer-Wins` algorithm and converge the
replicas to the same state.
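The `Last-Writer-Wins` resolution can be illustrated with a minimal sketch: each replica's copy of a key carries a write timestamp, and the copy with the latest timestamp wins (illustrative only; the real conflict resolution happens inside Upstash's anti-entropy jobs):

```typescript theme={"system"}
// Each replica's copy of a key, tagged with when it was last written.
type Versioned<T> = { value: T; writtenAt: number }

// Last-Writer-Wins: the copy with the latest write timestamp wins.
function lwwResolve<T>(copies: Versioned<T>[]): Versioned<T> {
  return copies.reduce((winner, c) => (c.writtenAt > winner.writtenAt ? c : winner))
}
```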
This model gives better write consistency and read scalability, but can provide
only **Eventual Consistency**. Additionally, you can achieve **Causal
Consistency** (`Read-Your-Writes`, `Monotonic-Reads`, `Monotonic-Writes`, and
`Writes-Follow-Reads` guarantees) for a single Redis connection. (A TCP
connection forms a session between the client and server.)
Check out [Read Your Writes](/redis/howto/readyourwrites) for more details on how to achieve RYW consistency.
Check out [Replication](/redis/features/replication) for more details on the replication mechanism.
Previously, Upstash supported a `Strong Consistency` mode for single-region
databases. We deprecated this feature because its effect on latency conflicted
with the performance expectations of Redis use cases. We are also gradually
moving to **CRDT**-based Redis data structures, which will provide `Strong
Eventual Consistency`.
# Credential Protection
Source: https://upstash.com/docs/redis/features/credential-protection
Enabling Credential Protection ensures your database credentials are never stored within Upstash infrastructure. This enhances security by making credentials accessible only once—at the moment they are generated.
Credential Protection is a [Production
Pack](/redis/overall/enterprise#prod-pack-features)
feature.
## How It Works
When enabled:
* Redis database credentials are no longer stored in Upstash infrastructure
* Credentials are displayed only once during enablement - save them immediately
* Console features requiring database access are disabled (CLI, Data Browser, Monitor, RBAC)
## Managing Credential Protection
1. Go to database details page → Configuration section
2. Toggle **Protect Credentials** switch:
3. Save the credentials shown in the modal:
Disabling this feature will permanently revoke current credentials and
generate new ones, potentially breaking applications using those credentials.
## What If You Lose Your Credentials
**Reset Credentials**: This function remains available and, when credential protection is enabled, will generate new protected credentials.
# Durable Storage
Source: https://upstash.com/docs/redis/features/durability
This article explains the persistence provided by Upstash databases.
In Upstash, persistence is always enabled, setting it apart from other Redis
offerings. Every write operation is consistently stored in both memory and the
block storage provided by cloud providers, such as AWS's EBS. This dual storage
approach ensures data durability. Read operations are optimized to first check
if the data exists in memory, facilitating faster access. If the data is not in
memory, it is retrieved from disk. This combination of memory and disk storage
in Upstash guarantees reliable data access and maintains data integrity, even
during system restarts or failures.
### Multi Tier Storage
Upstash keeps your data both in memory and disk. This design provides:
* Data safety with persistent storage
* Low latency with in memory access
* Price flexibility by using memory only for active data
In Upstash, an entry in memory is evicted if it remains idle, meaning it has not
been accessed for an extended period. It's important to note that eviction does
not result in data loss since the entry is still stored in the block storage.
When a read operation occurs for an evicted entry, it is efficiently reloaded
from the block storage back into memory, ensuring fast access to the data. This
eviction mechanism in Upstash optimizes memory usage by prioritizing frequently
accessed data while maintaining the ability to retrieve less frequently accessed
data when needed.
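The read path described above (memory first, reload from block storage on a miss) can be sketched like this (the `TieredStore` class is hypothetical; Upstash's actual storage engine is internal):

```typescript theme={"system"}
// Sketch of the multi-tier storage behavior: every write goes to both tiers,
// idle entries are evicted from memory only, and reads repopulate memory
// from block storage on a miss.
class TieredStore {
  private memory = new Map<string, string>()
  constructor(private blockStorage: Map<string, string>) {}

  set(key: string, value: string) {
    this.memory.set(key, value)
    this.blockStorage.set(key, value) // writes are always persisted to disk
  }

  evictIdle(key: string) {
    this.memory.delete(key) // eviction drops only the in-memory copy
  }

  get(key: string): string | undefined {
    const cached = this.memory.get(key)
    if (cached !== undefined) return cached
    const fromDisk = this.blockStorage.get(key)
    if (fromDisk !== undefined) this.memory.set(key, fromDisk) // reload into memory
    return fromDisk
  }
}
```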
Some users worry that Redis data will be lost when a server crashes. This is
not the case for Upstash, thanks to Durable Storage: data is reloaded into
memory from block storage after a server crash. Moreover, except for the free
tier, all paid-tier databases provide extra redundancy by replicating data to
multiple instances.
# Eviction
Source: https://upstash.com/docs/redis/features/eviction
By default eviction is disabled, and Upstash Redis will reject write operations once the maximum data size
limit has been reached. However, if you are utilizing Upstash Redis as a cache, you
have the option to enable eviction. Enabling eviction allows older data to be
automatically removed from the cache (including Durable Storage) when the maximum size limit is reached.
This ensures that the cache remains within the allocated size and can make room
for new data to be stored. Enabling eviction is particularly useful when the
cache is intended to store frequently changing or temporary data, allowing the
cache to adapt to evolving data needs while maintaining optimal performance.
* You can enable eviction by checking **Eviction** checkbox while creating a new
database:
* Or for an existing database by clicking **Enable** in Configuration/Eviction
box in the database details page:
Upstash currently uses a single eviction algorithm, called
**optimistic-volatile**, which is a combination of *volatile-random* and
*allkeys-random* eviction policies available in
[the original Redis](https://redis.io/docs/manual/eviction/#eviction-policies).
Initially, Upstash employs random sampling to select keys for eviction, giving
priority to keys marked with a TTL (expire field). If there is a shortage of
volatile keys or they are insufficient to create space, additional non-volatile
keys are randomly chosen for eviction. In future releases, Upstash plans to
introduce more eviction policies, offering users a wider range of options to
customize the eviction behavior according to their specific needs.
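A simplified sketch of the **optimistic-volatile** selection described above: randomly sample keys, evict keys with a TTL first, and fall back to non-volatile keys only if that is not enough (illustrative only; the real algorithm runs inside the database engine):

```typescript theme={"system"}
type Key = { name: string; hasTtl: boolean }

// Pick `needed` eviction victims: volatile (TTL-marked) keys first,
// then random non-volatile keys to make up the difference.
function pickEvictionVictims(keys: Key[], needed: number): Key[] {
  const shuffled = [...keys].sort(() => Math.random() - 0.5) // random sampling
  const victims = shuffled.filter((k) => k.hasTtl).slice(0, needed)
  if (victims.length < needed) {
    const nonVolatile = shuffled.filter((k) => !k.hasTtl)
    victims.push(...nonVolatile.slice(0, needed - victims.length))
  }
  return victims
}
```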
# Global Database
Source: https://upstash.com/docs/redis/features/globaldatabase
In the global database, the replicas are distributed across multiple regions
around the world. The clients are routed to the nearest region. This helps with
minimizing latency for use cases where users can be anywhere in the world.
### Primary Region and Read Regions
The Upstash Global database is structured with a Primary Region and multiple
Read Regions. A write command can be issued from any region, but it is always
processed at the Primary Region first. The write operation is then replicated
to all the Read Regions, ensuring data consistency across the database.
On the other hand, when a read command is executed, it is directed to the
nearest Read Region to optimize response time. By leveraging the Global
database's distributed architecture, read operations can be performed with
reduced latency, as data retrieval occurs from the closest available Read
Region.
The Global database's design thus aids in minimizing read operation latency by
efficiently distributing data across multiple regions and enabling requests to
be processed from the nearest Read Region.
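The routing rule above (writes to the Primary Region, reads to the nearest region) can be sketched as follows, with a per-client latency table standing in for real network proximity (hypothetical helper for illustration):

```typescript theme={"system"}
type Deployment = { primary: string; readRegions: string[] }

// Writes always go to the primary region; reads go to whichever region
// has the lowest latency for this client.
function routeCommand(
  isWrite: boolean,
  deployment: Deployment,
  latencyMs: Record<string, number>
): string {
  if (isWrite) return deployment.primary
  const regions = [deployment.primary, ...deployment.readRegions]
  return regions.reduce((best, r) => (latencyMs[r] < latencyMs[best] ? r : best))
}
```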
You select a single primary region and multiple read regions. For the best
performance, select the primary region in the same location where your writes
happen, and select read regions where the clients that read from Redis are
located. You may have a database with a single primary region and no read
regions, which is practically the same as a single-region (regional) database.
You can add or remove regions on a running Redis database.
Here is the list of currently supported regions:
* AWS US-East-1 North Virginia
* AWS US-East-2 Ohio
* AWS US-West-1 North California
* AWS US-West-2 Oregon
* AWS EU-West-1 Ireland
* AWS EU-West-2 London
* AWS EU-Central-1 Frankfurt
* AWS AP-South-1 Mumbai
* AWS AP-SouthEast-1 Singapore
* AWS AP-SouthEast-2 Sydney
* AWS AP-NorthEast-1 Tokyo
* AWS SA-East-1 São Paulo
In our internal tests, we see the following latencies (99th percentile):
* Read latency from the same region \<1ms
* Write latency from the same region \<5ms
* Read/write latency from the same continent \<50ms
### Architecture
In the multi-region architecture, each key is owned by a primary replica
located in the region you choose as the primary region. Read replicas act as
backups of the primary for the related keys. The primary replica processes the
writes, then propagates them to the read replicas. Read requests are processed
by all replicas, which means you can read a value from any of them. This model
gives better write consistency and read scalability.
Each replica employs a failure detector to track the liveness of the primary
replica. When the primary replica fails for any reason, the read replicas start
a new leader election round and elect a new leader (primary). This is the only
unavailability window for the cluster, during which your requests can be
blocked for a short period of time.
Global Database is designed to optimize the latency of read operations. It may
not be a good choice if your use case is write-heavy.
### Use Cases
* **Edge functions:** Edge computing platforms (Cloudflare Workers, Fastly
  Compute) are becoming a popular way of building globally fast applications,
  but few data solutions are accessible from edge functions. Upstash Global
  Database is accessible from edge functions via the REST API, and its low
  latency from all edge locations makes it a perfect fit.
* **Multi-region serverless architectures:** You can run your AWS Lambda
  functions in multiple regions to lower global latency, and Vercel/Netlify
  functions can also run in different regions. Upstash Global Database provides
  low-latency data wherever your serverless functions run.
* **Web/mobile use cases:** When you need low latency globally, the read-only
  REST API lets you access Redis directly from your web or mobile application.
  Global Database helps lower latency here, since clients can connect from
  anywhere.
### High Availability and Disaster Recovery
Although the main motivation behind the Global Database is to provide low
latency, it also makes your database resilient to region-wide failures. When a
region is not available, your requests are routed to another region, so your
database remains available.
### Consistency
Global Database is an eventually consistent database. The write request returns
after the primary replica processes the operation. Write operation is replicated
to read replicas asynchronously. Read requests can be served by any replica,
which gives better horizontal scalability but also means a read request may
return a stale value while a write operation for the same key is being
propagated to read replicas.
In case of cluster-wide failures like network partitioning (split brain),
periodically running anti-entropy jobs resolve the conflicts using a
last-write-wins (LWW) algorithm and converge the replicas to the same state.
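The anti-entropy idea can be sketched with a toy last-write-wins merge, where
each value carries the timestamp of its latest write (an illustration only,
not Upstash's internal implementation):

```python theme={"system"}
# Toy last-write-wins (LWW) merge: each replica stores (timestamp, value)
# pairs per key; anti-entropy keeps the entry with the latest write.
# Simplified sketch -- not Upstash's actual replication code.

def lww_merge(replica_a: dict, replica_b: dict) -> dict:
    """Merge two diverged replica states into one converged state."""
    merged = dict(replica_a)
    for key, (ts, value) in replica_b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

# Two replicas diverged during a network partition:
a = {"k1": (100, "v1"), "k2": (250, "v2-new")}
b = {"k1": (180, "v1-new"), "k2": (200, "v2")}
merged = lww_merge(a, b)
print(merged)  # k1 taken from b (180 > 100), k2 kept from a (250 > 200)
```

Note that merging either replica into the other yields the same result, which
is what lets the replicas converge regardless of the order of anti-entropy
runs.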
### Upgrade from Regional to Global
Currently, we do not support auto-upgrade from regional to global database. You
can export data from your old database and import into the global database.
### Pricing
Global Database charges \$0.2 per 100K commands. Write commands are replicated to all read regions in addition to the primary region, and each replication is counted as a command. For example, if you have 1 primary and 1 read region, 100K writes will cost \$0.4 (\$0.2 x 2). You can use Global Database in the free tier too; free usage is limited to at most one read region.
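The billing arithmetic above can be sketched as follows (the helper name is
ours, and we assume a read is counted once since it is served by a single
replica):

```python theme={"system"}
# Sketch of the command-count math: a write is billed once for the primary
# region plus once per read region; a read is assumed to be billed once.
def command_cost(writes: int, reads: int, read_regions: int,
                 price_per_100k: float = 0.2) -> float:
    billed_commands = writes * (1 + read_regions) + reads
    return billed_commands / 100_000 * price_per_100k

# 100K writes with 1 primary + 1 read region => $0.4, as in the example above.
print(command_cost(writes=100_000, reads=0, read_regions=1))  # 0.4
```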
# Replication
Source: https://upstash.com/docs/redis/features/replication
Replication is enabled for all paid Upstash databases. The data is replicated to
multiple instances. Replication provides you high availability and better
scalability.
### High Availability
Replication makes your database resilient to failures, because even if one of
the replicas becomes unavailable, your database continues to work.
There are two types of replicas in Upstash Redis: primary replicas and read replicas. Primary replicas handle both reads and writes, while read replicas are used only for reads.
In all subscription plans, primary replicas are highly available with multiple replicas to ensure that even if one fails, your database continues to function.
If a read replica fails, your database remains operational, and you can still read from the primary replicas, though with higher latency.
When [Prod Pack](/redis/overall/enterprise#prod-pack-features) is enabled, read replicas are also highly available. This ensures that if one read replica fails, you can read from another read replica in the same region without any additional latency.
### Better Scalability
In a replicated database, your requests are evenly distributed among the
replicas using a round-robin approach. As your throughput requirements grow,
additional replicas can be added to the cluster to handle the increased workload
and maintain high performance. This scalability feature ensures that your
database can effectively meet the demands of high throughput scenarios.
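Round-robin distribution itself is simple to picture (a sketch for
illustration; the actual Upstash routing logic is internal):

```python theme={"system"}
import itertools

# Sketch of round-robin request distribution: each incoming request is sent
# to the next replica in a fixed rotation, so load spreads evenly.
replicas = ["replica-1", "replica-2", "replica-3"]
rotation = itertools.cycle(replicas)

# Route six requests; each replica receives every third one.
targets = [next(rotation) for _ in range(6)]
print(targets)
```

Adding a replica to the rotation immediately lowers the share of requests each
existing replica must serve, which is how the cluster absorbs higher
throughput.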
### Architecture
We use the single-leader replication model. Each key is owned by a leader
replica, and the other replicas become backups of the leader. Writes on a key
are processed by the leader replica first, then propagated to the backup
replicas. Reads can be performed from any replica. This model gives better
write consistency and read scalability.
### Consistency
Each replica in the cluster utilizes a failure detector to monitor the status of
the leader replica. In the event that the leader replica fails, the remaining
replicas initiate a new leader election process to select a new leader. During
this leader election round, which is the only unavailability window for the
cluster, there may be a short period of time where your requests can be
temporarily blocked.
However, once a new leader is elected, normal operations resume, ensuring the
continued availability of the cluster. This mechanism ensures that any potential
unavailability caused by leader failure is minimized, and the cluster can
quickly recover and resume processing requests.
Check out [Read Your Writes](/redis/howto/readyourwrites) for more details on how to achieve RYW consistency.
# REST API
Source: https://upstash.com/docs/redis/features/restapi
The REST API enables you to access your Upstash database over HTTP.
## Get Started
If you do not have a database already, follow
[these steps](../overall/getstarted) to create one.
In the [Upstash Console](https://console.upstash.com/redis), select your database. On the database page, you will see a section with the endpoint URL and token details. When you hover over the `Endpoint` or `Token / Readonly Token` fields, a copy button appears for each; click it to copy the values you need for your connection.
Copy the `HTTPS` endpoint as the REST URL and the `Token` for authorization. Then send an HTTP GET request to the
URL, adding an `Authorization: Bearer $TOKEN` header as below (you can find the same command with your own credentials in the `cURL` tab of the Connection section):
```shell theme={"system"}
curl https://us1-merry-cat-32748.upstash.io/set/foo/bar \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934"
```
The above script executes a `SET foo bar` command. It will return a JSON
response:
```json theme={"system"}
{ "result": "OK" }
```
You can also pass the token as the `_token` request parameter, as below:
```shell theme={"system"}
curl https://us1-merry-cat-32748.upstash.io/set/foo/bar?_token=2553feg6a2d9842h2a0gcdb5f8efe9934
```
## API Semantics
The Upstash REST API follows the same convention as the
[Redis Protocol](https://redis.io/commands). Give the command name and its
parameters in the same order as in the Redis protocol, separated by `/`:
```shell theme={"system"}
curl REST_URL/COMMAND/arg1/arg2/../argN
```
Here are some examples:
* `SET foo bar` -> `REST_URL/set/foo/bar`
* `SET foo bar EX 100` -> `REST_URL/set/foo/bar/EX/100`
* `GET foo` -> `REST_URL/get/foo`
* `MGET foo1 foo2 foo3` -> `REST_URL/mget/foo1/foo2/foo3`
* `HGET employee:23381 salary` -> `REST_URL/hget/employee:23381/salary`
* `ZADD teams 100 team-x 90 team-y` ->
`REST_URL/zadd/teams/100/team-x/90/team-y`
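The mapping above is mechanical enough to capture in a tiny helper that joins
the command and its arguments with `/` (a sketch; `rest_path` is our own name,
and percent-encoding each segment is a precaution for values containing `/` or
spaces):

```python theme={"system"}
from urllib.parse import quote

def rest_path(*parts) -> str:
    """Build a REST path from a Redis command and its arguments.

    Each segment is percent-encoded so argument values containing "/" or
    spaces survive the trip; ":" is left as-is to keep keys readable.
    """
    return "/" + "/".join(quote(str(p), safe=":") for p in parts)

print(rest_path("SET", "foo", "bar", "EX", 100))      # /SET/foo/bar/EX/100
print(rest_path("HGET", "employee:23381", "salary"))  # /HGET/employee:23381/salary
```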
#### JSON or Binary Value
To post a JSON or a binary value, you can use an HTTP POST request and set value
as the request body:
```shell theme={"system"}
curl -X POST -d '$VALUE' https://us1-merry-cat-32748.upstash.io/set/foo \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934"
```
In the example above, the `$VALUE` sent in the request body is appended to the
command as `REST_URL/set/foo/$VALUE`.
Please note that when making a POST request to the Upstash REST API, the request
body is appended as the last parameter of the Redis command. If there are
additional parameters in the Redis command after the value, you should include
them as query parameters in the request:
```shell theme={"system"}
curl -X POST -d '$VALUE' https://us1-merry-cat-32748.upstash.io/set/foo?EX=100 \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934"
```
The above command is equivalent to `REST_URL/set/foo/$VALUE/EX/100`.
#### POST Command in Body
Alternatively, you can send the whole command in the request body as a single
JSON array. The array's first element must be the command name, and the command
parameters should follow in the same order as in the Redis protocol.
```shell theme={"system"}
curl -X POST -d '[COMMAND, ARG1, ARG2,.., ARGN]' REST_URL
```
For example, Redis command `SET foo bar EX 100` can be sent inside the request
body as:
```shell theme={"system"}
curl -X POST -d '["SET", "foo", "bar", "EX", 100]' https://us1-merry-cat-32748.upstash.io \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934"
```
## HTTP Codes
* `200 OK`: When the request is accepted and successfully executed.
* `400 Bad Request`: When there's a syntax error, an invalid/unsupported command
is sent or command execution fails.
* `401 Unauthorized`: When authentication fails; auth token is missing or
invalid.
* `405 Method Not Allowed`: When an unsupported HTTP method is used. Only
`HEAD`, `GET`, `POST` and `PUT` methods are allowed.
## Response
The REST API returns a JSON response by default. When command execution is
successful, the response JSON will have a single `result` field whose value
contains the Redis response. It can be either:
* a `null` value
```json theme={"system"}
{ "result": null }
```
* an integer
```json theme={"system"}
{ "result": 137 }
```
* a string
```json theme={"system"}
{ "result": "value" }
```
* an array value:
```json theme={"system"}
{ "result": ["value1", null, "value2"] }
```
If the command is rejected or fails, the response JSON will have a single
`error` field with a string value explaining the failure:
```json theme={"system"}
{"error":"WRONGPASS invalid password"}
{"error":"ERR wrong number of arguments for 'get' command"}
```
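A client can branch on these two fields when handling a response; a minimal
sketch (the helper name is ours):

```python theme={"system"}
import json

def parse_upstash_response(body: str):
    """Return the `result` on success, raise on an `error` response."""
    payload = json.loads(body)
    if "error" in payload:
        raise RuntimeError(payload["error"])
    return payload["result"]

print(parse_upstash_response('{ "result": "OK" }'))               # OK
print(parse_upstash_response('{ "result": ["value1", null, "value2"] }'))
```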
### Base64 Encoded Responses
If the response contains an invalid UTF-8 character, it will be replaced with
� (the replacement character, U+FFFD). This can happen when you are using
binary operations like `BITOP NOT`.
If you prefer the raw response in base64 format, set the `Upstash-Encoding`
header to `base64`. In this case, all strings in the response will be base64
encoded, except for the `"OK"` response.
```shell theme={"system"}
curl https://us1-merry-cat-32748.upstash.io/SET/foo/bar \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934" \
-H "Upstash-Encoding: base64"
# {"result":"OK"}
curl https://us1-merry-cat-32748.upstash.io/GET/foo \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934" \
-H "Upstash-Encoding: base64"
# {"result":"YmFy"}
```
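Decoding such a response on the client side can be sketched as follows (the
helper name is ours; integers, `null`, and `"OK"` pass through unchanged):

```python theme={"system"}
import base64

def decode_result(result):
    """Recursively base64-decode string results from an Upstash-Encoding
    response; non-strings and the literal "OK" are passed through as-is."""
    if isinstance(result, list):
        return [decode_result(item) for item in result]
    if not isinstance(result, str) or result == "OK":
        return result  # null, integers and "OK" are not encoded
    return base64.b64decode(result)

print(decode_result("YmFy"))  # b'bar'
print(decode_result("OK"))    # OK
```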
### RESP2 Format Responses
The REST API returns a JSON response by default, and the response content type is set to `application/json`.
If you prefer the binary response in RESP2 format, you can achieve this by setting
the `Upstash-Response-Format` header to `resp2`. In this case, the response content type
is set to `application/octet-stream` and the raw response is returned as binary, similar to a TCP-based Redis client.
The default value for this option is `json`.
Any format other than `json` and `resp2` is not allowed and will result in an HTTP 400 Bad Request.
This option is not applicable to the `/multi-exec` transactions endpoint, as it only returns responses in JSON format.
Additionally, setting the `Upstash-Encoding` header to `base64` is not permitted when `Upstash-Response-Format` is set to `resp2`,
and will result in an HTTP 400 Bad Request.
```shell theme={"system"}
curl https://us1-merry-cat-32748.upstash.io/SET/foo/bar \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934" \
-H "Upstash-Response-Format: resp2"
# +OK\r\n
curl https://us1-merry-cat-32748.upstash.io/GET/foo \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934" \
-H "Upstash-Response-Format: resp2"
# $3\r\nbar\r\n
```
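The two replies shown above can be decoded with a minimal RESP2 reader (a
sketch handling only simple strings and bulk strings; a real client also
parses errors, integers, and arrays):

```python theme={"system"}
def parse_resp2(data: bytes):
    """Decode a single RESP2 reply: simple string (+) or bulk string ($)."""
    if data.startswith(b"+"):                 # simple string, e.g. +OK\r\n
        return data[1:data.index(b"\r\n")].decode()
    if data.startswith(b"$"):                 # bulk string, e.g. $3\r\nbar\r\n
        header_end = data.index(b"\r\n")
        length = int(data[1:header_end])
        if length == -1:
            return None                       # $-1\r\n encodes a null reply
        start = header_end + 2
        return data[start:start + length]
    raise ValueError("unsupported RESP2 type")

print(parse_resp2(b"+OK\r\n"))        # OK
print(parse_resp2(b"$3\r\nbar\r\n"))  # b'bar'
```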
## Pipelining
Upstash REST API provides support for command pipelining, allowing you to send
multiple commands as a batch instead of sending them individually and waiting
for responses. With the pipeline API, you can include several commands in a
single HTTP request, and the response will be a JSON array. Each item in the
response array corresponds to the result of a command in the same order as they
were included in the pipeline.
The API endpoint for command pipelining is `/pipeline`. Pipelined commands
should be sent as a two-dimensional JSON array in the request body, each row
containing the name of a command and its arguments.
**Request syntax**:
```shell theme={"system"}
curl -X POST https://us1-merry-cat-32748.upstash.io/pipeline \
-H "Authorization: Bearer $TOKEN" \
-d '
[
["CMD_A", "arg0", "arg1", ..., "argN"],
["CMD_B", "arg0", "arg1", ..., "argM"],
...
]
'
```
**Response syntax**:
```json theme={"system"}
[{"result":"RESPONSE_A"},{"result":"RESPONSE_B"},{"error":"ERR ..."}, ...]
```
Execution of the pipeline is *not atomic*. Even though each command in the
pipeline will be executed in order, commands sent by other clients can
interleave with the pipeline. Use [transactions](#transactions) API instead if
you need atomicity.
For example, you can write the `curl` command below to send the following
Redis commands in a pipeline:
```redis theme={"system"}
SET key1 valuex
SETEX key2 13 valuez
INCR key1
ZADD myset 11 item1 22 item2
```
```shell theme={"system"}
curl -X POST https://us1-merry-cat-32748.upstash.io/pipeline \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934" \
-d '
[
["SET", "key1", "valuex"],
["SETEX", "key2", 13, "valuez"],
["INCR", "key1"],
["ZADD", "myset", 11, "item1", 22, "item2"]
]
'
```
And pipeline response will be:
```json theme={"system"}
[
{ "result": "OK" },
{ "result": "OK" },
{ "error": "ERR value is not an int or out of range" },
{ "result": 2 }
]
```
You can use pipelining when:
* You need more throughput, since pipelining saves multiple round trips. (*But
beware that the latency of each command in the pipeline will be equal to the
total latency of the whole pipeline.*)
* Your commands are independent of each other, i.e. the response of a former
command is not needed to submit a subsequent one.
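Since responses come back in the same order as the commands, pairing each
command with its outcome is straightforward (a sketch using the example
responses above; no request is actually sent here):

```python theme={"system"}
# Zip pipelined commands with the response array: position i of the response
# corresponds to command i. Each entry has either a "result" or an "error".
commands = [
    ["SET", "key1", "valuex"],
    ["INCR", "key1"],
]
responses = [
    {"result": "OK"},
    {"error": "ERR value is not an int or out of range"},
]

outcomes = [resp.get("result", resp.get("error")) for resp in responses]
for cmd, outcome in zip(commands, outcomes):
    print(cmd[0], "->", outcome)
```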
## Transactions
The Upstash REST API supports transactions to execute multiple commands
atomically. With the transactions API, several commands are sent in a single
HTTP request, and a single JSON array response is returned. Each item in the
response array corresponds to the command at the same position in the
transaction.
The API endpoint for transactions is `/multi-exec`. Transaction commands
should be sent as a two-dimensional JSON array in the request body, each row
containing the name of a command and its arguments.
**Request syntax**:
```shell theme={"system"}
curl -X POST https://us1-merry-cat-32748.upstash.io/multi-exec \
-H "Authorization: Bearer $TOKEN" \
-d '
[
["CMD_A", "arg0", "arg1", ..., "argN"],
["CMD_B", "arg0", "arg1", ..., "argM"],
...
]
'
```
**Response syntax**:
When the transaction is successful, a response for each command is returned in
JSON, in the same order as the commands:
```json theme={"system"}
[{"result":"RESPONSE_A"},{"result":"RESPONSE_B"},{"error":"ERR ..."}, ...]
```
If the transaction is discarded as a whole, a single error is returned in JSON:
```json theme={"system"}
{ "error": "ERR ..." }
```
A transaction might be discarded in the following cases:
* There is a syntax error on the transaction request.
* At least one of the commands is unsupported.
* At least one of the commands exceeds the
[max request size](../troubleshooting/max_request_size_exceeded).
* At least one of the commands exceeds the
[daily request limit](../troubleshooting/max_daily_request_limit).
Note that a command may still fail even if it is a supported and valid command.
In that case, all commands are still executed: Upstash Redis does not stop
processing commands after a failure. This provides the same semantics as Redis
when there are
[errors inside a transaction](https://redis.io/docs/manual/transactions/#errors-inside-a-transaction).
**Example**:
You can write the `curl` command below to send the following Redis commands
with the REST transaction API:
```
MULTI
SET key1 valuex
SETEX key2 13 valuez
INCR key1
ZADD myset 11 item1 22 item2
EXEC
```
```shell theme={"system"}
curl -X POST https://us1-merry-cat-32748.upstash.io/multi-exec \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934" \
-d '
[
["SET", "key1", "valuex"],
["SETEX", "key2", 13, "valuez"],
["INCR", "key1"],
["ZADD", "myset", 11, "item1", 22, "item2"]
]
'
```
And transaction response will be:
```json theme={"system"}
[
{ "result": "OK" },
{ "result": "OK" },
{ "error": "ERR value is not an int or out of range" },
{ "result": 2 }
]
```
## Monitor Command
The Upstash REST API provides the Redis [`MONITOR`](https://redis.io/docs/latest/commands/monitor/) command using the
[Server-Sent Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) mechanism. The API endpoint is `/monitor`.
```shell theme={"system"}
curl -X POST https://us1-merry-cat-32748.upstash.io/monitor \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934" \
-H "Accept:text/event-stream"
```
This request listens for Redis monitor events; incoming data is received as:
```
data: "OK"
data: 1721284005.242090 [0 0.0.0.0:0] "GET" "k"
data: 1721284008.663811 [0 0.0.0.0:0] "SET" "k" "v"
data: 1721284025.561585 [0 0.0.0.0:0] "DBSIZE"
data: 1721284030.601034 [0 0.0.0.0:0] "KEYS" "*"
```
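Each `data:` line after the initial `"OK"` follows the standard `MONITOR`
format and can be split into a timestamp, client info, and the command with
its arguments (a sketch; the helper name and regular expressions are our own):

```python theme={"system"}
import re

def parse_monitor_event(line: str):
    """Split an SSE MONITOR line into (timestamp, client, [command, args...])."""
    match = re.match(r'data: ([\d.]+) \[(.*?)\] (.*)', line)
    if not match:
        return None  # e.g. the initial data: "OK" line
    timestamp, client, rest = match.groups()
    # Arguments are double-quoted; allow escaped characters inside them.
    args = re.findall(r'"((?:[^"\\]|\\.)*)"', rest)
    return float(timestamp), client, args

event = 'data: 1721284008.663811 [0 0.0.0.0:0] "SET" "k" "v"'
print(parse_monitor_event(event))
```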
## Subscribe & Publish Commands
Similar to the `MONITOR` command, the Upstash REST API provides the Redis [`SUBSCRIBE`](https://redis.io/docs/latest/commands/subscribe/) and
[`PUBLISH`](https://redis.io/docs/latest/commands/publish/) commands. The `SUBSCRIBE` endpoint works using the
[Server-Sent Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) mechanism.
The API endpoints are `/subscribe` and `/publish`.
The following request subscribes to a channel named `chat`:
```shell theme={"system"}
curl -X POST https://us1-merry-cat-32748.upstash.io/subscribe/chat \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934" \
-H "Accept:text/event-stream"
```
The following request publishes to the `chat` channel:
```shell theme={"system"}
curl -X POST https://us1-merry-cat-32748.upstash.io/publish/chat/hello \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934"
```
The subscriber will receive incoming messages as:
```
data: subscribe,chat,1
data: message,chat,hello
data: message,chat,how are you today?
```
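Each `data:` line is a comma-separated triple of event type, channel, and
payload; splitting with a bounded `split` keeps payloads that themselves
contain commas intact (the helper name is ours):

```python theme={"system"}
def parse_subscribe_event(data: str):
    """Split an SSE subscriber line into (event type, channel, payload)."""
    # maxsplit=2 ensures a payload like "a,b,c" is not broken apart.
    kind, channel, payload = data.split(",", 2)
    return kind, channel, payload

print(parse_subscribe_event("message,chat,how are you today?"))
# ('message', 'chat', 'how are you today?')
```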
## Security and Authentication
You need to add an `Authorization: Bearer $TOKEN` header to your API requests,
or set the token as the `_token` URL parameter (`_token=$TOKEN`).
```shell theme={"system"}
curl -X POST https://us1-merry-cat-32748.upstash.io/info \
-H "Authorization: Bearer 2553feg6a2d9842h2a0gcdb5f8efe9934"
```
OR
```shell theme={"system"}
curl -X POST https://us1-merry-cat-32748.upstash.io/info?_token=2553feg6a2d9842h2a0gcdb5f8efe9934
```
Upstash by default provides two separate access tokens per database: "Standard"
and "Read Only".
* The **Standard** token has full privileges over the database and can execute
  any command.
* The **Read Only** token permits access to read commands only. Some powerful
  read commands (e.g. `SCAN`, `KEYS`) are also restricted with the read-only
  token. It makes sense to use the *Read Only* token when you access Upstash
  Redis from web and mobile clients, where the token is exposed to the public.
You can copy the tokens by clicking the copy button next to
`UPSTASH_REDIS_REST_TOKEN` in the REST API section of the console. For the
*Read Only* token, just enable the "Read-Only Token" switch.
Do not expose your *Standard* token publicly, as it has full privileges over
the database. You can expose the *Read Only* token, since it has access to
read commands only. You can revoke both the *Standard* and *Read Only* tokens
by resetting the password of your database.
### REST Token for ACL Users
In addition to the tokens provided by default, you can create REST tokens for
the users created via the [`ACL SETUSER`](https://redis.io/commands/acl-setuser/)
command. Upstash provides a custom `ACL` subcommand to generate REST tokens:
`ACL RESTTOKEN`. It expects two arguments, the username and the user's
password, and returns the REST token for the user as a string response.
```
ACL RESTTOKEN
Generate a REST token for the specified username & password.
Token will have the same permissions as the user.
```
You can execute `ACL RESTTOKEN` command via `redis-cli`:
```shell theme={"system"}
redis-cli> ACL RESTTOKEN default 35fedg8xyu907d84af29222ert
"AYNgAS2553feg6a2d9842h2a0gcdb5f8efe9934DQ="
```
Or via the CLI on the Upstash console.
If the user doesn't exist or the password doesn't match, an error is returned:
```shell theme={"system"}
redis-cli> ACL RESTTOKEN upstash fakepass
(error) ERR Wrong password or user "upstash" does not exist
```
## Redis Protocol vs REST API
### REST API Pros
* If you want to access your Upstash database from an environment like
  Cloudflare Workers, WebAssembly, or Fastly Compute\@Edge, you cannot use the
  Redis protocol, as it is based on TCP. You can use the REST API in those
  environments.
* The REST API is request (HTTP) based, whereas the Redis protocol is
  connection based. If you are running serverless functions (AWS Lambda etc.),
  you may need to manage the Redis client's connections. The REST API does not
  have such an issue.
* The Redis protocol requires a Redis client. The REST API, on the other hand,
  is accessible with any HTTP client.
### Redis Protocol Pros
* If you have legacy code that relies on Redis clients, the Redis protocol
allows you to utilize Upstash without requiring any modifications to your
code.
* By leveraging the Redis protocol, you can take advantage of the extensive
Redis ecosystem. For instance, you can seamlessly integrate your Upstash
database as a session cache for your Express application.
## Cost and Pricing
Upstash pricing is per command/request, so the same pricing listed on our
[pricing](https://upstash.com/pricing/redis) page applies to your REST calls too.
## Metrics and Monitoring
In the current version, we do not expose any metrics specific to API calls in
the console. However, the metrics of the database backing the API should give
a good summary of the performance of your APIs.
## REST - Redis API Compatibility
| Feature | REST Support? | Notes |
| ------------------------------------------------------------- | :-----------: | :---------------------------------------------------------------: |
| [String](https://redis.io/commands/?group=string) | ✅ | |
| [Bitmap](https://redis.io/commands/?group=bitmap) | ✅ | |
| [Hash](https://redis.io/commands/?group=hash) | ✅ | |
| [List](https://redis.io/commands/?group=list) | ✅ | Blocking commands (BLPOP - BRPOP - BRPOPLPUSH) are not supported. |
| [Set](https://redis.io/commands/?group=set) | ✅ | |
| [SortedSet](https://redis.io/commands/?group=sorted_set) | ✅ | Blocking commands (BZPOPMAX - BZPOPMIN) are not supported. |
| [Geo](https://redis.io/commands/?group=geo) | ✅ | |
| [HyperLogLog](https://redis.io/commands/?group=hyperloglog) | ✅ | |
| [Transactions](https://redis.io/commands/?group=transactions) | ✅ | WATCH/UNWATCH/DISCARD are not supported |
| [Generic](https://redis.io/commands/?group=generic) | ✅ | |
| [Server](https://redis.io/commands/?group=server) | ✅ | |
| [Scripting](https://redis.io/commands/?group=scripting) | ✅ | |
| [Pub/Sub](https://redis.io/commands/?group=pubsub) | ✅ | |
| [Connection](https://redis.io/commands/?group=connection) | ⚠️ | Only PING and ECHO are supported. |
| [JSON](https://redis.io/commands/?group=json) | ✅ | |
| [Streams](https://redis.io/commands/?group=stream) | ✅ | Supported, except blocking versions of XREAD and XREADGROUP. |
| [Cluster](https://redis.io/commands#cluster) | ❌ | |
# Security
Source: https://upstash.com/docs/redis/features/security
Upstash has a set of features to help you secure your data. We list them
below, along with best practices to improve the security of your database.
## TLS
TLS is always enabled on Upstash Redis databases. The data transfer between the client and database is
encrypted.
## Redis ACL
With Redis ACL, you can improve security by restricting a user's access to
commands and keys, so that untrusted clients have no access and trusted clients
have only the minimum required access level to the database. Moreover, it
improves operational safety, so that clients or users accessing Redis cannot
damage the data or the configuration through errors or mistakes. Check the
[Redis ACL documentation](https://redis.io/docs/manual/security/acl/). If you
are using the REST API, you can still benefit from ACLs, as explained
[here](/redis/features/restapi#rest-token-for-acl-users).
## Database Credentials
When you create a database, a secure password is generated, and Upstash stores
it encrypted. Use environment variables or your provider's secret management
system (e.g. AWS Secrets Manager, Vercel Secrets) to keep your credentials; do
not hardcode them in your code. If your password is leaked, reset it using the
Upstash console.
## Encryption at Rest
Encryption at rest encrypts the block storage where your data is persisted.
It is available with the [Prod Pack](redis/overall/enterprise#prod-pack-features) add-on.
## Application Level Encryption
Client-side encryption can be used to encrypt data throughout the application
lifecycle and helps protect data in use. It comes with some limitations:
operations that must operate on the data, such as increments, comparisons, and
searches, will not function properly. You can write client-side encryption
logic directly in your own application or use functions built into clients,
such as the Lettuce cipher codec for Java. We have plans to support encryption
in our SDKs.
## IP Allowlisting
You can restrict access to your database to a set of allowed IP addresses.
This is quite a strong way to secure your database, but it has some
limitations: for example, you cannot know the IP addresses in advance on
serverless platforms such as AWS Lambda and Vercel functions.
## VPC Peering
VPC Peering enables you to connect to Upstash from your own VPC using private
IPs; the database is not accessible from the public network. The database and
your application can run in the same subnet, which also minimizes data
transfer costs. VPC Peering is only available for Pro databases.
## Private Link
AWS PrivateLink provides private connectivity between your Upstash database
and your Redis client inside the AWS infrastructure. PrivateLink is only
available for Pro databases.
# Compliance
Source: https://upstash.com/docs/redis/help/compliance
## Upstash Legal & Security Documents
* [Upstash Terms of Service](https://upstash.com/static/trust/terms.pdf)
* [Upstash Privacy Policy](https://upstash.com/static/trust/privacy.pdf)
* [Upstash Data Processing Agreement](https://upstash.com/static/trust/dpa.pdf)
* [Upstash Technical and Organizational Security Measures](https://upstash.com/static/trust/security-measures.pdf)
* [Upstash Subcontractors](https://upstash.com/static/trust/subprocessors.pdf)
## Is Upstash SOC2 Compliant?
As of July 2023, Upstash Redis is SOC2 compliant. Check our [trust page](https://trust.upstash.com/) for details.
## Is Upstash ISO-27001 Compliant?
We are in the process of getting this certification. Contact us
([support@upstash.com](mailto:support@upstash.com)) to learn about the expected
date.
## Is Upstash GDPR Compliant?
Yes. For more information, see our
[Privacy Policy](https://upstash.com/static/trust/privacy.pdf). We acquire DPAs
from each [subcontractor](https://upstash.com/static/trust/subprocessors.pdf)
that we work with.
## Is Upstash HIPAA Compliant?
Upstash is currently not HIPAA compliant. Contact us
([support@upstash.com](mailto:support@upstash.com)) if HIPAA is important for
you and we can share more details.
## Is Upstash PCI Compliant?
Upstash does not store personal credit card information. We use Stripe for
payment processing. Stripe is a certified PCI Service Provider Level 1, which is
the highest level of certification in the payments industry.
## Does Upstash conduct vulnerability scanning and penetration tests?
Yes, we use third party tools and work with pen testers. We share the results
with Enterprise customers. Contact us
([support@upstash.com](mailto:support@upstash.com)) for more information.
## Does Upstash take backups?
Yes, we take regular snapshots of the data cluster to the AWS S3 platform.
## Does Upstash encrypt data?
Customers can enable TLS while creating database/cluster, and we recommend it
for production databases/clusters. Also we encrypt data at rest at request of
customers.
# Frequently Asked Questions
Source: https://upstash.com/docs/redis/help/faq
## What is Upstash Redis?
Upstash is a serverless database service compatible with the Redis® API.
## What is a Serverless Database?
* You do not have to manage and provision servers.
* You do not deal with configuring or maintaining any server.
* You just use the service and pay for what you use. If you are not using it, you should not be paying.
## What are the use cases?
Upstash works for all the common use cases for Redis®. You can use Upstash in your serverless stack. In addition, you can use Upstash as storage (or caching) for your serverless functions. See [Use Cases](/redis/overall/usecases) for more.
## Do you support all Redis® API?
Most of them. See [Redis® API Compatibility](/redis/overall/rediscompatibility) for the list of supported commands.
## Can I use any Redis client?
Yes, Upstash is compatible with the Redis protocol, so you can use any Redis client.
## Which cloud providers do you support?
Initially we support AWS and GCP. DigitalOcean is planned.
## Which regions do you support in AWS?
We start with AWS-US-EAST-1 (Virginia), GCP-US-CENTRAL-1 (Iowa), AWS-US-WEST-1 (N. California), AWS-EU-WEST-1 (Ireland), AWS-APN-NE-1 (Japan). We will add new regions soon. You can expedite this by telling us your use case and the region you need by emailing [support@upstash.com](mailto:support@upstash.com).
## Should my client be hosted in AWS to use Upstash?
No. Your client can be anywhere, but clients in AWS regions will see better performance.
## How do you compare Upstash with ElastiCache?
Upstash is serverless. With ElastiCache, you pay even if you do not use the database. See [Compare](/redis/overall/compare) for more info.
## How do you compare Upstash with Redis Labs or Compose.io?
Upstash is serverless. With Redis Labs or Compose.io, you always pay a lot when your data size is big but your traffic is low. With Upstash, pricing is per request. See [Compare](/redis/overall/compare) for more info.
## Do you persist data?
Yes, by default we write data to the disk. So in case of a failure you should not lose any data.
## Do you support Redis Cluster?
We support replication in Premium databases. We do not support sharding yet.
## I have a database with 10GB of data. Do I pay nothing if I do not use it?
You only pay the disk storage cost, which is \$0.25 per GB. In this case, you would pay \$2.5 monthly.
## What happens when I exceed the request limit on a free database (10,000 requests per day)?
Commands exceeding the limit return an exception.
## When I upgrade my free database, do I lose data?
You do not lose data but clients may disconnect and reconnect.
## Upstash is much cheaper than Elasticache and Redis Labs for big data sizes (> 10GB). How is that possible?
The Upstash storage layer is multi-tiered: we keep your data both in memory and in block storage (disk). Entries that are not accessed frequently are evicted from memory but kept on disk. The latency overhead for idle entries is limited thanks to SSD-based storage. Multi-tiered storage allows us to offer more flexible pricing.
## Will my data be safe?
Upstash is a GDPR compliant company. We do not share any user data with third parties. See our [Legal Documents](/common/help/legal) for more information.
## What happens if my database is not used?
Free tier databases are archived after a minimum of 14 days of inactivity. Users get several warning emails prior to this operation. Archival means backing up user data and removing the database instance.
If you wish to keep your database endpoint active for a longer period of inactivity, consider switching to a paid plan.
No data is lost due to archival. When a free tier database is archived due to inactivity, we take a backup of the data so users can create a new database and restore their data from the Upstash Console.
## How do you handle the noisy neighbour problem? Do other tenants affect my database?
Databases are isolated on some aspects but still share some hardware resources such as CPU or network. To avoid noisy neighbor influence on these resources, there are specific quotas for each database. When they reach any of these quotas they are throttled using a backoff strategy. When multiple databases sharing the same hardware are close to the limits, our system can add new resources to the pool and/or migrate some of the databases to distribute the load.
Also, if a database exceeds its quotas very frequently, we ask the user whether they want to upgrade to a higher plan. Databases on Enterprise plans are placed on dedicated or more isolated hardware due to higher resource needs.
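On the client side, the usual way to cope with throttling is to retry with exponential backoff. Below is a minimal sketch; the helper names `backoffDelay` and `withBackoff` are illustrative, not part of any Upstash SDK:

```typescript
// Exponential backoff: 100 ms, 200 ms, 400 ms, ... capped at 2 s (illustrative values).
function backoffDelay(attempt: number): number {
  return Math.min(100 * 2 ** attempt, 2000);
}

// Retry an async operation, waiting backoffDelay(attempt) between failed attempts.
async function withBackoff<T>(op: () => Promise<T>, maxRetries = 5): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
  throw lastError;
}
```

You would wrap individual commands, e.g. `withBackoff(() => redis.get("key"))`.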
# Integration with Third Parties & Partnerships
Source: https://upstash.com/docs/redis/help/integration
## Introduction
In this guide, we outline the steps to integrate Upstash into your platform (GUI or web app) and allow your users to create and manage Upstash databases without leaving your interface. We explain how to use OAuth 2.0 as the underlying foundation to enable this access seamlessly.
If your product or service utilizes Redis, Vector, or QStash, or if there is a common use case that your end users enable by leveraging these database resources, we invite you to partner with us. By integrating Upstash into your platform, you can offer a more complete package to your customers and become a one-stop shop. This also positions you at the forefront of innovative cloud computing trends such as serverless, and expands your customer base.
This is the most commonly used partnership integration model and can be implemented easily by following this guide. The [Cloudflare Workers integration](https://blog.cloudflare.com/cloudflare-workers-database-integration-with-upstash/) was recently implemented through this methodology. For any further questions or partnership discussions, please email us at [partnerships@upstash.com](mailto:partnerships@upstash.com)
Before starting development to integrate Upstash into your product, please
send an email to [partnerships@upstash.com](mailto:partnerships@upstash.com) for further assistance and guidance.
**General Flow (High level user flow)**
1. User clicks **`Connect Upstash`** button on your platform’s surface (GUI, Web App)
2. This initiates the OAuth 2.0 flow, which opens a new browser page displaying the **`Upstash Login Page`**.
3. If they already have an account, the user logs in with their Upstash credentials; otherwise they can sign up for a new Upstash account directly.
4. The browser redirects to the **`Your account has been connected`** page, and the authentication window closes automatically.
5. When the user returns to your interface, they see that their Upstash account is now connected.
## Technical Design (SPA - Regular Web Application)
1. The user clicks the `Connect Upstash` button in your web app.
2. The web app initiates the Upstash OAuth 2.0 flow. It can use
   [Auth0 native libraries](https://auth0.com/docs/libraries).
   Please contact [partnerships@upstash.com](mailto:partnerships@upstash.com) to receive a client id and callback URL.
3. After the user returns from the OAuth 2.0 flow, the web app will have a JWT token and can generate a Developer API key:
```bash theme={"system"}
curl -XPOST https://api.upstash.com/apikey \
  -H "Authorization: Bearer JWT_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "name": "APPNAME_API_KEY_TIMESTAMP" }'
```
4. The web app needs to save the Developer API key to its backend.
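For reference, the same API key request can be issued from a TypeScript backend. The sketch below only builds the request; the helper name is ours, and `fetch` is assumed to be available (Node 18+):

```typescript
// Build the request for POST https://api.upstash.com/apikey (mirrors the curl example).
function buildApiKeyRequest(jwt: string, keyName: string) {
  return {
    url: "https://api.upstash.com/apikey",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${jwt}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ name: keyName }),
    },
  };
}

// Usage (requires a valid JWT from the OAuth 2.0 flow):
// const { url, init } = buildApiKeyRequest(jwtToken, `APPNAME_API_KEY_${Date.now()}`);
// const apiKey = await fetch(url, init).then((r) => r.json());
```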
## Technical Design (GUI Apps)
1. The user clicks the **`Connect Upstash`** button in the GUI app.
2. The app initiates the Upstash OAuth 2.0 flow; it can use **[Auth0 native libraries](https://auth0.com/docs/libraries)**.
3. The app opens a new browser window:
```
https://auth.upstash.com/authorize?response_type=code&audience=upstash-api&scope=offline_access&client_id=XXXXXXXXXX&redirect_uri=http%3A%2F%2Flocalhost:3000
```
Please reach [partnerships@upstash.com](mailto:partnerships@upstash.com) to receive client id.
4. After the user authenticates, Auth0 redirects the user to
   `localhost:3000/?code=XXXXXX`
5. The app can return a friendly HTML response when Auth0 redirects back to `localhost:3000`.
6. After reading the `code` parameter from the URL query, the GUI app makes an HTTP
   call to the Auth0 code exchange API. Example curl request:
```bash theme={"system"}
curl -XPOST 'https://auth.upstash.com/oauth/token' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data 'grant_type=authorization_code' \
  --data 'audience=upstash-api' \
  --data 'client_id=XXXXXXXXXXX' \
  --data 'code=XXXXXXXXXXXX' \
  --data 'redirect_uri=localhost:3000'
```
Response:
```json theme={"system"}
{
  "access_token": "XXXXXXXXXX",
  "refresh_token": "XXXXXXXXXXX",
  "scope": "offline_access",
  "expires_in": 172800,
  "token_type": "Bearer"
}
```
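For a TypeScript GUI app, the same code exchange can be sketched as below. The helper only builds the form-encoded body, and all ids are placeholders:

```typescript
// Build the form-encoded body for the Auth0 token exchange (mirrors the curl example).
function buildTokenExchangeBody(clientId: string, code: string, redirectUri: string): URLSearchParams {
  return new URLSearchParams({
    grant_type: "authorization_code",
    audience: "upstash-api",
    client_id: clientId,
    code,
    redirect_uri: redirectUri,
  });
}

// Usage:
// const body = buildTokenExchangeBody("XXXXXXXXXXX", codeFromQuery, "http://localhost:3000");
// const tokens = await fetch("https://auth.upstash.com/oauth/token", {
//   method: "POST",
//   headers: { "content-type": "application/x-www-form-urlencoded" },
//   body,
// }).then((r) => r.json()); // { access_token, refresh_token, ... }
```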
7. The response in step 6 includes an `access_token` with a 3-day TTL (`expires_in: 172800` seconds).
   The GUI app then calls the Upstash API to get a Developer API key:
```bash theme={"system"}
curl https://api.upstash.com/apikey -H "Authorization: Bearer JWT_KEY" -d '{ "name" : "APPNAME_API_KEY_TIMESTAMP" }'
```
8. The GUI app saves the Developer API key locally. It can then call any
   Upstash Developer API endpoint: [developer.upstash.com](https://developer.upstash.com/)
## Managing Resources
After obtaining an Upstash Developer API key, your platform surface (web or GUI) can call the Upstash API, for example **[Create Database](https://developer.upstash.com/#create-database-global)** or **[List Databases](https://developer.upstash.com/#list-databases)**.
In this flow, you can ask users for the region and name of the database, then call the Create Database API to complete the task.
Example curl request:
```bash theme={"system"}
curl -X POST \
  https://api.upstash.com/v2/redis/database \
  -u 'EMAIL:API_KEY' \
  -d '{"name":"myredis", "region":"global", "primary_region":"us-east-1", "read_regions":["us-west-1","us-west-2"], "tls": true}'
```
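The Developer API authenticates with HTTP Basic auth (`EMAIL:API_KEY`), which is what the `-u` flag does. A TypeScript sketch of building that header (the helper name is ours):

```typescript
// HTTP Basic auth header for the Upstash Developer API: base64("email:apiKey").
function basicAuthHeader(email: string, apiKey: string): string {
  return "Basic " + Buffer.from(`${email}:${apiKey}`).toString("base64");
}

// Usage (mirrors the curl example; values are placeholders):
// await fetch("https://api.upstash.com/v2/redis/database", {
//   method: "POST",
//   headers: { Authorization: basicAuthHeader("EMAIL", "API_KEY") },
//   body: JSON.stringify({ name: "myredis", region: "global", primary_region: "us-east-1" }),
// });
```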
# Legal
Source: https://upstash.com/docs/redis/help/legal
## Upstash Legal Documents
* [Upstash Terms of Service](https://upstash.com/trust/terms.pdf)
* [Upstash Privacy Policy](https://upstash.com/trust/privacy.pdf)
* [Upstash Subcontractors](https://upstash.com/trust/subprocessors.pdf)
# Managing Healthcare Data
Source: https://upstash.com/docs/redis/help/managing-healthcare-data
You can use Upstash Redis to store and process Protected Health Information (PHI). You are responsible for the following:
* **Signing a Business Associate Agreement (BAA)** with Upstash. This is provided as part of our Enterprise offering. Email [support@upstash.com](mailto:support@upstash.com) to get started.
* **Marking specific databases as HIPAA databases** and addressing security issues raised by the Upstash team.
* **Ensuring MFA is enabled** on all Upstash Console accounts.
* Enforce MFA as a requirement to access the organization
* **Enabling Prod Pack** which provides encryption at rest and advanced security features (already included in the Enterprise plan).
* **Enabling Credential Protection** to prevent credentials from being stored in Upstash infrastructure and to limit console features that require database credentials.
* **Configuring IP allowlist** to restrict database access to authorized networks.
* **Enabling daily backups** to validate recoverability and meet retention requirements.
* **Complying with encryption requirements** in the HIPAA Security Rule. Data is encrypted at rest and in transit by Upstash. You can consider encrypting the data at your application layer.
* **Ensuring that PHI is stored only within your database**. Storing PHI in resource names or other locations is strictly prohibited.
* **Ensuring that PHI is stored only in values of data structures, not in identifiers or keys**. Avoid logging keys anywhere.
* **Not using public endpoints** to process PHI.
* **Not transferring databases** to a non-HIPAA organization.
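As an illustration of keeping identifiers out of keys, you can derive an opaque key from the patient identifier and store PHI only in the value. This is a hypothetical sketch, not an Upstash requirement; the key scheme and HMAC secret are our own choices:

```typescript
import { createHmac } from "crypto";

// Derive an opaque Redis key from a patient identifier so the identifier itself
// never appears in key names or logs. The HMAC secret must live outside Redis,
// e.g. in your secret manager.
function opaqueKey(secret: string, patientId: string): string {
  return "phi:" + createHmac("sha256", secret).update(patientId).digest("hex");
}

// Usage: PHI goes only into the value, under a non-identifying key.
// await redis.set(opaqueKey(secret, "patient-12345"), JSON.stringify(record));
```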
For a comprehensive guide on implementing these responsibilities in production, see our [Production Checklist](/redis/help/production-checklist). For questions about managing healthcare data, contact our support team at [support@upstash.com](mailto:support@upstash.com).
# Production Checklist
Source: https://upstash.com/docs/redis/help/production-checklist
This checklist provides essential recommendations for securing and optimizing your Upstash databases for production workloads.
## Security Features
### Enable Prod Pack
Prod Pack provides enterprise-grade security and monitoring features:
* 99.99% uptime SLA
* SOC-2 Type 2 report available
* Role-Based Access Control (RBAC)
* Encryption at Rest
* Advanced monitoring (Prometheus, Datadog)
* High availability for read regions
Prod Pack is available as a \$200/month add-on per database for all paid plans except Free tier.
### Enable Credential Protection
Protect your database credentials (Prod Pack feature):
* Credentials are never stored in Upstash infrastructure
* Credentials are displayed only once during enablement
* Console features requiring database access are disabled
Disabling this feature will permanently revoke current credentials and generate new ones.
### Configure IP Allowlist
Restrict database access to specific IP addresses:
* Available on all plans except Free tier
* Supports IPv4 addresses and CIDR blocks
* Multiple IP ranges can be configured
### Implement Redis ACL
Use Redis Access Control Lists to restrict user access:
* Create users with minimal required permissions
* Available for both TCP connections and REST API
* Use `ACL RESTTOKEN` command to generate REST tokens
### Enable Multi-Factor Authentication
Enable MFA on your Upstash account for enhanced security:
* Use your existing authentication provider (Google, GitHub, Amazon)
* Consider using a dedicated email/password account for production
* Force MFA for all team members to ensure consistent security
* Regularly review account access and team member permissions
### Secure Credential Management
Follow these best practices:
* Never hardcode credentials in your application code
* Use environment variables or secret management systems
* Reset passwords immediately if credentials are compromised
* Use Read-Only tokens for public-facing applications
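As a small sketch of the environment-variable practice, you can resolve credentials at startup and fail fast when one is missing (the `requireEnv` helper is illustrative):

```typescript
// Read a required environment variable, failing fast at startup if it is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: resolve credentials once at startup instead of hardcoding them.
// const redis = new Redis({
//   url: requireEnv("UPSTASH_REDIS_REST_URL"),
//   token: requireEnv("UPSTASH_REDIS_REST_TOKEN"),
// });
```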
## Network Security
### TLS Encryption
TLS is always enabled on Upstash Redis databases.
### VPC Peering (Enterprise)
Connect databases to your VPCs using private IP:
* Database becomes inaccessible from public networks
* Minimizes data transfer costs
* Available for Enterprise customers
## Monitoring & Observability
### Enable Advanced Monitoring
Prod Pack includes comprehensive monitoring:
* Prometheus integration
* Datadog integration
* Extended console metrics (up to one month)
## High Availability & Backup
### Enable Daily Backups
Configure automated daily backups for data protection:
* Available on all paid plans
* Backup retention up to 3 days with Prod Pack
* Hourly backups with customizable retention (Enterprise)
### Global Replication
For global applications, consider using Global Database:
* Distribute data across multiple regions
* Minimize latency for users worldwide
* Enhanced disaster recovery capabilities
## Compliance & Governance
### SOC-2 Compliance
Prod Pack and Enterprise plans include SOC-2 Type 2 compliance:
* Request SOC-2 report from [trust.upstash.com](https://trust.upstash.com/)
* Available for production workloads
### Enterprise Features
For enterprise customers:
* HIPAA compliance available
* SAML SSO integration
* Access logs available
* Custom resource allocation
## Pre-Production Checklist
Before going live, ensure you have:
* [ ] Prod Pack enabled (recommended)
* [ ] Credential Protection enabled
* [ ] IP Allowlist configured
* [ ] MFA enabled on your account
* [ ] Daily backups enabled
* [ ] Monitoring and alerts configured
* [ ] Environment variables secured
* [ ] Error handling tested
## Additional Resources
* [Security Features](/redis/features/security)
* [Prod Pack & Enterprise](/redis/overall/enterprise)
* [Backup & Restore](/redis/features/backup)
* [Global Database](/redis/features/globaldatabase)
* [Monitoring & Metrics](/redis/howto/metricsandcharts)
* [Compliance Information](/common/help/compliance)
* [Professional Support](/common/help/prosupport)
For additional assistance with production deployment, contact our support team at [support@upstash.com](mailto:support@upstash.com).
# Shared Responsibility Model
Source: https://upstash.com/docs/redis/help/shared-responsibility-model
The Shared Responsibility Model defines the security and operational responsibilities between Upstash and our customers when using Upstash Redis. This model ensures clarity in who is responsible for what aspects of security, compliance, and operations.
## Overview
Upstash Redis is a serverless database service that provides Redis® API compatibility with automatic scaling, high availability, and enterprise-grade security features. The shared responsibility model divides responsibilities into three main categories:
* **Upstash Responsibilities**: Infrastructure, platform, and service-level security
* **Customer Responsibilities**: Data, application, and access management
* **Shared Responsibilities**: Configuration, monitoring, and incident response
## Responsibility Matrix
| Category | Upstash | Customer | Shared |
| --------------------------- | ----------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | --------------------------------------------- |
| **Infrastructure Security** | ✅ Physical security, network infrastructure, DDoS protection, hardware maintenance | ❌ | ❌ |
| **Platform Security** | ✅ OS security, Redis updates, container security, infrastructure monitoring | ❌ | ❌ |
| **Service Availability** | ✅ 99.99% SLA (Prod Pack), multi-region replication, auto-scaling, disaster recovery | ❌ | ❌ |
| **Data Encryption** | ✅ TLS in transit, encryption at rest (Prod Pack), key management | ❌ | ❌ |
| **Compliance** | ✅ SOC 2 (Prod Pack), GDPR, HIPAA (Enterprise) | ❌ | ❌ |
| **Data Management** | ❌ | ✅ Data classification, retention policies, quality controls | ❌ |
| **Application Security** | ❌ | ✅ Secure development, input validation, authentication, client-side encryption | ❌ |
| **Access Control** | ❌ | ✅ Redis ACL, user permissions, credential management, MFA | ❌ |
| **Network Security** | ❌ | ✅ IP allowlist, network segmentation, client security | ❌ |
| **Security Configuration** | ❌ | ❌ | ✅ ACL setup, security policies |
| **Monitoring** | ✅ Infrastructure monitoring, incident response | ✅ Application monitoring, custom metrics | ✅ Performance monitoring, security monitoring |
| **Incident Response** | ✅ Infrastructure incidents, service restoration | ✅ Application incidents, data incidents | ✅ Incident coordination, root cause analysis |
## Key Responsibilities
**Infrastructure & Platform:**
* Physical security, network infrastructure, DDoS protection
* OS security, Redis updates, container security
* 99.99% uptime SLA (Prod Pack), multi-region replication, auto-scaling
* TLS encryption, encryption at rest (Prod Pack), key management
* SOC 2 (Prod Pack), GDPR, HIPAA (Enterprise)
* 24/7 infrastructure monitoring and incident response
**Data & Application Security:**
* Architecture: retries/backoff, idempotency, timeouts; region/topology choices
* Data governance: classification, retention, integrity
* App security: secure coding, input validation, authN/authZ
* Access: Redis ACL (least privilege), credential hygiene and rotation
* Network: IP allowlist and client hardening
* Ops: monitoring/alerts, error handling, budgets/limits
**Configuration & Operations:**
* ACL, IP allowlist, and Prod Pack configuration
* Compliance requirements understanding and implementation
* Performance monitoring setup and alerting
* Incident coordination and root cause analysis
# Support & Contact Us
Source: https://upstash.com/docs/redis/help/support
## Community
[Upstash Discord Channel](https://upstash.com/discord) is the best way to
interact with the community.
## Team
You can contact the team
via [support@upstash.com](mailto:support@upstash.com) for technical support as
well as questions and feedback.
## Follow Us
Follow us at [X](https://x.com/upstash).
## Professional Support
Get [Professional Support](/common/help/prosupport) from the Upstash team.
# Uptime Monitor
Source: https://upstash.com/docs/redis/help/uptime
## Status Page
You can track the uptime status of Upstash databases in
[Upstash Status Page](https://status.upstash.com)
## Latency Monitor
You can see the average latencies for different regions in
[Upstash Latency Monitoring](https://latency.upstash.com) page
# Connect Your Client
Source: https://upstash.com/docs/redis/howto/connectclient
Upstash works with the Redis® API, which means you can use any Redis client with
Upstash. On the [Redis Clients](https://redis.io/clients) page you can find a
list of Redis clients in different languages.
Probably the easiest way to connect to your database is `redis-cli`. Because it
is already covered in [Getting Started](../overall/getstarted), we will skip it
here.
## Database
After completing the [getting started](../overall/getstarted) guide, you will
see the database page as below:
The information required by Redis clients is displayed here as **Endpoint**,
**Port**, and **Password**. Also, when you click the `Clipboard` button in the
**Connect to your database** section, you can copy the code required for your client.
Below, we will provide examples from popular Redis clients, but the information above should help you configure all Redis clients similarly.
TLS is enabled by default for all Upstash Redis databases. It's not possible
to disable it.
## upstash-redis
Because upstash-redis is HTTP based, we recommend it for Serverless functions.
Other TCP based clients can cause connection problems in highly concurrent use
cases.
**Library**: [upstash-redis](https://github.com/upstash/upstash-redis)
**Example**:
```typescript theme={"system"}
import { Redis } from "@upstash/redis";

const redis = new Redis({
  url: "UPSTASH_REDIS_REST_URL",
  token: "UPSTASH_REDIS_REST_TOKEN",
});

(async () => {
  try {
    const data = await redis.get("key");
    console.log(data);
  } catch (error) {
    console.error(error);
  }
})();
```
## Node.js
**Library**: [ioredis](https://github.com/luin/ioredis)
**Example**:
```javascript theme={"system"}
const Redis = require("ioredis");
let client = new Redis("rediss://:YOUR_PASSWORD@YOUR_ENDPOINT:YOUR_PORT");
await client.set("foo", "bar");
let x = await client.get("foo");
console.log(x);
```
## Python
**Library**: [redis-py](https://github.com/andymccurdy/redis-py)
**Example**:
```python theme={"system"}
import redis

r = redis.Redis(
    host='YOUR_ENDPOINT',
    port='YOUR_PORT',
    password='YOUR_PASSWORD',
    ssl=True)

r.set('foo', 'bar')
print(r.get('foo'))
```
## Java
**Library**: [jedis](https://github.com/xetorthio/jedis)
**Example**:
```java theme={"system"}
Jedis jedis = new Jedis("YOUR_ENDPOINT", YOUR_PORT, true);
jedis.auth("YOUR_PASSWORD");
jedis.set("foo", "bar");
String value = jedis.get("foo");
System.out.println(value);
```
Jedis does not offer command-level retry configuration by default, but you can handle
retries using a connection pool. See [Retrying a command after a connection
failure](https://redis.io/docs/latest/develop/clients/jedis/connect/#retrying-a-command-after-a-connection-failure)
## PHP
**Library**: [phpredis](https://github.com/phpredis/phpredis)
**Example**:
```php theme={"system"}
$redis = new Redis();
$redis->connect("tls://YOUR_ENDPOINT", YOUR_PORT);
$redis->auth("YOUR_PASSWORD");
$redis->set("foo", "bar");
print_r($redis->get("foo"));
```
phpredis supports connection-level retries through `OPT_MAX_RETRIES`. However,
for command-level retries, it only supports the [SCAN
command](https://github.com/phpredis/phpredis?tab=readme-ov-file#example-29).
## Go
**Library**: [redigo](https://github.com/gomodule/redigo)
**Example**:
```go theme={"system"}
package main

import "github.com/gomodule/redigo/redis"

func main() {
	c, err := redis.Dial("tcp", "YOUR_ENDPOINT:YOUR_PORT", redis.DialUseTLS(true))
	if err != nil {
		panic(err)
	}
	_, err = c.Do("AUTH", "YOUR_PASSWORD")
	if err != nil {
		panic(err)
	}
	_, err = c.Do("SET", "foo", "bar")
	if err != nil {
		panic(err)
	}
	value, err := redis.String(c.Do("GET", "foo"))
	if err != nil {
		panic(err)
	}
	println(value)
}
```
# Connect with upstash-redis
Source: https://upstash.com/docs/redis/howto/connectwithupstashredis
[upstash-redis](https://github.com/upstash/redis-js)
is an HTTP/REST based Redis client built on top of
[Upstash REST API](/redis/features/restapi). For more information,
refer to the documentation of Upstash redis client ([TypeScript](/redis/sdks/ts/overview) & [Python](/redis/sdks/py/overview)).
It is the only connectionless (HTTP based) Redis client and designed for:
* Serverless functions (AWS Lambda ...)
* Cloudflare Workers (see
[the example](https://github.com/upstash/redis-js/tree/main/examples/cloudflare-workers-with-typescript))
* Fastly Compute\@Edge
* Next.js, Jamstack ...
* Client side web/mobile applications
* WebAssembly
* and other environments where HTTP is preferred over TCP.
See
[the list of supported APIs](https://docs.upstash.com/features/restapi#rest---redis-api-compatibility).
## Quick Start
### Install
```bash theme={"system"}
npm install @upstash/redis
```
### Usage
```typescript theme={"system"}
import { Redis } from "@upstash/redis";

const redis = new Redis({
  url: "UPSTASH_REDIS_REST_URL",
  token: "UPSTASH_REDIS_REST_TOKEN",
});

(async () => {
  try {
    const data = await redis.get("key");
    console.log(data);
  } catch (error) {
    console.error(error);
  }
})();
```
If you define the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` environment
variables, you can load them automatically:
```typescript theme={"system"}
import { Redis } from "@upstash/redis";

const redis = Redis.fromEnv();

(async () => {
  try {
    const data = await redis.get("key");
    console.log(data);
  } catch (error) {
    console.error(error);
  }
})();
```
# Datadog - Upstash Redis Integration
Source: https://upstash.com/docs/redis/howto/datadog
This guide will walk you through the steps to seamlessly connect your Datadog account with Upstash for enhanced monitoring and analytics.
**Integration Scope**
The Upstash Datadog integration only covers Pro databases and those included in the Enterprise plan.
## **Step 1: Log in to Your Datadog Account**
1. Open your web browser and navigate to [Datadog](https://www.datadoghq.com/).
2. Log in to your Datadog account.
## **Step 2: Install Upstash Application**
1. Once logged in, navigate to the "Integrations" page in Datadog.
2. Search for "Upstash" in the integrations list and click on it.
3. Click the "Install" button to add Upstash to your Datadog account.
## **Step 3: Connect Accounts**
After installing Upstash, click the "Connect Accounts" button; Datadog will redirect you to the Upstash site for account integration.
## **Step 4: Select Account to Integrate**
1. On the Upstash site, you will be prompted to select the Datadog account you want to integrate.
2. Choose the appropriate Datadog account from the list.
The Upstash Datadog integration supports both personal and team-based accounts.
**Caveats:**
* This integration can only be set up once. If you would like to extend the list of teams in the integration, please re-establish it from scratch.
## **Step 5: Wait for Metrics Availability**
Once you've selected your Datadog account, Upstash will begin the integration process. Please be patient while the metrics are being retrieved; this may take a few moments.
The metrics will then be available in the Upstash Overview Dashboard.
## **Step 6: Datadog Integration Removal Process**
Navigate to the "Integrations" tab in your Datadog account. If you would like to remove the integration between Upstash and Datadog, press "Remove".
### Confirm Removal:
Upstash will suspend all metric publishing after you remove the Datadog integration.
After removing the integration on the Upstash side, it's crucial to go to your Datadog account and remove any related API keys or configurations associated with the integration.
## Pricing
If you choose to integrate Datadog via Upstash, there will be an additional cost of \$5 per month.
This charge will be reflected in your monthly invoice accordingly.
## **Conclusion**
Congratulations! You have successfully integrated your Datadog account with Upstash. You will now have access to enhanced monitoring and analytics for your Datadog metrics.
Feel free to explore Upstash's features and dashboards to gain deeper insights into your system's performance.
If you encounter any issues or have questions, please refer to the Upstash support documentation or contact our support team for assistance.
# EMQX - Upstash Redis Integration
Source: https://upstash.com/docs/redis/howto/emqxintegration
EMQX, a robust open-source MQTT message broker, is engineered for scalable, distributed environments, prioritizing high
availability, throughput, and minimal latency. As a preferred protocol in the IoT landscape, MQTT (Message Queuing
Telemetry Transport) excels in enabling devices to effectively publish and subscribe to messages.
Offered by EMQ, EMQX Cloud is a comprehensively managed MQTT service in the cloud, inherently scalable and secure. Its
design is particularly advantageous for IoT applications, providing dependable MQTT messaging services.
This tutorial guides you on streaming MQTT data to Upstash via data integration. It allows clients to send temperature
and humidity data to EMQX Cloud using MQTT and channel it into Upstash for Redis storage.
## Setting Up Redis Database with Upstash
1. Log in and create a Redis Database by clicking the **Create Database** button on [Upstash Console](https://console.upstash.com).
2. Name your database and select a region close to your EMQX Cloud for optimal performance.
3. Click **Create** to have your serverless Redis Database ready.

### Database Details
Access the database console for the necessary information for further steps.

The above steps conclude the initial setup for Upstash.
## Establishing Data Integration with Upstash
### Activating EMQX Cloud's NAT Gateway
1. Log into the EMQX Cloud console and go to the deployment Overview.
2. Select **NAT Gateway** at the bottom and click **Subscribe Now**.

### Configuring Data Integration
1. In the EMQX Cloud console, choose **Data Integrations** and select **Upstash for Redis**.

2. Input **Endpoints** info from the Redis detail page into the **Redis Server** field, including the port. Enter the
password in **Password** and click **Test** to ensure connectivity.

3. Click **New** to add a Redis resource. A new Upstash for Redis will appear under **Configured Resources**.
4. Formulate a new SQL rule in the **SQL** field. This rule reads messages from `temp_hum/emqx` and appends the client ID and timestamp.
* `up_timestamp`: Message report time
* `client_id`: Publishing client's ID
* `temp`: Temperature data
* `hum`: Humidity data
```sql theme={"system"}
SELECT
  timestamp as up_timestamp,
  clientid as client_id,
  payload.temp as temp,
  payload.hum as hum
FROM
  "temp_hum/emqx"
```

5. Execute an SQL test with payload, topic, client info. Successful results confirm the rule's effectiveness.

6. Proceed to **Next** to link an action. The rule will store the timestamp, client ID, temperature, and humidity in
Redis. Click **Confirm**.
```bash theme={"system"}
HMSET ${client_id} ${up_timestamp} ${temp}
```

7. Post-binding, click **View Details** for the rule SQL and bound actions.
8. To review rules, select **View Created Rules** in Data Integrations. Check detailed metrics in the **Monitor**
column.
## Testing the Data Bridge
1. Simulate temperature and humidity data with [MQTTX](https://mqttx.app/). Add connection address and client
authentication for the EMQX Dashboard.

2. In Upstash Console, under Data Browser, select a client entry to review messages.

# Get Started with AWS Lambda
Source: https://upstash.com/docs/redis/howto/getstartedawslambda
You can connect to an Upstash database from your Lambda functions using your
favorite Redis client. You do not need any extra configuration. The only thing
to note is that you should use the same region for your Lambda function and
database to minimize latency.
If you do not have experience with AWS Lambda, you can follow the tutorial
below. It explains the steps required to implement an AWS Lambda function that
takes a key/value pair as parameters from API Gateway and inserts it as an
entry into an Upstash database. We implement the function in Node.js, but the
steps and logic are quite similar in other languages.
This example uses a TCP-based Redis client. If you expect many concurrent AWS
Lambda invocations, we recommend using
**[upstash-redis](/redis/howto/connectwithupstashredis)**, which is HTTP/REST
based.
**Step 1: Create database on Upstash**
If you do not have one, create a database following this
[guide](../overall/getstarted).
**Step 2: Create a Node project**
Create an empty folder for your project and inside the folder create a node
project with the command:
```
npm init
```
Then install the redis client with:
```
npm install ioredis
```
Now create an `index.js` file and replace the Redis URL in the code below.
```javascript theme={"system"}
var Redis = require("ioredis");

if (typeof client === "undefined") {
  var client = new Redis("rediss://:YOUR_PASSWORD@YOUR_ENDPOINT:YOUR_PORT");
}

exports.handler = async (event) => {
  await client.set(event.key, event.value);
  let result = await client.get(event.key);
  let response = {
    statusCode: 200,
    body: JSON.stringify({
      result: result,
    }),
  };
  return response;
};
```
**Step 3: Deploy Your Function**
Our function is ready to deploy. Normally you could copy-paste your function
code into the AWS Lambda editor, but that is not possible here because we have
an extra dependency (the Redis client). So we will zip and upload the function.
When you are in your project folder, create a zip with this command:
```
zip -r app.zip .
```
Now open the AWS console and, from the top-right menu, select the region in
which you created your Upstash database. Then find or search for the Lambda
service and click the `Create Function` button.
Enter a name for your function and select `Node.js 14.x` as runtime. Click
`Create Function`.
Now you are on the function screen, scroll below to `Function Code` section. On
`Code entry type` selection, select `Upload a .zip file`. Upload the `app.zip`
file you have just created and click on the `Save` button on the top-right. You
need to see your code as below:
Now you can test your code. Click on the `Test` button on the top right. Create
an event like the below:
```
{
"key": "foo",
"value": "bar"
}
```
Now, click on Test. You will see something like this:
Congratulations, your Lambda function now inserts an entry into your Upstash
database.
**What's next?**
* You can write and deploy another function to just get values from the
database.
* You can learn better ways to deploy your functions, such as the
[serverless framework](https://serverless.com/) and
[AWS SAM](https://aws.amazon.com/serverless/sam/).
* You can integrate
[API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html)
so you can call your function via HTTP.
* You can learn how to monitor your functions from CloudWatch as described
[here](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-logs.html).
#### Redis Connections in AWS Lambda
Although Redis connections are very lightweight, creating a new connection inside
each Lambda invocation can cause notable latency. On the other hand, reusing Redis
connections inside AWS Lambda functions has its own drawbacks: when AWS
scales out Lambda functions, the number of open connections can rapidly
increase. Fortunately, Upstash detects and terminates idle and zombie
connections with its smart connection handling algorithm. Because of this, we
recommend caching your Redis connection in serverless functions.
See [the blog post](https://blog.upstash.com/serverless-database-connections)
about the database connections in serverless functions.
Below are our findings about various Redis clients' behaviors when a connection is
created, a single command is submitted, and then the connection is closed. **Note
that these commands (AUTH, INFO, PING, QUIT, COMMAND) are not billed.**
| Client | #Commands | Issued Commands |
| ----------------------------------------------------- | :-------: | :----------------: |
| [redis-cli](https://redis.io/topics/rediscli) | 2 | AUTH - COMMAND |
| [node-redis](https://github.com/NodeRedis/node-redis) | 3 | AUTH - INFO - QUIT |
| [ioredis](https://github.com/luin/ioredis) | 3 | AUTH - INFO - QUIT |
| [redis-py](https://github.com/andymccurdy/redis-py) | 1 | AUTH |
| [jedis](https://github.com/xetorthio/jedis) | 2 | AUTH - QUIT |
| [lettuce](https://github.com/lettuce-io/lettuce-core) | 2 | AUTH - QUIT |
| [go-redis](https://github.com/go-redis/redis) | 1 | AUTH |
# Get Started with Cloudflare Workers
Source: https://upstash.com/docs/redis/howto/getstartedcloudflareworkers
This tutorial showcases using Redis over the REST API in Cloudflare Workers. We will
write a sample edge function (a Cloudflare Worker) that shows a custom
greeting depending on the location of the client. We will load the greeting
message from Redis so you can update it without touching the code.
See
[the code](https://github.com/upstash/examples/tree/master/examples/using-cloudflare-workers).
### Why Upstash?
* Cloudflare Workers does not allow TCP connections. Upstash provides REST API
on top of the Redis database.
* Upstash is a serverless offering with per-request pricing, which fits edge
and serverless functions.
* Upstash Global database provides low latency all over the world.
### Step-1: Create Redis Database
Create a free Global database from
[Upstash Console](https://console.upstash.com). Find your REST URL and token in
the database details page in the console. Copy them.
Connect your database with redis-cli and add some greetings
```shell theme={"system"}
usw1-selected-termite-30690.upstash.io:30690> set GB "Ey up?"
OK
usw1-selected-termite-30690.upstash.io:30690> set US "Yo, what’s up?"
OK
usw1-selected-termite-30690.upstash.io:30690> set TR "Naber dostum?"
OK
usw1-selected-termite-30690.upstash.io:30690> set DE "Was ist los?"
OK
```
### Step-2: Edge Function
The best way to work with Cloudflare Workers is to use
[Wrangler](https://developers.cloudflare.com/workers/get-started/guide). After
installing and configuring Wrangler, create a folder for your project, and inside
that folder run: `wrangler init`
Choose `yes` to create package.json, `no` to typescript and `yes` to create a
worker in src/index.js.
It will create `wrangler.toml`, `package.json` and `src/index.js`.
Append the Upstash REST URL and token to the toml as below:
```toml theme={"system"}
# wrangler.toml
# existing config
[vars]
UPSTASH_REDIS_REST_TOKEN = "AX_sASQgODM5ZjExZGEtMmI3Mi00Mjcwk3NDIxMmEwNmNkYjVmOGVmZTk5MzQ="
UPSTASH_REDIS_REST_URL = "https://us1-merry-macaque-31458.upstash.io/"
```
Install upstash-redis: `npm install @upstash/redis`
Replace `src/index.js` with the following:
```javascript theme={"system"}
// src/index.js
import { Redis } from "@upstash/redis/cloudflare";
export default {
async fetch(request, env) {
const redis = Redis.fromEnv(env);
const country = request.headers.get("cf-ipcountry");
if (country) {
const greeting = await redis.get(country);
if (greeting) {
return new Response(greeting);
}
}
return new Response("Hello!");
},
};
```
The code tries to find out the user's location by checking the "cf-ipcountry"
header, then loads the correct greeting for that location using the Redis
REST API.
## Run locally
Run `wrangler dev` and open your browser at
[localhost:8787](http://localhost:8787).
## Build and Deploy
Build and deploy your app to Cloudflare by running: `wrangler publish`
The URL of your app will be logged:
[https://using-cloudflare-workers.upstash.workers.dev/](https://using-cloudflare-workers.upstash.workers.dev/)
## TypeScript example
We also have a TypeScript example, available
[here](https://github.com/upstash/upstash-redis/tree/main/examples/cloudflare-workers-with-typescript).
# Get Started with Google Cloud Functions
Source: https://upstash.com/docs/redis/howto/getstartedgooglecloudfunctions
### Prerequisites:
* A GCP account for Google Cloud functions.
* Install [Google Cloud SDK](https://cloud.google.com/sdk/docs/install).
* An Upstash account for Serverless Redis.
### Step 1: Init the Project
* Create a folder, then run `npm init` inside the folder.
### Step 2: Install a Redis Client
Our only dependency is a Redis client. Install ioredis via `npm install ioredis`
### Step 3: Create a Redis Database
Create a Redis database from the Upstash console. **Select GCP US-Central-1 as
the region.** The free tier should be enough. It is pretty straightforward, but if
you need help, check the [getting started](../overall/getstarted) guide. In the
database details page, click the Connect button. You will need the endpoint and
password in the next step.
### Step 4: The Function Code
Create an index.js file as below:
```javascript theme={"system"}
var Redis = require("ioredis");
if (typeof client === "undefined") {
var client = new Redis("rediss://:YOUR_PASSWORD@YOUR_ENDPOINT:YOUR_PORT");
}
exports.helloGET = async (req, res) => {
let count = await client.incr("counter");
res.send("Page view:" + count);
};
```
The code simply increments a counter in the Redis database and returns its value
in the response.
### Step 5: Deployment
Now we are ready to deploy our API. Deploy via:
```shell theme={"system"}
gcloud functions deploy helloGET \
--runtime nodejs14 --trigger-http --allow-unauthenticated
```
You will see the URL of your Cloud Function. Click the URL to check if it is
working properly.
```shell theme={"system"}
httpsTrigger:
securityLevel: SECURE_OPTIONAL
url: https://us-central1-functions-317005.cloudfunctions.net/helloGET
```
In case of an issue, you can check the logs of your Cloud Function in the GCP
console as below.
# Import/Export Data
Source: https://upstash.com/docs/redis/howto/importexport
## Using Upstash Console
You can use the migration wizard in the
[Upstash console](https://console.upstash.com) to import your Redis to Upstash.
In the database list page, click on the `Import` button; you will see a dialog
like the one below:
You can move your data from either an Upstash database or a database in another
provider (or on premise).
All the data will be deleted (flushed) in the destination database before the
migration process starts.
## Using upstash-redis-dump
You can also use the
[upstash-redis-dump](https://github.com/upstash/upstash-redis-dump) tool to
import/export data from another Redis.
Below is an example of how to dump data:
```shell theme={"system"}
$ upstash-redis-dump -db 0 -host eu1-moving-loon-6379.upstash.io -port 6379 -pass PASSWORD -tls > redis.dump
Database 0: 9 keys dumped
```
See [upstash-redis-dump repo](https://github.com/upstash/upstash-redis-dump) for
more information.
# ioredis note
Source: https://upstash.com/docs/redis/howto/ioredisnote
This example uses ioredis, you can copy the connection string from the `Node`
tab in the console.
# Use IP Allowlist
Source: https://upstash.com/docs/redis/howto/ipallowlist
IP Allowlist is available on all plans except for the free plan.
IP Allowlist can be used to restrict which IP addresses are permitted to access your database by comparing a connection's address with predefined CIDR blocks. This feature enhances database security by allowing connections only from specified IP addresses. For example, if you have dedicated production servers with static IP addresses, enabling the IP allowlist blocks connections from all other addresses.
## Enabling IP Allowlist
By default, any IP address can be used to connect to your database. You must add at least one IP range to enable the allowlist. You can manage added IP ranges in the `Configuration` section on the database details page. You can either provide
* IPv4 address, e.g. `37.237.15.43`
* CIDR block, e.g. `181.49.172.0/24`
Currently, IP Allowlist only supports IPv4 addresses.
You can use more than one range to allow multiple clients. Meeting the criteria of just one is enough to establish a connection.
It may take a few minutes for changes to propagate.
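Conceptually, the check compares the connecting address against each allowed CIDR block. The sketch below is a hypothetical illustration of that matching logic, shown only to make the behavior concrete; Upstash performs this check server-side, and these helper names are ours:

```typescript
// Hypothetical illustration of IPv4 CIDR matching; not part of any Upstash SDK.
function ipToInt(ip: string): number {
  // "181.49.172.57" -> unsigned 32-bit integer
  return ip.split(".").reduce((acc, octet) => ((acc << 8) | parseInt(octet, 10)) >>> 0, 0);
}

function inCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split("/");
  // A plain IPv4 address behaves like a /32 block
  const bits = bitsStr === undefined ? 32 : parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}

console.log(inCidr("181.49.172.57", "181.49.172.0/24")); // true
console.log(inCidr("37.237.15.43", "37.237.15.43")); // true
console.log(inCidr("10.0.0.1", "181.49.172.0/24")); // false
```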
# Listen Keyspace Notifications
Source: https://upstash.com/docs/redis/howto/keyspacenotifications
Upstash allows you to listen for keyspace notifications over pubsub channels to
receive events for changes over the keys.
For each event that occurs, two kinds of notifications are fired over the
corresponding pubsub channels:
* A keyspace notification, published to the channel for the key, whose messages
are the names of the events affecting that key
* A keyevent notification, published to the channel for the event, whose messages
are the names of the keys affected by that event
The channel names and their contents are of the form:
* `__keyspace@0__:keyname` channels, carrying event names, for keyspace
notifications
* `__keyevent@0__:eventname` channels, carrying key names, for keyevent
notifications
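To make the naming scheme concrete, the channel names can be composed like this (hypothetical helpers of our own, not part of any Upstash SDK):

```typescript
// Hypothetical helpers that compose the pubsub channel names described above
const keyspaceChannel = (key: string, db = 0) => `__keyspace@${db}__:${key}`;
const keyeventChannel = (event: string, db = 0) => `__keyevent@${db}__:${event}`;

console.log(keyspaceChannel("myhash")); // __keyspace@0__:myhash  (messages: event names)
console.log(keyeventChannel("hset")); // __keyevent@0__:hset  (messages: key names)
```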
## Enabling Notifications
By default, all keyspace and keyevent notifications are off. To enable them, use
the `CONFIG SET` command and set the `notify-keyspace-events` option to an
appropriate combination of the flags described below.
Each keyspace and keyevent notification fired might have an effect on the latency of the
commands as the events are delivered to the listening clients and cluster members for
multi-replica deployments. Therefore, it is advised to only enable the minimal subset of the
notifications that are needed.
| Flag | Description |
| ---- | --------------------------- |
| K | Keyspace events |
| E | Keyevent events |
| g | Generic command events |
| \$ | String command events |
| l | List command events |
| s | Set command events |
| h | Hash command events |
| z | Sorted set command events |
| t | Stream command events |
| d | Module(JSON) command events |
| x | Expiration events |
| e | Eviction events |
| m | Key miss events |
| n | New key events |
| A | Alias for g\$lshztxed |
At least one of the `K` or `E` flags must be present in the option value.
For example, you can use the following command to receive keyspace notifications
only for the hash commands:
```bash theme={"system"}
curl -X POST \
-d '["CONFIG", "SET", "notify-keyspace-events", "Kh"]' \
-H "Authorization: Bearer $UPSTASH_REDIS_REST_TOKEN" \
$UPSTASH_REDIS_REST_URL
```
```bash theme={"system"}
redis-cli --tls -u $UPSTASH_REDIS_CLI_URL config set notify-keyspace-events Kh
```
You can listen for all the channels using redis-cli to test the effect of the
above command:
```bash theme={"system"}
redis-cli --tls -u $UPSTASH_REDIS_CLI_URL --csv psubscribe '__key*__:*'
```
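If you build the flag string programmatically, a small validator like the hypothetical sketch below captures the flag table and the K/E requirement (this is our own helper for illustration, not part of any Upstash SDK):

```typescript
// Valid flag characters, taken from the table above
const VALID_FLAGS = new Set("KEg$lshztdxemnA".split(""));

function isValidNotifyFlags(flags: string): boolean {
  if (flags === "") return true; // an empty string disables notifications
  const chars = flags.split("");
  if (!chars.every((c) => VALID_FLAGS.has(c))) return false;
  // At least one of the K or E flags must be present
  return chars.includes("K") || chars.includes("E");
}

console.log(isValidNotifyFlags("Kh")); // true
console.log(isValidNotifyFlags("h")); // false: missing K or E
console.log(isValidNotifyFlags("KEA")); // true
```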
### Disabling Notifications
You can reuse the `CONFIG SET` command and set the `notify-keyspace-events` option
to an empty string to disable all keyspace and keyevent notifications.
```bash theme={"system"}
curl -X POST \
-d '["CONFIG", "SET", "notify-keyspace-events", ""]' \
-H "Authorization: Bearer $UPSTASH_REDIS_REST_TOKEN" \
$UPSTASH_REDIS_REST_URL
```
```bash theme={"system"}
redis-cli --tls -u $UPSTASH_REDIS_CLI_URL config set notify-keyspace-events ""
```
### Checking Notification Configuration
The `CONFIG GET` command can be used to get the current value of the `notify-keyspace-events`
option and see the active keyspace and keyevent notification configuration.
```bash theme={"system"}
curl -X POST \
-d '["CONFIG", "GET", "notify-keyspace-events"]' \
-H "Authorization: Bearer $UPSTASH_REDIS_REST_TOKEN" \
$UPSTASH_REDIS_REST_URL
```
```bash theme={"system"}
redis-cli --tls -u $UPSTASH_REDIS_CLI_URL config get notify-keyspace-events
```
# Metrics and Charts
Source: https://upstash.com/docs/redis/howto/metricsandcharts
There are many metrics and charts in Upstash console. In this document, we will
explain what each of these charts refers to. There are two pages where you can
see charts and metrics:
## Database List
The charts on this page give aggregated and total information about the database
and your usage.
In this chart, all your databases are listed. You can click on the name of the
database that you want to see detailed information. Also, the following
information is listed for each database:
* The region of the database
* The current size of the data
* The current count of active connections: Note that if your connections are
short-lived, then you may see 0 here most of the time.
## Database Detail
The charts on this page show metrics that are specific to the selected database.
### Current Month
* This chart shows the daily cost of the database. The chart covers the last 5
days.
### Daily Request
This chart shows the daily total number of requests to the database. The chart
covers the last 5 days.
### Throughput
Throughput chart shows throughput values for reads, writes and commands (all
commands including reads and writes) per second. The chart covers the last 1
hour and it is updated every 10 seconds.
### Service Time Latency
This chart shows the processing time of a request between when it is received by
the server and when the response is sent to the caller. It shows the max,
mean, min, 99.9th percentile, and 99.99th percentile times. The chart covers the last 1
hour and is updated every 10 seconds.
### Data Size
This chart shows the data size of your database. The chart covers the last 24
hours and it is updated every 10 seconds.
### Connections
This chart shows the number of active client connections. It shows the number of
open connections plus the number of short-lived connections that started and
terminated in 10 seconds period. The chart covers the last 1 hour and it is
updated every 10 seconds.
### Key Space
This chart shows the number of keys. The chart covers the last 24 hours and it
is updated every 10 seconds.
### Hits / Misses
This chart shows the number of hits per second and misses per second. The chart
covers the last 1 hour and it is updated every 10 seconds.
# Migrate Regional to Global Database
Source: https://upstash.com/docs/redis/howto/migratefromregionaltoglobal
This guide will help you migrate your data from a regional Upstash Redis database to a global database.
If your database is a regional Upstash database, we strongly recommend migrating to [Upstash Redis Global](/common/concepts/global-replication).
Our regional Redis databases are legacy and deprecated.
## Why Migrate?
* New infrastructure, more modern and more secure
* Upstash Global is SOC-2 (included with Prod pack) and HIPAA (included with Enterprise) compatible
* Enhanced feature set: New features are only made available on Upstash Global
* Ability to add/remove read regions on the go
* Better performance as per our benchmarks
## Prerequisites
Before starting the migration, make sure you have:
1. An existing regional Upstash Redis database (source)
2. A new global Upstash Redis database (destination)
3. Access to both databases' credentials (connection strings, passwords)
## Migration Process
There are several official ways to migrate your data:
If you are using RBAC, please note that ACL users are not migrated automatically. You need to redefine them for the new global database after migration.
### 1. Using Backup/Restore (Recommended for AWS Regional Databases)
If your regional database is hosted in AWS, you can use Upstash's backup/restore feature:
1. Create a backup of your regional database:
* Go to your regional database details page
* Navigate to the `Backups` tab
* Click the `Backup` button
* Provide a unique name for your backup
* Wait for the backup process to complete
During backup creation, some database operations will be temporarily unavailable.
2. Restore the backup to your global database:
* Go to your global database details page
* Navigate to the `Backups` tab
* Click `Restore...`
* Select your regional database as the source
* Select the backup you created
* Click `Start Restore`
The restore operation will flush (delete) all existing data in your (destination) global database before restoring the backup.
### 2. Using Upstash Console Migration Wizard
The easiest way to migrate your data is using the Upstash Console's built-in migration wizard:
1. Go to [Upstash Console](https://console.upstash.com)
2. In the database list page, click the `Import` button
3. Select your source (regional) database
4. Select your destination (global) database
5. Follow the wizard instructions to complete the migration
Note: The destination database will be flushed before migration starts.
### 3. Using upstash-redis-dump
Another reliable method is using the official [upstash-redis-dump](https://github.com/upstash/upstash-redis-dump) tool:
1. Install upstash-redis-dump:
```bash theme={"system"}
go install github.com/upstash/upstash-redis-dump@latest
```
2. Export data from regional database:
```bash theme={"system"}
upstash-redis-dump -db 0 -host YOUR_REGIONAL_HOST -port YOUR_DATABASE_PORT -pass YOUR_PASSWORD -tls > redis.dump
```
3. Import data to global database:
```bash theme={"system"}
redis-cli --tls -u redis://:YOUR_PASSWORD@YOUR_GLOBAL_HOST:6379 --pipe < redis.dump
```
## Verification
After migration, verify your data:
1. Compare key counts in both databases
2. Sample test some keys to ensure data integrity
## Post-Migration Steps
1. Update your application configuration to use the new Global database URL
2. Test your application thoroughly with the new database
3. Monitor performance and consistency across regions
4. Keep the regional database as backup for a few days
5. Once verified, you can safely delete the regional database
## Need Help?
If you encounter any issues during migration, please contact Upstash support via chat, [support@upstash.com](mailto:support@upstash.com) or visit our Discord community for assistance.
# Monitor your usage
Source: https://upstash.com/docs/redis/howto/monitoryourusage
We support the Redis `MONITOR` command, a debugging command that allows you to see all requests processed by your Redis instance in real-time.
## Monitoring Your Usage - Video Guide
In this video, we'll walk through setting up a monitor instance step-by-step.
The `MONITOR` command expects a persistent connection and, therefore, does not work over HTTP.
In this video, we use `ioredis` to connect to our Upstash Redis database. Using an event handler, we can define what should happen for each command executed against our Redis instance, for example, logging all data to the console.
```ts Example theme={"system"}
const monitor = await redis.monitor()
monitor.on("monitor", (time, args, source, database) => {
console.log(time, args, source, database)
})
```
# Read Your Writes
Source: https://upstash.com/docs/redis/howto/readyourwrites
The "Read Your Writes" feature in Upstash Redis ensures that write operations are completed before subsequent read operations occur, maintaining data consistency in your application.
### How It Works
All write operations happen on the primary member and take time to propagate to the read replicas. Imagine that a client attempts to read an item immediately after it’s written. The read may go to a replica that hasn’t synced with the primary yet, resulting in stale data being returned.
RYW consistency solves this by returning a **sync token** after each request, which indicates the primary member’s state. In the next request, this sync token ensures the read replica syncs up to that token before serving the read.
So, the sync token acts as a checkpoint, ensuring that any read operations following a write reflect the most recent changes, even if they are served by a read replica.
Management of the sync token is handled automatically by the official [Typescript (version 1.34.0 and later)](/redis/sdks/ts/overview) and [Python (version 1.2.0 and later)](/redis/sdks/py/overview) SDKs of Upstash. When you initialize a Redis client with these SDKs, the writes made by that client will be respected during subsequent reads from the same client.
For REST users, you can achieve similar behavior by using the `upstash-sync-token` header. Each time you make a request, save the value of the `upstash-sync-token` header from the response and pass it in the `upstash-sync-token` header of your next request. This ensures that subsequent reads reflect the writes.
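As an illustration, a REST client could thread the token like the minimal sketch below. Assumptions: `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` are set, commands are sent in the REST API's JSON array format, and the `command` helper is our own:

```typescript
// Minimal sketch of threading the sync token across REST requests
let syncToken: string | null = null;

async function command(cmd: string[]): Promise<unknown> {
  const res = await fetch(process.env.UPSTASH_REDIS_REST_URL!, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.UPSTASH_REDIS_REST_TOKEN}`,
      // Ask the serving replica to sync up to the last seen state
      ...(syncToken ? { "upstash-sync-token": syncToken } : {}),
    },
    body: JSON.stringify(cmd),
  });
  // Save the token from this response for the next request
  syncToken = res.headers.get("upstash-sync-token");
  const data = (await res.json()) as { result: unknown };
  return data.result;
}

// Usage:
//   await command(["SET", "foo", "bar"]);
//   await command(["GET", "foo"]); // the read reflects the write above
```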
### Cross-Client Synchronization
Imagine that you are writing some key to Redis and then you read the same key from a different Redis client instance. In this case, the second client’s read request may not reflect the write made by the first client, as the sync tokens are updated independently in the two clients.
Consider these two example functions, each representing separate API endpoints:
```ts theme={"system"}
export const writeRequest = async () => {
const redis = Redis.fromEnv();
const randomKey = nanoid();
await redis.set(randomKey, "value");
return randomKey;
};
export const readRequest = async (randomKey: string) => {
const redis = Redis.fromEnv();
const value = await redis.get(randomKey);
return value;
};
```
If these functions are called in sequence, they will create two separate clients:
```ts theme={"system"}
const randomKey = await writeRequest();
await readRequest(randomKey);
```
As explained above, in rare cases, one of your [read replicas](/redis/features/globaldatabase#primary-region-and-read-regions) can serve the `read` request before it receives the `write` update from the primary replica. To avoid this, if you are using `@upstash/redis` version 1.34.1 or later, you can pass the `readYourWritesSyncToken` from the first client to the second:
```ts theme={"system"}
export const writeRequest = async () => {
const redis = Redis.fromEnv();
const randomKey = nanoid();
await redis.set(randomKey, "value");
// Get the token **after** making the write
const token = redis.readYourWritesSyncToken;
return { randomKey, token };
};
export const readRequest = async (
randomKey: string,
token: string | undefined
) => {
const redis = Redis.fromEnv();
// Set the token **before** making the read
redis.readYourWritesSyncToken = token;
const value = await redis.get(randomKey);
return value;
};
const { randomKey, token } = await writeRequest();
await readRequest(randomKey, token);
```
Remember to get the sync token after the write request completes, as the sync token changes with each request.
For REST users or the Upstash Python SDK, a similar approach can be used. In Python, use `Redis._sync_token` instead of `readYourWritesSyncToken`.
# Terraform Provider
Source: https://upstash.com/docs/redis/howto/terraformprovider
You can use the Upstash Terraform provider to create your resources. An API key
is required to create resources.
### Configure Provider
The provider requires your email address and an API key, which can be created in
the console.
```
provider "upstash" {
email = ""
api_key = ""
}
```
### Create Database
As input, you need to provide the database name, region, and type.
```
resource "upstash_database" "mydb" {
database_name = "testdblstr"
region = "eu-west-1"
type = "free"
}
```
You can output the database credentials as follows:
```
output "endpoint" {
value = "${upstash_database.mydb.endpoint}"
}
output "port" {
value = "${upstash_database.mydb.port}"
}
output "password" {
value = "${upstash_database.mydb.password}"
}
```
See our
[Terraform Provider Github Repository](https://github.com/upstash/terraform-provider-upstash)
for details and examples about Upstash Terraform Provider.
# Upgrade Your Database
Source: https://upstash.com/docs/redis/howto/upgradedatabase
The free tier has the following restrictions:
* Max 500K commands per month
* Max 256MB data size
* One free database per account
If you think your database is close to reaching any of these limits, we
recommend upgrading to the pay-as-you-go plan, which includes:
* No limit on requests per day
* Data size up to 100 GB
To upgrade your database, you need to have a payment method. You can add a
payment method as described [here](/common/account/addapaymentmethod). After you add
a payment method, Upstash restarts your database and your new database starts
with the pay-as-you-go plan.
See [Pricing & Limits](../overall/pricing) for the limits of the
pay-as-you-go and fixed plans. If you think your use case will exceed those quotas,
contact us ([support@upstash.com](mailto:support@upstash.com)) about our [Enterprise Plan](../overall/enterprise),
where you can customize the limits.
During the upgrade process, you will not lose any data, but your database will
experience a downtime of about 1-2 seconds and your existing clients will be
disconnected. So it is recommended to upgrade your database when there is the
least activity.
# Vercel - Upstash Redis Integration
Source: https://upstash.com/docs/redis/howto/vercelintegration
If you are using [Vercel](https://vercel.com/), you can easily integrate Upstash
Redis, Vector, Search, or QStash into your project. Upstash is the perfect serverless
solution for your applications thanks to its:
* Low latency data
* Per request pricing
* Durable storage
* Ease of use
Below are the steps of the integration.
## Add Integration to Your Vercel Account
Visit the [Upstash Integration](https://vercel.com/integrations/upstash) page on
Vercel and click the `Install` button. If you are installing an Upstash integration
for the first time, you will be prompted to choose between connecting an existing Upstash
account or letting Vercel manage an Upstash account for you.
In both cases, you will be able to create and use a Redis database as usual. If you let Vercel
manage your Upstash account, you can handle payments, database creation, and deletion directly from the Vercel dashboard.
If you choose to connect an existing Upstash account, you will be able to use Upstash Console
features such as teams and audit logs.
### Option 1: "Create New Upstash Account"
If you choose this option, Vercel will prompt you to choose one of the products available on Upstash
and configure the database (name, regions, plan). After you finish the configuration,
Vercel will create the Upstash account and the selected resources for you, then redirect you to the
created resource's page on the Vercel dashboard.
On the Vercel dashboard, you will be able to find the credentials of the database and change the
database name, regions, or plan.
You can also go to the `Settings` tab and connect your apps on Vercel to the database, making the credentials
of the database available to the app as environment variables.
### Option 2: "Link Existing Upstash Account"
Vercel will redirect you to Upstash, where you can select your Vercel project
and Upstash resources that you want to integrate.
If you are not already logged in, log in to [the Upstash Console](https://console.upstash.com/)
with your account before clicking continue.
If you do not have a Redis database yet, you can create one
from the dropdown menu.
Once you have selected all resources, click the `Save` button at the bottom of
the page.
After all environment variables are created, you will be forwarded to Vercel. Go
to your project settings where you can see all added environment variables.
You need to redeploy your app for the environment variables to take effect.
The [Integration Dashboard](https://console.upstash.com/integration/vercel)
allows you to see all your integrations, link new projects or manage existing
ones.
## Use Upstash in Your App
If you completed the integration steps above and redeployed your app, the added
environment variables will be accessible inside your Vercel application. You can
now use them in your clients to connect:
### Redis
```ts theme={"system"}
import { Redis } from "@upstash/redis";
import { type NextRequest, NextResponse } from "next/server";
const redis = Redis.fromEnv();
export const POST = async (request: NextRequest) => {
await redis.set("foo", "bar");
const bar = await redis.get("foo");
return NextResponse.json({
body: `foo: ${bar}`,
});
}
```
### QStash
**Client**
```ts theme={"system"}
import { Client } from "@upstash/qstash";
const client = new Client({
token: process.env.QSTASH_TOKEN,
});
const res = await client.publishJSON({
url: "https://my-api...",
body: {
hello: "world",
},
});
```
**Receiver**
```ts theme={"system"}
import { Receiver } from "@upstash/qstash";
const receiver = new Receiver({
currentSigningKey: process.env.QSTASH_CURRENT_SIGNING_KEY,
nextSigningKey: process.env.QSTASH_NEXT_SIGNING_KEY,
});
const isValid = await receiver.verify({
  signature: "...",
  body: "...",
});
```
### Vector
```ts theme={"system"}
import { Index } from "@upstash/vector";
const index = new Index({
url: process.env.UPSTASH_VECTOR_REST_URL,
token: process.env.UPSTASH_VECTOR_REST_TOKEN,
});
await index.upsert({
id: "1",
data: "Hello world!",
metadata: { "category": "greeting" }
})
```
### Search
```ts theme={"system"}
import { Search } from "@upstash/search";
const client = new Search({
url: process.env.UPSTASH_SEARCH_REST_URL,
token: process.env.UPSTASH_SEARCH_REST_TOKEN,
});
const index = client.index("my-index");
await index.upsert({
id: "1",
content: { text: "Hello world!" },
metadata: { category: "greeting" }
});
```
## Support
If you have any issues, you can ask in our
[Discord server](https://discord.gg/w9SenAtbme) or send an email to
[support@upstash.com](mailto:support@upstash.com)
# BullMQ with Upstash Redis
Source: https://upstash.com/docs/redis/integrations/bullmq
You can use BullMQ and Bull with Upstash Redis. BullMQ is a Redis-based Node.js queue library, the successor to Bull, so you can use Upstash Redis as its storage.
## Install
```bash theme={"system"}
npm install bullmq
```
## Usage
```javascript theme={"system"}
import { Queue } from 'bullmq';
const myQueue = new Queue('foo', { connection: {
host: "UPSTASH_REDIS_ENDPOINT",
port: 6379,
password: "UPSTASH_REDIS_PASSWORD",
tls: {}
}});
async function addJobs() {
await myQueue.add('myJobName', { foo: 'bar' });
await myQueue.add('myJobName', { qux: 'baz' });
}
await addJobs();
```
## Billing Optimization
BullMQ accesses Redis regularly, even when there is no queue activity. This can incur extra costs because Upstash charges per request on the Pay-As-You-Go plan. With the introduction of [our Fixed plans](/redis/overall/pricing#all-plans-and-limits), **we recommend switching to a Fixed plan to avoid increased command count and high costs in BullMQ use cases.**
# Celery with Upstash Redis
Source: https://upstash.com/docs/redis/integrations/celery
You can use **Celery** with Upstash Redis to build scalable and serverless task queues. Celery is a Python library that manages asynchronous task execution, while Upstash Redis acts as both the broker (queue) and the result backend.
## Setup
### Install Celery
To get started, install the necessary libraries using `pip`:
```bash theme={"system"}
pip install "celery[redis]"
```
### Database Setup
Create a Redis database using the [Upstash Console](https://console.upstash.com). Export the `UPSTASH_REDIS_HOST`, `UPSTASH_REDIS_PORT`, and `UPSTASH_REDIS_PASSWORD` to your environment:
```bash theme={"system"}
export UPSTASH_REDIS_HOST=
export UPSTASH_REDIS_PORT=
export UPSTASH_REDIS_PASSWORD=
```
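These variables combine into the TLS connection URL that Celery's Redis transport expects. A minimal sketch, assuming the `rediss://` scheme and the `ssl_cert_reqs` query parameter from Celery's Redis transport options:

```python theme={"system"}
import os

def broker_url(host: str, port: str, password: str) -> str:
    # rediss:// (note the double "s") enables TLS, which Upstash requires.
    return f"rediss://:{password}@{host}:{port}?ssl_cert_reqs=required"

url = broker_url(
    os.environ.get("UPSTASH_REDIS_HOST", "<host>"),
    os.environ.get("UPSTASH_REDIS_PORT", "6379"),
    os.environ.get("UPSTASH_REDIS_PASSWORD", "<password>"),
)
# The same URL is typically passed as both broker and result backend:
# Celery("tasks", broker=url, backend=url)
```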
You can also use `python-dotenv` to load environment variables from a `.env` file:
```text .env theme={"system"}
UPSTASH_REDIS_HOST=
UPSTASH_REDIS_PORT=
UPSTASH_REDIS_PASSWORD=
```
# Drizzle ORM with Upstash Redis
You can cache Drizzle ORM queries with Upstash Redis by passing `upstashCache()` to the `drizzle()` client:
```ts theme={"system"}
const db = drizzle(process.env.DB_URL!, {
  cache: upstashCache({
    url: "",
    token: "",
    // 👇 Enable caching for all queries (optional, default false)
    global: true,
    // 👇 Default cache behavior (optional)
    config: { ex: 60 },
  }),
})
```
***
### Cache Behavior
* **Per-query caching (opt-in, default):**\
No queries are cached unless you explicitly call `.$withCache()`.
```ts theme={"system"}
await db.insert(users).values({ email: "cacheman@upstash.com" });
// 👇 reads from cache
await db.select().from(users).$withCache()
```
* **Global caching:**\
When setting `global: true`, all queries will read from cache by default.
```ts theme={"system"}
const db = drizzle(process.env.DB_URL!, {
cache: upstashCache({ global: true }),
})
// 👇 reads from cache (no more explicit `$withCache()`)
await db.select().from(users)
```
You can always turn off caching for a specific query:
```ts theme={"system"}
await db.select().from(users).$withCache(false)
```
***
### Manual Cache Invalidation
Cache invalidation is fully automatic by default. If you ever need to, you can manually invalidate cached queries by table name or custom tags:
```ts theme={"system"}
// 👇 invalidate all queries that use the `users` table
await db.$cache?.invalidate({ tables: ["usersTable"] })
// 👇 invalidate all queries by custom tag (defined in previous queries)
await db.$cache?.invalidate({ tags: ["custom_key"] })
```
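Conceptually, tag-based invalidation keeps a reverse index from tags (and table names) to cached query keys. A language-agnostic sketch in Python, not Drizzle's actual internals; the names here are illustrative:

```python theme={"system"}
# Conceptual sketch of tag/table-based cache invalidation. This is NOT
# Drizzle's implementation; names and structure are illustrative only.
class QueryCache:
    def __init__(self):
        self.entries = {}  # cache key -> cached query result
        self.tags = {}     # tag or table name -> set of cache keys

    def put(self, key, result, tags=()):
        self.entries[key] = result
        for tag in tags:
            self.tags.setdefault(tag, set()).add(key)

    def invalidate(self, tag):
        # Drop every cached result registered under this tag/table.
        for key in self.tags.pop(tag, set()):
            self.entries.pop(key, None)

cache = QueryCache()
cache.put("select-all-users", [{"id": 1}], tags=["usersTable"])
cache.invalidate("usersTable")  # same idea as db.$cache.invalidate(...)
```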
***
For more details on this integration, refer to the [Drizzle ORM caching documentation](https://cache.drizzle-orm-fe.pages.dev/docs/cache).
# Upstash MCP
Source: https://upstash.com/docs/redis/integrations/mcp
We provide an open-source Upstash MCP server so you can use natural language to interact with your Upstash account, e.g.:
* "Create a new Redis database in us-east-1"
* "List my databases"
* "Show all keys starting with 'user:' in my users-db"
* "Create a backup"
* "Show me the throughput spikes for the last 7 days"
***
## Quickstart
### Step 1: Get your API Key
1. Go to `Account > Management API > Create API key` and create an API key.
2. Note down your `<UPSTASH_EMAIL>` and `<UPSTASH_API_KEY>`.
***
### Step 2: Locate `mcp.json`
* **Cursor**: Navigate to `Cursor Settings > Features > MCP` and click `+ Add new global MCP server`. This will open the `mcp.json` file.
* **Claude**: Navigate to `Settings > Developer` and click `Edit Config`. This will open the `claude_desktop_config.json` file. [Refer to the MCP documentation for more details](https://modelcontextprotocol.io/quickstart/user).
* **Copilot**: Create a `.vscode/mcp.json` file in your workspace directory. For Copilot, first update the `mcp.json` file as described in the next step on this page, then follow the [Copilot documentation (starting from step 2)](https://docs.github.com/en/copilot/customizing-copilot/extending-copilot-chat-with-mcp#configuring-mcp-servers-in-visual-studio-code) to configure MCP servers in VS Code Chat.
***
### Step 3: Configure the MCP File
There are two transport modes for MCP servers: `stdio` and `sse`.
* **Stdio**: Best for local development. The server runs locally, and the client connects directly to it.
* **SSE**: Designed for server deployments. However, since clients don't yet support SSE connections with all the features we need, you need a proxy server. The proxy acts as a `stdio` server for the client and communicates with the SSE server in the background.
#### Option 1: Stdio Server
Add the following configuration to your MCP file:
```json Claude & Cursor theme={"system"}
{
"mcpServers": {
"upstash": {
"command": "npx",
"args": [
"-y",
"@upstash/mcp-server",
"run",
"<UPSTASH_EMAIL>",
"<UPSTASH_API_KEY>"
]
}
}
}
```
```json Copilot theme={"system"}
{
"servers": {
"upstash": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"@upstash/mcp-server",
"run",
"<UPSTASH_EMAIL>",
"<UPSTASH_API_KEY>"
]
}
}
}
```
#### Option 2: SSE Server with Proxy
SSE (Server-Sent Events) is the next stage in MCP transport modes after `stdio`. It is designed for server deployments and will eventually be followed by an HTTP-based transport mode. However, since clients currently do not support direct connections to SSE servers, we use a proxy to bridge the gap.
The proxy, powered by `supergateway`, acts as a `stdio` server locally while communicating with the SSE server in the background. This allows you to use the SSE server seamlessly with your client.
Add the following configuration to your `mcp.json` file:
```json Claude & Cursor theme={"system"}
{
"mcpServers": {
"upstash": {
"command": "npx",
"args": [
"-y",
"supergateway",
"--sse",
"https://mcp.upstash.io/sse",
"--oauth2Bearer",
"<UPSTASH_EMAIL>:<UPSTASH_API_KEY>"
]
}
}
}
```
```json Copilot theme={"system"}
{
"servers": {
"upstash": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"supergateway",
"--sse",
"https://mcp.upstash.io/sse",
"--oauth2Bearer",
"<UPSTASH_EMAIL>:<UPSTASH_API_KEY>"
]
}
}
}
```
***
### Step 4: Use MCP with Your Client
Once your MCP is configured, your client can now interact with the MCP server for tasks like:
* Seeding data
* Querying databases
* Creating new databases
* Managing backups
* Analyzing performance metrics
For example, you can ask your client to "add ten users to my Redis database" or "show me the throughput spikes for the last 7 days."
# n8n with Upstash Redis
Source: https://upstash.com/docs/redis/integrations/n8n
## Quickstart
In this quickstart, we will set up a Redis node in n8n using Upstash Redis and walk through an example use case step by step.
***
### Step 1: Get Your Upstash Redis Credentials
1. Go to the [Upstash Console](https://console.upstash.com) and create a Redis database if you don't have one.
2. Note down the credentials on the database details page; we will use them to connect the Redis
node in n8n to our Upstash Redis instance.
***
### Step 2: Set Up an n8n Project
1. Go to [https://n8n.io](https://n8n.io) and create a new project.
2. Create a Webhook trigger with default settings; this will be our entry point. Our Redis instance will track visits to this URL.
***
### Step 3: Create a Redis Node
Now, let's create a Redis node and connect it to our Upstash Redis instance:
1. Search for Redis in the nodes and select the increment action.
2. In the window that opens, click select credentials and create new credentials.
These will be saved and reused automatically for other Redis nodes later.
3. Fill in the credentials:
* Pass your Upstash token to the password field.
* Leave the user field blank.
* Pass your Upstash Redis endpoint to the host field (leave the `https://` part out).
* If your Upstash database uses a port other than the default 6379, change it here.
4. Enable SSL (Upstash Redis requires SSL) and hit the save button.
***
### Redis Example: Store the Visit Count per Visitor
1. Track users with the `x-real-ip` header.
2. Add another Redis node with the get action to read the visit counts.
3. Read the stored visit count with a Redis `GET`.
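In Redis terms, the workflow above boils down to one `INCR` per visit, keyed by the visitor's IP. A minimal sketch with an in-memory stand-in for Redis; the key naming is an assumption, so adjust it to whatever your node is configured with:

```python theme={"system"}
# What the workflow does in Redis terms: one INCR per visit, keyed by
# the visitor's IP (taken from the x-real-ip header).
class FakeRedis:
    def __init__(self):
        self.store = {}

    def incr(self, key: str) -> int:
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

    def get(self, key: str):
        return self.store.get(key)

r = FakeRedis()
for _ in range(3):  # three visits from the same IP
    r.incr("user:203.0.113.7")
r.get("user:203.0.113.7")  # -> 3
```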
***
### Test Redis Example
Run the workflow and visit the webhook URL. This sends a GET request and triggers a workflow run.
Your IP is then read from the request headers, and in your Redis instance you will see `user:<user-ip>` set to `1`.
Each visit increments this counter, and at the end of the workflow you can confirm the setup with
the Redis get node.
***
# Prometheus - Upstash Redis Integration
Source: https://upstash.com/docs/redis/integrations/prometheus
To monitor your Upstash database in Prometheus and visualize metrics in Grafana, follow these steps:
**Integration Scope**
Upstash Prometheus Integration only covers Pro databases or those included in the Enterprise Plan.
## **Step 1: Log in to Your Upstash Account**
1. Open your web browser and navigate to [Upstash](https://console.upstash.com/).
2. Navigate to the main dashboard, where you’ll see a list of your databases.
## **Step 2: Select your Database**
1. Select the database you want to integrate with Prometheus.
2. This will open the database settings, where you can manage various configuration options for your selected database.
3. Enable Prometheus by toggling the switch. This allows you to monitor metrics related to your Upstash database performance, usage, and other key metrics.
## **Step 3: Connect Accounts**
1. After enabling Prometheus, a monitoring token is generated and displayed.
2. Copy this token. This token is unique to your database and is required to authenticate Prometheus with the Upstash metrics endpoint.
**Header Format**
Add the monitoring token to the `Authorization` HTTP header in the format `Bearer <MONITORING_TOKEN>`.
## **Step 4: Set Up Prometheus Connection**
### **Grafana Dashboard Setup**
1. Open your Grafana instance, navigate to the Data Sources section, and select Prometheus as the data source.
2. Enter the data source name, set `https://api.upstash.com/monitoring/prometheus` as the data source address, and then add your monitoring token in the HTTP Headers section.
3. Then, click Test and Save to verify that the data source is working properly.
### **Prometheus Federation Setup**
Federation lets your Prometheus pull metrics from Upstash’s API and store them in **your own** Prometheus instance, so Grafana can query your Prometheus instead of hitting the Upstash endpoint directly.
#### When to use federation
* You already run Prometheus and want to persist Upstash metrics locally.
* You want to control retention, recording rules, or alerts on Upstash metrics.
1. Set up a new scrape job in your Prometheus configuration file (`prometheus.yml`):
```yaml theme={"system"}
scrape_configs:
# Federation job: pull from Upstash API
- job_name: "federate_upstash"
honor_labels: true
metrics_path: "/monitoring/prometheus/federate"
scheme: https
params:
match[]:
- 'upstash_db_metrics{}'
static_configs:
- targets:
- "api.upstash.com"
authorization:
type: Bearer
credentials: "<MONITORING_TOKEN>"
```
This configuration assumes you want to pull all metrics. You can adjust the `match[]` parameter to filter specific metrics if needed.
* `upstash_db_metrics{database_id="your_database_id"}` can be used to pull metrics for a specific database
* `upstash_db_metrics{replica_id=~"us-east-1.*"}` can be used to pull metrics for replicas in a specific region
2. Verify the Federation Target
* Reload (or restart) your Prometheus server to apply the new configuration.
* Visit **Prometheus → Status → Targets** and confirm the `federate_upstash` job is **UP**.
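The `match[]` selectors behave like Prometheus label matchers. A simplified sketch of that filtering logic; real PromQL matching supports more operators, but this covers `=` and `=~`:

```python theme={"system"}
import re

# Simplified sketch of how a match[] selector filters series by labels.
def matches(labels: dict, name: str, matchers: list) -> bool:
    if labels.get("__name__") != name:
        return False
    for key, op, value in matchers:
        if op == "=" and labels.get(key) != value:
            return False
        if op == "=~" and not re.fullmatch(value, labels.get(key, "")):
            return False
    return True

series = {"__name__": "upstash_db_metrics", "replica_id": "us-east-1-a"}
matches(series, "upstash_db_metrics", [("replica_id", "=~", "us-east-1.*")])  # True
```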
## **Step 5: Import the Grafana Dashboard**
To visualize your Upstash metrics, you can use the pre-built Upstash Grafana dashboard.
Import it into Grafana, select your Prometheus data source when prompted, and complete the import.
## **Conclusion**
You've now integrated your database with Upstash Prometheus, providing access to improved monitoring and analytics.
Feel free to explore Upstash's features and dashboards to gain deeper insights into your system's performance.
If you encounter any issues or have questions, please refer to the Upstash support documentation or contact our support team for assistance.
# Configure Upstash Ratelimit Strapi Plugin
Source: https://upstash.com/docs/redis/integrations/ratelimit/strapi/configurations
After setting up the plugin, it's possible to customize the ratelimiter algorithm and rates. You can also define different rate limits and rate limit algorithms for different routes.
## General Configurations
Enable or disable the plugin.
## Database Configurations
The token to authenticate with the Upstash Redis REST API. You can find this
credential on Upstash Console with the name `UPSTASH_REDIS_REST_TOKEN`
The URL for the Upstash Redis REST API. You can find this credential on
Upstash Console with the name `UPSTASH_REDIS_REST_URL`
The prefix for the rate limit keys. The plugin uses this prefix to store the
rate limit data in Redis.
For example, if the prefix is `@strapi`, the keys will look like
`@strapi:<method>:<path>:<identifier>`.
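The key construction can be sketched as follows. The exact field order the plugin uses internally is an assumption; the prefix part matches the format described above:

```python theme={"system"}
# Sketch of the rate-limit key construction (field order is an assumption).
def ratelimit_key(prefix: str, method: str, path: str, identifier: str) -> str:
    return ":".join([prefix, method, path, identifier])

ratelimit_key("@strapi", "GET", "/api/restaurants", "203.0.113.7")
# -> "@strapi:GET:/api/restaurants:203.0.113.7"
```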
Enable analytics for the rate limit. When enabled, the plugin collects extra
insights related to your rate limits. You can use this data to analyze the rate limit
usage on [Upstash Console](https://console.upstash.com/ratelimit).
## Strategy
The plugin uses a strategy array to define the rate limits per route. Each strategy object has the following properties:
An array of HTTP methods to apply the rate limit.
For example, `["GET", "POST"]`
The path to apply the rate limit. You can use wildcards to match multiple
routes. For example, `*` matches all routes.
Some examples:
* `path: "/api/restaurants/:id"`
* `path: "/api/restaurants"`
The source to identify the user. Requests with the same identifier will be
rate limited under the same limit.
Available sources are:
* `ip`: The IP address of the user.
* `header`: The value of a header key. You should pass the source in the `header.<key>` format.
For example, `header.Authorization` will use the value of the `Authorization` header.
Enable debug mode for the route. When enabled, the plugin logs the remaining
limits and the block status for each request.
The limiter configuration for the route. The limiter object has the following
properties:
The rate limit algorithm to use. For more information related to algorithms, see docs [**here**](/redis/sdks/ratelimit-ts/algorithms).
* `fixed-window`: The fixed-window algorithm divides time into fixed intervals. Each interval has a set limit of allowed requests. When a new interval starts, the count resets.
* `sliding-window`:
The sliding-window algorithm uses a rolling time frame. It considers requests from the past X time units, continuously moving forward. This provides a smoother distribution of requests over time.
* `token-bucket`: The token-bucket algorithm uses a bucket that fills with tokens at a steady rate. Each request consumes a token. If the bucket is empty, requests are denied. This allows for bursts of traffic while maintaining a long-term rate limit.
The number of tokens allowed in the time window.
The time window for the rate limit. Available units are `"ms" | "s" | "m" | "h" | "d"`
For example, `20s` means 20 seconds.
The rate at which the bucket refills. **This property is only used for the token-bucket algorithm.**
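The token-bucket parameters above map onto a small amount of state per identifier. A minimal sketch, with timestamps passed explicitly for clarity; `capacity` plays the role of `tokens` and `refill_rate` is the number of tokens added back per second:

```python theme={"system"}
# Minimal token-bucket sketch matching the parameters described above.
class TokenBucket:
    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)  # bucket starts full
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, refill_rate=1)
results = [bucket.allow(0), bucket.allow(0), bucket.allow(0), bucket.allow(1)]
# -> [True, True, False, True]: a burst of 2 is allowed, then refill kicks in
```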
## Examples
```json Apply rate limit for all routes theme={"system"}
{
"strapi-plugin-upstash-ratelimit":{
"enabled":true,
"resolve":"./src/plugins/strapi-plugin-upstash-ratelimit",
"config":{
"enabled":true,
"token":"process.env.UPSTASH_REDIS_REST_TOKEN",
"url":"process.env.UPSTASH_REDIS_REST_URL",
"strategy":[
{
"methods":[
"GET",
"POST"
],
"path":"*",
"identifierSource":"header.Authorization",
"limiter":{
"algorithm":"fixed-window",
"tokens":10,
"window":"20s"
}
}
],
"prefix":"@strapi"
}
}
}
```
```json Apply rate limit with IP theme={"system"}
{
"strapi-plugin-upstash-ratelimit": {
"enabled": true,
"resolve": "./src/plugins/strapi-plugin-upstash-ratelimit",
"config": {
"enabled": true,
"token": "process.env.UPSTASH_REDIS_REST_TOKEN",
"url": "process.env.UPSTASH_REDIS_REST_URL",
"strategy": [
{
"methods": ["GET", "POST"],
"path": "*",
"identifierSource": "ip",
"limiter": {
"algorithm": "fixed-window",
"tokens": 10,
"window": "20s"
}
}
],
"prefix": "@strapi"
}
}
}
```
```json Routes with different rate limit algorithms theme={"system"}
{
"strapi-plugin-upstash-ratelimit": {
"enabled": true,
"resolve": "./src/plugins/strapi-plugin-upstash-ratelimit",
"config": {
"enabled": true,
"token": "process.env.UPSTASH_REDIS_REST_TOKEN",
"url": "process.env.UPSTASH_REDIS_REST_URL",
"strategy": [
{
"methods": ["GET", "POST"],
"path": "/api/restaurants/:id",
"identifierSource": "header.x-author",
"limiter": {
"algorithm": "fixed-window",
"tokens": 10,
"window": "20s"
}
},
{
"methods": ["GET"],
"path": "/api/restaurants",
"identifierSource": "header.x-author",
"limiter": {
"algorithm": "token-bucket",
"tokens": 10,
"window": "20s",
"refillRate": 1
}
}
],
"prefix": "@strapi"
}
}
}
```
# Upstash Ratelimit Strapi Integration
Source: https://upstash.com/docs/redis/integrations/ratelimit/strapi/getting-started
Strapi is an open-source, Node.js-based headless CMS that saves developers a lot of development time by letting them build application backends quickly with less code.
You can use Upstash's HTTP and Redis based [Ratelimit package](https://github.com/upstash/ratelimit-js) integration with Strapi to protect your APIs from abuse.
## Getting started
### Installation
```bash npm theme={"system"}
npm install --save @upstash/strapi-plugin-upstash-ratelimit
```
```bash yarn theme={"system"}
yarn add @upstash/strapi-plugin-upstash-ratelimit
```
### Create database
Create a new redis database on [Upstash Console](https://console.upstash.com/). See [related docs](/redis/overall/getstarted) for further info related to creating a database.
### Set up environment variables
Get the environment variables from [Upstash Console](https://console.upstash.com/), and set them in your `.env` file as below:
```shell .env theme={"system"}
UPSTASH_REDIS_REST_TOKEN=""
UPSTASH_REDIS_REST_URL=""
```
### Configure the plugin
You can configure the plugin in your `config/plugins` file as below:
```typescript /config/plugins.ts theme={"system"}
export default () => ({
"strapi-plugin-upstash-ratelimit": {
enabled: true,
resolve: "./src/plugins/strapi-plugin-upstash-ratelimit",
config: {
enabled: true,
token: process.env.UPSTASH_REDIS_REST_TOKEN,
url: process.env.UPSTASH_REDIS_REST_URL,
strategy: [
{
methods: ["GET", "POST"],
path: "*",
limiter: {
algorithm: "fixed-window",
tokens: 10,
window: "20s",
},
},
],
prefix: "@strapi",
},
},
});
```
```javascript /config/plugins.js theme={"system"}
module.exports = () => ({
"strapi-plugin-upstash-ratelimit": {
enabled: true,
resolve: "./src/plugins/strapi-plugin-upstash-ratelimit",
config: {
enabled: true,
token: process.env.UPSTASH_REDIS_REST_TOKEN,
url: process.env.UPSTASH_REDIS_REST_URL,
strategy: [
{
methods: ["GET", "POST"],
path: "*",
limiter: {
algorithm: "fixed-window",
tokens: 10,
window: "20s",
},
},
],
prefix: "@strapi",
},
},
});
```
# Replit Templates
Source: https://upstash.com/docs/redis/integrations/replit-templates
## Overview
Explore our collection of example templates showcasing Upstash's capabilities with different frameworks and use cases. Each template comes with a live demo and source code on Replit.
* Cache SQL queries using Upstash Redis to speed up read requests
* Implement robust rate limiting using Upstash Redis in a web application
* Build a real-time chat application using Upstash Redis Pub/Sub with Python
* Create an AI chat app with context retrieval using Upstash Vector and Redis
* Implement powerful web search using Upstash Vector Hybrid Search
# Sidekiq with Upstash Redis
Source: https://upstash.com/docs/redis/integrations/sidekiq
You can use Sidekiq with Upstash Redis. Sidekiq is a Ruby-based queue library that stores its queues in Redis, so you can use it with Upstash Redis.
## Example Application
```bash theme={"system"}
bundle init
bundle add sidekiq
```
```ruby theme={"system"}
require "sidekiq"
require "sidekiq/api"
connection_url = ENV['UPSTASH_REDIS_LINK']
Sidekiq.configure_client do |config|
config.redis = {url: connection_url}
end
Sidekiq.configure_server do |config|
config.redis = {url: connection_url}
end
class EmailService
include Sidekiq::Worker
def perform(id, type)
# Logic goes here. Let's assume sending email by printing to console.
puts "Emailed to: " + id + ": " + "'Congrats on " + type + " plan.'"
end
end
def updateEmail(id, newType)
jobFound = false
a = Sidekiq::ScheduledSet.new
a.each do |job|
if job.args[0] == id
job.delete
jobFound = true
end
end
if jobFound
EmailService.perform_async(id, ("starting using our service and upgrading it to " + newType))
else
EmailService.perform_async(id, ("upgrading to " + newType))
end
end
def sendEmail(id, type)
case type
when "free"
# if free, delay for 10 seconds.
EmailService.perform_in(10, id, "free")
when "paid"
# if paid, delay for 5 seconds.
EmailService.perform_in(5, id, "paid")
when "enterprise"
# if enterprise, immediately queue.
EmailService.perform_async(id, "enterprise")
when "enterprise10k"
EmailService.perform_async(id, "enterprise10k")
else
puts "Only plans are: `free`, `paid`, `enterprise` and `enterprise10k`"
end
end
def clearSchedules()
Sidekiq::ScheduledSet.new.clear
Sidekiq::Queue.new.clear
end
```
## Billing Optimization
Sidekiq accesses Redis regularly, even when there is no queue activity. This can incur extra costs because Upstash charges per request on the Pay-As-You-Go plan. With the introduction of [our Fixed plans](/redis/overall/pricing#all-plans-and-limits), **we recommend switching to a Fixed plan to avoid increased command count and high costs in Sidekiq use cases.**
# Changelog
Source: https://upstash.com/docs/redis/overall/changelog
* Added [`EVAL_RO`](https://redis.io/docs/latest/commands/eval_ro/) and [`EVALSHA_RO`](https://redis.io/docs/latest/commands/evalsha_ro/)
commands introduced in Redis 7.
* Added REST API support for [`MONITOR`](https://redis.io/docs/latest/commands/monitor/) and [`SUBSCRIBE`](https://redis.io/docs/latest/commands/subscribe/)
commands using [SSE](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events).
See [Monitor](../features/restapi#monitor-command) and [Subscribe](../features/restapi#subscribe-command) docs.
* Added [`JSON.MSET`](https://redis.io/docs/latest/commands/json.mset/) and [`JSON.MERGE`](https://redis.io/docs/latest/commands/json.merge/) commands.
* Introduced the `IP Allowlist` feature for enhanced security on newly created databases. By default, all IP addresses will be allowed.
However, access can be restricted by specifying permitted IP addresses or CIDR ranges.
* Added AWS AP-NorthEast-1 Japan region.
* Added an option to return REST response in [`RESP2`](https://redis.io/docs/latest/develop/reference/protocol-spec/) format instead of `JSON`.
See [REST API docs](/redis/features/restapi#resp2-format-responses) for more information.
* Implemented [`MONITOR`](https://redis.io/docs/latest/commands/monitor/) command
* Implemented Redis [keyspace notifications](/redis/howto/keyspacenotifications)
* Implemented [`WAIT`](https://redis.io/docs/latest/commands/wait/) and [`WAITAOF`](https://redis.io/docs/latest/commands/waitaof/) commands
* Added `lag` field to [`XINFO GROUPS`](https://redis.io/docs/latest/commands/xinfo-groups/)
* Added [`CLIENT ID`](https://redis.io/docs/latest/commands/client-id/) subcommand
* Added password strength check to [`ACL SETUSER`](https://redis.io/docs/latest/commands/acl-setuser/) command
* Fixed JSON commands with empty keys
* Fixed a panic on `XTRIM` and `XDEL`
* Added `CLIENT SETNAME/NAME/LIST` subcommands
* Implemented near exact trim for streams
* Implemented some missing Redis commands:
* `DUMP`
* `RESTORE`
* `ZMPOP`
* `BZMPOP`
* `LMPOP`
* `BLMPOP`
* `SINTERCARD`
* Added support for `BIT/BYTE` flag to `BITPOS` and `BITCOUNT` commands
* Added support for `XX`, `NX`, `GT`, and `LT` arguments to `EXPIRE` commands
* Allowed `NX` and `GET` args to be used together in `SET` command
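The `NX`/`XX`/`GT`/`LT` flag semantics follow the Redis `EXPIRE` documentation, where a key without a TTL behaves as if its TTL were infinite for `GT`/`LT`. Sketched as a predicate:

```python theme={"system"}
# Sketch of the EXPIRE flag semantics per the Redis EXPIRE docs.
def should_set_expiry(current_ttl, new_ttl, flag):
    # current_ttl is None when the key has no TTL set.
    if flag == "NX":  # only if no expiry is set
        return current_ttl is None
    if flag == "XX":  # only if an expiry is already set
        return current_ttl is not None
    if flag == "GT":  # only if the new expiry is greater than the current one
        return current_ttl is not None and new_ttl > current_ttl
    if flag == "LT":  # only if the new expiry is less than the current one
        return current_ttl is None or new_ttl < current_ttl
    return True  # no flag: always set
```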
# Compare
Source: https://upstash.com/docs/redis/overall/compare
In this section, we will compare Upstash with alternative cloud based solutions.
## AWS ElastiCache
* **Serverless Pricing:** ElastiCache does not have a serverless pricing model.
The price does not scale to zero. You need to pay for the instances even when
you do not use them. Upstash charges per request.
* **REST API:** Unlike ElastiCache, Upstash has a built-in REST API, so you can
access it from environments where TCP connections are not allowed, such as edge
functions at Cloudflare Workers.
* **Access:** ElastiCache is designed to be used inside an AWS VPC. You can access
Upstash from anywhere.
* **Durability:** Upstash persists your data to block storage in addition to
memory, so you can use it as your primary database.
## AWS MemoryDB
* **Serverless Pricing:** Similar to Elasticache, MemoryDB does not offer a
serverless pricing model. The pricing does not scale down to zero, and even
the most affordable instance costs over \$200 per month. This means you are
required to pay for the instances regardless of usage. In contrast, Upstash
follows a different approach by charging per request. With Upstash, you only
incur charges when actively using your Redis database, ensuring that you do
not have to pay when it's not in use.
* **REST API:** Unlike MemoryDB, Upstash has a built-in REST API, so you can
access it from environments where TCP connections are not allowed, such as edge
functions at Cloudflare Workers.
* **Access:** MemoryDB is designed to be used inside an AWS VPC. You can access
Upstash from anywhere.
## Redis Labs
* **Serverless Pricing:** Redis Labs does not have a serverless pricing model
either. The price does not scale to zero. You need to pay for the instances
even when you do not use them. Upstash charges per request, so you only pay
for your real usage.
* **REST API:** Unlike Redis Labs, Upstash has a built-in REST API, so you can
access it from environments where TCP connections are not allowed, such as edge
functions at Cloudflare Workers.
* **Durability:** Upstash persists your data to the block storage instantly in
addition to the memory, so you can use it as your primary database.
## AWS DynamoDB
* **Latency:** DynamoDB is a disk based data storage. Both write and read
latency are much higher than Redis. Check our
[benchmark app](https://serverless-battleground.vercel.app/) to get an idea.
* **Complex Pricing:** Initially, DynamoDB may appear cost-effective, but if you
begin utilizing advanced features such as DAX or Global Tables, you might
encounter unexpected expenses on your AWS bill. In contrast, Upstash offers a
more transparent pricing policy, ensuring that you are not taken by surprise.
With Upstash, there are limits in place to cap your maximum costs, providing
clarity and preventing any unwelcome surprises in your billing.
* **Portability:** DynamoDB is exclusive to AWS and cannot be used outside of
the AWS platform. However, Redis is supported by numerous cloud providers and
can also be self-hosted. Upstash provides compatibility with Redis, ensuring
vendor neutrality.
* **Testability:** Running a local Redis for testing purposes is much easier
than running a local DynamoDB. Check
[this](https://stackoverflow.com/questions/26901613/easier-dynamodb-local-testing).
## FaunaDB
* **Latency:** FaunaDB is a globally consistent database. Consistency at global
level comes with performance cost. Check our
[benchmark app](https://serverless-battleground.vercel.app/) to get an idea.
* **Complex Pricing:** FaunaDB has a complicated pricing. It has 6 different
dimensions to calculate the price. Check
[this article](https://docs.fauna.com/fauna/current/manage/plans-billing/billing/)
where the pricing is explained. If your use case is write heavy and
if your requests have bigger payloads, then it can become expensive very easily.
On the other hand, Upstash has different options for different needs and
pricing is simple for all options. You pay per request in addition to
storage cost which is generally much smaller amount.
* **Portability:** FaunaDB is only supported by Fauna Inc. On the other hand,
you can use Redis almost in all cloud providers as well as you can host Redis
yourself. Upstash does not lock you to any vendor.
* **Testability:** Running a local Redis for testing purposes is much easier
than running a local FaunaDB. Check
[this](https://dev.to/englishcraig/how-to-set-up-faunadb-for-local-development-5ha7).
## What makes Upstash different?
You have a new project and you do not know how many requests it will receive.
You love the performance and simplicity of Redis, but all Redis cloud services
charge you per instance or per GB of memory. If your application does not
receive much traffic at first, why pay the full amount?
Unfortunately, none of the current Redis cloud products provides a real
`pay-per-use` pricing model.
Let's do a simple calculation. Say I have a 1GB Redis database and I receive 1
million requests per month. For ElastiCache (cache.t3.small, \$0.034 hourly) this
costs at least \$24 not including data transfer and storage cost. For RedisLabs,
the 1GB plan costs \$22 per month. For Upstash the price is \$0.2 per 100k
requests. For 1 million requests, that is \$2 plus the storage cost of about \$0.25. So for
1GB and 1M requests per month: ElastiCache is \$24, RedisLabs is \$22, and Upstash is
\$2.25.
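The comparison works out as plain arithmetic, using the prices quoted above:

```python theme={"system"}
# The comparison above as arithmetic (prices as quoted in this section):
elasticache = 0.034 * 24 * 30                  # cache.t3.small hourly rate, 30 days
redislabs = 22.0                               # 1GB plan, per month
upstash = (1_000_000 / 100_000) * 0.2 + 0.25   # $0.2 per 100K requests + storage
# elasticache ≈ $24.48, redislabs = $22, upstash = $2.25
```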
**What if your product becomes popular and starts to receive high and steady
traffic?**
Most serverless products lose their appeal when the service receives steady,
high traffic, as they start to cost more than server/instance-based pricing
models. To address this, we give you the option to purchase a Pro plan.
With a Pro plan, you pay a fixed price per month with a cap on max throughput
and data size. For high and steady throughput use cases, fixed-price databases
cost less than serverless ones.
The good thing is you can start your database with pay-as-you-go pricing and
move it to a Pro plan when you want. See [enterprise plans](/redis/overall/enterprise) for more
information.
Even if you choose not to upgrade to the Pro plan, Upstash guarantees
transparent billing without any unexpected surprises. Each Upstash database has
a predefined monthly maximum price, known as the "Ceiling Price." For
pay-as-you-go (PAYG) databases, this ceiling price is set at \$360 per month.
Therefore, even if your application experiences a significant surge in traffic,
such as reaching the front page of HackerNews, your Upstash database will not
exceed a maximum cost of \$360 per month.
# Prod Pack & Enterprise
Source: https://upstash.com/docs/redis/overall/enterprise
Upstash offers Prod Pack and Enterprise plans for customers with critical production workloads. Prod Pack and Enterprise plans include additional monitoring and security features on top of higher capacity limits and more powerful resources.
Prod Pack -> Per database
Enterprise contract -> Per account
Prod Pack is an add-on per database, available on both pay-as-you-go and fixed-price plans, not per account. You can have databases on different plans in the same account, and each is charged separately. Enterprise plans, by contrast, are per account, not per database: all of your databases can be covered by the same Enterprise plan.
All features of Prod Pack and Enterprise plan for Upstash Redis are detailed below.
## How to Upgrade
You can activate Prod Pack on the database details page in the [Upstash Console](https://upstash.com/dashboard/redis). For the Enterprise plan, please contact [support@upstash.com](mailto:support@upstash.com).
# Prod Pack Features
These features are available on databases with Prod Pack.
### Uptime SLA
All Prod Pack databases come with an SLA guaranteeing 99.99% uptime. For mission-critical data where uptime is crucial, we recommend Prod Pack plans. Learn more about [Uptime SLA](/common/help/sla).
### SOC-2 Type 2 Compliance & Report
Upstash Redis is SOC-2 Type 2 compliant with Prod Pack. Once you enable Prod Pack, you can request access to the report by going to [Upstash Trust Center](https://trust.upstash.com/) or contacting [support@upstash.com](mailto:support@upstash.com).
### RBAC
Role-Based Access Control (RBAC) is a security model that manages database access. You can create multiple users with different roles to control their actions on your databases.
We recommend using RBAC if your database is accessible to multiple developers.
### High Availability for Read Regions
With Prod Pack add-on, read regions of your database are [highly available](/redis/features/replication#high-availability). This ensures that if one read replica fails, you can read from another read replica in the same region without any additional latency.
### More Backup Capability
With the Prod Pack add-on, backups can be retained for up to 3 days.
### Encryption at Rest
Encrypts the block storage where your data is persisted.
### Prometheus Metrics
Prometheus is an open-source monitoring system widely used for monitoring and alerting in cloud-native and containerized environments.
Upstash Prod Pack and Enterprise plans offer Prometheus metrics collection, enabling you to monitor your Redis databases with Prometheus in addition to console metrics. Learn more about [Prometheus integration](/redis/integrations/prometheus).
### Datadog Integration
Upstash Prod Pack and Enterprise plans include integration with Datadog, allowing you to monitor your Redis databases with Datadog in addition to console metrics. Learn more about [Datadog integration](/redis/howto/datadog).
### More metrics on the Console
The maximum time range for metrics available on the Upstash Console increases from one week to one month for databases with Prod Pack.
# Enterprise Features
All Prod Pack features are included in the Enterprise plan. Additionally, Enterprise plans include:
### Custom Limits
Get a custom-tailored plan for your Upstash Redis databases to handle the growing demands of your business at any scale, such as 100K+ commands per second, unlimited bandwidth, and higher storage limits.
### SAML SSO
Single Sign-On (SSO) allows you to use your existing identity provider to authenticate users for your Upstash account. This feature is available upon request for Enterprise customers.
### Unlimited Database Count
Enterprise plans include unlimited database count, allowing you to scale your infrastructure without database count restrictions.
### Professional Support
All of the databases in the Enterprise plan get access to our professional support. The plan includes response time SLAs and priority access to our support team. Check out the [support page](/common/help/prosupport) for more details.
### Dedicated Resources for Isolation
Enterprise customers receive dedicated resources to ensure isolation and consistent performance for their database workloads.
### VPC Peering and Private Links
VPC Peering and Private Links enable you to connect your databases to your VPCs and other private networks, enhancing isolation and security while reducing data transfer costs. This feature is available upon request for Enterprise customers.
### Configurable Backups
Hourly backups with customizable retention are available upon request for Enterprise customers.
### Access Logs
Enterprise customers can request access logs to the databases.
### HIPAA Compliance
A Business Associate Agreement (BAA) and HIPAA compliance enablement is available with our Enterprise plan.
# Getting Started
Source: https://upstash.com/docs/redis/overall/getstarted
Create an Upstash Redis database in seconds
Upstash Redis is a **highly available, infinitely scalable** Redis-compatible database:
* 99.99% uptime guarantee with auto-scaling ([Prod Pack](/redis/overall/enterprise#prod-pack-features))
* Ultra-low latency worldwide
* Multi-region replication
* Durable, persistent storage without sacrificing performance
* Automatic backups
* Optional SOC-2 compliance, encryption at rest and much more
***
## 1. Create an Upstash Redis Database
Once you're logged in, create a database by clicking `+ Create Database` in the upper right corner. A dialog opens up:
**Database Name:** Enter a name for your database.
**Primary Region and Read Regions:** For optimal performance, select the Primary Region closest to where most of your writes will occur. Select the read region(s) where most of your reads will occur.
Once you click `Next` and select a plan, your database is running and ready to connect:
***
## 2. Connect to Your Database
You can connect to Upstash Redis with any Redis client. For simplicity, we'll use `redis-cli`. See the [Connect Your Client](../howto/connectclient) section for connecting via our TypeScript or Python SDKs and other clients.
The Redis CLI is included in the official Redis distribution. If you don't
have Redis installed, you can get it [here](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/).
Connect to your database and execute commands on it:
```bash theme={"system"}
> redis-cli --tls -a PASSWORD -h ENDPOINT -p PORT
ENDPOINT:PORT> set counter 0
OK
ENDPOINT:PORT> get counter
"0"
ENDPOINT:PORT> incr counter
(integer) 1
ENDPOINT:PORT> incr counter
(integer) 2
```
As you run commands, you'll see updates to your database metrics in (almost) real-time. These database metrics are refreshed every 10 seconds.
Congratulations! You have created an ultra-fast Upstash Redis database! 🎉
**New: Manage Upstash Redis From Cursor (optional)**
Manage Upstash Redis databases from Cursor and other AI tools by using our [MCP server](/redis/integrations/mcp).
# llms.txt
Source: https://upstash.com/docs/redis/overall/llms-txt
# Pricing & Limits
Source: https://upstash.com/docs/redis/overall/pricing
## Free Tier
* 256MB data size
* 500K commands per month
* One free database per account
## Pay-as-you-go Pricing
Flexible pricing for variable traffic.
* Request Price: \$0.20 per 100K requests
* Bandwidth Price: First 200GB free, then \$0.03/GB
* Storage Price: \$0.25/GB
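As a rough sketch of how these rates combine (`estimatePaygCost` is a hypothetical helper, not part of any Upstash SDK, and it ignores the per-database free storage allowance described in the FAQ below):

```typescript
// Hypothetical monthly cost estimator for the pay-as-you-go rates above:
// $0.20 per 100K requests, first 200GB bandwidth free then $0.03/GB,
// and $0.25 per GB-month of storage.
function estimatePaygCost(requests: number, bandwidthGB: number, storageGB: number): number {
  const requestCost = (requests / 100_000) * 0.2;
  const bandwidthCost = Math.max(0, bandwidthGB - 200) * 0.03;
  const storageCost = storageGB * 0.25;
  return requestCost + bandwidthCost + storageCost;
}

// Example: 5M requests, 250GB bandwidth, 2GB average storage
// => $10 + $1.50 + $0.50 = $12/month
console.log(estimatePaygCost(5_000_000, 250, 2));
```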
## Fixed Plan Pricing
For consistent loads with predictable costs.
* No request price, unlimited request volume
* Bandwidth and storage usage included in the price
* Ability to auto-upgrade upon hitting bandwidth and storage limits
## All Plans and Limits
| Plan | Price | Read Region Price | Max Data Size | Max Bw GB Monthly | Max Req Per Sec | Max Request Size | Max Record | Max Connections |
| ------------: | -----: | ----------------: | ------------: | ----------------: | --------------: | ---------------: | ---------: | --------------: |
| Free | \$0 | \$0 | 256MB | 10GB | 10000 | 10MB | 100MB | 10000 |
| Pay-as-you-go | \$0 | \$0 | 100GB | Unlimited | 10000 | 10MB | 100MB | 10000 |
| Fixed 250MB | \$10 | \$5 | 250MB | 50GB | 10000 | 10MB | 100MB | 10000 |
| Fixed 1GB | \$20 | \$10 | 1GB | 100GB | 10000 | 10MB | 200MB | 10000 |
| Fixed 5GB | \$100 | \$50 | 5GB | 500GB | 10000 | 20MB | 300MB | 10000 |
| Fixed 10GB | \$200 | \$100 | 10GB | 1TB | 10000 | 30MB | 400MB | 10000 |
| Fixed 50GB | \$400 | \$200 | 50GB | 5TB | 10000 | 50MB | 500MB | 10000 |
| Fixed 100GB | \$800 | \$400 | 100GB | 10TB | 16000 | 75MB | 1GB | 10000 |
| Fixed 500GB | \$1500 | \$750 | 500GB | 20TB | 16000 | 100MB | 5GB | 100000 |
| Enterprise | Custom | Custom | 10TB | Unlimited | Custom | 500MB | 5GB | 100000 |
## Prod Pack
Prod Pack can be enabled separately for pay-as-you-go and all Fixed plans.
* \$200/month per database
* Uptime SLA
* SOC 2 Type 2 report
* Advanced monitoring (Prometheus, Grafana, Datadog)
* High Availability for Read Regions
* Role-based access control (RBAC)
* Encryption at Rest
## Enterprise subscription
* All features of Prod pack for all your databases
* Dedicated professional support
* Dedicated technical account manager
* Unlimited databases
* HIPAA compliance
* VPC peering
* SSO integration
* Custom pricing with monthly or annual contract options
## Custom Quota Pricing (Pay-as-you-go)
### Request Size Limits
| Max Request Size | Value \$ per month |
| ---------------: | -----------------: |
| 50MB | \$80 |
| 100MB | \$120 |
| more | contact us |
### Collection Size Limits
| Max Record Size | Value \$ per month |
| --------------: | -----------------: |
| 250MB | \$60 |
| 500MB | \$100 |
| 1GB | \$180 |
| more | contact us |
### Number of Databases
| Number of Databases | Price per month |
| ------------------: | ----------------------------------------: |
| First 10 | Free |
| 10-100 | \$0.5 per DB |
| more | [contact us](https://upstash.com/contact) |
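The tiered database pricing above can be sketched as follows (`databaseCountCost` is a hypothetical helper for illustration only):

```typescript
// Hypothetical helper for the database-count pricing above:
// first 10 databases free, then $0.5 per database up to 100.
function databaseCountCost(count: number): number {
  if (count > 100) throw new Error("contact us for more than 100 databases");
  return Math.max(0, count - 10) * 0.5;
}

// Example: 25 databases => 15 billable databases at $0.5 each => $7.50/month
console.log(databaseCountCost(25));
```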
## FAQs
### How can I upgrade from the free tier to pay-as-you-go?
Once you enter your credit card, your database will be upgraded to the pay-as-you-go plan and limits will be updated.
### What is included in the free tier?

The free tier includes 256MB data size and 500K commands per month.
### Are a paid database's first 256MB of data and 500K commands free?

No. Once you upgrade to a paid tier, you are charged for all data size and commands.
### How does the budget work?
Budget is only available for pay-as-you-go plan.
With the Pay-as-you-go plan, you can set a maximum monthly budget for your database so that you won't be charged beyond this chosen limit. We'll keep you informed by sending email notifications once you reach 70% and 90% of your monthly budget. These notifications let you either adjust your budget limit or upgrade to a Fixed plan. Note that if your usage exceeds your monthly budget cap, your database will be rate limited and your cost will not exceed your chosen budget limit.
Please set your budget limit high enough to avoid service disruption.
If you change from a Fixed plan to Pay-as-you-go mid-month, your budget will only track your Pay-as-you-go spending.
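The budget behavior described above can be sketched as a simple state function (a hypothetical illustration of the 70%/90% notification thresholds and the rate-limit cap, not actual Upstash billing code):

```typescript
// Hypothetical sketch of the budget notifications described above:
// emails at 70% and 90% of the monthly budget, and rate limiting
// once spend reaches the cap.
function budgetStatus(spend: number, budget: number): "ok" | "warn70" | "warn90" | "rate-limited" {
  const ratio = spend / budget;
  if (ratio >= 1) return "rate-limited";
  if (ratio >= 0.9) return "warn90";
  if (ratio >= 0.7) return "warn70";
  return "ok";
}

// Example: $75 spent against a $100 budget triggers the 70% notification
console.log(budgetStatus(75, 100));
```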
### Do Fixed Plans have command count pricing/limit?
Fixed plans have no command-count billing; you pay for data size / bandwidth / throughput limits, not per command.
### Are all Redis commands counted in billing?
Operational commands like AUTH, HELLO, SELECT, COMMAND, CONFIG, INFO, PING, RESET, QUIT will not be charged.
### Are databases faster in higher plans?
The ops/sec limit is the same across most entry-level plans, while higher plans provide higher throughput along with increased limits elsewhere. Within those limits, there is no performance difference between plans.
### Are read and write commands same price?
Yes. However, for Global databases, write commands are replicated to all read regions in addition to the primary region, and each replication (write operation) also counts as a command. For example, with 1 primary and 1 read region, 100K writes cost \$0.40 (\$0.20 x 2).
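This replication multiplier can be expressed as a quick sketch (`globalWriteCost` is a hypothetical helper using the \$0.20 per 100K request rate):

```typescript
// Writes to a Global database are billed once for the primary region
// plus once per read region, at $0.20 per 100K commands.
function globalWriteCost(writes: number, readRegions: number): number {
  const billedCommands = writes * (1 + readRegions); // primary + each replica
  return (billedCommands / 100_000) * 0.2;
}

// The example above: 100K writes with 1 read region => $0.40
console.log(globalWriteCost(100_000, 1));
```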
### How is the storage cost calculated for pay-as-you-go plan?
For each database, the first 1GB is free. Beyond that, storage is charged at a rate of \$0.25 per GB of total storage. Total storage is the sum of the storage at all replicas and regions. Even if you do not access your data, we keep it persistent in the cloud provider's block storage (e.g., AWS EBS) in multiple replicas for durability and high availability. To calculate the total storage cost, we take the daily average of your total data size across all replicas and multiply it by the rate at the end of the month. If you use your database as a cache, it is good practice to set a timeout (EXPIRE) on your keys to minimize this cost.
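The daily-average calculation can be sketched as follows (`storageCost` is a hypothetical helper illustrating the rates above, not actual billing code):

```typescript
// Hypothetical sketch of the storage billing described above: take the
// daily average of total data size across all replicas/regions, subtract
// the 1GB free allowance, and multiply by $0.25/GB at month end.
function storageCost(dailyTotalsGB: number[]): number {
  const avg = dailyTotalsGB.reduce((a, b) => a + b, 0) / dailyTotalsGB.length;
  return Math.max(0, avg - 1) * 0.25;
}

// Example: a database averaging 5GB total across replicas => (5 - 1) * 0.25 = $1.00
console.log(storageCost([4, 5, 6]));
```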
### What happens when I hit limits on pay-as-you-go plan?
For each limit exceeded, you will be notified via email. We will do our best to keep your database running but we may rate limit depending on the case.
For concurrent connections, if you hit the limit, your database will start rejecting new connections. This can cause extra latency on your clients.
For max request size, the requests exceeding the limit will be rejected with an exception.
For max record size, the collection that exceeds the limit will stop accepting new records.
For bandwidth and storage, there are no limits but you can set a budget limit to avoid unexpected charges.
### What happens when I hit limits on fixed plans?
For each limit exceeded, you will be notified via email.
When your database hits its bandwidth or storage limit and auto-upgrade is enabled, your database is upgraded to the next tier. When auto-upgrade is not enabled, your database is rate limited: traffic is blocked when the bandwidth limit is exceeded, and write operations are blocked when the storage limit is exceeded.
For concurrent connections, if you hit the limit, your database will start rejecting new connections. This can cause extra latency on your clients.
For max request size, the requests exceeding the limit will be rejected with an exception.
For max record size, the collection that exceeds the limit will stop accepting new records.
### Are there free trials?
Yes, we can provide free trials for testing and PoC purposes. Email us at [support@upstash.com](mailto:support@upstash.com)
### How many databases can I create?
You can create up to 10 databases for free and beyond this you will be charged \$0.5 per database up to 100 databases. For more than 100 databases, please contact us at [support@upstash.com](mailto:support@upstash.com)
The charge is calculated based on the number of active databases at the end of the month.
### What happens if I delete my database after 2-3 days?
For fixed plans, you'll be charged pro-rata for the days the database was active (in this case, 2-3 days), regardless of whether you actively used the database or not. For pay-as-you-go plans, you'll only be charged for your actual usage during those 2-3 days.
### How much is the price for bandwidth?
For the pay-as-you-go plan, bandwidth is free up to the monthly limit of 200GB. Beyond that, we charge \$0.03 for each additional GB of data transfer.
For fixed plans, bandwidth is included in the price, so you will not be charged for it.
For use cases with high volume, you may consider VPC Peering, which minimizes data transfer cost. VPC Peering requires an Enterprise contract. Contact us at [support@upstash.com](mailto:support@upstash.com) for details.
Bandwidth price depends on cloud provider's fee for the traffic so it is subject to change. In case of any changes, we will notify you via email.
### Can I purchase Prod Pack for any plan?
Yes, you can purchase Prod Pack for any plan except Free tier. You can enable it in your [Upstash Dashboard](https://upstash.com/dashboard/redis) database details page.
### What is included in Prod Pack?
It includes uptime SLA, SOC 2 Type 2 report, advanced monitoring (Prometheus, Grafana, Datadog), and role-based access control (RBAC).
### What is included in Enterprise subscription?
All Prod Pack features will be available for all of your databases. Moreover, dedicated professional support, HIPAA compliance, VPC peering, Private Link, and SSO integration are available on request.
### How is the Enterprise subscription priced?
For Enterprise subscription, a custom price is set based on specific requirements of the customer. For more information email us at [sales@upstash.com](mailto:sales@upstash.com)
### Do you have the Professional Support plan?
Yes. Professional support includes a dedicated service desk along with a Slack/Discord channel, backed by a committed response time SLA. Check [Professional Support](/common/help/prosupport) for details.
# Pricing & Limits
Source: https://upstash.com/docs/redis/overall/pricingold
# Python SDK
Source: https://upstash.com/docs/redis/overall/pythonredis
# Rate Limit SDK
Source: https://upstash.com/docs/redis/overall/ratelimit
# Typescript SDK
Source: https://upstash.com/docs/redis/overall/redis
# Redis® API Compatibility
Source: https://upstash.com/docs/redis/overall/rediscompatibility
Upstash supports Redis client protocol up to version `6.2`. We are also gradually adding changes introduced in versions `7.0` and `7.2`,
such as `EXPIRETIME`, `LMPOP`, `ZINTERCARD` and `EVAL_RO`.
The following table shows the most recent list of the supported Redis commands:
| Feature | Supported? | Supported Commands |
| ------------------------------------------------------------- | :--------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| [String](https://redis.io/commands/?group=string) | ✅ | APPEND - DECR - DECRBY - GET - GETDEL - GETEX - GETRANGE - GETSET - INCR - INCRBY - INCRBYFLOAT - MGET - MSET - MSETNX - PSETEX - SET - SETEX - SETNX - SETRANGE - STRLEN |
| [Bitmap](https://redis.io/commands/?group=bitmap) | ✅ | BITCOUNT - BITFIELD - BITFIELD\_RO - BITOP - BITPOS - GETBIT - SETBIT |
| [Hash](https://redis.io/commands/?group=hash) | ✅ | HDEL - HEXISTS - HGET - HGETALL - HINCRBY - HINCRBYFLOAT - HKEYS - HLEN - HMGET - HMSET - HSCAN - HSET - HSETNX - HSTRLEN - HRANDFIELD - HVALS |
| [List](https://redis.io/commands/?group=list) | ✅ | BLMOVE - BLMPOP - BLPOP - BRPOP - BRPOPLPUSH - LINDEX - LINSERT - LLEN - LMOVE - LMPOP - LPOP - LPOS - LPUSH - LPUSHX - LRANGE - LREM - LSET - LTRIM - RPOP - RPOPLPUSH - RPUSH - RPUSHX |
| [Set](https://redis.io/commands/?group=set) | ✅ | SADD - SCARD - SDIFF - SDIFFSTORE - SINTER - SINTERCARD - SINTERSTORE - SISMEMBER - SMEMBERS - SMISMEMBER - SMOVE - SPOP - SRANDMEMBER - SREM - SSCAN - SUNION - SUNIONSTORE |
| [SortedSet](https://redis.io/commands/?group=sorted_set) | ✅ | BZMPOP - BZPOPMAX - BZPOPMIN - ZADD - ZCARD - ZCOUNT - ZDIFF - ZDIFFSTORE - ZINCRBY - ZINTER - ZINTERCARD - ZINTERSTORE - ZLEXCOUNT - ZMPOP - ZMSCORE - ZPOPMAX - ZPOPMIN - ZRANDMEMBER - ZRANGE - ZRANGESTORE - ZRANGEBYLEX - ZRANGEBYSCORE - ZRANK - ZREM - ZREMRANGEBYLEX - ZREMRANGEBYRANK - ZREMRANGEBYSCORE - ZREVRANGE - ZREVRANGEBYLEX - ZREVRANGEBYSCORE - ZREVRANK - ZSCAN - ZSCORE - ZUNION - ZUNIONSTORE |
| [Geo](https://redis.io/commands/?group=geo) | ✅ | GEOADD - GEODIST - GEOHASH - GEOPOS - GEORADIUS - GEORADIUS\_RO - GEORADIUSBYMEMBER - GEORADIUSBYMEMBER\_RO - GEOSEARCH - GEOSEARCHSTORE |
| [HyperLogLog](https://redis.io/commands/?group=hyperloglog) | ✅ | PFADD - PFCOUNT - PFMERGE |
| [Scripting](https://redis.io/commands/?group=scripting) | ✅ | EVAL - EVALSHA - EVAL\_RO - EVALSHA\_RO - SCRIPT EXISTS - SCRIPT LOAD - SCRIPT FLUSH |
| [Pub/Sub](https://redis.io/commands/?group=pubsub) | ✅ | SUBSCRIBE - PSUBSCRIBE - UNSUBSCRIBE - PUNSUBSCRIBE - PUBLISH - PUBSUB |
| [Transactions](https://redis.io/commands/?group=transactions) | ✅ | DISCARD - EXEC - MULTI - UNWATCH - WATCH |
| [Generic](https://redis.io/commands/?group=generic) | ✅ | COPY - DEL - DUMP - EXISTS - EXPIRE - EXPIREAT - EXPIRETIME - KEYS - PERSIST - PEXPIRE - PEXPIREAT - PEXPIRETIME - PTTL - RANDOMKEY - RENAME - RENAMENX - RESTORE - SCAN - TOUCH - TTL - TYPE - UNLINK |
| [Connection](https://redis.io/commands/?group=connection) | ✅ | AUTH - HELLO - ECHO - PING - QUIT - RESET - SELECT |
| [Server](https://redis.io/commands/?group=server) | ✅ | ACL(\*) - DBSIZE - FLUSHALL - FLUSHDB - MONITOR - TIME |
| [JSON](https://redis.io/commands/?group=json) | ✅ | JSON.ARRAPPEND - JSON.ARRINSERT - JSON.ARRINDEX - JSON.ARRLEN - JSON.ARRPOP - JSON.ARRTRIM - JSON.CLEAR - JSON.DEL - JSON.FORGET - JSON.GET - JSON.MERGE - JSON.MGET - JSON.MSET - JSON.NUMINCRBY - JSON.NUMMULTBY - JSON.OBJKEYS - JSON.OBJLEN - JSON.RESP - JSON.SET - JSON.STRAPPEND - JSON.STRLEN - JSON.TOGGLE - JSON.TYPE |
| [Streams](https://redis.io/commands/?group=stream) | ✅ | XACK - XADD - XAUTOCLAIM - XCLAIM - XDEL - XGROUP - XINFO GROUPS - XINFO CONSUMERS - XLEN - XPENDING - XRANGE - XREAD - XREADGROUP - XREVRANGE - XTRIM |
| [Cluster](https://redis.io/commands#cluster) | ❌ | |
We run command integration tests using the following Redis clients after each code change and also periodically:
* **[Node-Redis](https://github.com/redis/node-redis)**
[Command Tests](https://github.com/redis/node-redis/tree/v3.1.2/test/commands)
* **[Jedis](https://github.com/redis/jedis)**
[Command Tests](https://github.com/redis/jedis/tree/v4.1.1/src/test/java/redis/clients/jedis/commands)
* **[Lettuce](https://github.com/lettuce-io/lettuce-core)**
[Command Tests](https://github.com/lettuce-io/lettuce-core/tree/6.1.6.RELEASE/src/test/java/io/lettuce/core/commands)
* **[Go-Redis](https://github.com/go-redis/redis)**
[Command Tests](https://github.com/go-redis/redis/blob/master/commands_test.go)
* **[Redis-py](https://github.com/redis/redis-py)**
[Command Tests](https://github.com/redis/redis-py/tree/v4.4.0/tests)
Most of the unsupported items are on our roadmap. If you need a feature that we do not support, please drop a note to [support@upstash.com](mailto:support@upstash.com) so we can inform you when we plan to support it.
# Use Cases
Source: https://upstash.com/docs/redis/overall/usecases
The data store behind Upstash is [compatible](../overall/rediscompatibility) with almost all of the Redis® API, so you can use Upstash for popular Redis® use cases such as:
* General caching
* Session caching
* Leaderboards
* Queues
* Usage metering (counting)
* Content filtering
Check out Salvatore's [blog post](http://antirez.com/post/take-advantage-of-redis-adding-it-to-your-stack.html). You can find many similar articles about common use cases of Redis.
## Key Value Store and Caching for Next.js Application
Next.js is increasingly becoming the preferred method for developing dynamic and fast web applications in an agile manner. It owes its popularity to its server-side rendering capabilities and API routes supported by serverless functions, including Vercel serverless and edge functions. Upstash Redis is a great fit with Next.js applications due to its serverless model and its REST-based APIs. The REST API plays a critical role in enabling access from edge functions while also addressing connection issues in serverless functions.
Check the blog post:
[Speed up your Next.js application with Redis](https://upstash.com/blog/nextjs-caching-with-redis)
## Redis for Vercel Functions
Vercel stands out as one of the most popular cloud platforms for web developers, offering continuous integration, deployment, a CDN, and serverless functions. However, when it comes to databases, you'll need to rely on external data services to support dynamic applications.
That's where Upstash comes into play as one of the most favored data solutions within the Vercel platform. Here are some reasons that contribute to Upstash's popularity in the Vercel ecosystem:
* No connection problems thanks to
[Upstash SDK](https://github.com/upstash/upstash-redis) built on Upstash REST
API.
* Edge runtime does not allow TCP based connections. You can not use regular
Redis clients. [Upstash SDK](https://github.com/upstash/upstash-redis) works
on edge runtimes without a problem.
* Upstash has a [Vercel add on](https://vercel.com/integrations/upstash) where
you can easily integrate Upstash to your Vercel projects.
## Storage For Lambda Functions (FaaS)
People use Lambda functions for various reasons, with one of the primary advantages being their cost-effectiveness – you only pay for what you actually use, which is great. However, when it comes to needing a storage layer, AWS recommends DynamoDB. DynamoDB does offer a serverless mode, which sounds promising until you encounter its latency when connecting and operating within Lambda Functions. Unfortunately, DynamoDB's latency may not be ideal for Lambda Functions, where every second of latency can have a significant impact on costs. At this point, AWS suggests using ElastiCache for low-latency data storage, which is also a Redis® cache as a service – a positive aspect. However, it's worth noting that ElastiCache is not serverless, and you have to pay based on what you provision, rather than what you use. To be honest, the pricing may not be the most budget-friendly option. This leaves you with two alternatives:
* DynamoDB: Serverless but high latency
* ElastiCache: Low latency but not serverless.
That is, until you meet Upstash. Our sole mission is to provide a Redis® API
compatible database that you love, in the serverless model. With Upstash, you pay
per request sent to your database, so if you are not
using the database you pay almost nothing. (Almost, because we charge for
storage. It is a very small amount, but it is there.)
We believe that Upstash is the best storage for your Lambda Functions because:
* Serverless just like Lambda functions itself
* Designed for low latency data access
* The lovely simple Redis® API
# AWS Lambda
Source: https://upstash.com/docs/redis/quickstarts/aws-lambda
You can find the project source code on GitHub.
### Prerequisites
* Complete all steps in [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html)
### Project Setup
Create and navigate to a directory named `counter-cdk`. The CDK CLI uses this directory name to name things in your CDK code, so if you decide to use a different name, don't forget to make the appropriate changes when applying this tutorial.
```shell theme={"system"}
mkdir counter-cdk && cd counter-cdk
```
Initialize a new CDK project.
```shell theme={"system"}
cdk init app --language typescript
```
Install `@upstash/redis`.
```shell theme={"system"}
npm install @upstash/redis
```
### Counter Function Setup
Create `/api/counter.ts`.
```ts /api/counter.ts theme={"system"}
import { Redis } from '@upstash/redis';
const redis = Redis.fromEnv();
export const handler = async function() {
const count = await redis.incr("counter");
return {
statusCode: 200,
body: JSON.stringify('Counter: ' + count),
};
};
```
### Counter Stack Setup
Update `/lib/counter-cdk-stack.ts`.
```ts /lib/counter-cdk-stack.ts theme={"system"}
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as nodejs from 'aws-cdk-lib/aws-lambda-nodejs';
export class CounterCdkStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const counterFunction = new nodejs.NodejsFunction(this, 'CounterFunction', {
entry: 'api/counter.ts',
handler: 'handler',
runtime: lambda.Runtime.NODEJS_20_X,
environment: {
UPSTASH_REDIS_REST_URL: process.env.UPSTASH_REDIS_REST_URL || '',
UPSTASH_REDIS_REST_TOKEN: process.env.UPSTASH_REDIS_REST_TOKEN || '',
},
bundling: {
format: nodejs.OutputFormat.ESM,
target: "node20",
nodeModules: ['@upstash/redis'],
},
});
const counterFunctionUrl = counterFunction.addFunctionUrl({
authType: lambda.FunctionUrlAuthType.NONE,
});
new cdk.CfnOutput(this, "counterFunctionUrlOutput", {
value: counterFunctionUrl.url,
})
}
}
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and export `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your environment.
```shell theme={"system"}
export UPSTASH_REDIS_REST_URL=
export UPSTASH_REDIS_REST_TOKEN=
```
### Deploy
Run in the top folder:
```shell theme={"system"}
cdk synth
cdk bootstrap
cdk deploy
```
Visit the output URL.
# Azure Functions
Source: https://upstash.com/docs/redis/quickstarts/azure-functions
You can find the project source code on GitHub.
### Prerequisites
1. [Create an Azure account.](https://azure.microsoft.com/en-us/free/)
2. [Set up Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli)
3. [Install the Azure Functions Core Tools](https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-typescript)
### Project Setup
Initialize the project:
```shell theme={"system"}
func init --typescript
```
Install `@upstash/redis`
```shell theme={"system"}
npm install @upstash/redis
```
### Counter Function Setup
Create a new function from template.
```shell theme={"system"}
func new --name CounterFunction --template "HTTP trigger" --authlevel "anonymous"
```
Update `/src/functions/CounterFunction.ts`
```ts /src/functions/CounterFunction.ts theme={"system"}
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";
import { Redis } from "@upstash/redis";
const redis = new Redis({
url: process.env.UPSTASH_REDIS_REST_URL,
token: process.env.UPSTASH_REDIS_REST_TOKEN
});
export async function CounterFunction(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> {
  const count = await redis.incr("counter");
  return { status: 200, body: `Counter: ${count}` };
}
app.http('CounterFunction', {
methods: ['GET', 'POST'],
authLevel: 'anonymous',
handler: CounterFunction
});
```
### Create Azure Resources
You can use the command below to find the `name` of a region near you.
```shell theme={"system"}
az account list-locations
```
Create a resource group.
```shell theme={"system"}
az group create --name AzureFunctionsQuickstart-rg --location
```
Create a storage account.
```shell theme={"system"}
az storage account create --name --location --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS --allow-blob-public-access false
```
Create your Function App.
```shell theme={"system"}
az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location --runtime node --runtime-version 18 --functions-version 4 --name --storage-account
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and set `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` in your Function App's settings.
```shell theme={"system"}
az functionapp config appsettings set --name --resource-group AzureFunctionsQuickstart-rg --settings UPSTASH_REDIS_REST_URL= UPSTASH_REDIS_REST_TOKEN=
```
### Deploy
Take a build of your application.
```shell theme={"system"}
npm run build
```
Publish your application.
```shell theme={"system"}
func azure functionapp publish
```
Visit the given Invoke URL.
# Cloudflare Workers
Source: https://upstash.com/docs/redis/quickstarts/cloudflareworkers
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or
[Upstash CLI](https://github.com/upstash/cli).
### Project Setup
We will use **C3 (create-cloudflare-cli)** command-line tool to create our application. You can open a new terminal window and run C3 using the prompt below.
```shell npm theme={"system"}
npm create cloudflare@latest -- upstash-redis-worker
```
```shell yarn theme={"system"}
yarn create cloudflare upstash-redis-worker
```
```shell pnpm theme={"system"}
pnpm create cloudflare upstash-redis-worker
```
This will create a new Cloudflare Workers project:
```text theme={"system"}
➜ npm create cloudflare@latest -- upstash-redis-worker
> npx
> create-cloudflare upstash-redis-worker
─────────────────────────────────────────────────────────────────────────────────────────────────
👋 Welcome to create-cloudflare v2.50.8!
🧡 Let's get started.
📊 Cloudflare collects telemetry about your usage of Create-Cloudflare.
Learn more at: https://github.com/cloudflare/workers-sdk/blob/main/packages/create-cloudflare/telemetry.md
─────────────────────────────────────────────────────────────────────────────────────────────────
╭ Create an application with Cloudflare Step 1 of 3
│
├ In which directory do you want to create your application?
│ dir ./upstash-redis-worker
│
├ What would you like to start with?
│ category Hello World example
│
├ Which template would you like to use?
│ type Worker only
│
├ Which language do you want to use?
│ lang TypeScript
│
├ Copying template files
│ files copied to project directory
│
├ Updating name in `package.json`
│ updated `package.json`
│
├ Installing dependencies
│ installed via `npm install`
│
╰ Application created
...
────────────────────────────────────────────────────────────
🎉 SUCCESS Application created successfully!
```
We will also install the **Upstash Redis SDK** to connect to Redis.
```bash theme={"system"}
npm install @upstash/redis
```
### The Code
Here is a Worker template to configure and test Upstash Redis connection.
```ts src/index.ts theme={"system"}
import { Redis } from "@upstash/redis/cloudflare";
export interface Env {
UPSTASH_REDIS_REST_URL: string;
UPSTASH_REDIS_REST_TOKEN: string;
}
export default {
async fetch(request, env, ctx): Promise<Response> {
const redis = Redis.fromEnv(env);
const count = await redis.incr("counter");
return new Response(JSON.stringify({ count }));
},
} satisfies ExportedHandler;
```
```js src/index.js theme={"system"}
import { Redis } from "@upstash/redis/cloudflare";

export default {
  async fetch(request, env, ctx) {
    const redis = Redis.fromEnv(env);
    const count = await redis.incr("counter");
    return new Response(JSON.stringify({ count }));
  },
};
```
### Configure Credentials
There are two ways to set up the Redis credentials: at the worker level or at the account level.
#### Using Cloudflare Secrets (Worker Level Secrets)
This is the common way of creating secrets for your Worker, see [Worker Secrets](https://developers.cloudflare.com/workers/configuration/secrets/)
* Navigate to [Upstash Console](https://console.upstash.com) and get your Redis credentials.
* In the [Cloudflare Dashboard](https://dash.cloudflare.com/), go to **Compute (Workers)** > **Workers & Pages**.
* Select your worker and go to **Settings** > **Variables and Secrets**.
* Add your Redis credentials as secrets here:
#### Using Cloudflare Secrets Store (Account Level Secrets)
This method requires a few modifications in the worker code, see [Access to Secret on Env Object](https://developers.cloudflare.com/secrets-store/integrations/workers/#3-access-the-secret-on-the-env-object)
```ts src/index.ts theme={"system"}
import { Redis } from "@upstash/redis/cloudflare";

export interface Env {
  UPSTASH_REDIS_REST_URL: SecretsStoreSecret;
  UPSTASH_REDIS_REST_TOKEN: SecretsStoreSecret;
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const redis = Redis.fromEnv({
      UPSTASH_REDIS_REST_URL: await env.UPSTASH_REDIS_REST_URL.get(),
      UPSTASH_REDIS_REST_TOKEN: await env.UPSTASH_REDIS_REST_TOKEN.get(),
    });
    const count = await redis.incr("counter");
    return new Response(JSON.stringify({ count }));
  },
} satisfies ExportedHandler<Env>;
```
After making these modifications, you can deploy the Worker to Cloudflare with `npx wrangler deploy`, then
follow the steps below to define the secrets:
* Navigate to [Upstash Console](https://console.upstash.com) and get your Redis credentials.
* In the [Cloudflare Dashboard](https://dash.cloudflare.com/), go to **Secrets Store** and add your Redis credentials as secrets.
* Under **Compute (Workers)** > **Workers & Pages**, find your worker and add these secrets as bindings.
### Deployment
New deployments may revert the configuration you made in the dashboard:
worker-level secrets persist, but the Secrets Store bindings will be gone!
Deploy your function to Cloudflare with `npx wrangler deploy`.
The endpoint of the function will be printed once the deployment is done.
### Testing
Open a different terminal and test the endpoint. Note that the destination
URL is the same one printed in the previous deploy step.
```bash theme={"system"}
curl -X POST 'https://..workers.dev' \
-H 'Content-Type: application/json'
```
The response will be in the format `{"count":20}`.
In the logs you should see something like this:
```bash theme={"system"}
$ npx wrangler tail
⛅️ wrangler 4.43.0
--------------------
Successfully created tail, expires at 2025-10-16T18:59:18Z
Connected to , waiting for logs...
POST https://..workers.dev/ - Ok @ 10/16/2025, 4:05:30 PM
```
## Repositories
JavaScript:
[https://github.com/upstash/upstash-redis/tree/main/examples/cloudflare-workers](https://github.com/upstash/upstash-redis/tree/main/examples/cloudflare-workers)
TypeScript:
[https://github.com/upstash/upstash-redis/tree/main/examples/cloudflare-workers-with-typescript](https://github.com/upstash/upstash-redis/tree/main/examples/cloudflare-workers-with-typescript)
# Deno Deploy
Source: https://upstash.com/docs/redis/quickstarts/deno-deploy
This is a step-by-step guide on how to use Upstash Redis to create a view
counter in your Deno Deploy project.
### Create a database
Create a Redis database using [Upstash Console](https://console.upstash.com) or
[Upstash CLI](https://github.com/upstash/cli). Select the Global database type to minimize
latency from all edge locations. Copy the `UPSTASH_REDIS_REST_URL` and
`UPSTASH_REDIS_REST_TOKEN` for the next steps.
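Under the hood, the Upstash clients talk to this REST endpoint over plain HTTPS: each Redis command maps to a URL path (e.g. `INCR deno-counter` becomes `/incr/deno-counter`) and the token is sent as a Bearer header. A minimal sketch of the request shape in Python, with hypothetical credentials:

```python theme={"system"}
from urllib.request import Request

# Hypothetical credentials -- use the values copied from the Upstash Console.
UPSTASH_REDIS_REST_URL = "https://example-12345.upstash.io"
UPSTASH_REDIS_REST_TOKEN = "AXkz-example-token"

# Each Redis command maps to a path segment: INCR deno-counter -> /incr/deno-counter
req = Request(
    f"{UPSTASH_REDIS_REST_URL}/incr/deno-counter",
    headers={"Authorization": f"Bearer {UPSTASH_REDIS_REST_TOKEN}"},
)

print(req.full_url)
print(req.get_header("Authorization"))
```

Sending this request with any HTTP client returns a JSON body containing the command's result.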
### Create a Deno Deploy project
Go to [https://dash.deno.com/projects](https://dash.deno.com/projects) and
create a new playground project.
### Edit the handler function
Then paste the following code into the browser editor:
```ts theme={"system"}
import { serve } from "https://deno.land/std@0.142.0/http/server.ts";
import { Redis } from "https://deno.land/x/upstash_redis@v1.14.0/mod.ts";
serve(async (_req: Request) => {
  if (!_req.url.endsWith("favicon.ico")) {
    const redis = new Redis({
      url: "UPSTASH_REDIS_REST_URL",
      token: "UPSTASH_REDIS_REST_TOKEN",
    });
    const counter = await redis.incr("deno-counter");
    return new Response(JSON.stringify({ counter }), { status: 200 });
  }
  // Answer favicon requests with an empty 404 instead of returning undefined.
  return new Response(null, { status: 404 });
});
```
### Deploy and Run
Simply click on `Save & Deploy` at the top of the screen.
# DigitalOcean
Source: https://upstash.com/docs/redis/quickstarts/digitalocean
Upstash has native integration with [DigitalOcean Add-On
Marketplace](https://marketplace.digitalocean.com/add-ons/upstash-redis).
This quickstart shows how to create an Upstash for Redis® Database from
DigitalOcean Add-On Marketplace.
### Database Setup
Creating an Upstash for Redis database requires a DigitalOcean account.
[Log in or sign up](https://cloud.digitalocean.com/login) for a DigitalOcean
account, then navigate to the
[Upstash Redis Marketplace](https://marketplace.digitalocean.com/add-ons/upstash-redis)
page.
Click the `Add Upstash Redis` button. A setup page will open, asking for
`Database Name / Plan / Region` info.
After selecting a name, plan and region, click the `Add Upstash Redis` button.
### Connecting to Database - SSO
After creating the database, the Overview/Details page opens, where the
environment variables are shown.
While creating a Droplet, the Upstash add-on can be selected so that the
environment variables are automatically injected into the Droplet.
Follow these steps: `Create --> Droplets --> Marketplace Add-Ons`, then
select the previously created Upstash Redis add-on.
Upstash also supports Single Sign-On from DigitalOcean to the Upstash Console,
so databases created from DigitalOcean can benefit from Upstash Console
features.
To access the Upstash Console from DigitalOcean, just click the `Dashboard` link
when you create the Upstash add-on.
# Django
Source: https://upstash.com/docs/redis/quickstarts/django
### Introduction
In this quickstart tutorial, we will demonstrate how to use Django with Upstash Redis to build a simple web application that increments a counter every time the homepage is accessed.
### Environment Setup
First, install Django and the Upstash Redis client for Python:
```shell theme={"system"}
pip install django
pip install upstash-redis
```
### Database Setup
Create a Redis database using the [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and export the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your environment:
```shell theme={"system"}
export UPSTASH_REDIS_REST_URL=
export UPSTASH_REDIS_REST_TOKEN=
```
You can also use `python-dotenv` to load environment variables from your `.env` file.
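As a rough sketch of what `python-dotenv` does for a simple `KEY=value` file, the parser below is a simplified stand-in, not the real library (the `.env` filename and its contents are illustrative):

```python theme={"system"}
import os

def load_env(path=".env"):
    """Minimal stand-in for python-dotenv: parse KEY=value lines."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments; split on the first '=' only.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    os.environ.update(values)
    return values

# Example: write a throwaway .env file and load it.
with open(".env", "w") as f:
    f.write("UPSTASH_REDIS_REST_URL=https://example-12345.upstash.io\n")

env = load_env()
print(env["UPSTASH_REDIS_REST_URL"])
```

The real library handles quoting, interpolation, and more, so prefer it in actual projects.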
### Project Setup
Create a new Django project:
```shell theme={"system"}
django-admin startproject myproject
cd myproject
python manage.py startapp myapp
```
In `myproject/settings.py`, add your new app (`myapp`) to the `INSTALLED_APPS` list.
### Application Setup
In `myapp/views.py`, add the following:
```python theme={"system"}
from django.http import HttpResponse
from upstash_redis import Redis

redis = Redis.from_env()

def index(request):
    count = redis.incr('counter')
    return HttpResponse(f'Page visited {count} times.')
```
In `myproject/urls.py`, connect the view to a URL pattern:
```python theme={"system"}
from django.urls import path
from myapp import views

urlpatterns = [
    path('', views.index),
]
```
### Running the Application
Run the development server:
```shell theme={"system"}
python manage.py runserver
```
Visit `http://127.0.0.1:8000/` in your browser, and the counter will increment with each page refresh.
### Code Breakdown
1. **Redis Setup**: We use the Upstash Redis client to connect to our Redis database using the environment variables `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN`. The `Redis.from_env()` method initializes this connection.
2. **Increment Counter**: In the `index` view, we increment the `counter` key each time the homepage is accessed. If the key doesn't exist, Redis creates it and starts counting from 1.
3. **Display the Count**: The updated count is returned as an HTTP response each time the page is loaded.
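The `INCR` behavior the breakdown relies on can be sketched with a tiny in-memory stand-in (not the real client): a missing key counts as 0, so the first increment returns 1.

```python theme={"system"}
class FakeRedis:
    """In-memory stand-in illustrating INCR semantics only."""

    def __init__(self):
        self.store = {}

    def incr(self, key):
        # A missing key is treated as 0, so the first INCR returns 1.
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

redis = FakeRedis()
print(redis.incr("counter"))  # 1
print(redis.incr("counter"))  # 2
```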
# Elixir
Source: https://upstash.com/docs/redis/quickstarts/elixir
Tutorial on Using Upstash Redis In Your Phoenix App and Deploying it on Fly.
This tutorial showcases how one can use [fly.io](https://fly.io) to deploy a Phoenix
app using Upstash Redis to store results of external API calls.
See [code](https://github.com/upstash/examples/tree/master/examples/elixir-with-redis) and
[demo](https://elixir-redis.fly.dev/).
### `1` Create an Elixir app with Phoenix
To create an app, run the following command:
```
mix phx.new redix_demo --no-ecto
```
Phoenix apps are initialized with a datastore. We pass the `--no-ecto` flag to disable
the datastore since we will only use Redis. See
[Phoenix documentation](https://hexdocs.pm/phoenix/up_and_running.html) for more details.
Navigate to the new directory by running
```
cd redix_demo
```
### `2` Add Redix
To connect to Upstash Redis, we will use the
[Redix client](https://github.com/whatyouhide/redix.git) written for Elixir.
To add Redix to our project, we will first update the dependencies of our project. Simply
add the following two entries to the dependencies in the `mix.exs` file
(See [Redix documentation](https://github.com/whatyouhide/redix.git)):
```elixir theme={"system"}
defp deps do
  [
    {:redix, "~> 1.1"},
    {:castore, ">= 0.0.0"}
  ]
end
```
Then, run `mix deps.get` to install the new dependencies.
Next, we will add Redix to our app. In our case, we will add a single global Redix instance.
Open the `application.ex` file and find the `children` list in the `start` function.
First, add a method to read the connection parameters from the `REDIS_URL` environment variable.
We choose this name for the environment variable because Fly will create a secret with this name
when we launch the app with a Redis store. Use regex to extract the password, host and port
information from the Redis URL:
```elixir theme={"system"}
def start(_type, _args) do
  [_, password, host, port] = Regex.run(
    ~r{(.+):(.+)@(.+):(\d+)},
    System.get_env("REDIS_URL"),
    capture: :all_but_first
  )

  port = elem(Integer.parse(port), 0)

  # ...
end
```
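To see what the regex extracts, here is the same parse sketched in Python against a hypothetical `REDIS_URL` in the `redis://default:****@fly-****.upstash.io:****` shape:

```python theme={"system"}
import re

# Hypothetical REDIS_URL in the format Fly provides.
redis_url = "redis://default:secretpass@fly-silent-tree-6201.upstash.io:6379"

# Same pattern as the Elixir code: scheme+user, password, host, port.
match = re.search(r"(.+):(.+)@(.+):(\d+)", redis_url)
_, password, host, port = match.groups()
port = int(port)

print(password, host, port)
```

The first capture group (scheme plus user) is discarded, just like the `_` binding in the Elixir version.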
Next, add the Redix client to the project by adding it to the `children` array.
([See Redix Documentation for more details](https://hexdocs.pm/redix/real-world-usage.html#single-named-redix-instance))
```elixir theme={"system"}
children = [
  # ...
  {
    Redix,
    name: :redix,
    host: host,
    port: port,
    password: password,
    socket_opts: [:inet6]
  }
]
```
Here, we would like to draw attention to the `socket_opts` parameter. If you wish to test
your app locally with an Upstash Redis you created yourself (without Fly), you must define the
Redix client **without the `socket_opts: [:inet6]` field**.
### `3` Testing the Connection
At this point, our app should be able to communicate with Redis. To test whether this
connection works as expected, we will first add a status page to our app.
To add this page, we will change the default landing page of our Phoenix app. Go to the
`lib/redix_demo_web/controllers/page_html/home.html.heex` file. Replace the content of
the file with:
```html theme={"system"}
<.flash_group flash={@flash} />
<h1>Redix Demo</h1>

<%= if @text do %>
  <p><%= @text %></p>
<% end %>

<%= if @weather do %>
  <%= if @location do %>
    <p>Location: <%= @location %></p>
  <% end %>
  <p>Weather: <%= @weather %> °C</p>
<% end %>
```
This HTML will show different content depending on the parameters we
pass it. It has a form at the top which is where the user will enter
some location. Below, we will show the weather information.
Next, open the `lib/redix_demo_web/router.ex` file. In this file,
URL paths are defined with the `scope` keyword. Update the scope
in the following way:
```elixir theme={"system"}
scope "/", RedixDemoWeb do
  pipe_through :browser

  get "/status", PageController, :status
  get "/", PageController, :home
  get "/:text", PageController, :home
end
```
Our website will have a `/status` path, which will be rendered with the
`status` method we will define. The website will also render the home
page in `/` and in `/:text`. `/:text` will essentially match any route
and the route will be available to our app as a parameter when rendering.
Finally, we will define the status page in
`lib/redix_demo_web/controllers/page_controller.ex`. We will define a struct
`Payload` and a private method `render_home`. Then, we will define the home
page and the status page:
```elixir theme={"system"}
defmodule RedixDemoWeb.PageController do
  use RedixDemoWeb, :controller

  defmodule Payload do
    defstruct text: nil, weather: nil, location: nil
  end

  def status(conn, _params) do
    case Redix.command(:redix, ["PING"]) do
      {:ok, response} ->
        render_home(conn, %Payload{text: "Redis Connection Status: Success! Response to 'PING': '#{response}'"})
      {:error, response} ->
        render_home(conn, %Payload{text: "Redis Connection Status: Error. Reason: #{response.reason}"})
    end
  end

  def home(conn, _params) do
    render_home(conn, %Payload{text: "Enter a location above to get the weather info!"})
  end

  defp render_home(conn, %Payload{} = payload) do
    render(conn, "home.html", text: payload.text, weather: payload.weather, location: payload.location)
  end
end
```
The `home` function simply renders the home page. The `status` function renders the same page, but
shows the response of a `PING` request to our Redis server.
We are now ready to deploy the app on Fly!
### `4` Deploy on Fly
To deploy the app on Fly, first
[install Fly CLI](https://fly.io/docs/hands-on/install-flyctl/) and authenticate. Then,
launch the app with:
```
fly launch
```
If you haven't set the `REDIS_URL` environment variable in your environment, the `fly launch` command will show
an error when compiling the app, but don't worry: you can still continue launching the app.
Fly will add this environment variable itself.
Fly will at some point ask if we want to tweak the settings of the app. Choose yes (`y`):
```
>>> fly launch
Detected a Phoenix app
Creating app in /Users/examples/redix_demo
We're about to launch your Phoenix app on Fly.io. Here's what you're getting:
Organization: C. Arda (fly launch defaults to the personal org)
Name: redix_demo (derived from your directory name)
Region: Bucharest, Romania (this is the fastest region for you)
App Machines: shared-cpu-1x, 1GB RAM (most apps need about 1GB of RAM)
Postgres: (not requested)
Redis: (not requested)
Sentry: false (not requested)
? Do you want to tweak these settings before proceeding? (y/N)
```
This will open the settings on the browser. Two settings are relevant to this guide:
* Region: Upstash is not available in all regions. Choose Amsterdam.
* Redis: Choose "Redis with Upstash"
If you already have a Redis on Fly that you want to use, you may prefer not to choose
"Redis with Upstash". Instead, you can get the `REDIS_URL` from [the Upstash Fly console](https://console.upstash.com/flyio/redis)
and add it as a secret with `fly secrets set REDIS_URL=****`. Note that the `REDIS_URL`
will be in the `redis://default:****@fly-****.upstash.io:****` format.
Once the app is launched, deploy it with:
```
fly deploy
```
The website will become available after some time. Check the `/status` page to verify that
the Redis connection works.
In the rest of our tutorial, we will work on caching the responses from an external API.
If you are only interested in how a Phoenix app with Redis can be deployed on Fly, you
may not need to read the rest of the tutorial.
### `5` Using Redix to Cache External API Responses
Finally, we will now build our website to offer weather information. We will use the API
of [WeatherAPI](https://www.weatherapi.com/) to get the weather information upon user
request. We will cache the results of our calls in Upstash Redis to reduce the number
of calls we make to the external API and to reduce the response time of our app.
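The cache-aside flow described above is language-agnostic. A minimal Python sketch, using an in-memory dict with expiry timestamps in place of Redis and a hypothetical `fetch_from_api` stub:

```python theme={"system"}
import time

cache = {}  # key -> (value, expires_at); stand-in for Redis SET ... EX
TTL_SECONDS = 8 * 60 * 60  # 8 hours, matching the tutorial

def fetch_from_api(location):
    # Hypothetical stub standing in for the WeatherAPI call.
    return {"location": location, "temp": 21}

def fetch_weather(location):
    entry = cache.get(location)
    if entry and entry[1] > time.time():
        return entry[0]                 # cache hit: skip the API
    weather = fetch_from_api(location)  # cache miss: call the API
    cache[location] = (weather, time.time() + TTL_SECONDS)
    return weather

print(fetch_weather("Oslo"))  # miss, calls the API and stores the result
print(fetch_weather("Oslo"))  # hit, served from the cache
```

The Elixir code below implements the same decision: check the cache first, fall back to the API, then write the response back with a TTL.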
In the end, we will have a method `def home(conn, %{"text" => text})` in the
`lib/redix_demo_web/controllers/page_controller.ex` file. To see the final file, find the
[`page_controller.ex` file Upstash examples repository](https://github.com/upstash/examples/blob/main/examples/elixir-with-redis/lib/redix_demo_web/controllers/page_controller.ex).
First, we need to define some private methods to handle the request logic. We start off
with a function to fetch the weather. The method takes the location string and replaces
the spaces with `%20`. Then it calls the `fetch_weather_from_cache` method we will
define next. Depending on the result, it either returns the result from the cache or fetches
the result from the API.
```elixir theme={"system"}
defp fetch_weather(location) do
  location = String.replace(location, " ", "%20")

  case fetch_weather_from_cache(location) do
    {:ok, cached_weather} ->
      {:ok, cached_weather}
    {:error, :not_found} ->
      fetch_weather_from_api(location)
    {:error, reason} ->
      {:error, reason}
  end
end
```
Now, we will define the `fetch_weather_from_cache` method. This method uses
Redix to fetch the cached weather for the given location. If it's not found, we return
`{:error, :not_found}`. If it's found, we decode it into a map and return it.
```elixir theme={"system"}
defp fetch_weather_from_cache(location) do
  case Redix.command(:redix, ["GET", "weather:#{location}"]) do
    {:ok, nil} ->
      {:error, :not_found}
    {:ok, cached_weather_json} ->
      {:ok, Jason.decode!(cached_weather_json)}
    {:error, _reason} ->
      {:error, "Failed to fetch weather data from cache."}
  end
end
```
Next, we will define the `fetch_weather_from_api` method. This method
requests the weather information from the external API. If the request
is successful, it also saves the result in the cache with the
`cache_weather_response` method.
```elixir theme={"system"}
defp fetch_weather_from_api(location) do
  weather_api_key = System.get_env("WEATHER_API_KEY")
  url = "http://api.weatherapi.com/v1/current.json?key=#{weather_api_key}&q=#{location}&aqi=no"

  case HTTPoison.get(url) do
    {:ok, %{status_code: 200, body: body}} ->
      weather_info =
        body
        |> Jason.decode!()
        |> get_weather_info()

      # Cache the weather response in Redis for 8 hours
      cache_weather_response(location, Jason.encode!(weather_info))

      {:ok, weather_info}
    {:ok, %{status_code: status_code, body: body}} ->
      {:error, "#{body} (#{status_code})"}
    {:error, _reason} ->
      {:error, "Failed to fetch weather data."}
  end
end
```
In the `cache_weather_response` method, we simply store the weather
information in our Redis:
```elixir theme={"system"}
defp cache_weather_response(location, weather_data) do
  case Redix.command(:redix, ["SET", "weather:#{location}", weather_data, "EX", 8 * 60 * 60]) do
    {:ok, _} ->
      :ok
    {:error, _reason} ->
      {:error, "Failed to cache weather data."}
  end
end
```
Finally, we define the `get_weather_info` and `home` methods.
```elixir theme={"system"}
def home(conn, %{"text" => text}) do
  case fetch_weather(text) do
    {:ok, %{"location" => location, "temp" => temp_c, "condition" => condition_text}} ->
      render_home(conn, %Payload{weather: "#{condition_text}, #{temp_c}", location: location})
    {:error, reason} ->
      render_home(conn, %Payload{text: reason})
  end
end

defp get_weather_info(%{
  "location" => %{
    "name" => name,
    "region" => region
  },
  "current" => %{
    "temp_c" => temp_c,
    "condition" => %{
      "text" => condition_text
    }
  }
}) do
  %{"location" => "#{name}, #{region}", "temp" => temp_c, "condition" => condition_text}
end
```
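The nested pattern match in `get_weather_info` can be mirrored in Python with plain dict indexing (the sample payload below is abbreviated from WeatherAPI's response shape):

```python theme={"system"}
def get_weather_info(payload):
    # Pull the same three fields the Elixir pattern match extracts.
    loc = payload["location"]
    cur = payload["current"]
    return {
        "location": f'{loc["name"]}, {loc["region"]}',
        "temp": cur["temp_c"],
        "condition": cur["condition"]["text"],
    }

sample = {
    "location": {"name": "Istanbul", "region": "Istanbul"},
    "current": {"temp_c": 18.0, "condition": {"text": "Sunny"}},
}
print(get_weather_info(sample))
```

Unlike the Elixir version, this sketch raises `KeyError` on a malformed payload instead of failing to match.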
### `6` Re-deploying the App
After adding the home page logic, only a few steps remain to deploy the
finished app.
First, add the `{:httpoison, "~> 1.5"}` dependency to the `mix.exs` file and run `mix deps.get`.
Then, get an API key from [WeatherAPI](https://www.weatherapi.com/) and set it as a secret in
Fly with:
```
fly secrets set WEATHER_API_KEY=****
```
Now, you can run `fly deploy` in your directory to deploy the completed app!
# FastAPI
Source: https://upstash.com/docs/redis/quickstarts/fastapi
You can find the project source code on GitHub.
### Environment Setup
Install FastAPI and `upstash-redis`.
```shell theme={"system"}
pip install fastapi
pip install upstash-redis
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and export the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your environment.
```shell theme={"system"}
export UPSTASH_REDIS_REST_URL=
export UPSTASH_REDIS_REST_TOKEN=
```
### API Setup
Create `main.py`:
```py main.py theme={"system"}
from fastapi import FastAPI
from upstash_redis import Redis

app = FastAPI()
redis = Redis.from_env()

@app.get("/")
def read_root():
    count = redis.incr('counter')
    return {"count": count}
```
### Run
Run the app locally with `fastapi dev main.py` and check `http://127.0.0.1:8000/`
# Fastly
Source: https://upstash.com/docs/redis/quickstarts/fastlycompute
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or
[Upstash CLI](https://github.com/upstash/cli). Select the Global database type to minimize
latency from all edge locations. Copy the `UPSTASH_REDIS_REST_URL` and
`UPSTASH_REDIS_REST_TOKEN` for the next steps.
### Project Setup
We will use Fastly CLI for deployment, so please install
[Fastly CLI](https://developer.fastly.com/reference/cli/).
Create a folder for your project and run `fastly compute init`. Select `[2] JavaScript`,
then `[2] Empty starter for JavaScript`:
```shell theme={"system"}
> fastly compute init
Creating a new Compute@Edge project.
Press ^C at any time to quit.
Name: [fastly-upstash]
Description:
Author: [enes@upstash.com]
Language:
[1] Rust
[2] JavaScript
[3] AssemblyScript (beta)
[4] Other ('bring your own' Wasm binary)
Choose option: [1] 2
Starter kit:
[1] Default starter for JavaScript
A basic starter kit that demonstrates routing, simple synthetic responses and
overriding caching rules.
https://github.com/fastly/compute-starter-kit-javascript-default
[2] Empty starter for JavaScript
An empty application template for the Fastly Compute@Edge environment which simply
returns a 200 OK response.
https://github.com/fastly/compute-starter-kit-javascript-empty
Choose option or paste git URL: [1] 2
```
Install @upstash/redis:
```shell theme={"system"}
npm install @upstash/redis
```
Now, we will create a Fastly Compute service by running
`fastly compute publish`. You need to add your Upstash database's endpoint as a
backend and select 443 as its port.
```shell theme={"system"}
> fastly compute publish
✓ Initializing...
✓ Verifying package manifest...
✓ Verifying local javascript toolchain...
✓ Building package using javascript toolchain...
✓ Creating package archive...
SUCCESS: Built package 'fastly-upstash' (pkg/fastly-upstash.tar.gz)
There is no Fastly service associated with this package. To connect to an existing service
add the Service ID to the fastly.toml file, otherwise follow the prompts to create a
service now.
Press ^C at any time to quit.
Create new service: [y/N] y
✓ Initializing...
✓ Creating service...
Domain: [supposedly-included-corgi.edgecompute.app]
Backend (hostname or IP address, or leave blank to stop adding backends): global-concise-scorpion-30984.upstash.io
Backend port number: [80] 443
Backend name: [backend_1] upstash
Backend (hostname or IP address, or leave blank to stop adding backends):
✓ Creating domain 'supposedly-smart-corgi.edgecompute.app'...
✓ Creating backend 'upstash' (host: global-concise-scorpion-30984.upstash.io, port: 443)...
✓ Uploading package...
✓ Activating version...
```
### The Code
Update `src/index.js` as below:
```js theme={"system"}
import { Redis } from "@upstash/redis/fastly";

addEventListener("fetch", (event) => event.respondWith(handleRequest(event)));

async function handleRequest(event) {
  const redis = new Redis({
    url: "UPSTASH_REDIS_REST_URL",
    token: "UPSTASH_REDIS_REST_TOKEN",
    backend: "upstash",
  });
  const data = await redis.incr("count");
  return new Response("View Count:" + data, { status: 200 });
}
```
### Deploy
Deploy: `fastly compute deploy`
After deployment, the CLI logs the endpoint. You can check the logs with:
`fastly log-tail --service-id=`
### Run Locally
To run the function locally, add the backend to your `fastly.toml` as below:
```toml theme={"system"}
[local_server.backends.upstash]
url = "UPSTASH_REDIS_REST_URL"
```
Then run: `fastly compute serve`
# Flask
Source: https://upstash.com/docs/redis/quickstarts/flask
### Introduction
In this quickstart tutorial, we will explore how to use Flask with Upstash Redis to build a simple web application that increments a counter each time a user accesses the homepage.
### Environment Setup
First, install Flask and the Upstash Redis client for Python.
```shell theme={"system"}
pip install flask
pip install upstash-redis
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and export the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your environment.
```shell theme={"system"}
export UPSTASH_REDIS_REST_URL=
export UPSTASH_REDIS_REST_TOKEN=
```
You can also use `python-dotenv` to load environment variables from your `.env` file.
### Application Setup
Create `app.py`:
```py app.py theme={"system"}
from flask import Flask
from upstash_redis import Redis

app = Flask(__name__)
redis = Redis.from_env()

@app.route('/')
def index():
    count = redis.incr('counter')
    return f'Page visited {count} times.'

if __name__ == '__main__':
    app.run(debug=True)
```
### Running the Application
Run the Flask app locally:
```shell theme={"system"}
python app.py
```
Visit `http://127.0.0.1:5000/` in your browser, and you will see the `counter` increment with each refresh.
### Code Breakdown
1. **Redis Setup:** We first import Flask and the Upstash Redis client. Using `Redis.from_env()`, we initialize the connection to our Redis database using the environment variables exported earlier.
2. **Increment Counter:** Each time the root route (`/`) is accessed, Redis increments the `counter` key. This key-value pair is automatically created in Redis if it does not exist, and its value is incremented on each request.
3. **Display the Count:** The number of visits is returned in the response as plain text.
# Fly.io
Source: https://upstash.com/docs/redis/quickstarts/fly
Fly.io has a native integration with Upstash where the databases are hosted in
Fly. You can still access an Upstash Redis hosted outside Fly, but for the best
latency, we recommend creating your Redis (Upstash) inside the Fly platform. Check
[here](https://fly.io/docs/reference/redis/) for details.
In this tutorial, we'll walk you through the process of deploying a Redis by
Upstash and connecting it to an application hosted on Fly.io. We'll be using
Node.js and Express for our example application, but the process can be easily
adapted to other languages and frameworks.
### Redis Setup
Create a Redis database using
[Fly CLI](https://fly.io/docs/hands-on/install-flyctl/)
```shell theme={"system"}
> flyctl redis create
? Select Organization: upstash (upstash)
? Choose a Redis database name (leave blank to generate one):
? Choose a primary region (can't be changed later) San Jose, California (US) (sjc)
Upstash Redis can evict objects when memory is full. This is useful when caching in Redis. This setting can be changed later.
Learn more at https://fly.io/docs/reference/redis/#memory-limits-and-object-eviction-policies
? Would you like to enable eviction? No
? Optionally, choose one or more replica regions (can be changed later):
? Select an Upstash Redis plan 3G: 3 GB Max Data Size
Your Upstash Redis database silent-tree-6201 is ready.
Apps in the upstash org can connect to at redis://default:978ba2e07tyrt67598acd8ac916a@fly-silent-tree-6201.upstash.io
If you have redis-cli installed, use fly redis connect to connect to your database.
```
### Set up the Node.js application
* Create a new folder for your project and navigate to it in the terminal.
* Run `npm init -y` to create a `package.json` file.
* Install Express and the Redis client: `npm install express redis`
* Create an `index.js` file in the project folder with the following content:
```js theme={"system"}
const express = require("express");
const redis = require("redis");
const { promisify } = require("util");

const app = express();
const client = redis.createClient(process.env.REDIS_URL);

const getAsync = promisify(client.get).bind(client);
const setAsync = promisify(client.set).bind(client);

app.get("/", async (req, res) => {
  const value = await getAsync("counter");
  await setAsync("counter", parseInt(value || 0) + 1);
  res.send(`Hello, visitor number ${value || 0}!`);
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Server running on port ${PORT}`));
```
This code creates a simple Express server that increments a counter in Redis and
returns the visitor number.
### Configure the Fly.io application
* Run `fly init "your-app-name"` to initialize a new Fly.io application.
* Choose the "Node.js (14.x)" builder, and accept the defaults for the remaining
prompts.
* Open the `fly.toml` file that was generated and add the following `[env]`
section:
```toml theme={"system"}
[env]
REDIS_URL = "your-upstash-redis-url"
```
Replace `your-upstash-redis-url` with the Redis URL from your Upstash instance.
### Deploy the application to Fly.io
* Run `fly deploy` to build and deploy your application.
* After the deployment is complete, run `fly status` to check if the application
is running.
* Visit the URL provided in the output (e.g., [https://your-app-name.fly.dev](https://your-app-name.fly.dev)) to
test your application.
### Conclusion
You have successfully deployed a Node.js application on Fly.io that uses an
Upstash Redis instance as its data store. You can now build and scale your
application as needed, leveraging the benefits of both Fly.io and Upstash.
### Availability of Redis URL for Local Development and Testing
#### Understanding Fly.io and Redis Setup
* **Redis Instance on Fly.io**: When you create a Redis instance using `fly redis create`, Fly.io establishes a Redis server in its cloud environment, designed specifically for applications running on the Fly.io platform.
* **Connection String**: This command generates a connection string. However, it's important to note that this string is intended primarily for applications deployed within Fly.io's network. Due to security and network configurations, it's not directly accessible from external networks, like your local development environment.
#### Creating a Tunnel for Local Testing
* **Fly Redis Connect**: For local access to your Redis instance, use `fly redis connect`. This command establishes a secure tunnel between your local machine and the Redis instance on Fly.io.
* **How it Works**:
* The tunnel maps a local port to the remote Redis port on Fly.io.
* Once established, you can connect to Redis as if it were running locally, typically at `localhost` with the mapped port.
* **Setting Up the Tunnel**:
* Execute `fly redis connect` in your terminal.
* The command provides a local address (e.g., `localhost:10000`).
* Use this address as your Redis connection URL in your local development setup.
* **Considerations**:
* This tunnel is a temporary solution, ideal for development and testing, not for production.
* Ensure compatibility with your local firewall and network settings.
#### Additional Notes
* **Security Considerations**: Exercise caution regarding security. Although the tunnel is secure, it exposes your Redis instance to your local network.
* **Alternative Approaches**: Some developers opt to run a local Redis instance for development to bypass these complexities.
#### Summary
To connect to a Redis instance hosted on Fly.io from your local machine, a secure tunnel is necessary. This tunnel effectively simulates a local Redis instance, enabling testing and development activities without exposing your Redis instance over the internet.
#### Example Code for Setting Up and Using the Fly.io Redis Tunnel
##### Step 1: Establish the Tunnel
To establish a tunnel between your local machine and the Redis instance on Fly.io, run the following command in your terminal:
```shell theme={"system"}
fly redis connect
```
After running this command, you'll receive a local address, such as `localhost:10000`. This address will act as your local Redis endpoint.
##### Step 2: Connect to Redis in Your Application
In your application, you should typically use an environment variable for the Redis URL. When developing locally, set this environment variable to the local address provided by the `fly redis connect` command.
Here's an example in a Node.js application:
```js theme={"system"}
const redis = require("redis");
// Local Redis URL for development
const LOCAL_REDIS_URL = 'redis://localhost:10000'; // Replace with your actual local address
const REDIS_URL = process.env.NODE_ENV === 'development' ? LOCAL_REDIS_URL : process.env.REDIS_URL;
const client = redis.createClient({
url: REDIS_URL
});
client.on("error", function(error) {
console.error(error);
});
// Rest of your Redis-related code
```
##### Step 3: Running Your Application Locally
Ensure that the Fly.io Redis tunnel is active when you run your application locally. Your application will connect to Redis through this tunnel, simulating a local instance.
```shell theme={"system"}
npm start
```
**Important Notes:**
* The `fly redis connect` tunnel should only be used for development and testing purposes.
* Replace `LOCAL_REDIS_URL` in the sample code with the actual local address provided by `fly redis connect`.
* Set your application's environment to 'development' when running locally to use the local Redis URL.
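The environment switch shown in the sample above can also be factored into a small helper so the fallback behavior is explicit. This is only a sketch — the function name is ours, not part of any SDK:

```typescript theme={"system"}
// Pick the Redis URL for the current environment.
// `localUrl` is whatever `fly redis connect` printed, e.g. redis://localhost:10000.
export function resolveRedisUrl(
  nodeEnv: string | undefined,
  localUrl: string,
  productionUrl?: string
): string {
  if (nodeEnv === "development") return localUrl;
  if (!productionUrl) {
    // Fail fast instead of silently connecting to the wrong instance.
    throw new Error("REDIS_URL must be set outside of development");
  }
  return productionUrl;
}
```

You would then call `resolveRedisUrl(process.env.NODE_ENV, LOCAL_REDIS_URL, process.env.REDIS_URL)` when creating the client.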
# Google Cloud Functions
Source: https://upstash.com/docs/redis/quickstarts/google-cloud-functions
You can find the project source code on GitHub.
### Prerequisites
1. [Create a Google Cloud Project.](https://cloud.google.com/resource-manager/docs/creating-managing-projects)
2. [Enable billing for your project.](https://cloud.google.com/billing/docs/how-to/verify-billing-enabled#console)
3. Enable Cloud Functions, Cloud Build, Artifact Registry, Cloud Run, Logging, and Pub/Sub APIs.
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli). Copy `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` for the next steps.
### Counter Function Setup & Deploy
1. Go to [Cloud Functions](https://console.cloud.google.com/functions/list) in Google Cloud Console.
2. Click **Create Function**.
3. Configure the **Basics and Trigger** settings.
4. Using your `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN`, set up **Runtime environment variables** under **Runtime, build, connections and privacy settings**.
5. Click **Next**.
6. Set **Entry point** to `counter`.
7. Update `index.js`
```js index.js theme={"system"}
const { Redis } = require("@upstash/redis");
const functions = require('@google-cloud/functions-framework');
const redis = new Redis({
url: process.env.UPSTASH_REDIS_REST_URL,
token: process.env.UPSTASH_REDIS_REST_TOKEN
});
functions.http('counter', async (req, res) => {
const count = await redis.incr("counter");
res.send("Counter: " + count);
});
```
8. Update `package.json` to include `@upstash/redis`.
```json package.json theme={"system"}
{
"dependencies": {
"@google-cloud/functions-framework": "^3.0.0",
"@upstash/redis": "^1.31.6"
}
}
```
9. Click **Deploy**.
10. Visit the given URL.
# Ion
Source: https://upstash.com/docs/redis/quickstarts/ion
You can find the project source code on GitHub.
### Prerequisites
You need to have AWS credentials configured locally and SST CLI installed.
1. [Create an AWS account](https://aws.amazon.com/)
2. [Create an IAM user](https://sst.dev/chapters/create-an-iam-user.html)
3. [Configure the AWS CLI](https://sst.dev/chapters/configure-the-aws-cli.html)
4. [Setup SST CLI](https://ion.sst.dev/docs/reference/cli/)
### Project Setup
Let's create a new Next.js application.
```shell theme={"system"}
npx create-next-app@latest
cd my-app
```
Let's initialize SST in our app.
```shell theme={"system"}
sst init
```
Install the `@upstash/redis` package.
```shell theme={"system"}
npm install @upstash/redis
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
```shell .env theme={"system"}
UPSTASH_REDIS_REST_URL=
UPSTASH_REDIS_REST_TOKEN=
```
### Pass the Environment Variables
```ts /sst.config.ts theme={"system"}
/// <reference path="./.sst/platform/config.d.ts" />
export default $config({
app(input) {
return {
name: "my-app",
removal: input?.stage === "production" ? "retain" : "remove",
home: "aws",
};
},
async run() {
new sst.aws.Nextjs("MyWeb", {
environment: {
UPSTASH_REDIS_REST_URL: process.env.UPSTASH_REDIS_REST_URL || "",
UPSTASH_REDIS_REST_TOKEN: process.env.UPSTASH_REDIS_REST_TOKEN || "",
},
});
},
});
```
### Home Page Setup
Update `/app/page.tsx`:
```tsx /app/page.tsx theme={"system"}
import { Redis } from "@upstash/redis";
const redis = Redis.fromEnv();
export default async function Home() {
const count = await redis.incr("counter");
return (
<h1>Counter: {count}</h1>
)
}
```
### Run
Run the SST app.
```shell theme={"system"}
npm run dev
```
Check `http://localhost:3000/`
### Deploy
Deploy with SST.
```shell theme={"system"}
sst deploy
```
Check the output URL.
# ioredis note
Source: https://upstash.com/docs/redis/quickstarts/ioredisnote
This example uses ioredis. You can copy the connection string from the `Node`
tab in the console.
# Koyeb
Source: https://upstash.com/docs/redis/quickstarts/koyeb
Integrate a serverless Upstash Redis database with your Koyeb applications. Combine the serverless features of Koyeb on the application side and Upstash for your key-value storage to deploy and scale applications globally with ease.
This guide explains how to connect an Upstash Redis data store as a database cache with an application running on Koyeb. To successfully follow this documentation, you will need to have:
* A [Koyeb account](https://app.koyeb.com/) to deploy the application. You can optionally install the [Koyeb CLI](https://www.koyeb.com/docs/quickstart/koyeb-cli) to deploy the application from the command line
* An [Upstash account](https://console.upstash.com/) to deploy the database
* [Node.js](https://nodejs.org/en) and `npm` installed on your local machine to create the demo application.
If you already have a freshly created Upstash Redis database running and want to quickly preview how to connect your Upstash database to an application running on Koyeb, use the [Deploy to Koyeb](https://www.koyeb.com/docs/deploy-to-koyeb-button) button below.
[](https://app.koyeb.com/deploy?type=git\&repository=github.com/koyeb/example-koyeb-upstash\&branch=main\&name=example-koyeb-upstash\&env\[UPSTASH_REDIS_REST_URL]=REPLACE_ME\&env\[UPSTASH_REDIS_REST_TOKEN]=REPLACE_ME\&env\[PORT]=8000)
*Make sure to replace the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` environment variables with the values for your Upstash database.*
## Create an Upstash Redis database
To create an Upstash Redis database, sign into your [Upstash account](https://console.upstash.com/).
In the Upstash console, select **Redis** from the top navigation bar. On the Redis page, click **Create database**:
1. In the **Name** field, choose a name for your database. In this example, we'll use `example-koyeb-upstash`.
2. Select the **Type** of deployment you want. Because this demo does not have global requirements, we will use "Regional" in this guide to limit the number of choices we have to make.
3. In the **Region** drop-down menu, choose a location that's geographically convenient for your database and users. We use "N. Virginia (us-east-1)".
4. Select your preferred options. In this example, we will select "TLS (SSL) Enabled" so that connections to the database are encrypted and "Eviction" so that older data will be purged when we run out of space.
5. Click **Create** to provision the Redis database.
### Retrieve your Upstash URL and token
On your Upstash Redis page, scroll down to the **REST API** section of the page.
Click on the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` buttons to copy their respective values to your clipboard. Paste the copied values to a safe location so that you can reference them later when testing and deploying your application.
Alternatively, you can click on the `@upstash/redis` tab to view a code snippet:
```javascript theme={"system"}
import { Redis } from "@upstash/redis";
const redis = new Redis({
url: "",
token: "",
});
const data = await redis.set("foo", "bar");
```
When you copy the code block using the provided copy button, the code snippet, along with your database's URL and access token will be copied to your clipboard. While this works well for private or demonstration code, it generally isn't good practice to hard-code sensitive data like tokens within your application. To avoid this, we will configure the application to get these values from the environment instead.
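As a sketch of that pattern, a small helper can read and validate the two variables up front. The helper name is ours, not part of the SDK — `Redis.fromEnv()` from `@upstash/redis` performs an equivalent lookup internally:

```typescript theme={"system"}
// Read Upstash credentials from the environment, failing loudly if either is missing.
export function readUpstashEnv(env: Record<string, string | undefined> = process.env) {
  const url = env.UPSTASH_REDIS_REST_URL;
  const token = env.UPSTASH_REDIS_REST_TOKEN;
  if (!url) throw new Error("UPSTASH_REDIS_REST_URL is not set");
  if (!token) throw new Error("UPSTASH_REDIS_REST_TOKEN is not set");
  return { url, token };
}
```

Failing at startup with a named variable is easier to debug than a connection error deep inside a request handler.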
## Create a demo application
Next, you can create a simple Node.js application that uses your Upstash Redis database. The application will use the [Express](https://expressjs.com/) web framework to build and serve a simple page and Upstash's own [`@upstash/redis`](/redis/sdks/ts/overview) package to connect to the database.
### Install the dependencies
Create a new directory for your demo application and navigate to the new location:
```bash theme={"system"}
mkdir example-koyeb-upstash
cd example-koyeb-upstash
```
Within the new directory, generate a `package.json` file for the new project using the default settings:
```bash theme={"system"}
npm init -y
```
Next, install the `@upstash/redis` package so that you can connect to your Redis database from within the application and the `express` package so that we can build a basic web application:
```bash theme={"system"}
npm install @upstash/redis express
```
### Create the application file
Now, create a new file called `index.js` with the following contents:
```javascript theme={"system"}
// Note: if you are using Node.js version 17 or lower,
// change the first line to the following:
// const { Redis } = require ("@upstash/redis/with-fetch");
const { Redis } = require("@upstash/redis");
const express = require("express");
const app = express();
const redis = Redis.fromEnv();
app.get("/", async (req, res) => {
const value = await redis.get("counter");
await redis.set("counter", parseInt(value || 0) + 1);
res.send(`Counter: ${value || 0}`);
});
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
});
```
**Note:** If you are running Node.js version 17 or lower, you need to adjust the first line of the app to import from `@upstash/redis/with-fetch` instead of `@upstash/redis`. Node.js versions prior to 18 did not natively support the `fetch` API, so the alternative import path supplies that functionality.
The code above introduces a simple `counter` key to your Redis database. It uses this key to store the number of times the page has been accessed and displays that value on the page.
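As an aside, the read-then-set in the handler is not atomic: two overlapping requests can read the same value, and one increment is lost. Redis's `INCR` performs the read-modify-write server-side in a single step. A minimal sketch — the `client` parameter stands in for the `@upstash/redis` instance, so any object with an `incr` method works:

```typescript theme={"system"}
// Anything exposing incr() can be passed in: the real Upstash client or a test stub.
type CounterClient = { incr: (key: string) => Promise<number> };

// Atomically bump the counter and return the new value.
// INCR creates the key at 0 if it does not exist yet.
export async function bumpCounter(client: CounterClient, key = "counter"): Promise<number> {
  return client.incr(key);
}
```

For a demo counter the race is harmless, but for anything where lost updates matter, prefer the atomic command.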
### Add the run scripts
Finally, edit the `package.json` file to define the scripts used to run the application. The `dev` script runs the application in debug mode while the `start` script starts the application normally:
```diff theme={"system"}
{
"name": "example-koyeb-upstash",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
+ "dev": "DEBUG=express:* node index.js",
+ "start": "node index.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"@upstash/redis": "^1.20.6",
"express": "^4.18.2"
}
}
```
## Run the demo application locally
Now that the project is set up, you can run the application locally to verify that it functions correctly.
In your shell, set and export the variables you copied from your Upstash Redis page:
```bash theme={"system"}
export UPSTASH_REDIS_REST_URL=""
export UPSTASH_REDIS_REST_TOKEN=""
```
In the same terminal, you should now be able to test your application by typing:
```bash theme={"system"}
npm run dev
```
The application server should start in debug mode, printing information about the process to the display. In your browser, navigate to `127.0.0.1:3000` to see your application. It should show the counter and number of visits you've made: "Counter: 0". The number should increase by one every time you refresh the page.
Press CTRL-c to stop the application when you are finished.
## Deploy the application to Koyeb using git-driven deployment
Once you've verified that the project runs locally, create a new Git repository to save your work.
Run the following commands to create a new Git repository within the project's root directory, commit the project files, and push changes to GitHub. Remember to replace the values of `` and `` with your own information:
```bash theme={"system"}
git init
echo 'node_modules' >> .gitignore
git add .
git commit -m "Initial commit"
git remote add origin git@github.com:/.git
git push -u origin main
```
You can deploy the demo application to Koyeb and connect it to the Upstash Redis database using the [control panel](#via-the-koyeb-control-panel) or via the [Koyeb CLI](#via-the-koyeb-cli).
### Via the Koyeb control panel
To deploy the application using the [control panel](https://app.koyeb.com/), follow these steps:
1. Click **Create App** in the Koyeb control panel.
2. Select **GitHub** as the deployment option.
3. Choose the GitHub **repository** and **branch** containing your application code.
4. Name your service, for example `upstash-service`.
5. Click **Advanced** to view additional options. Under **Environment variables**, click **Add Variable** to add two new variables called `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN`. Populate them with the values you copied for your Upstash Redis database.
6. Name the App, for example `upstash-demo`.
7. Click the **Deploy** button.
A Koyeb App and Service will be created. Your application will be built and deployed to Koyeb. Once the build has finished, you will be able to access your application running on Koyeb by clicking the URL ending with `.koyeb.app`.
### Via the Koyeb CLI
To deploy the example application using the [Koyeb CLI](https://www.koyeb.com/docs/cli/installation), run the following command in your terminal:
```bash theme={"system"}
koyeb app init example-koyeb-upstash \
--git github.com// \
--git-branch main \
--ports 3000:http \
--routes /:3000 \
--env PORT=3000 \
--env UPSTASH_REDIS_REST_URL="" \
--env UPSTASH_REDIS_REST_TOKEN=""
```
*Make sure to replace `/` with your GitHub username and repository name and replace `` and `` with the values copied from your Upstash Redis page.*
#### Access deployment logs
To track the app deployment and view the build logs, execute the following command:
```bash theme={"system"}
koyeb service logs example-koyeb-upstash/example-koyeb-upstash -t build
```
#### Access your app
Once the deployment of your application has finished, you can retrieve the public domain to access your application by running the following command:
```bash theme={"system"}
$ koyeb app get example-koyeb-upstash
ID NAME DOMAINS CREATED AT
85c78d9a example-koyeb-upstash ["example-koyeb-upstash-myorg.koyeb.app"] 31 May 23 13:08 UTC
```
#### Access runtime logs
With your app running, you can track the runtime logs by running the following command:
```bash theme={"system"}
koyeb service logs example-koyeb-upstash/example-koyeb-upstash -t runtime
```
## Deploy the application to Koyeb using a pre-built container
As an alternative to using git-driven deployment, you can deploy a pre-built container from any public or private registry. This can be useful if your application needs specific system dependencies or you need more control over how the build is performed.
To dockerize the application, start by adding a file called `.dockerignore` to the project's root directory. Paste the following contents to limit the files copied to the Docker image:
```
Dockerfile
.dockerignore
.git
node_modules
npm-debug.log
/.cache
.env
README.md
```
Afterwards, create a `Dockerfile` in your project root directory and copy the content below:
```dockerfile theme={"system"}
FROM node:18-alpine AS base
FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
FROM base AS runner
WORKDIR /app
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nodejs
COPY --from=deps /app/node_modules ./node_modules
COPY . .
USER nodejs
EXPOSE 3000
ENV PORT=3000
CMD ["npm", "run", "start"]
```
The Dockerfile above provides the minimum requirements to run the sample Node.js application. You can easily extend it depending on your needs.
*Be sure to set the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` environment variables to the values you copied from the Upstash console when you deploy the container in the Koyeb control panel.*
To build and push the Docker image to a registry and deploy it on Koyeb, refer to the [Deploy an app from a Docker image](https://www.koyeb.com/docs/quickstart/deploy-a-docker-application) documentation.
A Koyeb App and Service will be created. Your Docker image will be pulled and deployed to Koyeb. Once the deployment has finished, you will be able to access your application running on Koyeb by clicking the URL ending with `.koyeb.app`.
## Delete the example application and Upstash Redis database
To delete the example application and the Upstash Redis database and avoid incurring any charges, follow these steps:
* From the [Upstash console](https://console.upstash.com/), select your Redis database and scroll to the bottom of the **Details** page. Click **Delete this database** and follow the instructions.
* From the [Koyeb control panel](https://app.koyeb.com/), select your App. Click the **Settings** tab, and click the **Danger Zone**. Click **Delete App** and follow the instructions. Alternatively, from the CLI, you can delete your Koyeb App and service by typing `koyeb app delete example-koyeb-upstash`.
# Laravel
Source: https://upstash.com/docs/redis/quickstarts/laravel
## Project Setup
To get started, let’s create a new Laravel application. If you don’t have the Laravel CLI installed globally, install it first using Composer:
```shell theme={"system"}
composer global require laravel/installer
```
After installation, create your Laravel project:
```shell theme={"system"}
laravel new example-app
cd example-app
```
Alternatively, if you don’t want to install the Laravel CLI, you can create a project using Composer:
```shell theme={"system"}
composer create-project laravel/laravel example-app
cd example-app
```
## Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com). Go to the **Connect to your database** section and click on Laravel. Copy those values into your .env file:
```shell .env theme={"system"}
REDIS_HOST=""
REDIS_PORT=6379
REDIS_PASSWORD=""
```
## Framework Integration
Upstash Redis integrates seamlessly with Laravel, allowing it to be used as a driver for multiple framework components.
### Interact with Redis
The Redis Facade in Laravel provides a convenient way to interact with your Redis database. For example:
```php theme={"system"}
use Illuminate\Support\Facades\Redis;
// Storing a value in Redis
Redis::set('key', 'value');
// Retrieving a value from Redis
$value = Redis::get('key');
```
This can be particularly useful for simple caching or temporary data storage.
### Cache
To use Upstash Redis as your caching driver, update the CACHE\_STORE in your .env file:
```shell .env theme={"system"}
CACHE_STORE="redis"
REDIS_CACHE_DB="0"
```
With this configuration, you can use Laravel’s caching functions, such as:
```php theme={"system"}
Cache::put('key', 'value', now()->addMinutes(10));
$value = Cache::get('key');
```
For more advanced cache configurations, see the [Laravel Cache Documentation](https://laravel.com/docs/cache).
### Session
Laravel can store session data in Upstash Redis. To enable this, set the SESSION\_DRIVER in your .env file:
```shell .env theme={"system"}
SESSION_DRIVER="redis"
```
This ensures that session data is stored in your Upstash Redis database, providing fast and reliable session management.
### Queue
Upstash Redis can also serve as a driver for Laravel’s queue system, enabling job processing. To configure this, update the QUEUE\_CONNECTION in your .env file:
```shell .env theme={"system"}
QUEUE_CONNECTION="redis"
```
For detailed queue configurations and usage, refer to the [Laravel Queues Documentation](https://laravel.com/docs/queues).
# App Router
Source: https://upstash.com/docs/redis/quickstarts/nextjs-app-router
***
## Quickstart: Upstash Redis in Next 15
***
## 1. Install package
In your Next.js app, install our `@upstash/redis` package:
```bash theme={"system"}
npm install @upstash/redis
```
***
## 2. Connect to Redis
1. Grab your Redis credentials from the Upstash dashboard
2. Paste them into your Next environment variables:
```bash title=".env" theme={"system"}
UPSTASH_REDIS_REST_URL=https://holy-kite-17499.upstash.io
UPSTASH_REDIS_REST_TOKEN=AURbAAIncDEyYjM4M...
```
3. Create a Redis instance, for example in `lib/redis.ts`
```typescript title="lib/redis.ts" theme={"system"}
import { Redis } from "@upstash/redis"
// 👇 we can now import our redis client anywhere we need it
export const redis = new Redis({
url: process.env.UPSTASH_REDIS_REST_URL,
token: process.env.UPSTASH_REDIS_REST_TOKEN,
})
```
***
## 3. Using our Redis Client
We can now connect to Upstash Redis from any server component or API route. For example:
```typescript title="app/page.tsx" theme={"system"}
import { redis } from "@/lib/redis"
// 👇 server component
const Page = async () => {
const count = await redis.get("count")
return
count: {count}
}
export default Page
```
Because this `count` doesn't exist yet, let's create a Next API route to populate it.
***
## 4. Storing data in Redis
Let's create a super simple API that, every time it's called, increments an integer value we call `count`. This is the same value we display on our page above:
```typescript title="app/api/counter/route.ts" theme={"system"}
import { redis } from "@/lib/redis"
export const POST = async () => {
await redis.incr("count")
return new Response("OK")
}
```
Perfect! Every time we now call this API, we increment the count in our Redis database:
The server component fetches the most recent count at render-time and displays the up-to-date value automatically. For a video demo, check the video at the top of this article.
***
## Examples
You can find the project source code on GitHub.
If you're already on Vercel, you can create Upstash projects directly through Vercel: [Read more](../howto/vercelintegration).
# Pages Router
Source: https://upstash.com/docs/redis/quickstarts/nextjs-pages-router
You can find the project source code on GitHub.
### Project Setup
Let's create a new Next.js application with Pages Router and install `@upstash/redis` package.
```shell theme={"system"}
npx create-next-app@latest
cd my-app
npm install @upstash/redis
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
```shell .env theme={"system"}
UPSTASH_REDIS_REST_URL=
UPSTASH_REDIS_REST_TOKEN=
```
### Home Page Setup
Update `/pages/index.tsx`:
```tsx /pages/index.tsx theme={"system"}
import type { InferGetServerSidePropsType, GetServerSideProps } from 'next'
import { Redis } from "@upstash/redis";
const redis = Redis.fromEnv();
export const getServerSideProps = (async () => {
const count = await redis.incr("counter");
return { props: { count } }
}) satisfies GetServerSideProps<{ count: number }>
export default function Home({
count,
}: InferGetServerSidePropsType<typeof getServerSideProps>) {
return (
<h1>Counter: {count}</h1>
)
}
```
### Run & Deploy
Run the app locally with `npm run dev`, check `http://localhost:3000/`
Deploy your app with `vercel`
You can also integrate your Vercel projects with Upstash using Vercel
Integration module. Check [this article](../howto/vercelintegration).
# AWS Lambda
Source: https://upstash.com/docs/redis/quickstarts/python-aws-lambda
You can find the project source code on GitHub.
### Prerequisites
* Complete all steps in [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html)
### Project Setup
Create and navigate to a directory named `counter-cdk`. CDK CLI uses this directory name to name things in your CDK code, so if you decide to use a different name, don't forget to make the appropriate changes when applying this tutorial.
```shell theme={"system"}
mkdir counter-cdk && cd counter-cdk
```
Initialize a new CDK project.
```shell theme={"system"}
cdk init app --language typescript
```
### Counter Function Setup
Create a folder named `api` under `lib`
```shell theme={"system"}
mkdir lib/api
```
Create `/lib/api/requirements.txt`
```txt /lib/api/requirements.txt theme={"system"}
upstash-redis
```
Create `/lib/api/index.py`
```py /lib/api/index.py theme={"system"}
from upstash_redis import Redis
redis = Redis.from_env()
def handler(event, context):
count = redis.incr('counter')
return {
'statusCode': 200,
'body': f'Counter: {count}'
}
```
### Counter Stack Setup
Update `/lib/counter-cdk-stack.ts`
```ts /lib/counter-cdk-stack.ts theme={"system"}
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as path from 'path';
export class CounterCdkStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const counterFunction = new lambda.Function(this, 'CounterFunction', {
code: lambda.Code.fromAsset(path.join(__dirname, 'api'), {
bundling: {
image: lambda.Runtime.PYTHON_3_9.bundlingImage,
command: [
'bash', '-c',
'pip install -r requirements.txt -t /asset-output && cp -au . /asset-output'
],
},
}),
runtime: lambda.Runtime.PYTHON_3_9,
handler: 'index.handler',
environment: {
UPSTASH_REDIS_REST_URL: process.env.UPSTASH_REDIS_REST_URL || '',
UPSTASH_REDIS_REST_TOKEN: process.env.UPSTASH_REDIS_REST_TOKEN || '',
},
});
const counterFunctionUrl = counterFunction.addFunctionUrl({
authType: lambda.FunctionUrlAuthType.NONE,
});
new cdk.CfnOutput(this, "counterFunctionUrlOutput", {
value: counterFunctionUrl.url,
})
}
}
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and export `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your environment.
```shell theme={"system"}
export UPSTASH_REDIS_REST_URL=
export UPSTASH_REDIS_REST_TOKEN=
```
### Deploy
Run in the top folder:
```shell theme={"system"}
cdk synth
cdk bootstrap
cdk deploy
```
Visit the output url.
# SST v2
Source: https://upstash.com/docs/redis/quickstarts/sst-v2
You can find the project source code on GitHub.
### Prerequisites
You need to have AWS credentials configured locally.
1. [Create an AWS account](https://aws.amazon.com/)
2. [Create an IAM user](https://sst.dev/chapters/create-an-iam-user.html)
3. [Configure the AWS CLI](https://sst.dev/chapters/configure-the-aws-cli.html)
### Project Setup
Let's create a new SST + Next.js application.
```shell theme={"system"}
npx create-sst@latest --template standard/nextjs
cd my-sst-app
npm install
```
Install the `@upstash/redis` package.
```shell theme={"system"}
npm install @upstash/redis
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
```shell theme={"system"}
npx sst secrets set UPSTASH_REDIS_REST_URL
npx sst secrets set UPSTASH_REDIS_REST_TOKEN
```
### Bind the Secrets
```ts /stacks/Default.ts theme={"system"}
import { Config, StackContext, NextjsSite } from "sst/constructs";
export function Default({ stack }: StackContext) {
const UPSTASH_REDIS_REST_URL = new Config.Secret(stack, "UPSTASH_REDIS_REST_URL");
const UPSTASH_REDIS_REST_TOKEN = new Config.Secret(stack, "UPSTASH_REDIS_REST_TOKEN");
const site = new NextjsSite(stack, "site", {
bind: [UPSTASH_REDIS_REST_URL, UPSTASH_REDIS_REST_TOKEN],
path: "packages/web",
});
stack.addOutputs({
SiteUrl: site.url,
});
}
```
### API Setup
```ts /packages/web/pages/api/hello.ts theme={"system"}
import { Redis } from "@upstash/redis";
import type { NextApiRequest, NextApiResponse } from "next";
import { Config } from "sst/node/config";
const redis = new Redis({
url: Config.UPSTASH_REDIS_REST_URL,
token: Config.UPSTASH_REDIS_REST_TOKEN,
});
export default async function handler(
req: NextApiRequest,
res: NextApiResponse,
) {
const count = await redis.incr("counter");
res.status(200).json({ count });
}
```
### Run
Run the SST app.
```shell theme={"system"}
npm run dev
```
When prompted, run the Next.js app.
```shell theme={"system"}
cd packages/web
npm run dev
```
Check `http://localhost:3000/api/hello`
### Deploy
Set the secrets for the prod stage.
```shell theme={"system"}
npx sst secrets set --stage prod UPSTASH_REDIS_REST_URL
npx sst secrets set --stage prod UPSTASH_REDIS_REST_TOKEN
```
Deploy with SST.
```shell theme={"system"}
npx sst deploy --stage prod
```
Check `/api/hello` with the given SiteUrl.
# Supabase Functions
Source: https://upstash.com/docs/redis/quickstarts/supabase
The below is an example for a Redis counter that stores a
[hash](https://redis.io/commands/hincrby/) of Supabase function invocation count
per region.
## Redis database setup
Create a Redis database using the
[Upstash Console](https://console.upstash.com/) or
[Upstash CLI](https://github.com/upstash/cli).
Select the `Global` type to minimize the latency from all edge locations. Copy
the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your .env file.
You'll find them under **Details > REST API > .env**.
```shell theme={"system"}
cp supabase/functions/upstash-redis-counter/.env.example supabase/functions/upstash-redis-counter/.env
```
## Code
Make sure you have the latest version of the
[Supabase CLI installed](https://supabase.com/docs/guides/cli#installation).
Create a new function in your project:
```shell theme={"system"}
supabase functions new upstash-redis-counter
```
And add the code to the `index.ts` file:
```ts index.ts theme={"system"}
import { serve } from "https://deno.land/std@0.177.0/http/server.ts";
import { Redis } from "https://deno.land/x/upstash_redis@v1.19.3/mod.ts";
console.log(`Function "upstash-redis-counter" up and running!`);
serve(async (_req) => {
try {
const redis = new Redis({
url: Deno.env.get("UPSTASH_REDIS_REST_URL")!,
token: Deno.env.get("UPSTASH_REDIS_REST_TOKEN")!,
});
const deno_region = Deno.env.get("DENO_REGION");
if (deno_region) {
// Increment region counter
await redis.hincrby("supa-edge-counter", deno_region, 1);
} else {
// Increment localhost counter
await redis.hincrby("supa-edge-counter", "localhost", 1);
}
// Get all values
const counterHash: Record<string, number> | null = await redis.hgetall(
"supa-edge-counter"
);
const counters = Object.entries(counterHash!)
.sort(([, a], [, b]) => b - a) // sort desc
.reduce(
(r, [k, v]) => ({
total: r.total + v,
regions: { ...r.regions, [k]: v },
}),
{
total: 0,
regions: {},
}
);
return new Response(JSON.stringify({ counters }), { status: 200 });
} catch (error) {
return new Response(JSON.stringify({ error: error.message }), {
status: 200,
});
}
});
```
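The `reduce` at the end of the handler can be read in isolation: it folds the `{ region: count }` hash into a total plus a per-region breakdown, sorted by count descending. A standalone sketch of that step:

```typescript theme={"system"}
// Fold a region→count hash into { total, regions }, highest counts first.
export function aggregateCounters(hash: Record<string, number>) {
  return Object.entries(hash)
    .sort(([, a], [, b]) => b - a) // sort entries by count, descending
    .reduce(
      (acc, [region, count]) => ({
        total: acc.total + count,
        regions: { ...acc.regions, [region]: count },
      }),
      { total: 0, regions: {} as Record<string, number> }
    );
}
```

Building `regions` from the sorted entries means its keys come out in descending-count order, which is what the response relies on.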
## Run locally
```bash theme={"system"}
supabase start
supabase functions serve upstash-redis-counter --no-verify-jwt --env-file supabase/functions/upstash-redis-counter/.env
```
Navigate to [http://localhost:54321/functions/v1/upstash-redis-counter](http://localhost:54321/functions/v1/upstash-redis-counter).
## Deploy
```bash theme={"system"}
supabase functions deploy upstash-redis-counter --no-verify-jwt
supabase secrets set --env-file supabase/functions/upstash-redis-counter/.env
```
# App Router
Source: https://upstash.com/docs/redis/quickstarts/vercel-functions-app-router
You can find the project source code on GitHub.
### Project Setup
Let's create a new Next.js application with App Router and install the `@upstash/redis` package.
```shell theme={"system"}
npx create-next-app@latest
cd my-app
npm install @upstash/redis
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
```shell .env theme={"system"}
UPSTASH_REDIS_REST_URL=
UPSTASH_REDIS_REST_TOKEN=
```
### Function Setup
This is a Vercel Serverless Function. If you want to use Edge Runtime, you can add the `export const runtime = 'edge'` line to this Route Handler.
Create `/app/api/hello/route.ts`:
```ts /app/api/hello/route.ts theme={"system"}
import { Redis } from "@upstash/redis";
import { NextResponse } from "next/server";
const redis = Redis.fromEnv();
export async function GET() {
const count = await redis.incr("counter");
return NextResponse.json({ count });
}
export const dynamic = 'force-dynamic'
```
### Run & Deploy
Run the app locally with `npm run dev` and check `http://localhost:3000/api/hello`.
Deploy your app with `vercel`.
You can also integrate your Vercel projects with Upstash using the Vercel
Integration module. Check [this article](../howto/vercelintegration).
# Pages Router
Source: https://upstash.com/docs/redis/quickstarts/vercel-functions-pages-router
You can find the project source code on GitHub.
This is a quickstart for Vercel Serverless Functions. If you want to use Edge Runtime, Vercel recommends incrementally adopting the App Router.
### Project Setup
Let's create a new Next.js application with Pages Router and install the `@upstash/redis` package.
```shell theme={"system"}
npx create-next-app@latest
cd my-app
npm install @upstash/redis
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
```shell .env theme={"system"}
UPSTASH_REDIS_REST_URL=
UPSTASH_REDIS_REST_TOKEN=
```
### Function Setup
Update `/pages/api/hello.ts`:
```ts /pages/api/hello.ts theme={"system"}
import { Redis } from "@upstash/redis";
import type { NextApiRequest, NextApiResponse } from "next";
const redis = Redis.fromEnv();
export default async function handler(
req: NextApiRequest,
res: NextApiResponse,
) {
const count = await redis.incr("counter");
res.status(200).json({ count });
}
```
### Run & Deploy
Run the app locally with `npm run dev` and check `http://localhost:3000/api/hello`.
Deploy your app with `vercel`.
You can also integrate your Vercel projects with Upstash using the Vercel
Integration module. Check [this article](../howto/vercelintegration).
# Vercel Python Runtime
Source: https://upstash.com/docs/redis/quickstarts/vercel-python-runtime
You can find the project source code on GitHub.
This quickstart uses Django, but you can easily adapt it to Flask, FastAPI, or plain Python; see [Vercel Python Templates](https://vercel.com/templates?framework=python).
### Project Setup
Let's create a new Django application from Vercel's template.
```shell theme={"system"}
npx create-next-app vercel-django --example "https://github.com/vercel/examples/tree/main/python/django"
cd vercel-django
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and export `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your environment.
```shell theme={"system"}
export UPSTASH_REDIS_REST_URL=
export UPSTASH_REDIS_REST_TOKEN=
```
### Environment Setup
Update `requirements.txt` to include `upstash-redis`.
```txt requirements.txt theme={"system"}
Django==4.1.3
upstash-redis
```
We will create a Conda environment with Python version `3.12` to match the Vercel Python Runtime and avoid conflicts on deployment. You can use any other environment management system.
```shell theme={"system"}
conda create --name vercel-django python=3.12
conda activate vercel-django
pip install -r requirements.txt
```
### View Setup
Update `/example/views.py`:
```py /example/views.py theme={"system"}
from datetime import datetime
from django.http import HttpResponse
from upstash_redis import Redis
redis = Redis.from_env()
def index(request):
count = redis.incr('counter')
html = f'''
Counter: { count }
'''
return HttpResponse(html)
```
### Run & Deploy
Run the app locally with `python manage.py runserver` and check `http://localhost:8000/`.
Deploy your app with `vercel`.
Set `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` in your project's Settings -> Environment Variables. Redeploy from Deployments tab.
You can also integrate your Vercel projects with Upstash using the Vercel
Integration module. Check [this article](../howto/vercelintegration).
# ECHO
Source: https://upstash.com/docs/redis/sdks/py/commands/auth/echo
Returns a message back to you. Useful for debugging the connection.
## Arguments
A message to send to the server.
## Response
The same message you sent.
```py Example theme={"system"}
assert redis.echo("hello world") == "hello world"
```
# PING
Source: https://upstash.com/docs/redis/sdks/py/commands/auth/ping
Send a ping to the server and get a response if the server is alive.
## Arguments
No arguments
## Response
`PONG`
```py Example theme={"system"}
assert redis.ping() == "PONG"
```
# BITCOUNT
Source: https://upstash.com/docs/redis/sdks/py/commands/bitmap/bitcount
Count the number of set bits.
The `BITCOUNT` command in Redis is used to count the number of set bits (bits with a value of 1) in a range of bytes within a key that is stored as a binary string. It is primarily used for bit-level operations on binary data stored in Redis.
## Arguments
The key to count the set bits in.
The byte index to start counting at. If not provided, set bits are counted in the entire string.
Either specify both `start` and `end` or neither.
The byte index to stop counting at (inclusive). If not provided, set bits are counted in the entire string.
Either specify both `start` and `end` or neither.
## Response
The number of set bits in the specified range.
```py Example theme={"system"}
redis.setbit("mykey", 7, 1)
redis.setbit("mykey", 8, 1)
redis.setbit("mykey", 9, 1)
# With range
assert redis.bitcount("mykey", 0, 10) == 3
# Without range
assert redis.bitcount("mykey") == 3
```
# BITFIELD
Source: https://upstash.com/docs/redis/sdks/py/commands/bitmap/bitfield
Sets or gets parts of a bitfield
The `bitfield` function returns a `BitFieldCommands` instance that can be used
to execute multiple bitfield operations in a single command.
The encoding can be a signed or unsigned integer, by prefixing the type with
`i` or `u`. For example, `i4` is a signed 4-bit integer, and `u8` is an
unsigned 8-bit integer.
```py theme={"system"}
redis.set("mykey", "")

# .set writes the first 4 bits and returns the old value,
# .incr increments the next 4 bits and returns the new value
result = redis.bitfield("mykey") \
    .set("u4", 0, 16) \
    .incr("u4", 4, 1) \
    .execute()

assert result == [0, 1]
```
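To make the offset arithmetic concrete, here is a plain-Python sketch (an illustration, not part of the SDK) of how an unsigned field of a given bit width is read out of a binary string, counting bits from the most significant bit of the first byte the way Redis does:

```py theme={"system"}
def get_unsigned_field(data: bytes, width: int, offset: int) -> int:
    """Read an unsigned integer of `width` bits starting at bit `offset`."""
    value = 0
    for i in range(width):
        # Locate the byte and the bit inside it, MSB-first
        byte_index, bit_index = divmod(offset + i, 8)
        bit = (data[byte_index] >> (7 - bit_index)) & 1
        value = (value << 1) | bit
    return value

# Mirrors redis.bitfield("mykey").get("u8", 8) when "mykey" holds b"\x05\x06\x07"
assert get_unsigned_field(b"\x05\x06\x07", 8, 8) == 6
```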
## Commands
### `get(type: str, offset: int)`
Returns a value from the bitfield at the given offset.
### `set(type: str, offset: int, value: int)`
Sets a value and returns the old value.
### `incr(type: str, offset: int, increment: int)`
Increments a value and returns the new value.
## Arguments
The string key to operate on.
## Response
A list of integers, one for each operation.
```py Get theme={"system"}
redis.set("mykey", "\x05\x06\x07")
result = redis.bitfield("mykey") \
.get("u8", 0) \
.get("u8", 8) \
.get("u8", 16) \
.execute()
assert result == [5, 6, 7]
```
```py Set theme={"system"}
redis.set("mykey", "")
result = redis.bitfield("mykey") \
.set("u4", 0, 16) \
.set("u4", 4, 1) \
.execute()
assert result == [0, 1]
```
```py Incr theme={"system"}
redis.set("mykey", "")
# Increment offset 0 by 16, return
# Increment offset 4 by 1
result = redis.bitfield("mykey") \
.incr("u4", 0, 16) \
.incr("u4", 4, 1) \
.execute()
assert result == [0, 1]
```
# BITOP
Source: https://upstash.com/docs/redis/sdks/py/commands/bitmap/bitop
Perform bitwise operations between strings.
The `BITOP` command in Redis is used to perform bitwise operations on multiple keys (or Redis strings) and store the result in a destination key. It is primarily used for performing logical AND, OR, XOR, and NOT operations on binary data stored in Redis.
## Arguments
Specifies the type of bitwise operation to perform, which can be one of the
following: `AND`, `OR`, `XOR`, or `NOT`.
The key to store the result of the operation in.
One or more keys to perform the operation on.
## Response
The size of the string stored in the destination key.
```py Example theme={"system"}
# key1 = 10000000 (bit 0 is the most significant bit)
# key2 = 01000000
redis.setbit("key1", 0, 1)
redis.setbit("key2", 0, 0)
redis.setbit("key2", 1, 1)
assert redis.bitop("AND", "dest", "key1", "key2") == 1
# result = 00000000
assert redis.getbit("dest", 0) == 0
assert redis.getbit("dest", 1) == 0
```
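Under the hood this is ordinary bytewise logic. As a plain-Python illustration (not part of the SDK) of what `AND` computes, with shorter operands zero-padded to the longest one as Redis does:

```py theme={"system"}
import functools
import operator

def bitop_and(*values: bytes) -> bytes:
    """Bytewise AND of several binary strings, zero-padding the shorter ones."""
    longest = max(len(v) for v in values)
    padded = [v.ljust(longest, b"\x00") for v in values]
    # AND each column of bytes together
    return bytes(functools.reduce(operator.and_, column) for column in zip(*padded))

# 00000001 AND 00000011 -> 00000001
assert bitop_and(b"\x01", b"\x03") == b"\x01"
```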
# BITPOS
Source: https://upstash.com/docs/redis/sdks/py/commands/bitmap/bitpos
Find the position of the first set or clear bit (bit with a value of 1 or 0) in a Redis string key.
## Arguments
The key to search in.
The bit to search for: `1` or `0`.
The byte index to start searching at.
The byte index to stop searching at.
## Response
The index of the first occurrence of the bit in the string.
```py Example theme={"system"}
redis.setbit("mykey", 7, 1)
redis.setbit("mykey", 8, 1)
assert redis.bitpos("mykey", 1) == 7
assert redis.bitpos("mykey", 0) == 0
# With a range
assert redis.bitpos("mykey", 1, 0, 2) == 7
assert redis.bitpos("mykey", 1, 2, 3) == -1
```
```py With Range theme={"system"}
redis.bitpos("key", 1, 5, 20)
```
# GETBIT
Source: https://upstash.com/docs/redis/sdks/py/commands/bitmap/getbit
Retrieve a single bit.
## Arguments
The key of the bitset
Specify the offset at which to get the bit.
## Response
The bit value stored at offset.
```py Example theme={"system"}
bit = redis.getbit(key, 4)
```
# SETBIT
Source: https://upstash.com/docs/redis/sdks/py/commands/bitmap/setbit
Set a single bit in a string.
## Arguments
The key of the bitset
Specify the offset at which to set the bit.
The bit to set
## Response
The original bit value stored at offset.
```py Example theme={"system"}
original_bit = redis.setbit(key, 4, 1)
```
# DEL
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/del
Removes the specified keys. A key is ignored if it does not exist.
## Arguments
One or more keys to remove.
## Response
The number of keys that were removed.
```py Example theme={"system"}
redis.set("key1", "Hello")
redis.set("key2", "World")
redis.delete("key1", "key2")
assert redis.get("key1") is None
assert redis.get("key2") is None
```
# EXISTS
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/exists
Check if a key exists.
## Arguments
One or more keys to check.
## Response
The number of keys that exist
```py Example theme={"system"}
redis.set("key1", "Hello")
redis.set("key2", "World")
assert redis.exists("key1", "key2") == 2
redis.delete("key1")
assert redis.exists("key1", "key2") == 1
```
# EXPIRE
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/expire
Sets a timeout on a key. After the timeout has expired, the key will automatically be deleted.
## Arguments
The key to set the timeout on.
The timeout in seconds as int or datetime.timedelta object
Set expiry only when the key has no expiry
Set expiry only when the key has an existing expiry
Set expiry only when the new expiry is greater than current one
Set expiry only when the new expiry is less than current one
## Response
`True` if the timeout was set
```py Example theme={"system"}
# With seconds
redis.set("mykey", "Hello")
redis.expire("mykey", 5)
assert redis.get("mykey") == "Hello"
time.sleep(5)
assert redis.get("mykey") is None
# With a timedelta
redis.set("mykey", "Hello")
redis.expire("mykey", datetime.timedelta(seconds=5))
```
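The four condition flags (`nx`, `xx`, `gt`, `lt`) are mutually exclusive. As a rough plain-Python sketch of the server-side rule (an illustration, not part of the SDK; a key without an expiry is treated as never expiring for `gt`/`lt`):

```py theme={"system"}
def should_set_expiry(current_ttl, new_ttl, nx=False, xx=False, gt=False, lt=False):
    """Decide whether the expiry applies. `current_ttl` is None when no expiry is set."""
    if nx:
        # Only when the key has no expiry yet
        return current_ttl is None
    if xx and current_ttl is None:
        # Only when the key already has an expiry
        return False
    if gt:
        # A key without expiry never expires, so GT never applies to it
        return current_ttl is not None and new_ttl > current_ttl
    if lt:
        # LT always applies to a key without expiry
        return current_ttl is None or new_ttl < current_ttl
    return True
```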
# EXPIREAT
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/expireat
Sets a timeout on a key at a specific Unix timestamp. After the timeout has expired, the key will automatically be deleted.
## Arguments
The key to set the timeout on.
The timeout in unix seconds timestamp as int or a datetime.datetime object.
Set expiry only when the key has no expiry
Set expiry only when the key has an existing expiry
Set expiry only when the new expiry is greater than current one
Set expiry only when the new expiry is less than current one
## Response
`True` if the timeout was set
```py Example theme={"system"}
# With a datetime object
redis.set("mykey", "Hello")
redis.expireat("mykey", datetime.datetime.now() + datetime.timedelta(seconds=5))
# With a unix timestamp
redis.set("mykey", "Hello")
redis.expireat("mykey", int(time.time()) + 5)
```
# KEYS
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/keys
Returns all keys matching pattern.
This command may block the DB for a long time, depending on its size. We advise against using it in production. Use [SCAN](/redis/sdks/py/commands/generic/scan) instead.
## Arguments
A glob-style pattern. Use `*` to match all keys.
## Response
Array of keys matching the pattern.
```py Example theme={"system"}
keys = redis.keys("prefix*")
```
```py Match All theme={"system"}
keys = redis.keys("*")
```
# PERSIST
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/persist
Remove any timeout set on the key.
## Arguments
The key to persist
## Response
`True` if the timeout was removed
```py Example theme={"system"}
redis.set("key1", "Hello")
redis.expire("key1", 10)
assert redis.ttl("key1") == 10
redis.persist("key1")
assert redis.ttl("key1") == -1
```
# PEXPIRE
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/pexpire
Sets a timeout on key. After the timeout has expired, the key will automatically be deleted.
## Arguments
The key to expire.
The timeout in milliseconds as int or datetime.timedelta
Set expiry only when the key has no expiry
Set expiry only when the key has an existing expiry
Set expiry only when the new expiry is greater than current one
Set expiry only when the new expiry is less than current one
## Response
`True` if the timeout was set
```py Example theme={"system"}
# With milliseconds
redis.set("mykey", "Hello")
redis.pexpire("mykey", 500)
# With a timedelta
redis.set("mykey", "Hello")
redis.pexpire("mykey", datetime.timedelta(milliseconds=500))
```
# PEXPIREAT
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/pexpireat
Sets a timeout on key. After the timeout has expired, the key will automatically be deleted.
## Arguments
The key to expire.
The timeout in unix milliseconds timestamp as int or a datetime.datetime object.
Set expiry only when the key has no expiry
Set expiry only when the key has an existing expiry
Set expiry only when the new expiry is greater than current one
Set expiry only when the new expiry is less than current one
## Response
`True` if the timeout was set
```py Example theme={"system"}
# With a unix timestamp
redis.set("mykey", "Hello")
redis.pexpireat("mykey", int(time.time() * 1000) + 5000)
# With a datetime object
redis.set("mykey", "Hello")
redis.pexpireat("mykey", datetime.datetime.now() + datetime.timedelta(seconds=5))
```
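Since the timestamp is in milliseconds, converting a `datetime` by hand is easy to get wrong; a small helper (an illustration, not part of the SDK) for the conversion:

```py theme={"system"}
import datetime

def to_unix_ms(dt: datetime.datetime) -> int:
    """Convert a datetime to a Unix timestamp in milliseconds."""
    return int(dt.timestamp() * 1000)
```

The result can then be passed directly as the `pexpireat` timestamp.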
# PTTL
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/pttl
Return the expiration in milliseconds of a key.
## Arguments
The key
## Response
The number of milliseconds until the key expires; negative if the key does not exist or does not have an expiration set.
```py Example theme={"system"}
redis.set("key1", "Hello")
assert redis.pttl("key1") == -1
redis.expire("key1", 1000)
assert redis.pttl("key1") > 0
redis.persist("key1")
assert redis.pttl("key1") == -1
```
# RANDOMKEY
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/randomkey
Returns a random key from the database.
## Arguments
No arguments
## Response
A random key from the database, or `None` when the database is empty.
```py Example theme={"system"}
assert redis.randomkey() is None
redis.set("key1", "Hello")
redis.set("key2", "World")
assert redis.randomkey() is not None
```
# RENAME
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/rename
Rename a key
Renames a key and overwrites the new key if it already exists.
Throws an exception if the key does not exist.
## Arguments
The original key.
A new name for the key.
## Response
`True` if key was renamed
```py Example theme={"system"}
redis.set("key1", "Hello")
redis.rename("key1", "key2")
assert redis.get("key1") is None
assert redis.get("key2") == "Hello"
# Renaming a nonexistent key throws an exception
redis.rename("nonexistent", "key3")
```
# RENAMENX
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/renamenx
Rename a key if it does not already exist.
Renames a key, only if the new key does not exist.
Throws an exception if the key does not exist.
## Arguments
The original key.
A new name for the key.
## Response
`True` if key was renamed
```py Example theme={"system"}
redis.set("key1", "Hello")
redis.set("key2", "World")
# Rename failed because "key2" already exists.
assert redis.renamenx("key1", "key2") == False
assert redis.renamenx("key1", "key3") == True
assert redis.get("key1") is None
assert redis.get("key2") == "World"
assert redis.get("key3") == "Hello"
```
# SCAN
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/scan
Scan the database for keys.
## Arguments
The cursor; use `0` in the beginning and then use the returned cursor for subsequent calls.
Glob-style pattern to filter by key names.
Number of keys to return per call.
Filter by type.
For example `string`, `hash`, `set`, `zset`, `list`, `stream`.
## Response
The new cursor and the keys as a tuple.
If the new cursor is `0` the iteration is complete.
Use the new cursor for subsequent calls.
```py Example theme={"system"}
# Get all keys
cursor = 0
results = []
while True:
cursor, keys = redis.scan(cursor, match="*")
results.extend(keys)
if cursor == 0:
break
```
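The cursor loop above is common enough to wrap in a small generator (a sketch, not part of the SDK, assuming the `match`, `count`, and `type` keyword arguments described above):

```py theme={"system"}
def scan_all(redis, match="*", count=None, type=None):
    """Yield every matching key, following the SCAN cursor until it returns 0."""
    cursor = 0
    while True:
        cursor, keys = redis.scan(cursor, match=match, count=count, type=type)
        yield from keys
        if cursor == 0:
            # Iteration is complete
            break
```

Callers can then simply iterate: `for key in scan_all(redis, match="prefix*"): ...`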
# TOUCH
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/touch
Alters the last access time of one or more keys
## Arguments
One or more keys.
## Response
The number of keys that were touched.
```py Example theme={"system"}
redis.touch("key1", "key2", "key3")
```
# TTL
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/ttl
Return the expiration in seconds of a key.
## Arguments
The key
## Response
The number of seconds until the key expires; negative if the key does not exist or does not have an expiration set.
```py Example theme={"system"}
# Get the TTL of a key
redis.set("my-key", "value")
assert redis.ttl("my-key") == -1
redis.expire("my-key", 10)
assert redis.ttl("my-key") > 0
# Non existent key
assert redis.ttl("non-existent-key") == -2
```
# TYPE
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/type
Get the type of a key.
## Arguments
The key to check.
## Response
The type of the key.
One of `string` | `list` | `set` | `zset` | `hash` | `none`
```py Example theme={"system"}
redis.set("key1", "Hello")
assert redis.type("key1") == "string"
redis.lpush("key2", "Hello")
assert redis.type("key2") == "list"
assert redis.type("non-existent-key") == "none"
```
# UNLINK
Source: https://upstash.com/docs/redis/sdks/py/commands/generic/unlink
Removes the specified keys. A key is ignored if it does not exist.
## Arguments
One or more keys to unlink.
## Response
The number of keys that were unlinked.
```py Basic theme={"system"}
assert redis.unlink("key1", "key2", "key3") == 3
```
# HDEL
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hdel
Deletes one or more hash fields.
## Arguments
The key of the hash.
One or more fields to delete.
## Response
The number of fields that were removed from the hash.
```py Example theme={"system"}
redis.hset("myhash", "field1", "Hello")
redis.hset("myhash", "field2", "World")
assert redis.hdel("myhash", "field1", "field2") == 2
```
# HEXISTS
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hexists
Checks if a field exists in a hash.
## Arguments
The key of the hash.
The field to check.
## Response
`True` if the hash contains `field`. `False` if the hash does not contain `field`, or `key` does not exist.
```py Example theme={"system"}
redis.hset("key", "field", "value")
assert redis.hexists("key", "field") == True
```
# HEXPIRE
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hexpire
Set a timeout on a hash field in seconds.
## Arguments
The key of the hash.
The field or list of fields within the hash to set the expiry for.
The timeout in seconds as an integer or a `datetime.timedelta` object.
Set expiry only when the field has no expiry. Defaults to `False`.
Set expiry only when the field has an existing expiry. Defaults to `False`.
Set expiry only when the new expiry is greater than the current one. Defaults to `False`.
Set expiry only when the new expiry is less than the current one. Defaults to `False`.
## Response
A list of integers indicating whether the expiry was successfully set.
* `-2` if the field does not exist in the hash or if key doesn't exist.
* `0` if the expiration was not set due to the condition.
* `1` if the expiration was successfully set.
* `2` if called with 0 seconds/milliseconds or a past Unix time.
For more details, see [HEXPIRE documentation](https://redis.io/commands/hexpire).
```py Example theme={"system"}
redis.hset(hash_name, field, value)
assert redis.hexpire(hash_name, field, 1) == [1]
```
# HEXPIREAT
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hexpireat
Sets an expiration time for field(s) in a hash in seconds since the Unix epoch.
## Arguments
The key of the hash.
The field or list of fields to set an expiration time for.
The expiration time as a Unix timestamp in seconds.
Set expiry only when the field has no expiry. Defaults to `False`.
Set expiry only when the field has an existing expiry. Defaults to `False`.
Set expiry only when the new expiry is greater than the current one. Defaults to `False`.
Set expiry only when the new expiry is less than the current one. Defaults to `False`.
## Response
A list of integers indicating whether the expiry was successfully set.
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `0` if the expiration was not set due to the condition.
* `1` if the expiration was successfully set.
* `2` if called with 0 seconds/milliseconds or a past Unix time.
For more details, see [HEXPIREAT documentation](https://redis.io/commands/hexpireat).
```py Example theme={"system"}
redis.hset(hash_name, field, value)
assert redis.hexpireat(hash_name, field, int(time.time()) + 10) == [1]
```
# HEXPIRETIME
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hexpiretime
Retrieves the expiration time of field(s) in a hash in seconds.
## Arguments
The key of the hash.
The field or list of fields to retrieve the expiration time for.
## Response
A list of integers representing the expiration time in seconds since the Unix epoch.
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `-1` if the field exists but has no associated expiration.
For more details, see [HEXPIRETIME documentation](https://redis.io/commands/hexpiretime).
```py Example theme={"system"}
redis.hset(hash_name, field, value)
expires_at = int(time.time()) + 10
redis.hexpireat(hash_name, field, expires_at)
assert redis.hexpiretime(hash_name, field) == [expires_at]
```
# HGET
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hget
Retrieves the value of a hash field.
## Arguments
The key of the hash.
The field to get.
## Response
The value of the field, or `None` when the field is not present in the hash or the key does not exist.
```py Example theme={"system"}
redis.hset("myhash", "field1", "Hello")
assert redis.hget("myhash", "field1") == "Hello"
assert redis.hget("myhash", "field2") is None
```
# HGETALL
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hgetall
Retrieves all fields from a hash.
## Arguments
The key of the hash.
## Response
An object with all fields in the hash.
```py Example theme={"system"}
redis.hset("myhash", values={
"field1": "Hello",
"field2": "World"
})
assert redis.hgetall("myhash") == {"field1": "Hello", "field2": "World"}
```
# HINCRBY
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hincrby
Increments the value of a hash field by a given amount
If the hash field does not exist, it is set to 0 before performing the operation.
## Arguments
The key of the hash.
The field to increment
How much to increment the field by. Can be negative to subtract.
## Response
The new value of the field after the increment.
```py Example theme={"system"}
redis.hset("myhash", "field1", 5)
assert redis.hincrby("myhash", "field1", 10) == 15
```
# HINCRBYFLOAT
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hincrbyfloat
Increments the value of a hash field by a given float value.
## Arguments
The key of the hash.
The field to increment
How much to increment the field by. Can be negative to subtract.
## Response
The new value of the field after the increment.
```py Example theme={"system"}
redis.hset("myhash", "field1", 5.5)
assert abs(redis.hincrbyfloat("myhash", "field1", 10.1) - 15.6) < 0.0001
```
# HKEYS
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hkeys
Return all field names in the hash stored at key.
## Arguments
The key of the hash.
## Response
The field names of the hash
```py Example theme={"system"}
redis.hset("myhash", values={
"field1": "Hello",
"field2": "World"
})
assert redis.hkeys("myhash") == ["field1", "field2"]
```
# HLEN
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hlen
Returns the number of fields contained in the hash stored at key.
## Arguments
The key of the hash.
## Response
How many fields are in the hash.
```py Example theme={"system"}
assert redis.hlen("myhash") == 0
redis.hset("myhash", values={
"field1": "Hello",
"field2": "World"
})
assert redis.hlen("myhash") == 2
```
# HMGET
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hmget
Return the requested fields and their values.
## Arguments
The key of the hash.
One or more fields to get.
## Response
A list of the values for the requested fields, in the same order. Missing fields are `None`.
```py Example theme={"system"}
redis.hset("myhash", values={
"field1": "Hello",
"field2": "World"
})
assert redis.hmget("myhash", "field1", "field2") == ["Hello", "World"]
```
# HMSET
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hmset
Write multiple fields to a hash.
## Arguments
The key of the hash.
A dictionary of fields and their values.
## Response
The number of fields that were added.
```py Example theme={"system"}
# Set multiple fields
assert redis.hset("myhash", values={
"field1": "Hello",
"field2": "World"
}) == 2
```
# HPERSIST
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hpersist
Remove the expiration from one or more hash fields.
## Arguments
The key of the hash.
The field or list of fields within the hash to remove the expiry from.
## Response
A list of integers indicating the result for each field:
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `-1` if the field exists but has no associated expiration set.
* `1` if the expiration was successfully removed.
For more details, see [HPERSIST documentation](https://redis.io/commands/hpersist).
```py Example theme={"system"}
redis.hset(hash_name, field, value)
redis.hpexpire(hash_name, field, 1000)
assert redis.hpersist(hash_name, field) == [1]
```
# HPEXPIRE
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hpexpire
Set a timeout on a hash field in milliseconds.
## Arguments
The key of the hash.
The field or list of fields within the hash to set the expiry for.
The timeout in milliseconds as an integer or a `datetime.timedelta` object.
Set expiry only when the field has no expiry. Defaults to `False`.
Set expiry only when the field has an existing expiry. Defaults to `False`.
Set expiry only when the new expiry is greater than the current one. Defaults to `False`.
Set expiry only when the new expiry is less than the current one. Defaults to `False`.
## Response
A list of integers indicating whether the expiry was successfully set.
* `-2` if the field does not exist in the hash or if key doesn't exist.
* `0` if the expiration was not set due to the condition.
* `1` if the expiration was successfully set.
* `2` if called with 0 seconds/milliseconds or a past Unix time.
For more details, see [HPEXPIRE documentation](https://redis.io/commands/hpexpire).
```py Example theme={"system"}
redis.hset(hash_name, field, value)
assert redis.hpexpire(hash_name, field, 1000) == [1]
```
# HPEXPIREAT
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hpexpireat
Sets an expiration time for field(s) in a hash in milliseconds since the Unix epoch.
## Arguments
The key of the hash.
The field or list of fields to set an expiration time for.
The expiration time as a Unix timestamp in milliseconds.
Set expiry only when the field has no expiry. Defaults to `False`.
Set expiry only when the field has an existing expiry. Defaults to `False`.
Set expiry only when the new expiry is greater than the current one. Defaults to `False`.
Set expiry only when the new expiry is less than the current one. Defaults to `False`.
## Response
A list of integers indicating whether the expiry was successfully set.
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `0` if the expiration was not set due to the condition.
* `1` if the expiration was successfully set.
* `2` if called with 0 seconds/milliseconds or a past Unix time.
For more details, see [HPEXPIREAT documentation](https://redis.io/commands/hpexpireat).
```py Example theme={"system"}
redis.hset(hash_name, field, value)
assert redis.hpexpireat(hash_name, field, int(time.time() * 1000) + 1000) == [1]
```
# HPEXPIRETIME
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hpexpiretime
Retrieves the expiration time of a field in a hash in milliseconds.
## Arguments
The key of the hash.
The field or list of fields to retrieve the expiration time for.
## Response
A list of integers representing the expiration time in milliseconds since the Unix epoch.
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `-1` if the field exists but has no associated expiration.
For more details, see [HPEXPIRETIME documentation](https://redis.io/commands/hpexpiretime).
```py Example theme={"system"}
redis.hset(hash_name, field, value)
expires_at = int(time.time() * 1000) + 1000
redis.hpexpireat(hash_name, field, expires_at)
assert redis.hpexpiretime(hash_name, field) == [expires_at]
```
# HPTTL
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hpttl
Retrieves the remaining time-to-live (TTL) for field(s) in a hash in milliseconds.
## Arguments
The key of the hash.
The field or list of fields to retrieve the TTL for.
## Response
A list of integers representing the remaining TTL in milliseconds for each field.
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `-1` if the field exists but has no associated expiration.
For more details, see [HPTTL documentation](https://redis.io/commands/hpttl).
```py Example theme={"system"}
redis.hset(hash_name, field, value)
redis.hpexpire(hash_name, field, 1000)
assert redis.hpttl(hash_name, field)[0] <= 1000
```
# HRANDFIELD
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hrandfield
Return a random field from a hash
## Arguments
The key of the hash.
Optionally return more than one field.
Return the values of the fields as well.
## Response
An object containing the fields and their values.
```py Single theme={"system"}
redis.hset("myhash", values={
"field1": "Hello",
"field2": "World"
})
assert redis.hrandfield("myhash") in ["field1", "field2"]
```
```py Multiple theme={"system"}
redis.hset("myhash", values={
"field1": "Hello",
"field2": "World"
})
assert redis.hrandfield("myhash", count=2) in [
["field1", "field2"],
["field2", "field1"]
]
```
```py With Values theme={"system"}
redis.hset("myhash", values={
"field1": "Hello",
"field2": "World"
})
assert redis.hrandfield("myhash", count=1, withvalues=True) in [
{"field1": "Hello"},
{"field2": "World"}
]
```
# HSCAN
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hscan
Scan a hash for fields.
## Arguments
The key of the hash.
The cursor, use `0` in the beginning and then use the returned cursor for subsequent calls.
Glob-style pattern to filter by field names.
Number of fields to return per call.
## Response
The new cursor and the fields.
If the new cursor is `0` the iteration is complete.
```py Basic theme={"system"}
# Scan all fields of the hash.
cursor = 0
results = []
while True:
cursor, keys = redis.hscan("myhash", cursor, match="*")
results.extend(keys)
if cursor == 0:
break
```
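The `match` argument takes Redis glob-style patterns (`*`, `?`, `[...]`). Their behaviour is close to Python's `fnmatch`, which makes for a quick local sanity check of a pattern before scanning (this is an illustration only, not a Redis call):

```py Local illustration theme={"system"}
from fnmatch import fnmatchcase

fields = ["user:1", "user:2", "session:9"]

# "user:*" keeps only the fields under the user prefix
assert [f for f in fields if fnmatchcase(f, "user:*")] == ["user:1", "user:2"]
```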
# HSET
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hset
Write one or more fields to a hash.
## Arguments
The key of the hash.
Field to set
Value to set
An object of fields and their values.
## Response
The number of fields that were added.
```py Single theme={"system"}
# Set a single field
assert redis.hset("myhash", "field1", "Hello") == 1
```
```py Multiple theme={"system"}
# Set multiple fields
assert redis.hset("myhash", values={
"field1": "Hello",
"field2": "World"
}) == 2
```
# HSETNX
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hsetnx
Write a field to a hash but only if the field does not exist.
## Arguments
The key of the hash.
The name of the field.
The value to set.
## Response
`True` if the field was set, `False` if it already existed.
```py Example theme={"system"}
assert redis.hsetnx("myhash", "field1", "Hello") == True
assert redis.hsetnx("myhash", "field1", "World") == False
```
# HSTRLEN
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hstrlen
Returns the string length of a value in a hash.
## Arguments
The key of the hash.
The name of the field.
## Response
`0` if the hash or field does not exist. Otherwise the length of the string.
```py Example theme={"system"}
length = redis.hstrlen("key", "field")
```
# HTTL
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/httl
Retrieves the remaining time-to-live (TTL) for field(s) in a hash in seconds.
## Arguments
The key of the hash.
The field or list of fields to retrieve the TTL for.
## Response
A list of integers representing the remaining TTL in seconds for each field.
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `-1` if the field exists but has no associated expiration.
For more details, see [HTTL documentation](https://redis.io/commands/httl).
```py Example theme={"system"}
redis.hset(hash_name, field, value)
redis.hexpire(hash_name, field, 10)
assert redis.httl(hash_name, field) == [9]
```
# HVALS
Source: https://upstash.com/docs/redis/sdks/py/commands/hash/hvals
Returns all values in the hash stored at key.
## Arguments
The key of the hash.
## Response
All values in the hash, or an empty list when key does not exist.
```py Example theme={"system"}
redis.hset("myhash", values={
"field1": "Hello",
"field2": "World"
})
assert redis.hvals("myhash") == ["Hello", "World"]
```
# JSON.ARRAPPEND
Source: https://upstash.com/docs/redis/sdks/py/commands/json/arrappend
Append values to the array at path in the JSON document at key.
To specify a string as an array value to append, wrap the quoted string with an additional set of single quotes. Example: '"silver"'.
## Arguments
The key of the json entry.
The path of the array.
One or more values to append to the array.
## Response
The length of the array after the appending.
```py Example theme={"system"}
redis.json.arrappend("key", "$.path.to.array", "a")
```
# JSON.ARRINDEX
Source: https://upstash.com/docs/redis/sdks/py/commands/json/arrindex
Search for the first occurrence of a JSON value in an array.
## Arguments
The key of the json entry.
The path of the array.
The value to search for.
The start index.
The stop index.
## Response
The index of the first occurrence of the value in the array, or -1 if not found.
```py Example theme={"system"}
index = redis.json.arrindex("key", "$.path.to.array", "a")
```
# JSON.ARRINSERT
Source: https://upstash.com/docs/redis/sdks/py/commands/json/arrinsert
Insert the json values into the array at path before the index (shifts to the right).
## Arguments
The key of the json entry.
The path of the array.
The index where to insert the values.
One or more values to append to the array.
## Response
The length of the array after the insertion.
```py Example theme={"system"}
length = redis.json.arrinsert("key", "$.path.to.array", 2, "a", "b")
```
# JSON.ARRLEN
Source: https://upstash.com/docs/redis/sdks/py/commands/json/arrlen
Report the length of the JSON array at `path` in `key`.
## Arguments
The key of the json entry.
The path of the array.
## Response
The length of the array.
```py Example theme={"system"}
length = redis.json.arrlen("key", "$.path.to.array")
```
# JSON.ARRPOP
Source: https://upstash.com/docs/redis/sdks/py/commands/json/arrpop
Remove and return an element from the index in the array. By default the last element from an array is popped.
## Arguments
The key of the json entry.
The path of the array.
The index of the element to pop.
## Response
The popped element or `None` if the array is empty.
```py Example theme={"system"}
element = redis.json.arrpop("key", "$.path.to.array")
```
```py First theme={"system"}
firstElement = redis.json.arrpop("key", "$.path.to.array", 0)
```
# JSON.ARRTRIM
Source: https://upstash.com/docs/redis/sdks/py/commands/json/arrtrim
Trim an array so that it contains only the specified inclusive range of elements.
## Arguments
The key of the json entry.
The path of the array.
The start index of the range.
The stop index of the range.
## Response
The length of the array after the trimming.
```py Example theme={"system"}
length = redis.json.arrtrim("key", "$.path.to.array", 2, 10)
```
# JSON.CLEAR
Source: https://upstash.com/docs/redis/sdks/py/commands/json/clear
Clear container values (arrays/objects) and set numeric values to 0.
## Arguments
The key of the json entry.
The path to clear
## Response
The number of values that were cleared.
```py Example theme={"system"}
redis.json.clear("key")
```
```py With path theme={"system"}
redis.json.clear("key", "$.my.key")
```
# JSON.DEL
Source: https://upstash.com/docs/redis/sdks/py/commands/json/del
Delete a key from a JSON document.
## Arguments
The key of the json entry.
The path to delete
## Response
How many paths were deleted.
```py Example theme={"system"}
redis.json.del("key", "$.path.to.value")
```
# JSON.FORGET
Source: https://upstash.com/docs/redis/sdks/py/commands/json/forget
Delete a key from a JSON document.
## Arguments
The key of the json entry.
The path to forget.
## Response
How many paths were deleted.
```py Example theme={"system"}
redis.json.forget("key", "$.path.to.value")
```
# JSON.GET
Source: https://upstash.com/docs/redis/sdks/py/commands/json/get
Get a single value from a JSON document.
## Arguments
The key of the json entry.
One or more paths to retrieve from the JSON document.
## Response
The value at the specified path or `null` if the path does not exist.
```py Example theme={"system"}
value = redis.json.get("key", "$.path.to.somewhere")
```
# JSON.MERGE
Source: https://upstash.com/docs/redis/sdks/py/commands/json/merge
Merges the JSON value at path in key with the provided value.
## Arguments
The key of the json entry.
The path of the value to set.
The value to merge with.
## Response
Returns true if the merge was successful.
```py Example theme={"system"}
redis.json.merge("key", "$.path.to.value", {"new": "value"})
```
# JSON.MGET
Source: https://upstash.com/docs/redis/sdks/py/commands/json/mget
Get the same path from multiple JSON documents.
## Arguments
One or more keys of JSON documents.
The path to get from the JSON document.
## Response
The values at the specified path or `null` if the path does not exist.
```py Example theme={"system"}
values = redis.json.mget(["key1", "key2"], "$.path.to.somewhere")
```
# JSON.MSET
Source: https://upstash.com/docs/redis/sdks/py/commands/json/mset
Sets multiple JSON values at multiple paths in multiple keys.
## Arguments
A list of tuples where each tuple contains a key, a path, and a value.
## Response
Returns true if the operation was successful.
```py Example theme={"system"}
redis.json.mset([(key, "$.path", value), (key2, "$.path2", value2)])
```
# JSON.NUMINCRBY
Source: https://upstash.com/docs/redis/sdks/py/commands/json/numincrby
Increment the number value stored at `path` by number.
## Arguments
The key of the json entry.
The path of the number.
The number to increment by.
## Response
The new value after incrementing
```py Example theme={"system"}
newValue = redis.json.numincrby("key", "$.path.to.value", 2)
```
# JSON.NUMMULTBY
Source: https://upstash.com/docs/redis/sdks/py/commands/json/nummultby
Multiply the number value stored at `path` by number.
## Arguments
The key of the json entry.
The path of the number.
The number to multiply by.
## Response
The new value after multiplying
```py Example theme={"system"}
newValue = redis.json.nummultby("key", "$.path.to.value", 2)
```
# JSON.OBJKEYS
Source: https://upstash.com/docs/redis/sdks/py/commands/json/objkeys
Return the keys of the object referenced by path.
## Arguments
The key of the json entry.
The path of the object.
## Response
The keys of the object at the path.
```py Example theme={"system"}
keys = redis.json.objkeys("key", "$.path")
```
# JSON.OBJLEN
Source: https://upstash.com/docs/redis/sdks/py/commands/json/objlen
Report the number of keys in the JSON object at `path` in `key`.
## Arguments
The key of the json entry.
The path of the object.
## Response
The number of keys in the object at the path.
```py Example theme={"system"}
lengths = redis.json.objlen("key", "$.path")
```
# JSON.RESP
Source: https://upstash.com/docs/redis/sdks/py/commands/json/resp
Return the value at the path in Redis serialization protocol format.
## Arguments
The key of the json entry.
The path of the object.
## Response
Return the value at the path in Redis serialization protocol format.
```py Example theme={"system"}
resp = redis.json.resp("key", "$.path")
```
# JSON.SET
Source: https://upstash.com/docs/redis/sdks/py/commands/json/set
Set the JSON value at path in key.
## Arguments
The key of the json entry.
The path of the value to set.
The value to set.
Sets the value at path only if it does not exist.
Sets the value at path only if it does exist.
## Response
Returns true if the value was set.
```py Example theme={"system"}
redis.json.set(key, "$.path", value)
```
```py NX theme={"system"}
value = ...
redis.json.set(key, "$.path", value, nx=True)
```
```py XX theme={"system"}
value = ...
redis.json.set(key, "$.path", value, xx=True)
```
# JSON.STRAPPEND
Source: https://upstash.com/docs/redis/sdks/py/commands/json/strappend
Append the json-string values to the string at path.
## Arguments
The key of the json entry.
The path of the string.
The value to append to the existing string.
## Response
The length of the string after the appending.
```py Example theme={"system"}
redis.json.strappend("key", "$.path.to.str", "abc")
```
# JSON.STRLEN
Source: https://upstash.com/docs/redis/sdks/py/commands/json/strlen
Report the length of the JSON string at `path` in `key`.
## Arguments
The key of the json entry.
The path of the string.
## Response
The length of the string at the path.
```py Example theme={"system"}
redis.json.strlen("key", "$.path.to.str")
```
# JSON.TOGGLE
Source: https://upstash.com/docs/redis/sdks/py/commands/json/toggle
Toggle a boolean value stored at `path`.
## Arguments
The key of the json entry.
The path of the boolean.
## Response
The new value of the boolean.
```py Example theme={"system"}
bool = redis.json.toggle("key", "$.path.to.bool")
```
# JSON.TYPE
Source: https://upstash.com/docs/redis/sdks/py/commands/json/type
Report the type of JSON value at `path`.
## Arguments
The key of the json entry.
The path of the value.
## Response
The type of the value at `path` or `null` if the value does not exist.
```py Example theme={"system"}
myType = redis.json.type("key", "$.path.to.value")
```
# LINDEX
Source: https://upstash.com/docs/redis/sdks/py/commands/list/lindex
Returns the element at the given index in the list stored at key.
The index is zero-based, so 0 means the first element, 1 the second element and so on. Negative indices can be used to designate elements starting at the tail of the list.
## Arguments
The key of the list.
The index of the element to return, zero-based.
## Response
The value of the element at the given index. If the index is out of range, `None` is returned.
```py Example theme={"system"}
redis.rpush("key", "a", "b", "c")
assert redis.lindex("key", 0) == "a"
```
# LINSERT
Source: https://upstash.com/docs/redis/sdks/py/commands/list/linsert
Insert an element before or after another element in a list
## Arguments
The key of the list.
Whether to insert the element before or after pivot.
The element to insert before or after.
The element to insert.
## Response
The list length after insertion, `0` when the list doesn't exist or `-1` when pivot was not found.
```py Example theme={"system"}
redis.rpush("key", "a", "b", "c")
redis.linsert("key", "before", "b", "x")
```
# LLEN
Source: https://upstash.com/docs/redis/sdks/py/commands/list/llen
Returns the length of the list stored at key.
## Arguments
The key of the list.
## Response
The length of the list at key.
```py Example theme={"system"}
redis.rpush("key", "a", "b", "c")
assert redis.llen("key") == 3
```
# LMOVE
Source: https://upstash.com/docs/redis/sdks/py/commands/list/lmove
Move an element from one list to another.
## Arguments
The key of the source list.
The key of the destination list.
The side of the source list from which the element was popped.
The side of the destination list to which the element was pushed.
## Response
The element that was moved.
```py Example theme={"system"}
redis.rpush("source", "one", "two", "three")
redis.lpush("destination", "four", "five", "six")
assert redis.lmove("source", "destination", "RIGHT", "LEFT") == "three"
assert redis.lrange("source", 0, -1) == ["one", "two"]
```
# LPOP
Source: https://upstash.com/docs/redis/sdks/py/commands/list/lpop
Remove and return the first element(s) of a list
## Arguments
The key of the list.
How many elements to pop. If not specified, a single element is popped.
## Response
The popped element(s). If `count` was specified, an array of elements is
returned, otherwise a single element is returned. If the list is empty, `None`
is returned.
```py Single theme={"system"}
redis.rpush("mylist", "one", "two", "three")
assert redis.lpop("mylist") == "one"
```
```py Multiple theme={"system"}
redis.rpush("mylist", "one", "two", "three")
assert redis.lpop("mylist", 2) == ["one", "two"]
```
# LPOS
Source: https://upstash.com/docs/redis/sdks/py/commands/list/lpos
Returns the index of matching elements inside a list.
## Arguments
The key of the list.
The element to match.
Which match to return. 1 to return the first match, 2 to return the second match, and so on.
1 by default.
The maximum number of matches to return. If specified, an array of indexes
is returned instead of a single index.
Limit the number of comparisons to perform.
## Response
The index of the matching element or an array of indexes if `count` is
specified.
```py Example theme={"system"}
redis.rpush("key", "a", "b", "c")
assert redis.lpos("key", "b") == 1
```
```py With Rank theme={"system"}
redis.rpush("key", "a", "b", "c", "b")
assert redis.lpos("key", "b", rank=2) == 3
```
```py With Count theme={"system"}
redis.rpush("key", "a", "b", "b")
assert redis.lpos("key", "b", count=2) == [1, 2]
```
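The rank and count semantics can be mimicked in plain Python, which may help when reasoning about results (a local illustration, not a Redis call):

```py Local illustration theme={"system"}
items = ["a", "b", "c", "b"]

# Indices of every match, in order; LPOS with count=N returns the first N of these
matches = [i for i, v in enumerate(items) if v == "b"]
assert matches == [1, 3]

# rank=2 selects the second match
assert matches[2 - 1] == 3
```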
# LPUSH
Source: https://upstash.com/docs/redis/sdks/py/commands/list/lpush
Push an element at the head of the list.
## Arguments
The key of the list.
One or more elements to push at the head of the list.
## Response
The length of the list after the push operation.
```py Example theme={"system"}
assert redis.lpush("mylist", "one", "two", "three") == 3
assert redis.lrange("mylist", 0, -1) == ["three", "two", "one"]
```
# LPUSHX
Source: https://upstash.com/docs/redis/sdks/py/commands/list/lpushx
Push an element at the head of the list only if the list exists.
## Arguments
The key of the list.
One or more elements to push at the head of the list.
## Response
The length of the list after the push operation.
`0` if the list did not exist and thus no element was pushed.
```py Example theme={"system"}
# Initialize the list
redis.lpush("mylist", "one")
assert redis.lpushx("mylist", "two", "three") == 3
assert redis.lrange("mylist", 0, -1) == ["three", "two", "one"]
# Non existing key
assert redis.lpushx("non-existent-list", "one") == 0
```
# LRANGE
Source: https://upstash.com/docs/redis/sdks/py/commands/list/lrange
Returns the specified elements of the list stored at key.
## Arguments
The key of the list.
The starting index of the range to return.
Use negative numbers to specify offsets starting at the end of the list.
The ending index of the range to return.
Use negative numbers to specify offsets starting at the end of the list.
## Response
The list of elements in the specified range.
```py Example theme={"system"}
redis.rpush("mylist", "one", "two", "three")
assert redis.lrange("mylist", 0, 1) == ["one", "two"]
assert redis.lrange("mylist", 0, -1) == ["one", "two", "three"]
```
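Unlike Python slices, the `end` index of `lrange` is inclusive. A comparison against a plain Python list (a local illustration, not a Redis call) makes the off-by-one explicit:

```py Local illustration theme={"system"}
items = ["one", "two", "three"]

# LRANGE mylist 0 1 returns two elements; Python's slice stop is exclusive,
# so the equivalent slice is items[0:1 + 1]
assert items[0:1 + 1] == ["one", "two"]

# Negative indices count from the tail in both: -1 addresses the last element
assert items[-1] == "three"
```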
# LREM
Source: https://upstash.com/docs/redis/sdks/py/commands/list/lrem
Remove the first `count` occurrences of an element from a list.
## Arguments
The key of the list.
How many occurrences of the element to remove.
The element to remove
## Response
The number of elements removed.
```py Example theme={"system"}
redis.rpush("mylist", "one", "two", "three", "two", "one")
assert redis.lrem("mylist", 2, "two") == 2
assert redis.lrange("mylist", 0, -1) == ["one", "three", "one"]
```
# LSET
Source: https://upstash.com/docs/redis/sdks/py/commands/list/lset
Set a value at a specific index.
## Arguments
The key of the list.
At which index to set the value.
The value to set.
## Response
Returns `True` if the index was in range and the value was set.
```py Example theme={"system"}
redis.rpush("mylist", "one", "two", "three")
assert redis.lset("mylist", 1, "Hello") == True
assert redis.lrange("mylist", 0, -1) == ["one", "Hello", "three"]
assert redis.lset("mylist", 5, "Hello") == False
assert redis.lrange("mylist", 0, -1) == ["one", "Hello", "three"]
```
# LTRIM
Source: https://upstash.com/docs/redis/sdks/py/commands/list/ltrim
Trim a list to the specified range
## Arguments
The key of the list.
The index of the first element to keep.
The index of the last element to keep.
## Response
Returns `True` if the list was trimmed, `False` otherwise.
```py Example theme={"system"}
redis.rpush("mylist", "one", "two", "three")
assert redis.ltrim("mylist", 0, 1) == True
assert redis.lrange("mylist", 0, -1) == ["one", "two"]
```
# RPOP
Source: https://upstash.com/docs/redis/sdks/py/commands/list/rpop
Remove and return the last element(s) of a list
## Arguments
The key of the list.
How many elements to pop. If not specified, a single element is popped.
## Response
The popped element(s). If `count` was specified, an array of elements is
returned, otherwise a single element is returned. If the list is empty, `None`
is returned.
```py Single theme={"system"}
redis.rpush("mylist", "one", "two", "three")
assert redis.rpop("mylist") == "three"
```
```py Multiple theme={"system"}
redis.rpush("mylist", "one", "two", "three")
assert redis.rpop("mylist", 2) == ["three", "two"]
```
# RPUSH
Source: https://upstash.com/docs/redis/sdks/py/commands/list/rpush
Push an element at the end of the list.
## Arguments
The key of the list.
One or more elements to push at the end of the list.
## Response
The length of the list after the push operation.
```py Example theme={"system"}
assert redis.rpush("mylist", "one", "two", "three") == 3
assert redis.lrange("mylist", 0, -1) == ["one", "two", "three"]
```
# RPUSHX
Source: https://upstash.com/docs/redis/sdks/py/commands/list/rpushx
Push an element at the end of the list only if the list exists.
## Arguments
The key of the list.
One or more elements to push at the end of the list.
## Response
The length of the list after the push operation.
`0` if the list did not exist and thus no element was pushed.
```py Example theme={"system"}
assert redis.rpushx("mylist", "one", "two", "three") == 3
assert redis.lrange("mylist", 0, -1) == ["one", "two", "three"]
# Non existing key
assert redis.rpushx("non-existent-list", "one") == 0
```
# Overview
Source: https://upstash.com/docs/redis/sdks/py/commands/overview
Available Commands in upstash-redis
Echo the given string.
Ping the server.
Count set bits in a string.
Perform arbitrary bitfield integer operations on strings.
Perform bitwise operations between strings.
Find first bit set or clear in a string.
Returns the bit value at offset in the string value stored at key.
Sets or clears the bit at offset in the string value stored at key.
Delete one or multiple keys.
Determine if a key exists.
Set a key's time to live in seconds.
Set the expiration for a key as a UNIX timestamp.
Find all keys matching the given pattern.
Remove the expiration from a key.
Set a key's time to live in milliseconds.
Set the expiration for a key as a UNIX timestamp specified in milliseconds.
Get the time to live for a key in milliseconds.
Return a random key from the keyspace.
Rename a key.
Rename a key, only if the new key does not exist.
Incrementally iterate the keys space.
Alter the last access time of one or more keys. Returns the number of keys that exist.
Get the time to live for a key.
Determine the type stored at key.
Delete one or more keys.
Publish messages to many clients
Append a value to a string stored at key.
Decrement the integer value of a key by one.
Decrement the integer value of a key by the given number.
Get the value of a key.
Get the value of a key and delete the key.
Get a substring of the string stored at a key.
Set the string value of a key and return its old value.
Increment the integer value of a key by one.
Increment the integer value of a key by the given amount.
Increment the float value of a key by the given amount.
Get the values of all the given keys.
Set multiple keys to multiple values.
Set multiple keys to multiple values, only if none of the keys exist.
Set the string value of a key.
Overwrite part of a string at key starting at the specified offset.
Get the length of the value stored in a key.
Acknowledge one or multiple messages as processed for a consumer group.
Append a new entry to a stream.
Transfer ownership of pending messages to another consumer automatically.
Transfer ownership of pending messages to another consumer.
Remove one or multiple entries from a stream.
Create a new consumer group for a stream.
Create a new consumer in a consumer group.
Delete a consumer from a consumer group.
Delete an entire consumer group.
Set the last delivered ID for a consumer group.
List all consumers in a consumer group.
List all consumer groups for a stream.
Get the number of entries in a stream.
Get information about pending messages in a consumer group.
Get entries from a stream within a range of IDs.
Read data from one or multiple streams.
Read data from streams as part of a consumer group.
Get entries from a stream within a range of IDs in reverse order.
Trim a stream to a specified size.
Run multiple commands in a transaction.
# PUBLISH
Source: https://upstash.com/docs/redis/sdks/py/commands/pubsub/publish
Publish a message to a channel
## Arguments
The channel to publish to.
The message to publish.
## Response
The number of clients who received the message.
```py Example theme={"system"}
listeners = redis.publish("my-topic", "my-message")
```
# EVAL
Source: https://upstash.com/docs/redis/sdks/py/commands/scripts/eval
Evaluate a Lua script server side.
## Arguments
The lua script to run.
All of the keys accessed in the script
All of the arguments you passed to the script
## Response
The result of the script.
```py Example theme={"system"}
script = """
local value = redis.call("GET", KEYS[1])
return value
"""
redis.set("mykey", "Hello")
assert redis.eval(script, keys=["mykey"]) == "Hello"
```
```py Accepting arguments theme={"system"}
assert redis.eval("return ARGV[1]", args=["Hello"]) == "Hello"
```
# EVAL_RO
Source: https://upstash.com/docs/redis/sdks/py/commands/scripts/eval_ro
Evaluate a read-only Lua script server side.
## Arguments
The read-only lua script to run.
All of the keys accessed in the script
All of the arguments you passed to the script
## Response
The result of the script.
```py Example theme={"system"}
script = """
local value = redis.call("GET", KEYS[1])
return value
"""
redis.set("mykey", "Hello")
assert redis.eval_ro(script, keys=["mykey"]) == "Hello"
```
```py Accepting arguments theme={"system"}
assert redis.eval_ro("return ARGV[1]", args=["Hello"]) == "Hello"
```
# EVALSHA
Source: https://upstash.com/docs/redis/sdks/py/commands/scripts/evalsha
Evaluate a cached Lua script server side.
`EVALSHA` is like `EVAL` but instead of sending the script over the wire every time, you reference the script by its SHA1 hash. This is useful for caching scripts on the server side.
## Arguments
The sha1 hash of the script.
All of the keys accessed in the script
All of the arguments you passed to the script
## Response
The result of the script.
```py Example theme={"system"}
result = redis.evalsha("fb67a0c03b48ddbf8b4c9b011e779563bdbc28cb", args=["hello"])
assert result == "hello"
```
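The hash that `EVALSHA` expects is simply the SHA1 digest of the script source; `script_load` returns it after caching the script, but it can also be computed locally with `hashlib` (illustration only, this does not guarantee the server has the script cached):

```py Local illustration theme={"system"}
import hashlib

script = "return ARGV[1]"
sha1 = hashlib.sha1(script.encode()).hexdigest()

# EVALSHA takes this 40-character hex digest in place of the script body
assert len(sha1) == 40
```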
# EVALSHA_RO
Source: https://upstash.com/docs/redis/sdks/py/commands/scripts/evalsha_ro
Evaluate a cached read-only Lua script server side.
`EVALSHA_RO` is like `EVAL_RO` but instead of sending the script over the wire every time, you reference the script by its SHA1 hash. This is useful for caching scripts on the server side.
## Arguments
The sha1 hash of the read-only script.
All of the keys accessed in the script
All of the arguments you passed to the script
## Response
The result of the script.
```py Example theme={"system"}
result = redis.evalsha_ro("fb67a0c03b48ddbf8b4c9b011e779563bdbc28cb", args=["hello"])
assert result == "hello"
```
# SCRIPT EXISTS
Source: https://upstash.com/docs/redis/sdks/py/commands/scripts/script_exists
Check if scripts exist in the script cache.
## Arguments
The sha1 of the scripts to check.
## Response
A list of booleans indicating if the script exists in the script cache.
```py Example theme={"system"}
sha1 = redis.script_load("return 1")
# The first script exists in the cache, the second does not
assert redis.script_exists(sha1, "0000000000000000000000000000000000000000") == [True, False]
```
# SCRIPT FLUSH
Source: https://upstash.com/docs/redis/sdks/py/commands/scripts/script_flush
Removes all scripts from the script cache.
## Arguments
Whether to perform the flush asynchronously or synchronously.
```py Example theme={"system"}
redis.script_flush(flush_type="ASYNC")
```
# SCRIPT LOAD
Source: https://upstash.com/docs/redis/sdks/py/commands/scripts/script_load
Load the specified Lua script into the script cache.
## Arguments
The script to load.
## Response
The sha1 of the script.
```py Example theme={"system"}
sha1 = redis.script_load("return 1")
assert redis.evalsha(sha1) == 1
```
# DBSIZE
Source: https://upstash.com/docs/redis/sdks/py/commands/server/dbsize
Count the number of keys in the database.
## Arguments
This command has no arguments
## Response
The number of keys in the database
```py Example theme={"system"}
redis.dbsize()
```
# FLUSHALL
Source: https://upstash.com/docs/redis/sdks/py/commands/server/flushall
Deletes all keys permanently. Use with caution!
## Arguments
Whether to perform the operation asynchronously.
Defaults to synchronous.
```py Sync theme={"system"}
redis.flushall()
```
```py Async theme={"system"}
redis.flushall(flush_type="ASYNC")
```
# FLUSHDB
Source: https://upstash.com/docs/redis/sdks/py/commands/server/flushdb
Deletes all keys permanently. Use with caution!
## Arguments
Whether to perform the operation asynchronously.
Defaults to synchronous.
```py Sync theme={"system"}
redis.flushdb()
```
```py Async theme={"system"}
redis.flushdb(flush_type="ASYNC")
```
# SADD
Source: https://upstash.com/docs/redis/sdks/py/commands/set/sadd
Adds one or more members to a set.
## Arguments
The key of the set.
One or more members to add to the set.
## Response
The number of elements that were added to the set, not including all the elements already present in the set.
```py Example theme={"system"}
assert redis.sadd("key", "a", "b", "c") == 3
```
# SCARD
Source: https://upstash.com/docs/redis/sdks/py/commands/set/scard
Return how many members are in a set
## Arguments
The key of the set.
## Response
How many members are in the set.
```py Example theme={"system"}
redis.sadd("key", "a", "b", "c")
assert redis.scard("key") == 3
```
# SDIFF
Source: https://upstash.com/docs/redis/sdks/py/commands/set/sdiff
Return the difference between sets
## Arguments
The keys of the sets to perform the difference operation on.
## Response
The resulting set.
```py Example theme={"system"}
redis.sadd("set1", "a", "b", "c")
redis.sadd("set2", "c", "d", "e")
assert redis.sdiff("set1", "set2") == {"a", "b"}
```
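The result matches Python's own set difference, which is a handy way to sanity-check expectations locally (no Redis call involved):

```py Local illustration theme={"system"}
set1 = {"a", "b", "c"}
set2 = {"c", "d", "e"}

# Same result SDIFF computes server-side: members of set1 absent from set2
assert set1 - set2 == {"a", "b"}
```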
# SDIFFSTORE
Source: https://upstash.com/docs/redis/sdks/py/commands/set/sdiffstore
Write the difference between sets to a new set
## Arguments
The key of the set to store the resulting set in.
The keys of the sets to perform the difference operation on.
## Response
The number of elements in the resulting set.
```py Example theme={"system"}
redis.sadd("key1", "a", "b", "c")
redis.sadd("key2", "c", "d", "e")
# Store the result in a new set
assert redis.sdiffstore("res", "key1", "key2") == 2
assert redis.smembers("res") == {"a", "b"}
```
# SINTER
Source: https://upstash.com/docs/redis/sdks/py/commands/set/sinter
Return the intersection between sets
## Arguments
The keys of the sets to perform the intersection operation on.
## Response
The resulting set.
```py Example theme={"system"}
redis.sadd("set1", "a", "b", "c")
redis.sadd("set2", "c", "d", "e")
assert redis.sinter("set1", "set2") == {"c"}
```
# SINTERSTORE
Source: https://upstash.com/docs/redis/sdks/py/commands/set/sinterstore
Return the intersection between sets and store the resulting set in a key
## Arguments
The key of the set to store the resulting set in.
The keys of the sets to perform the intersection operation on.
## Response
The number of elements in the resulting set.
```py Example theme={"system"}
redis.sadd("set1", "a", "b", "c")
redis.sadd("set2", "c", "d", "e")
assert redis.sinterstore("destination", "set1", "set2") == 1
```
# SISMEMBER
Source: https://upstash.com/docs/redis/sdks/py/commands/set/sismember
Check if a member exists in a set
## Arguments
The key of the set to check.
The member to check for.
## Response
`True` if the member exists in the set, `False` if not.
```py Example theme={"system"}
redis.sadd("set", "a", "b", "c")
assert redis.sismember("set", "a") == True
```
# SMEMBERS
Source: https://upstash.com/docs/redis/sdks/py/commands/set/smembers
Return all the members of a set
## Arguments
The key of the set.
## Response
The members of the set.
```py Example theme={"system"}
redis.sadd("set", "a", "b", "c")
assert redis.smembers("set") == {"a", "b", "c"}
```
# SMISMEMBER
Source: https://upstash.com/docs/redis/sdks/py/commands/set/smismember
Check if multiple members exist in a set
## Arguments
The key of the set to check.
The members to check for.
## Response
An array of `True` and `False` values.
`True` if the member exists in the set, `False` if not.
```py Example theme={"system"}
redis.sadd("myset", "one", "two", "three")
assert redis.smismember("myset", "one", "four") == [True, False]
assert redis.smismember("myset", "four", "five") == [False, False]
```
# SMOVE
Source: https://upstash.com/docs/redis/sdks/py/commands/set/smove
Move a member from one set to another
## Arguments
The key of the set to move the member from.
The key of the set to move the member to.
The member to move.
## Response
`True` if the member was moved, `False` if it was not.
```py Example theme={"system"}
redis.sadd("src", "one", "two", "three")
redis.sadd("dest", "four")
assert redis.smove("src", "dest", "three") == True
assert redis.smembers("src") == {"one", "two"}
assert redis.smembers("dest") == {"three", "four"}
```
# SPOP
Source: https://upstash.com/docs/redis/sdks/py/commands/set/spop
Removes and returns one or more random members from a set.
## Arguments
The key of the set.
How many members to remove and return.
## Response
The popped member.
If `count` is specified, a set of members is returned.
```py Single theme={"system"}
redis.sadd("myset", "one", "two", "three")
assert redis.spop("myset") in {"one", "two", "three"}
```
```py With Count theme={"system"}
redis.sadd("myset", "one", "two", "three")
assert redis.spop("myset", 2).issubset({"one", "two", "three"})
```
# SRANDMEMBER
Source: https://upstash.com/docs/redis/sdks/py/commands/set/srandmember
Returns one or more random members from a set.
## Arguments
The key of the set.
How many members to return.
## Response
The random member.
If `count` is specified, an array of members is returned.
```py Single theme={"system"}
redis.sadd("myset", "one", "two", "three")
assert redis.srandmember("myset") in {"one", "two", "three"}
```
```py With Count theme={"system"}
redis.sadd("myset", "one", "two", "three")
assert set(redis.srandmember("myset", 2)) <= {"one", "two", "three"}
```
# SREM
Source: https://upstash.com/docs/redis/sdks/py/commands/set/srem
Remove one or more members from a set
## Arguments
The key of the set to remove the member from.
One or more members to remove from the set.
## Response
How many members were removed.
```py Example theme={"system"}
redis.sadd("myset", "one", "two", "three")
assert redis.srem("myset", "one", "four") == 1
```
# SSCAN
Source: https://upstash.com/docs/redis/sdks/py/commands/set/sscan
Scan a set
## Arguments
The key of the set.
The cursor, use `0` in the beginning and then use the returned cursor for subsequent calls.
Glob-style pattern to filter by members.
Number of members to return per call.
## Response
The new cursor and the members.
If the new cursor is `0` the iteration is complete.
```py Example theme={"system"}
# Get all members of a set.
cursor = 0
results = set()
while True:
cursor, keys = redis.sscan("myset", cursor, match="*")
    results.update(keys)
if cursor == 0:
break
```
# SUNION
Source: https://upstash.com/docs/redis/sdks/py/commands/set/sunion
Return the union between sets
## Arguments
The keys of the sets to perform the union operation on.
## Response
The resulting set
```py Example theme={"system"}
redis.sadd("key1", "a", "b", "c")
redis.sadd("key2", "c", "d", "e")
assert redis.sunion("key1", "key2") == {"a", "b", "c", "d", "e"}
```
# SUNIONSTORE
Source: https://upstash.com/docs/redis/sdks/py/commands/set/sunionstore
Return the union between sets and store the resulting set in a key
## Arguments
The key of the set to store the resulting set in.
The keys of the sets to perform the union operation on.
## Response
The number of elements in the resulting set.
```py Example theme={"system"}
redis.sadd("set1", "a", "b", "c")
redis.sadd("set2", "c", "d", "e")
assert redis.sunionstore("destination", "set1", "set2") == 5
```
# XACK
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xack
Removes one or multiple messages from the pending entries list of a stream consumer group.
## Arguments
The key of the stream.
The consumer group name.
The ID(s) of the message(s) to acknowledge. Can be multiple IDs as separate arguments.
## Response
The number of messages successfully acknowledged.
```py Single message theme={"system"}
result = redis.xack("mystream", "mygroup", "1638360173533-0")
```
```py Multiple messages theme={"system"}
result = redis.xack("mystream", "mygroup", "1638360173533-0", "1638360173533-1")
```
# XADD
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xadd
Appends one or more new entries to a stream.
## Arguments
The key of the stream.
The stream entry ID. If `*` is passed, a new ID will be generated
automatically.
Key-value data to be appended to the stream.
The maximum number of entries to keep in the stream. Mutually exclusive with `minid`.
Use approximate trimming (more efficient). When `True`, Redis may keep slightly more entries than specified. Defaults to `True`.
Prevent creating the stream if it does not exist. Defaults to `False`.
The minimum ID to keep. Entries with IDs lower than this will be removed. Mutually exclusive with `maxlen`.
Limit how many entries will be trimmed at most (only valid with approximate trimming).
## Response
The ID of the newly added entry.
```py Basic Example theme={"system"}
redis.xadd("mystream", "*", {"name": "John Doe", "age": 30})
```
```py With Custom ID theme={"system"}
redis.xadd("mystream", "1634567890123-0", {"temperature": 25.5, "humidity": 60})
```
```py Approximate trim with maxlen theme={"system"}
redis.xadd("mystream", "*", {"log_level": "error", "message": "Database connection failed"}, maxlen=100)
```
# XAUTOCLAIM
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xautoclaim
Changes the ownership of pending messages from one consumer to another in a stream consumer group automatically.
## Arguments
The key of the stream.
The consumer group name.
The consumer name that will claim the messages.
The minimum idle time in milliseconds for messages to be claimed.
The stream entry ID to start claiming from.
The maximum number of messages to claim.
Return only the message IDs instead of the full message data.
## Response
Returns a list containing:
* Next start ID for pagination
* List of claimed messages. If `justid` option is used, returns only message IDs.
* List of deleted message IDs
```py Example theme={"system"}
# Auto-claim messages that have been idle for more than 60 seconds
result = redis.xautoclaim(
"mystream",
"mygroup",
"consumer1",
60000, # 60 seconds
start="0-0"
)
```
```py With count and justid theme={"system"}
result = redis.xautoclaim(
"mystream",
"mygroup",
"consumer1",
60000,
start="0-0",
count=5,
justid=True
)
```
```py theme={"system"}
[
"1638360173533-1", # next start ID
[["1638360173533-0", ["field1", "value1", "field2", "value2"]]], # claimed messages
[] # deleted message IDs
]
```
# XCLAIM
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xclaim
Changes the ownership of pending messages from one consumer to another in a stream consumer group.
## Arguments
The key of the stream.
The consumer group name.
The consumer name that will claim the messages.
The minimum idle time in milliseconds for messages to be claimed.
The ID(s) of the message(s) to claim. Can be multiple IDs as separate arguments.
Return only the message IDs instead of the full message data.
## Response
Returns a list of claimed messages. If `justid` option is used, returns only message IDs.
```py Example theme={"system"}
# Claim messages that have been idle for more than 60 seconds
result = redis.xclaim(
"mystream",
"mygroup",
"consumer1",
60000, # 60 seconds
"1638360173533-0", "1638360173533-1"
)
```
```py With justid option theme={"system"}
result = redis.xclaim(
"mystream",
"mygroup",
"consumer1",
60000,
"1638360173533-0",
justid=True
)
```
```py theme={"system"}
[
["1638360173533-0", ["field1", "value1", "field2", "value2"]],
["1638360173533-1", ["field1", "value3", "field2", "value4"]]
]
```
# XDEL
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xdel
Removes the specified entries from a stream, and returns the number of entries deleted.
## Arguments
The key of the stream.
The ID(s) of the message(s) to delete. Can be multiple IDs as separate arguments.
## Response
The number of entries actually deleted from the stream.
```py Single message theme={"system"}
result = redis.xdel("mystream", "1638360173533-0")
```
```py Multiple messages theme={"system"}
result = redis.xdel("mystream", "1638360173533-0", "1638360173533-1", "1638360173533-2")
```
# XGROUP CREATE
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xgroup_create
Create a new consumer group for a Redis stream.
## Arguments
The key of the stream.
The consumer group name.
The stream entry ID to start consuming from. Use '\$' to start from the end.
Create the stream if it doesn't exist.
## Response
Returns "OK" if the consumer group was created successfully.
```py Start from end theme={"system"}
result = redis.xgroup_create("mystream", "mygroup", "$")
```
```py Create stream if not exists theme={"system"}
result = redis.xgroup_create("newstream", "mygroup", "$", mkstream=True)
```
```py Start from beginning theme={"system"}
result = redis.xgroup_create("mystream", "mygroup2", "0-0")
```
# XGROUP CREATECONSUMER
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xgroup_createconsumer
Create a new consumer in an existing consumer group.
## Arguments
The key of the stream.
The consumer group name.
The consumer name to create.
## Response
Returns 1 if the consumer was created, 0 if it already existed.
```py Create new consumer theme={"system"}
result = redis.xgroup_createconsumer("mystream", "mygroup", "consumer1")
```
# XGROUP DELCONSUMER
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xgroup_delconsumer
Delete a consumer from a consumer group.
## Arguments
The key of the stream.
The consumer group name.
The consumer name to delete.
## Response
Returns the number of pending messages the consumer had.
```py Delete existing consumer theme={"system"}
result = redis.xgroup_delconsumer("mystream", "mygroup", "consumer1")
```
# XGROUP DESTROY
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xgroup_destroy
Delete an entire consumer group.
## Arguments
The key of the stream.
The consumer group name to destroy.
## Response
Returns 1 if the group was destroyed, 0 if it didn't exist.
```py Destroy existing group theme={"system"}
result = redis.xgroup_destroy("mystream", "mygroup")
```
# XGROUP SETID
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xgroup_setid
Set the last delivered ID for a consumer group.
## Arguments
The key of the stream.
The consumer group name.
The stream entry ID to set as the last delivered ID. Use '\$' for the last entry.
Set the number of entries read by the group.
## Response
Returns "OK" if the ID was set successfully.
```py Set to beginning theme={"system"}
result = redis.xgroup_setid("mystream", "mygroup", "0-0")
```
```py Set to end with entries count theme={"system"}
result = redis.xgroup_setid("mystream", "mygroup", "$", entries_read=10)
```
# XINFO CONSUMERS
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xinfo_consumers
List all consumers in a consumer group.
## Arguments
The key of the stream.
The consumer group name.
## Response
Returns a list of consumer information. Each consumer is represented as a list of key-value pairs.
```py Get consumers info theme={"system"}
result = redis.xinfo_consumers("mystream", "mygroup")
```
```py theme={"system"}
[
["name", "consumer1", "pending", 0, "idle", 1000, "inactive", 1000],
["name", "consumer2", "pending", 2, "idle", 2000, "inactive", 2000]
]
```
# XINFO GROUPS
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xinfo_groups
List all consumer groups for a stream.
## Arguments
The key of the stream.
## Response
Returns a list of consumer group information. Each group is represented as a list of key-value pairs.
```py Get groups info theme={"system"}
result = redis.xinfo_groups("mystream")
```
```py theme={"system"}
[
["name", "group1", "consumers", 2, "pending", 0, "last-delivered-id", "1638360173533-0"],
["name", "group2", "consumers", 0, "pending", 3, "last-delivered-id", "0-0"]
]
```
# XLEN
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xlen
Returns the number of entries inside a stream.
## Arguments
The key of the stream.
## Response
The number of entries in the stream. Returns 0 if the stream does not exist.
```py Get stream length theme={"system"}
result = redis.xlen("mystream")
```
# XPENDING
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xpending
Returns information about pending messages in a stream consumer group.
## Arguments
The key of the stream.
The consumer group name.
The minimum pending ID to return (use with end and count).
The maximum pending ID to return (use with start and count).
The maximum number of pending messages to return.
Filter results by a specific consumer.
Filter by minimum idle time in milliseconds.
## Response
When called without range arguments, returns a summary with total count and range info.
When called with range arguments, returns detailed pending message information.
```py Summary theme={"system"}
result = redis.xpending("mystream", "mygroup")
```
```py Detailed with range theme={"system"}
result = redis.xpending("mystream", "mygroup", start="-", end="+", count=10)
```
```py Specific consumer with idle filter theme={"system"}
result = redis.xpending("mystream", "mygroup", start="-", end="+", count=5, consumer="consumer1", idle=10000)
```
```py theme={"system"}
[
2, # total pending count
"1638360173533-0", # smallest pending ID
"1638360173533-1", # greatest pending ID
[["consumer1", "2"]] # consumers and their pending counts
]
```
# XRANGE
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xrange
Returns stream entries matching a given range of IDs.
## Arguments
The key of the stream.
The stream entry ID to start from. Use "-" for the first available ID.
The stream entry ID to end at. Use "+" for the last available ID.
The maximum number of entries to return.
## Response
A list of stream entries, where each entry is a tuple containing the stream ID and its associated fields and values.
```py All entries theme={"system"}
result = redis.xrange("mystream", "-", "+")
```
```py Range with specific IDs theme={"system"}
result = redis.xrange("mystream", "1548149259438-0", "1548149259438-5")
```
```py Limited count theme={"system"}
result = redis.xrange("mystream", "-", "+", count=10)
```
```py theme={"system"}
{
"1548149259438-0": {
"field1": "value1",
"field2": "value2"
},
"1548149259438-1": {
"field1": "value3",
"field2": "value4"
}
}
```
# XREAD
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xread
Reads data from one or multiple streams, starting from the specified IDs.
## Arguments
A dictionary mapping stream keys to their starting IDs.
Use "\$" to read only new messages added after the command is issued.
The maximum number of messages to return per stream.
## Response
Returns a list where each element represents a stream and contains:
* The stream key
* A list of messages (ID and field-value pairs)
Returns an empty list if no data is available.
```py Single stream theme={"system"}
result = redis.xread({"mystream": "0-0"})
```
```py Multiple streams theme={"system"}
result = redis.xread({"stream1": "0-0", "stream2": "0-0"})
```
```py With count limit theme={"system"}
result = redis.xread({"mystream": "0-0"}, count=1)
```
```py Only new messages theme={"system"}
result = redis.xread({"mystream": "$"})
```
```py theme={"system"}
[
["mystream", [
["1638360173533-0", ["field1", "value1", "field2", "value2"]],
["1638360173533-1", ["field1", "value3", "field2", "value4"]]
]]
]
```
# XREADGROUP
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xreadgroup
Reads data from a stream as part of a consumer group.
## Arguments
The consumer group name.
The consumer name within the group.
A dictionary mapping stream keys to their starting IDs.
Use ">" to read messages never delivered to any consumer in the group.
The maximum number of messages to return per stream.
Don't add messages to the pending entries list (messages won't need acknowledgment).
## Response
Returns a list where each element represents a stream and contains:
* The stream key
* A list of messages (ID and field-value pairs)
Returns an empty list if no data is available.
```py Read new messages theme={"system"}
result = redis.xreadgroup("mygroup", "consumer1", {"mystream": ">"})
```
```py Multiple streams theme={"system"}
result = redis.xreadgroup("mygroup", "consumer1", {"stream1": ">", "stream2": "0-0"})
```
```py With count and noack theme={"system"}
result = redis.xreadgroup("mygroup", "consumer1", {"mystream": ">"}, count=5, noack=True)
```
```py Read pending messages theme={"system"}
result = redis.xreadgroup("mygroup", "consumer1", {"mystream": "0"})
```
```py theme={"system"}
[
["mystream", [
["1638360173533-0", ["field", "value1"]],
["1638360173533-1", ["field", "value2"]]
]]
]
```
# XREVRANGE
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xrevrange
Returns stream entries matching a given range of IDs in reverse order.
## Arguments
The key of the stream.
The stream entry ID to end at (highest ID).
The stream entry ID to start from (lowest ID).
The maximum number of entries to return.
## Response
Returns a list of stream entries in reverse chronological order. Each entry contains the ID and field-value pairs.
```py All entries (reverse order) theme={"system"}
result = redis.xrevrange("mystream", "+", "-")
```
```py Limited count theme={"system"}
result = redis.xrevrange("mystream", "+", "-", count=2)
```
```py Specific range theme={"system"}
result = redis.xrevrange("mystream", end="1638360173533-2", start="1638360173533-0")
```
```py theme={"system"}
[
["1638360173533-2", ["field1", "value5", "field2", "value6"]],
["1638360173533-1", ["field1", "value3", "field2", "value4"]],
["1638360173533-0", ["field1", "value1", "field2", "value2"]]
]
```
# XTRIM
Source: https://upstash.com/docs/redis/sdks/py/commands/stream/xtrim
Trims the stream by removing entries, keeping it within a maximum length or above a minimum entry ID.
## Arguments
The key of the stream.
The maximum number of entries to keep in the stream. Mutually exclusive with `minid`.
Use approximate trimming (more efficient). When `True`, Redis may keep slightly more entries than specified. Defaults to `True`.
The minimum ID to keep. Entries with IDs lower than this will be removed. Mutually exclusive with `maxlen`.
Limit how many entries will be trimmed at most.
## Response
The number of entries removed from the stream.
```py Approximate trim (default) theme={"system"}
result = redis.xtrim("mystream", maxlen=50)
```
```py Approximate trim (explicit) theme={"system"}
result = redis.xtrim("mystream", maxlen=50, approximate=True)
```
```py Exact trim theme={"system"}
result = redis.xtrim("mystream", maxlen=20, approximate=False)
```
```py Trim by minimum ID theme={"system"}
result = redis.xtrim("mystream", minid="1638360173533-0")
```
```py Approximate trim with limit theme={"system"}
result = redis.xtrim("mystream", maxlen=1000, approximate=True, limit=100)
```
# APPEND
Source: https://upstash.com/docs/redis/sdks/py/commands/string/append
Append a value to a string stored at key.
## Arguments
The key to append to.
The value to append.
## Response
How many characters were added to the string.
```py Example theme={"system"}
redis.set("key", "Hello")
assert redis.append("key", " World") == 11
assert redis.get("key") == "Hello World"
```
# DECR
Source: https://upstash.com/docs/redis/sdks/py/commands/string/decr
Decrement the integer value of a key by one
If the key does not exist, it is initialized as 0 before performing the operation. An error is returned if the key holds a value of the wrong type or a string that cannot be represented as an integer.
## Arguments
The key to decrement.
## Response
The value at the key after the decrementing.
```py Example theme={"system"}
redis.set("key", 6)
assert redis.decr("key") == 5
```
# DECRBY
Source: https://upstash.com/docs/redis/sdks/py/commands/string/decrby
Decrement the integer value of a key by a given number.
If the key does not exist, it is initialized as 0 before performing the operation. An error is returned if the key holds a value of the wrong type or a string that cannot be represented as an integer.
## Arguments
The key to decrement.
The amount to decrement by.
## Response
The value at the key after the decrementing.
```py Example theme={"system"}
redis.set("key", 6)
assert redis.decrby("key", 4) == 2
```
# GET
Source: https://upstash.com/docs/redis/sdks/py/commands/string/get
Return the value of the specified key or `None` if the key doesn't exist.
## Arguments
The key to get.
## Response
The response is the value stored at the key or `None` if the key doesn't exist.
```py Example theme={"system"}
redis.set("key", "value")
assert redis.get("key") == "value"
```
# GETDEL
Source: https://upstash.com/docs/redis/sdks/py/commands/string/getdel
Return the value of the specified key and delete the key.
## Arguments
The key to get.
## Response
The response is the value stored at the key or `None` if the key doesn't exist.
```py Example theme={"system"}
redis.set("key", "value")
assert redis.getdel("key") == "value"
assert redis.get("key") == None
```
# GETRANGE
Source: https://upstash.com/docs/redis/sdks/py/commands/string/getrange
Return a substring of value at the specified key.
## Arguments
The key to get.
The start index of the substring.
The end index of the substring.
## Response
The substring.
```py Example theme={"system"}
redis.set("key", "Hello World")
assert redis.getrange("key", 0, 4) == "Hello"
```
# GETSET
Source: https://upstash.com/docs/redis/sdks/py/commands/string/getset
Return the value of the specified key and replace it with a new value.
## Arguments
The key to get.
The new value to store.
## Response
The response is the value stored at the key or `None` if the key doesn't exist.
```py Example theme={"system"}
redis.set("key", "old-value")
assert redis.getset("key", "newvalue") == "old-value"
```
# INCR
Source: https://upstash.com/docs/redis/sdks/py/commands/string/incr
Increment the integer value of a key by one
If the key does not exist, it is initialized as 0 before performing the operation. An error is returned if the key holds a value of the wrong type or a string that cannot be represented as an integer.
## Arguments
The key to increment.
## Response
The value at the key after the incrementing.
```py Example theme={"system"}
redis.set("key", 6)
assert redis.incr("key") == 7
```
# INCRBY
Source: https://upstash.com/docs/redis/sdks/py/commands/string/incrby
Increment the integer value of a key by a given number.
If the key does not exist, it is initialized as 0 before performing the operation. An error is returned if the key holds a value of the wrong type or a string that cannot be represented as an integer.
## Arguments
The key to increment.
The amount to increment by.
## Response
The value at the key after the incrementing.
```py Example theme={"system"}
redis.set("key", 6)
assert redis.incrby("key", 4) == 10
```
# INCRBYFLOAT
Source: https://upstash.com/docs/redis/sdks/py/commands/string/incrbyfloat
Increment the float value of a key by a given number.
If the key does not exist, it is initialized as 0 before performing the operation. An error is returned if the key holds a value of the wrong type or a string that cannot be represented as a float.
## Arguments
The key to increment.
The amount to increment by.
## Response
The value at the key after the incrementing.
```py Example theme={"system"}
redis.set("key", 6)
# returns 10.5
redis.incrbyfloat("key", 4.5)
```
# MGET
Source: https://upstash.com/docs/redis/sdks/py/commands/string/mget
Load multiple keys from Redis in one go.
For billing purposes, this counts as a single command.
## Arguments
Multiple keys to load from Redis.
## Response
An array of values corresponding to the keys passed in. If a key doesn't exist, the value will be `None`.
```py Example theme={"system"}
redis.set("key1", "value1")
redis.set("key2", "value2")
assert redis.mget("key1", "key2") == ["value1", "value2"]
```
# MSET
Source: https://upstash.com/docs/redis/sdks/py/commands/string/mset
Set multiple keys in one go.
For billing purposes, this counts as a single command.
## Arguments
An object where the keys are the keys to set, and the values are the values to set.
## Response
`True` if the operation succeeded.
```py Example theme={"system"}
redis.mset({
"key1": "value1",
"key2": "value2"
})
```
# MSETNX
Source: https://upstash.com/docs/redis/sdks/py/commands/string/msetnx
Set multiple keys in one go unless they exist already.
For billing purposes, this counts as a single command.
## Arguments
An object where the keys are the keys to set, and the values are the values to set.
## Response
`1` if all keys were set, `0` if at least one key was not set.
```py Example theme={"system"}
assert redis.msetnx({
    "key1": "value1",
    "key2": "value2",
}) == 1
```
# SET
Source: https://upstash.com/docs/redis/sdks/py/commands/string/set
Set a key to hold a string value.
## Arguments
The key
The value. If this is not a string, it will be serialized to JSON before being
stored.
Instead of returning `True`, this will cause the command to return the old
value stored at key, or `None` when key did not exist.
Sets an expiration (in seconds) to the key.
Sets an expiration (in milliseconds) to the key.
Sets the UNIX timestamp (in seconds) at which the key expires.
Sets the UNIX timestamp (in milliseconds) at which the key expires.
Keeps the old expiration if the key already exists.
Only set the key if it does not already exist.
Only set the key if it already exists.
## Response
`True` if the key was set.
If `get` is specified, this will return the old value stored at key, or `None` when
the key did not exist.
```py Basic theme={"system"}
assert redis.set("key", "value") == True
assert redis.get("key") == "value"
```
```py With nx and xx theme={"system"}
# Only set the key if it does not already exist.
assert redis.set("key", "value", nx=True) == False
# Only set the key if it already exists.
assert redis.set("key", "value", xx=True) == True
```
```py With expiration theme={"system"}
# Set the key to expire in 10 seconds.
assert redis.set("key", "value", ex=10) == True
# Set the key to expire in 10000 milliseconds.
assert redis.set("key", "value", px=10000) == True
```
```py With old value theme={"system"}
# Get the old value stored at the key.
assert redis.set("key", "new-value", get=True) == "old-value"
```
# SETRANGE
Source: https://upstash.com/docs/redis/sdks/py/commands/string/setrange
Overwrites part of the value stored at key, starting at the specified offset.
The SETRANGE command in Redis is used to modify a portion of the value of a key by replacing a substring within the key's existing value. It allows you to update part of the string value associated with a specific key at a specified offset.
## Arguments
The name of the Redis key for which you want to modify the value.
The zero-based index in the value where you want to start replacing characters.
The new string that you want to insert at the specified offset in the existing value.
## Response
The length of the value after it was modified.
```py Example theme={"system"}
redis.set("key", "Hello World")
assert redis.setrange("key", 6, "Redis") == 11
assert redis.get("key") == "Hello Redis"
```
# STRLEN
Source: https://upstash.com/docs/redis/sdks/py/commands/string/strlen
Return the length of a string stored at a key.
The `STRLEN` command in Redis is used to find the length of the string value associated with a key. In Redis, keys can be associated with various data types, and one of these data types is the "string". The STRLEN command specifically operates on keys that are associated with string values.
## Arguments
The name of the Redis key.
## Response
The length of the value.
```py Example theme={"system"}
redis.set("key", "Hello World")
assert redis.strlen("key") == 11
```
# ZADD
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zadd
Add one or more members to a sorted set, or update their scores if they already exist.
## Arguments
The key of the sorted set.
A dictionary of elements and their scores.
Only update elements that already exist. Never add elements.
Only add new elements. Never update elements.
Update scores if the new score is greater than the old score.
Update scores if the new score is less than the old score.
Return the number of elements changed instead.
When this option is specified `ZADD` acts like `ZINCRBY`. Only one score-element pair can be specified in this mode.
## Response
The number of elements added to the sorted set, not including elements already existing for which the score was updated.
If `ch` was specified, the number of elements that were updated.
If `incr` was specified, the new score of `member`.
```py Simple theme={"system"}
# Add three elements
assert redis.zadd("myset", {
"one": 1,
"two": 2,
"three": 3
}) == 3
# No element is added since "one" and "two" already exist
assert redis.zadd("myset", {
"one": 1,
"two": 2
}, nx=True) == 0
# New element is not added since it does not exist
assert redis.zadd("myset", {
"new-element": 1
}, xx=True) == 0
# Only "three" is updated since new score was greater
assert redis.zadd("myset", {
"three": 10, "two": 0
}, gt=True) == 1
```
# ZCARD
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zcard
Returns the number of elements in the sorted set stored at key.
## Arguments
The key of the sorted set.
## Response
The number of elements in the sorted set.
```py Example theme={"system"}
redis.zadd("myset", {"one": 1, "two": 2, "three": 3})
assert redis.zcard("myset") == 3
```
# ZCOUNT
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zcount
Returns the number of elements in the sorted set stored at key, filtered by score.
## Arguments
The key of the sorted set.
The minimum score to filter by.
Use `-inf` to effectively ignore this filter.
Use `(number` to exclude the value.
The maximum score to filter by.
Use `+inf` to effectively ignore this filter.
Use `(number` to exclude the value.
## Response
The number of elements where score is between min and max.
```py Example theme={"system"}
redis.zadd("key", {"one": 1, "two": 2})
assert redis.zcount("key", "(1", "+inf") == 1
```
# ZDIFF
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zdiff
Returns the difference between sets.
## Arguments
The keys of the sets to compare.
Whether to include scores in the result.
## Response
The elements of the resulting set. If `withscores` is `True`, a list of (member, score) tuples is returned.
```py Simple theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"c": 3, "d": 4, "e": 5})
result = redis.zdiff(["key1", "key2"])
assert result == ["a", "b"]
```
```py With scores theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"c": 3, "d": 4, "e": 5})
result = redis.zdiff(["key1", "key2"], withscores=True)
assert result == [("a", 1), ("b", 2)]
```
# ZDIFFSTORE
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zdiffstore
Writes the difference between sets to a new key.
## Arguments
The key to write the difference to.
The keys to compare.
## Response
The number of elements in the resulting set.
```py Example theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"c": 3, "d": 4, "e": 5})
# a and b
assert redis.zdiffstore("dest", ["key1", "key2"]) == 2
```
# ZINCRBY
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zincrby
Increment the score of a member.
## Arguments
The key of the sorted set.
The increment to add to the score.
The member to increment.
## Response
The new score of `member` after the increment operation.
```py Example theme={"system"}
redis.zadd("myset", {"one": 1, "two": 2, "three": 3})
assert redis.zincrby("myset", 2, "one") == 3
```
# ZINTER
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zinter
Returns the intersection between sets.
## Arguments
The keys of the sets to compare.
The weights to apply to the sets.
The aggregation function to apply to the sets.
Whether to include scores in the result.
## Response
The members of the resulting set. If `withscores` is true, tuples of `(member, score)` are returned.
```py Simple theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"c": 3, "d": 4, "e": 5})
result = redis.zinter(["key1", "key2"])
assert result == ["c"]
```
```py Aggregation theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"a": 3, "b": 4, "c": 5})
result = redis.zinter(["key1", "key2"], withscores=True, aggregate="SUM")
assert result == [("a", 4), ("b", 6), ("c", 8)]
```
```py Weights theme={"system"}
redis.zadd("key1", {"a": 1})
redis.zadd("key2", {"a": 1})
result = redis.zinter(["key1", "key2"],
withscores=True,
aggregate="SUM",
weights=[2, 3])
assert result == [("a", 5)]
```
# ZINTERSTORE
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zinterstore
Calculates the intersection of sets and stores the result in a key.
## Arguments
The key to store the result in.
The keys of the sets to compare.
The weights to apply to the sets.
The aggregation function to apply to the sets.
## Response
The number of elements in the resulting set.
```py Simple theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"c": 3, "d": 4, "e": 5})
result = redis.zinterstore("dest", ["key1", "key2"])
assert result == 1
```
```py Aggregation theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"a": 3, "b": 4, "c": 5})
result = redis.zinterstore("dest", ["key1", "key2"], aggregate="SUM")
assert result == 3
```
```py Weights theme={"system"}
redis.zadd("key1", {"a": 1})
redis.zadd("key2", {"a": 1})
result = redis.zinterstore("dest", ["key1", "key2"],
                           aggregate="SUM",
                           weights=[2, 3])
assert result == 1
```
# ZLEXCOUNT
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zlexcount
Returns the number of elements in the sorted set stored at key, filtered by lexicographical range.
## Arguments
The key to get.
The lower lexicographical bound to filter by.
Use `-` to disable the lower bound.
The upper lexicographical bound to filter by.
Use `+` to disable the upper bound.
## Response
The number of elements matched.
```py Example theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zlexcount("myset", "-", "+") == 3
```
# ZMSCORE
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zmscore
Returns the scores of multiple members.
## Arguments
The key of the sorted set.
The members to get the scores of.
## Response
The scores of the members, in the same order. `None` for members that do not exist.
```py Example theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zlexcount("myset", "-", "+") == 3
```
# ZPOPMAX
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zpopmax
Removes and returns up to count members with the highest scores in the sorted set stored at key.
## Arguments
The key of the sorted set
The number of members to pop
## Response
A list of tuples containing the popped members and their scores
```py Example theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zpopmax("myset") == [("c", 3)]
```
# ZPOPMIN
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zpopmin
Removes and returns up to count members with the lowest scores in the sorted set stored at key.
## Arguments
The key of the sorted set
The number of members to pop
## Response
A list of tuples containing the popped members and their scores
```py Example theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zpopmin("myset") == [("a", 1)]
```
# ZRANDMEMBER
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zrandmember
Returns one or more random members from a sorted set, optionally with their scores.
## Arguments
The key of the sorted set
The number of members to return
Whether to return the scores along with the members
## Response
The random member(s) from the sorted set
If no count is specified, a single member is returned. If count is specified, a list of members is returned.
If `withscores` is true, members are returned as `(member, score)` tuples.
```py Example theme={"system"}
redis.zadd("myset", {"one": 1, "two": 2, "three": 3})
# "one"
redis.zrandmember("myset")
# ["one", "three"]
redis.zrandmember("myset", 2)
```
# ZRANGE
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zrange
Returns the specified range of elements in the sorted set stored at key.
## Arguments
The key to get.
The minimum value to include.
The maximum value to include.
"-inf" and "+inf" are also valid values for the ranges
Whether to include the scores in the response.
Whether to reverse the order of the response.
How to interpret the range: by index (default), by score (`BYSCORE`), or lexicographically (`BYLEX`).
The offset to start from.
The number of elements to return.
## Response
The values in the specified range.
If `withscores` is true, the members will be tuples of the form `(member, score)`.
```py Example theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zrange("myset", 0, 1) == ["a", "b"]
```
```py Reverse theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zrange("myset", 0, 1, rev=True) == ["c", "b"]
```
```py Sorted theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zrange("myset", 0, 1, sortby="BYSCORE") == ["a", "b"]
```
```py With scores theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zrange("myset", 0, 1, withscores=True) == [("a", 1), ("b", 2)]
```
# ZRANK
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zrank
Returns the rank of a member
## Arguments
The key to get.
The member to get the rank of.
## Response
The rank of the member.
```py Example theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zrank("myset", "a") == 0
assert redis.zrank("myset", "d") == None
assert redis.zrank("myset", "b") == 1
assert redis.zrank("myset", "c") == 2
```
# ZREM
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zrem
Remove one or more members from a sorted set
## Arguments
The key of the sorted set
One or more members to remove
## Response
The number of members removed from the sorted set.
```py Single theme={"system"}
redis.zadd("myset", {"one": 1, "two": 2, "three": 3})
assert redis.zrem("myset", "one", "four") == 1
```
# ZREMRANGEBYLEX
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zremrangebylex
Remove all members in a sorted set between the given lexicographical range.
## Arguments
The key of the sorted set
The minimum lexicographical value to remove.
The maximum lexicographical value to remove.
## Response
The number of elements removed from the sorted set.
```py Example theme={"system"}
redis.zremrangebylex("key", "alpha", "omega")
```
# ZREMRANGEBYRANK
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zremrangebyrank
Remove all members in a sorted set between the given ranks.
## Arguments
The key of the sorted set
The minimum rank to remove.
The maximum rank to remove.
## Response
The number of elements removed from the sorted set.
```py Example theme={"system"}
redis.zremrangebyrank("key", 4, 20)
```
# ZREMRANGEBYSCORE
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zremrangebyscore
Remove all members in a sorted set between the given scores.
## Arguments
The key of the sorted set
The minimum score to remove.
The maximum score to remove.
## Response
The number of elements removed from the sorted set.
```py Example theme={"system"}
redis.zremrangebyscore("key", 2, 5)
```
# ZREVRANK
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zrevrank
Returns the rank of a member in a sorted set, with scores ordered from high to low.
## Arguments
The key to get.
The member to get the reverse rank of.
## Response
The reverse rank of the member.
```py Example theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zrevrank("myset", "a") == 2
```
# ZSCAN
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zscan
Scan a sorted set
Return a paginated list of members and their scores from a sorted set, matching a pattern.
## Arguments
The key of the sorted set.
The cursor, use `0` in the beginning and then use the returned cursor for subsequent calls.
Glob-style pattern to filter by members.
Number of members to return per call.
## Response
The new cursor and a list of `(member, score)` tuples.
If the new cursor is `0` the iteration is complete.
```py Example theme={"system"}
# Get all elements of an ordered set.
cursor = 0
results = []
while True:
cursor, keys = redis.zscan("myzset", cursor, match="*")
results.extend(keys)
if cursor == 0:
break
for key, score in results:
print(key, score)
```
# ZSCORE
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zscore
Returns the score of a member.
## Arguments
The key of the sorted set.
The member to get the score of.
## Response
The score of the member. `None` if the member does not exist.
```py Example theme={"system"}
redis.zadd("myset", {"a": 1, "b": 2, "c": 3})
assert redis.zscore("myset", "a") == 1
```
# ZUNION
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zunion
Returns the union between sets.
## Arguments
The keys of the sets to compare.
The weights to apply to the sets.
The aggregation function to apply to the sets.
Whether to include scores in the result.
## Response
The members of the resulting set. If `withscores` is true, tuples of `(member, score)` are returned.
```py Simple theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"c": 3, "d": 4, "e": 5})
result = redis.zunion(["key1", "key2"])
assert result == ["a", "b", "c", "d", "e"]
```
```py Aggregation theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"a": 3, "b": 4, "c": 5})
result = redis.zunion(["key1", "key2"], withscores=True, aggregate="SUM")
assert result == [("a", 4), ("b", 6), ("c", 8)]
```
```py Weights theme={"system"}
redis.zadd("key1", {"a": 1})
redis.zadd("key2", {"a": 1})
result = redis.zunion(["key1", "key2"],
withscores=True,
aggregate="SUM",
weights=[2, 3])
assert result == [("a", 5)]
```
# ZUNIONSTORE
Source: https://upstash.com/docs/redis/sdks/py/commands/zset/zunionstore
Writes the union between sets to a new key.
## Arguments
The key to store the resulting set in.
The keys of the sets to compare.
The weights to apply to the sets.
The aggregation function to apply to the sets.
## Response
The number of elements in the resulting set.
```py Simple theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"c": 3, "d": 4, "e": 5})
result = redis.zunionstore("dest", ["key1", "key2"])
assert result == 5
```
```py Aggregation theme={"system"}
redis.zadd("key1", {"a": 1, "b": 2, "c": 3})
redis.zadd("key2", {"a": 3, "b": 4, "c": 5})
result = redis.zunionstore("dest", ["key1", "key2"], aggregate="SUM")
assert result == 3
```
```py Weights theme={"system"}
redis.zadd("key1", {"a": 1})
redis.zadd("key2", {"a": 1})
result = redis.zunionstore("dest", ["key1", "key2"],
                           aggregate="SUM",
                           weights=[2, 3])
assert result == 1
```
# Features
Source: https://upstash.com/docs/redis/sdks/py/features
### BITFIELD and BITFIELD\_RO
These two commands are a special case: calling `bitfield` or `bitfield_ro` returns an instance
of the `BITFIELD` or `BITFIELD_RO` class, respectively, on which subcommands can be chained.
Use the `execute` function to run the commands.
```python theme={"system"}
redis.bitfield("test_key") \
.incrby(encoding="i8", offset=100, increment=100) \
.overflow("SAT") \
.incrby(encoding="i8", offset=100, increment=100) \
.execute()
redis.bitfield_ro("test_key_2") \
.get(encoding="u8", offset=0) \
.get(encoding="u8", offset="#1") \
.execute()
```
### Custom commands
If you want to run a command that hasn't been implemented, you can use the
`execute` function of your client instance and pass the command as a `list`.
```python theme={"system"}
redis.execute(["XLEN", "test_stream"])
```
# Encoding
Although Redis can store invalid JSON data, there might be problems with the
deserialization. To avoid this, the Upstash REST proxy is capable of encoding
the data as base64 on the server and then sending it to the client to be
decoded.
For very large data, this can add a few milliseconds in latency. So, if you're
sure that your data is valid JSON, you can set `rest_encoding` to `None`.
# Retry mechanism
upstash-redis has a fallback mechanism in case of network or API issues. By
default, if a request fails it'll retry once, 3 seconds after the error. If you
want to customize that, set `rest_retries` and `rest_retry_interval` (in
seconds).
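Both `rest_encoding` and the retry settings are plain constructor parameters. A minimal sketch (the URL and token are placeholders, and the numeric values are arbitrary examples):

```python
from upstash_redis import Redis

redis = Redis(
    url="UPSTASH_REDIS_REST_URL",
    token="UPSTASH_REDIS_REST_TOKEN",
    rest_encoding=None,     # skip base64 encoding for known-valid JSON data
    rest_retries=2,         # retry a failed request twice instead of once
    rest_retry_interval=1,  # wait 1 second between retries
)
```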
# Pipelines & Transactions
If you want to submit commands in batches to reduce the number of roundtrips, you can utilize pipelining or
transactions. The difference between pipelines and transactions is that transactions are atomic: no other
command is executed during that transaction. In pipelines there is no such guarantee.
To use a pipeline, simply call the `pipeline` method:
```python theme={"system"}
pipeline = redis.pipeline()
pipeline.set("foo", 1)
pipeline.incr("foo")
pipeline.get("foo")
result = pipeline.exec()
print(result)
# prints [True, 2, '2']
```
For transactions, use `multi`:
```python theme={"system"}
pipeline = redis.multi()
pipeline.set("foo", 1)
pipeline.incr("foo")
pipeline.get("foo")
result = pipeline.exec()
print(result)
# prints [True, 2, '2']
```
You can also chain the commands:
```python theme={"system"}
pipeline = redis.pipeline()
pipeline.set("foo", 1).incr("foo").get("foo")
result = pipeline.exec()
print(result)
# prints [True, 2, '2']
```
# Telemetry
This library sends anonymous telemetry data to help us improve your experience.
We collect the following:
* SDK version
* Platform (Vercel, AWS)
* Python Runtime version
You can opt out by passing `allow_telemetry=False` when initializing the Redis client:
```py theme={"system"}
redis = Redis(
# ...,
allow_telemetry=False,
)
```
# Getting Started
Source: https://upstash.com/docs/redis/sdks/py/gettingstarted
## Install
### PyPI
```bash theme={"system"}
pip install upstash-redis
```
## Usage
To be able to use upstash-redis, you need to create a database on
[Upstash](https://console.upstash.com/) and grab `UPSTASH_REDIS_REST_URL` and
`UPSTASH_REDIS_REST_TOKEN` from the console.
```python theme={"system"}
# for sync client
from upstash_redis import Redis
redis = Redis(url="UPSTASH_REDIS_REST_URL", token="UPSTASH_REDIS_REST_TOKEN")
# for async client
from upstash_redis.asyncio import Redis
redis = Redis(url="UPSTASH_REDIS_REST_URL", token="UPSTASH_REDIS_REST_TOKEN")
```
Or, if you want to automatically load the credentials from the environment:
```python theme={"system"}
# for sync use
from upstash_redis import Redis
redis = Redis.from_env()
# for async use
from upstash_redis.asyncio import Redis
redis = Redis.from_env()
```
If you are in a serverless environment that allows it, it's recommended to
initialize the client outside the request handler so it can be reused while your
function is still hot.
Running commands might look like this:
```python theme={"system"}
from upstash_redis import Redis
redis = Redis.from_env()
def main():
redis.set("a", "b")
print(redis.get("a"))
# or for async context:
from upstash_redis.asyncio import Redis
redis = Redis.from_env()
async def main():
await redis.set("a", "b")
print(await redis.get("a"))
```
# Overview
Source: https://upstash.com/docs/redis/sdks/py/overview
`upstash-redis` is a connectionless, HTTP-based Redis client for Python,
designed to be used in serverless and serverful environments such as:
* AWS Lambda
* Vercel Serverless
* Google Cloud Functions
* and other environments where HTTP is preferred over TCP.
Inspired by other Redis clients like
[@upstash/redis](https://github.com/upstash/upstash-redis) and
[redis-py](https://github.com/redis/redis-py), the goal of this SDK is to
provide a simple way to use Redis over the
[Upstash REST API](https://docs.upstash.com/redis/features/restapi).
The SDK is currently compatible with Python 3.8 and above.
You can find the GitHub repository [here](https://github.com/upstash/redis-python).
# Ratelimiting Algorithms
Source: https://upstash.com/docs/redis/sdks/ratelimit-py/algorithms
## Fixed Window
This algorithm divides time into fixed durations/windows. For example each
window is 10 seconds long. When a new request comes in, the current time is used
to determine the window and a counter is increased. If the counter is larger
than the set limit, the request is rejected.
In fixed & sliding window algorithms, the reset time is based on fixed time boundaries (which depend on the period), not on when the first request was made. So two requests made right before the window ends still count toward the current window, and limits reset at the start of the next window.
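The counter logic can be sketched in a few lines of plain Python. This is an illustration of the algorithm itself, not the SDK's Redis-backed implementation; the function name and the in-memory `counters` dict are invented for the example:

```python
import time

def fixed_window_allow(counters, identifier, limit, window, now=None):
    """Count the request in the current fixed window; allow it while
    the counter stays within `limit`."""
    now = time.time() if now is None else now
    # Every request in the same `window`-second slice shares one counter,
    # regardless of when the first request in that slice arrived.
    window_id = int(now // window)
    key = (identifier, window_id)
    counters[key] = counters.get(key, 0) + 1
    return counters[key] <= limit
```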
### Pros
* Very cheap in terms of data size and computation
* Newer requests are not starved due to a high burst in the past
### Cons
* Can cause high bursts at the window boundaries to leak through
* Causes request stampedes if many users are trying to access your server,
whenever a new window begins
### Usage
```python theme={"system"}
from upstash_ratelimit import Ratelimit, FixedWindow
from upstash_redis import Redis
ratelimit = Ratelimit(
redis=Redis.from_env(),
limiter=FixedWindow(max_requests=10, window=10),
)
```
## Sliding Window
Builds on top of fixed window but instead of a fixed window, we use a rolling
window. Take this example: We have a rate limit of 10 requests per 1 minute. We
divide time into 1 minute slices, just like in the fixed window algorithm.
Window 1 will be from 00:00:00 to 00:01:00 (HH:MM:SS). Let's assume it is
currently 00:01:15 and we have received 4 requests in the first window and 5
requests so far in the current window. The approximation to determine if the
request should pass works like this:
```python theme={"system"}
limit = 10
# 4 requests from the old window, weighted, plus requests in the current window
rate = 4 * ((60 - 15) / 60) + 5  # = 8
allowed = rate < limit  # True means we should allow the request
```
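The same approximation can be written as a small, runnable helper (the function name is ours; it simply restates the weighted-count formula above):

```python
def sliding_window_rate(prev_count, curr_count, window, elapsed):
    """Weight the previous window's count by the fraction of it that
    still overlaps the rolling window, then add the current count."""
    return prev_count * ((window - elapsed) / window) + curr_count

# 4 requests in the previous 60 s window, 5 in the current one, 15 s elapsed
rate = sliding_window_rate(4, 5, 60, 15)  # 8.0
allowed = rate < 10                       # the request may pass
```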
### Pros
* Solves the issue near boundary from fixed window.
### Cons
* More expensive in terms of storage and computation
* It's only an approximation because it assumes a uniform request flow in the
previous window
### Usage
```python theme={"system"}
from upstash_ratelimit import Ratelimit, SlidingWindow
from upstash_redis import Redis
ratelimit = Ratelimit(
redis=Redis.from_env(),
limiter=SlidingWindow(max_requests=10, window=10),
)
```
The `reset` field in the [`limit`](/redis/sdks/ratelimit-py/gettingstarted) method of the sliding window does not
provide an exact reset time. Instead, the reset time is the start time of
the next window.
## Token Bucket
Consider a bucket filled with a maximum number of tokens that refills constantly
at a fixed rate per interval. Every request removes one token from the bucket, and
if there is no token to take, the request is rejected.
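The refill-and-take logic can be sketched as a small helper. Again, this is an illustration of the algorithm, not the SDK's Redis-backed implementation; the function name and the in-memory `state` dict are invented for the example:

```python
def token_bucket_allow(state, identifier, max_tokens, refill_rate, interval, now):
    """Take one token for the request, refilling `refill_rate` tokens
    per `interval` seconds up to `max_tokens`."""
    tokens, updated = state.get(identifier, (max_tokens, now))
    # Refill once per full interval that has elapsed, capped at the bucket size
    refills = int((now - updated) // interval)
    tokens = min(max_tokens, tokens + refills * refill_rate)
    updated += refills * interval
    allowed = tokens >= 1
    state[identifier] = (tokens - 1 if allowed else tokens, updated)
    return allowed
```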
### Pros
* Bursts of requests are smoothed out and you can process them at a constant
rate.
* Allows setting a higher initial burst limit by setting maximum number of
tokens higher than the refill rate
### Cons
* Expensive in terms of computation
### Usage
```python theme={"system"}
from upstash_ratelimit import Ratelimit, TokenBucket
from upstash_redis import Redis
ratelimit = Ratelimit(
redis=Redis.from_env(),
limiter=TokenBucket(max_tokens=10, refill_rate=5, interval=10),
)
```
# Features
Source: https://upstash.com/docs/redis/sdks/ratelimit-py/features
## Block until ready
You also have the option to try and wait for a request to pass in the given
timeout.
It is very similar to the `limit` method and takes an identifier and returns the
same response. However if the current limit has already been exceeded, it will
automatically wait until the next window starts and will try again. Setting the
timeout parameter (in seconds) will cause the method to block a finite amount of
time.
```python theme={"system"}
from upstash_ratelimit import Ratelimit, SlidingWindow
from upstash_redis import Redis
# Create a new ratelimiter, that allows 10 requests per 10 seconds
ratelimit = Ratelimit(
redis=Redis.from_env(),
limiter=SlidingWindow(max_requests=10, window=10),
)
response = ratelimit.block_until_ready("id", timeout=30)
if not response.allowed:
print("Unable to process, even after 30 seconds")
else:
do_expensive_calculation()
print("Here you go!")
```
## Using multiple limits
Sometimes you might want to apply different limits to different users. For
example you might want to allow 10 requests per 10 seconds for free users, but
60 requests per 10 seconds for paid users.
Here's how you could do that:
```python theme={"system"}
from upstash_ratelimit import Ratelimit, SlidingWindow
from upstash_redis import Redis
class MultiRL:
def __init__(self) -> None:
redis = Redis.from_env()
self.free = Ratelimit(
redis=redis,
limiter=SlidingWindow(max_requests=10, window=10),
prefix="ratelimit:free",
)
self.paid = Ratelimit(
redis=redis,
limiter=SlidingWindow(max_requests=60, window=10),
prefix="ratelimit:paid",
)
# Create a new ratelimiter, that allows 10 requests per 10 seconds
ratelimit = MultiRL()
ratelimit.free.limit("userIP")
ratelimit.paid.limit("userIP")
```
## Custom Rates
When rate limiting, you may want different requests to consume different amounts of tokens.
This could be useful when processing batches of requests where you want to rate limit based
on items in the batch or when you want to rate limit based on the number of tokens.
To achieve this, you can simply pass `rate` parameter when calling the limit method:
```python theme={"system"}
from upstash_ratelimit import Ratelimit, FixedWindow
from upstash_redis import Redis
ratelimit = Ratelimit(
redis=Redis.from_env(),
limiter=FixedWindow(max_requests=10, window=10),
)
# pass rate as 5 to subtract 5 from the number of
# allowed requests in the window:
identifier = "api"
response = ratelimit.limit(identifier, rate=5)
```
# Getting Started
Source: https://upstash.com/docs/redis/sdks/ratelimit-py/gettingstarted
## Install
```bash theme={"system"}
pip install upstash-ratelimit
```
## Create database
To be able to use upstash-ratelimit, you need to create a database on
[Upstash](https://console.upstash.com/).
## Usage
For possible Redis client configurations, have a look at the
[Redis SDK repository](https://github.com/upstash/redis-python).
> This library supports asyncio as well. To use it, import the asyncio-based
> variant from the `upstash_ratelimit.asyncio` module.
```python theme={"system"}
from upstash_ratelimit import Ratelimit, FixedWindow
from upstash_redis import Redis
# Create a new ratelimiter, that allows 10 requests per 10 seconds
ratelimit = Ratelimit(
redis=Redis.from_env(),
limiter=FixedWindow(max_requests=10, window=10),
# Optional prefix for the keys used in Redis. This is useful
# if you want to share a Redis instance with other applications
# and want to avoid key collisions. The default prefix is
# "@upstash/ratelimit"
prefix="@upstash/ratelimit",
)
# Use a constant string to limit all requests with a single ratelimit
# Or use a user ID, API key or IP address for individual limits.
identifier = "api"
response = ratelimit.limit(identifier)
if not response.allowed:
print("Unable to process at this time")
else:
do_expensive_calculation()
print("Here you go!")
```
The `limit` method also returns the following metadata:
```python theme={"system"}
@dataclasses.dataclass
class Response:
allowed: bool
"""
Whether the request may pass(`True`) or exceeded the limit(`False`)
"""
limit: int
"""
Maximum number of requests allowed within a window.
"""
remaining: int
"""
How many requests the user has left within the current window.
"""
reset: float
"""
Unix timestamp in seconds when the limits are reset
"""
```
# Overview
Source: https://upstash.com/docs/redis/sdks/ratelimit-py/overview
`upstash-ratelimit` is a connectionless rate limiting library for Python,
designed to be used in serverless environments such as:
* AWS Lambda
* Vercel Serverless
* Google Cloud Functions
* and other environments where HTTP is preferred over TCP.
The SDK is currently compatible with Python 3.8 and above.
You can find the GitHub repository [here](https://github.com/upstash/ratelimit-python).
# Ratelimiting Algorithms
Source: https://upstash.com/docs/redis/sdks/ratelimit-ts/algorithms
We provide different algorithms to use out of the box. Each has pros and cons.
## Fixed Window
This algorithm divides time into fixed durations/windows. For example each
window is 10 seconds long. When a new request comes in, the current time is used
to determine the window and a counter is increased. If the counter is larger
than the set limit, the request is rejected.
In fixed & sliding window algorithms, the reset time is based on fixed time boundaries (which depend on the period), not on when the first request was made. So two requests made right before the window ends still count toward the current window, and limits reset at the start of the next window.
### Pros
* Very cheap in terms of data size and computation
* Newer requests are not starved due to a high burst in the past
### Cons
* Can cause high bursts at the window boundaries to leak through
* Causes request stampedes if many users are trying to access your server,
whenever a new window begins
### Usage
Create a new ratelimiter, that allows 10 requests per 10 seconds.
```ts theme={"system"}
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.fixedWindow(10, "10 s"),
});
```
```ts theme={"system"}
const ratelimit = new MultiRegionRatelimit({
redis: [
new Redis({
/* auth */
}),
new Redis({
/* auth */
})
],
limiter: MultiRegionRatelimit.fixedWindow(10, "10 s"),
});
```
## Sliding Window
Builds on top of fixed window but instead of a fixed window, we use a rolling
window. Take this example: We have a rate limit of 10 requests per 1 minute. We
divide time into 1 minute slices, just like in the fixed window algorithm.
Window 1 will be from 00:00:00 to 00:01:00 (HH:MM:SS). Let's assume it is
currently 00:01:15 and we have received 4 requests in the first window and 5
requests so far in the current window. The approximation to determine if the
request should pass works like this:
```ts theme={"system"}
const limit = 10;
// 4 requests from the old window, weighted, plus requests in the current window
const rate = 4 * ((60 - 15) / 60) + 5; // = 8
const allowed = rate < limit; // true means we should allow the request
```
### Pros
* Solves the issue near boundary from fixed window.
### Cons
* More expensive in terms of storage and computation
* It's only an approximation, because it assumes a uniform request flow in the
  previous window, but this is fine in most cases
### Usage
Create a new ratelimiter, that allows 10 requests per 10 seconds.
```ts theme={"system"}
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.slidingWindow(10, "10 s"),
});
```
**Warning:** Using the sliding window algorithm with the multi-region setup results in a large number of
commands in Redis and long request processing times. If you want to keep the number of commands
low, we recommend using the [fixed window algorithm in the multi-region setup](/redis/sdks/ratelimit-ts/algorithms#fixed-window).
```ts theme={"system"}
const ratelimit = new MultiRegionRatelimit({
redis: [
new Redis({
/* auth */
}),
new Redis({
/* auth */
})
],
limiter: MultiRegionRatelimit.slidingWindow(10, "10 s"),
});
```
The `reset` field in the [`limit`](/redis/sdks/ratelimit-ts/methods#limit) and [`getRemaining`](/redis/sdks/ratelimit-ts/methods#getremaining) methods of the sliding window does not
provide an exact reset time. Instead, the reset time is the start time of
the next window.
## Token Bucket
Consider a bucket filled with `{maxTokens}` tokens that refills constantly at
`{refillRate}` per `{interval}`. Every request will remove one token from the
bucket and if there is no token to take, the request is rejected.
### Pros
* Bursts of requests are smoothed out and you can process them at a constant
rate.
* Allows setting a higher initial burst limit by setting `maxTokens` higher than
  `refillRate`
### Cons
* Expensive in terms of computation
### Usage
Create a new bucket, that refills 5 tokens every 10 seconds and has a maximum
size of 10.
```ts theme={"system"}
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.tokenBucket(5, "10 s", 10),
analytics: true,
});
```
*Not yet supported for `MultiRegionRatelimit`*
# Costs
Source: https://upstash.com/docs/redis/sdks/ratelimit-ts/costs
This page details the cost of the Ratelimit algorithms in terms of the number of Redis commands. Note that these are calculated for Regional Ratelimits. For [Multi Region Ratelimit](/redis/sdks/ratelimit-ts/features#multi-region), costs will be higher. Additionally, if a Global Upstash Redis is used as the database, the number of commands should be calculated as `(1 + readRegionCount) * writeCommandCount + readCommandCount`, plus 1 if analytics is enabled.
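The Global Redis formula can be checked with a small helper (the function name is ours; it simply restates the formula above):

```python
def global_redis_command_count(write_commands, read_commands,
                               read_regions, analytics=False):
    # Writes run in the primary region and are replicated to every read region
    total = (1 + read_regions) * write_commands + read_commands
    # Analytics, when enabled, adds one extra command per call
    return total + (1 if analytics else 0)

# e.g. a step issuing 2 writes and 1 read against a Global database
# with 2 read regions, analytics enabled: (1 + 2) * 2 + 1 + 1 = 8
```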
The Rate Limit SDK minimizes Redis calls to reduce latency overhead and cost. Number of commands executed by the Rate Limit algorithm depends on the chosen algorithm, as well as the state of the algorithm and the caching.
#### Algorithm State
By state of the algorithm, we refer to the entry in our Redis store regarding some identifier `ip1`. You can imagine that there is a state for every identifier. We name these states in the following manner for the purpose of attributing costs to each one:
| State | Success | Explanation |
| ------------ | ------- | ------------------------------------------------------------------------ |
| First | true | First time the Ratelimit was called with identifier `ip1` |
| Intermediate | true | Second or some other time the Ratelimit was called with identifier `ip1` |
| Rate-Limited | false | Requests with identifier `ip1` which are rate limited. |
For instance, the first time we call the algorithm with `ip1`, `PEXPIRE` is called so that the key expires after some time. In the following calls, we still use the same script but don't call `PEXPIRE`. In the rate-limited state, we may avoid using Redis altogether if we can make use of the cache.
#### Cache Result
We distinguish the two cases when the identifier `ip1` is found in cache, resulting in a "hit" and the case when the identifier `ip1` is not found in the cache, resulting in a "miss". The cache only exists in the runtime environment and is independent of the Redis database. The state of the cache is especially relevant for serverless contexts, where the cache will usually be empty because of a cold start.
| Result | Explanation |
| ------ | ------------------------------------------------------------------------------------------------------- |
| Hit | Identifier `ip1` is found in the runtime cache |
| Miss | Identifier `ip1` is not found in cache or the value in the cache doesn't block (rate-limit) the request |
An identifier is saved in the cache only when a request is rate limited after a call to the Redis database. The request to Redis returns a timestamp for the time when such a request won't be rate limited anymore. We save this timestamp in the cache and this allows us to reject any request before this timestamp without having to consult the Redis database.
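The caching behavior described above can be sketched as a simplified in-memory structure (an illustration only, not the SDK's actual cache; the class and method names are invented):

```python
import time

class BlockedUntilCache:
    """In-memory map of identifier -> unix timestamp before which
    requests can be rejected without consulting Redis."""

    def __init__(self):
        self._reset_at = {}

    def remember(self, identifier, reset_timestamp):
        # Called after Redis reports the request as rate limited
        self._reset_at[identifier] = reset_timestamp

    def is_blocked(self, identifier, now=None):
        now = time.time() if now is None else now
        reset = self._reset_at.get(identifier)
        return reset is not None and now < reset
```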
See the [section on caching](/redis/sdks/ratelimit-ts/features) for more details.
# Costs
### `limit()`
#### Fixed Window
| Cache Result | Algorithm State | Command Count | Commands |
| ------------ | --------------- | ------------- | ------------------- |
| Hit/Miss | First | 3 | EVAL, INCR, PEXPIRE |
| Hit/Miss | Intermediate | 2 | EVAL, INCR |
| Miss | Rate-Limited | 2 | EVAL, INCR |
| Hit | Rate-Limited | 0 | *utilized cache* |
#### Sliding Window
| Cache Result | Algorithm State | Command Count | Commands |
| ------------ | --------------- | ------------- | ----------------------------- |
| Hit/Miss | First | 5 | EVAL, GET, GET, INCR, PEXPIRE |
| Hit/Miss | Intermediate | 4 | EVAL, GET, GET, INCR |
| Miss | Rate-Limited | 3 | EVAL, GET, GET |
| Hit | Rate-Limited | 0 | *utilized cache* |
#### Token Bucket
| Cache Result | Algorithm State | Command Count | Commands |
| ------------ | ------------------ | ------------- | -------------------------- |
| Hit/Miss | First/Intermediate | 4 | EVAL, HMGET, HSET, PEXPIRE |
| Miss | Rate-Limited | 2 | EVAL, HMGET |
| Hit | Rate-Limited | 0 | *utilized cache* |
### `getRemaining()`
This method doesn't use the cache and has no state it depends on. Therefore, every call
results in the same number of commands in Redis.
| Algorithm | Command Count | Commands |
| -------------- | ------------- | -------------- |
| Fixed Window | 2 | EVAL, GET |
| Sliding Window | 3 | EVAL, GET, GET |
| Token Bucket | 2 | EVAL, HMGET |
### `resetUsedTokens()`
This method starts with a `SCAN` command and deletes every matching key with `DEL` commands:
| Algorithm | Command Count | Commands |
| -------------- | ------------- | -------------------- |
| Fixed Window | 3 | EVAL, SCAN, DEL |
| Sliding Window | 4 | EVAL, SCAN, DEL, DEL |
| Token Bucket | 3 | EVAL, SCAN, DEL |
### `blockUntilReady()`
Works the same as `limit()`.
# Deny List
Enabling deny lists introduces a cost of 2 additional commands per `limit` call.
Values passed in `identifier`, `ip`, `userAgent` and `country` are checked with a single `SMISMEMBER` command.
The other command is `TTL`, which checks the status of the current IP deny list to determine whether
it is expired, valid or disabled.
If [Auto IP deny list](/redis/sdks/ratelimit-ts/features#auto-ip-deny-list) is enabled,
the Ratelimit SDK updates the IP deny list every day, in the first `limit` invocation after 2 AM UTC.
This consumes 9 commands per day.
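The daily-update check could look roughly like this (hypothetical logic for illustration only; the SDK's actual implementation may differ):

```ts theme={"system"}
// Decide whether the deny list should be refreshed: refresh in the
// first call after the most recent 2 AM UTC boundary.
function shouldUpdateDenyList(lastUpdateMs: number, nowMs: number): boolean {
  const now = new Date(nowMs);
  // Today's 2 AM UTC
  const todayBoundary = Date.UTC(
    now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate(), 2
  );
  // If we are before 2 AM UTC today, the relevant boundary is yesterday's
  const boundary =
    nowMs >= todayBoundary ? todayBoundary : todayBoundary - 24 * 60 * 60 * 1000;
  return lastUpdateMs < boundary;
}
```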
If a value is found in the deny list in Redis, the client saves this value in the cache and denies
any further requests with that value for a minute without calling Redis (except for analytics).
# Analytics
If analytics is enabled, every call of `limit` results in 1 more command, since `ZINCRBY` is called to update the analytics.
# Features
Source: https://upstash.com/docs/redis/sdks/ratelimit-ts/features
## Caching
Under extreme load or a denial of service attack, it might be too expensive to call
Redis for every incoming request, just to find out that it should be blocked because
it has exceeded the limit.
You can use an ephemeral in-memory cache by passing a variable of type
`Map` as the `ephemeralCache` option:
```ts theme={"system"}
const cache = new Map(); // must be outside of your serverless function handler
// ...
const ratelimit = new Ratelimit({
// ...
ephemeralCache: cache,
});
```
By default, `ephemeralCache` will be initialized with `new Map()` if no value is provided.
To disable the cache, pass `ephemeralCache: false`.
If enabled, the ratelimiter will keep track of blocked identifiers and their
reset timestamps. When a request with identifier `ip1` is received before the reset time of
`ip1`, the request is denied without calling Redis, and [the `reason` field of the
limit response will be `cacheBlock`](/redis/sdks/ratelimit-ts/methods#limit).
In serverless environments this is only possible if you create the cache or ratelimiter
instance outside of your handler function. While the function is still hot, the
ratelimiter can block requests without having to request data from Redis, thus
saving time and money.
See the section on how caching impacts the cost in the
[costs page](/redis/sdks/ratelimit-ts/costs#cache-result).
## Timeout
You can define an optional timeout in milliseconds, after which the request will
be allowed to pass regardless of what the current limit is. This can be useful
if you don't want network issues to cause your application to reject requests.
```ts theme={"system"}
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.slidingWindow(10, "10 s"),
timeout: 1000, // 1 second
analytics: true,
});
```
The default timeout is 5 seconds if no `timeout` is provided. When a response
succeeds because of a timeout, this is shown in
[the `reason` field of the limit method](/redis/sdks/ratelimit-ts/methods#limit).
## Analytics & Dashboard
Another feature of the ratelimiter is collecting analytics.
By default, analytics is disabled. To enable analytics, simply set the `analytics` parameter to `true`:
```js theme={"system"}
const ratelimit = new Ratelimit({
redis,
analytics: true,
limiter: Ratelimit.slidingWindow(60, "10s"),
});
```
Every time we call `ratelimit.limit()`, analytics will be sent to the Redis database
([see costs page](/redis/sdks/ratelimit-ts/costs#analytics)),
collecting information about the hour, the identifier, and the number of rate limit successes
and failures. This information can be viewed in the Upstash Console.
If you are using rate limiting in Cloudflare Workers, Vercel Edge or a similar environment,
you need to make sure that the analytics request is delivered correctly to Redis.
Otherwise, you may observe lower numbers than the actual number of calls.
To make sure that the request completes, you can use the `pending` field returned by
the `limit` method. See the
[Asynchronous synchronization between databases](/redis/sdks/ratelimit-ts/features#asynchronous-synchronization-between-databases)
section to see how `pending` can be used.
### Dashboard
If the analytics is enabled, you can find information about how many requests were made
with which identifiers and how many of the requests were blocked from the [Rate Limit
dashboard in Upstash Console](https://console.upstash.com/ratelimit).
To find the dashboard, simply click the three dots and choose the "Rate Limit Analytics" tab.
In the dashboard, you can find how many requests were accepted, how many were blocked,
and how many were received in total. Additionally, you can see requests over time, as well as
the top allowed, rate-limited and denied requests.
**Allowed requests** show the identifiers of the requests which succeeded. **Rate limited requests** show the
identifiers of the requests which were blocked because they surpassed the limit. **Denied requests** show the identifier,
user agent, country, or the IP address which caused the request to fail.
If you are using a custom prefix, you need to select the same prefix in the dashboard's top left corner.
## Using Multiple Limits
Sometimes you might want to apply different limits to different users. For
example you might want to allow 10 requests per 10 seconds for free users, but
60 requests per 10 seconds for paid users.
Here's how you could do that:
```ts theme={"system"}
import { Redis } from "@upstash/redis";
import { Ratelimit } from "@upstash/ratelimit";
const redis = Redis.fromEnv();
const ratelimit = {
free: new Ratelimit({
redis,
analytics: true,
prefix: "ratelimit:free",
limiter: Ratelimit.slidingWindow(10, "10s"),
}),
paid: new Ratelimit({
redis,
analytics: true,
prefix: "ratelimit:paid",
limiter: Ratelimit.slidingWindow(60, "10s"),
}),
};
await ratelimit.free.limit(ip);
// or for a paid user you might have an email or userId available:
await ratelimit.paid.limit(userId);
```
## Custom Rates
When we call `limit`, it subtracts 1 from the number of calls/tokens available in
the timeframe by default. But there are use cases where we may want to subtract a
different amount depending on the request.
Consider a case where we receive some input from the user either alone or in batches.
If we want to rate limit based on the number of inputs the user can send, we need a way of
specifying what value to subtract.
This is possible thanks to the `rate` parameter. Simply call the `limit` method like the
following:
```ts theme={"system"}
const { success } = await ratelimit.limit("identifier", { rate: batchSize });
```
This way, the algorithm will subtract `batchSize` instead of 1.
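The effect of the `rate` parameter on the window accounting can be sketched as (a simplified fixed-window model, not the SDK's implementation):

```ts theme={"system"}
// Each call consumes `rate` tokens from a window of `limit` tokens.
function makeWindowCounter(limit: number) {
  let used = 0;
  return (rate = 1) => {
    used += rate;
    return { success: used <= limit, remaining: Math.max(0, limit - used) };
  };
}

const consume = makeWindowCounter(10); // 10 tokens per window
console.log(consume(4)); // { success: true, remaining: 6 }
console.log(consume(4)); // { success: true, remaining: 2 }
console.log(consume(4)); // { success: false, remaining: 0 }
```

A batch of 4 inputs consumes 4 tokens at once, so the third batch is rejected even though only three calls were made.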
## Multi Region
Let's assume you have customers in the US and Europe. In this case you can
create 2 separate global redis databases on [Upstash](https://console.upstash.com)
(one with its primary in US and the other in Europe) and your users will enjoy
the latency of whichever db is closest to them.
Using a single Redis instance has the downside of providing low latencies only
to the part of your userbase closest to the deployed database. That's why we also
built `MultiRegionRatelimit`, which replicates the state across multiple Redis
databases and offers lower latencies to more of your users.
A single Redis instance with replicas in different regions cannot offer
the same performance as `MultiRegionRatelimit`, because all write commands have
to go through the primary, increasing latency in other regions.
`MultiRegionRatelimit` does this by checking the current limit in the closest db
and returning immediately. Only afterwards will the state be asynchronously
replicated to the other databases leveraging
[CRDTs](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type). Due
to the nature of distributed systems, there is no way to guarantee the set
ratelimit is not exceeded by a small margin. This is the tradeoff for reduced
global latency.
### Usage
The API is the same, except that it asks for multiple Redis instances:
```ts theme={"system"}
import { MultiRegionRatelimit } from "@upstash/ratelimit"; // for deno: see above
import { Redis } from "@upstash/redis";
// Create a new ratelimiter, that allows 10 requests per 10 seconds
const ratelimit = new MultiRegionRatelimit({
redis: [
new Redis({
/* auth */
}),
new Redis({
/* auth */
}),
new Redis({
/* auth */
}),
],
limiter: MultiRegionRatelimit.slidingWindow(10, "10 s"),
analytics: true,
});
// Use a constant string to limit all requests with a single ratelimit
// Or use a userID, apiKey or ip address for individual limits.
const identifier = "api";
const { success } = await ratelimit.limit(identifier);
```
### Asynchronous synchronization between databases
The MultiRegion setup performs some synchronization between databases after
returning the current limit. This can lead to problems on Cloudflare Workers
and Vercel Edge functions, because dangling promises must be taken care of:
```ts theme={"system"}
const { pending } = await ratelimit.limit("id");
context.waitUntil(pending);
```
See more information on `context.waitUntil` at
[Cloudflare](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil)
and [Vercel](https://vercel.com/docs/functions/edge-middleware/middleware-api#waituntil).
You can also utilize [`waitUntil` from Vercel Functions API](https://vercel.com/docs/functions/functions-api-reference#waituntil).
# Getting Started
Source: https://upstash.com/docs/redis/sdks/ratelimit-ts/gettingstarted
## Create your Redis instance
For the rate limit to work, we need to create an Upstash Redis and get its credentials. To create an Upstash Redis, you can follow the [Upstash Redis "Get Started" guide](/redis/overall/getstarted).
## Add Ratelimit to Your Project
Once we have a Redis instance, the next step is adding rate limiting to your project in its most basic form.
### Install Ratelimit
First, we need to install `@upstash/ratelimit`:
```bash theme={"system"}
npm install @upstash/ratelimit
```
For Deno, you can import the package from a CDN instead:
```ts theme={"system"}
import { Ratelimit } from "https://cdn.skypack.dev/@upstash/ratelimit@latest";
```
### Add Ratelimit to Your Endpoint
The next step is to add Ratelimit to your endpoint. In the example below, you can see how to initialize a Ratelimit and use it:
```ts theme={"system"}
import { Ratelimit } from "@upstash/ratelimit"; // for deno: see above
import { Redis } from "@upstash/redis";
// Create a new ratelimiter, that allows 10 requests per 10 seconds
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.slidingWindow(10, "10 s"),
analytics: true,
/**
* Optional prefix for the keys used in redis. This is useful if you want to share a redis
* instance with other applications and want to avoid key collisions. The default prefix is
* "@upstash/ratelimit"
*/
prefix: "@upstash/ratelimit",
});
// Use a constant string to limit all requests with a single ratelimit
// Or use a userID, apiKey or ip address for individual limits.
const identifier = "api";
const { success } = await ratelimit.limit(identifier);
if (!success) {
return "Unable to process at this time";
}
doExpensiveCalculation();
return "Here you go!";
```
For Cloudflare Workers and Fastly Compute\@Edge, you can use the following imports:
```ts theme={"system"}
import { Redis } from "@upstash/redis/cloudflare"; // for cloudflare workers and pages
import { Redis } from "@upstash/redis/fastly"; // for fastly compute@edge
```
In this example, we initialize a Ratelimit with an Upstash Redis. The Upstash Redis instance is created from the environment variables and passed to the Ratelimit instance. Then, we check the access rate using the `ratelimit.limit(identifier)` method. If the `success` field is true, we allow the expensive calculation to go through.
For more examples, see the [Examples](/redis/sdks/ratelimit-ts/overview#examples).
### Set Environment Variables
The final step is to update the environment variables so that the Ratelimit can communicate with the Upstash Redis. The `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` environment variables must be set for the `Redis.fromEnv()` command to work. You can get the values of these environment variables from the [Upstash Console](https://console.upstash.com/redis) by navigating to the page of the Redis instance you created.
An alternative to using the `Redis.fromEnv()` method is to pass the variables yourself. This can be useful if you save these environment variables under different names:
```ts theme={"system"}
new Redis({
url: "https://****.upstash.io",
token: "********",
});
```
Here is how you can set the environment variables in different cases:
Go to the "Settings" tab in your project. In the menu to the left, click "Environment Variables". Add `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` environment variables and their values.
Run:
```
npx wrangler secret put UPSTASH_REDIS_REST_URL
```
When prompted, enter the value of `UPSTASH_REDIS_REST_URL`. Do the same for `UPSTASH_REDIS_REST_TOKEN`:
```
npx wrangler secret put UPSTASH_REDIS_REST_TOKEN
```
Go to the `.env.local` file and add the environment variables:
```
UPSTASH_REDIS_REST_URL=****
UPSTASH_REDIS_REST_TOKEN=****
```
## Serverless Environments
When we use ratelimit in a serverless environment like Cloudflare Workers or Vercel Edge,
we need to make sure that the rate limiting operations complete correctly
before the runtime ends after the response is returned.
This is important in two cases where we do some operations in the background asynchronously after `limit` is called:
1. Using MultiRegion: synchronize Redis instances in different regions
2. Enabling analytics: send analytics to Redis
In these cases, we need to make sure these operations finish before the runtime ends. Otherwise, they may be interrupted and never complete.
In order to wait for these operations to finish, use the `pending` promise:
```ts theme={"system"}
const { pending } = await ratelimit.limit("id");
context.waitUntil(pending);
```
See more information on `context.waitUntil` at
[Cloudflare](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil)
and [Vercel](https://vercel.com/docs/functions/edge-middleware/middleware-api#waituntil).
You can also utilize [`waitUntil` from Vercel Functions API](https://vercel.com/docs/functions/functions-api-reference#waituntil).
## Customizing the Ratelimit Algorithm
There are several algorithms we can use for rate limiting. Explore the different rate-limiting algorithms available; how they work, their advantages and disadvantages in the [Algorithms page](/redis/sdks/ratelimit-ts/algorithms). You can learn about the **cost in terms of the number of commands**, by referring to the [Costs page](/redis/sdks/ratelimit-ts/costs).
## Methods
In our example, we only used the `limit` method. There are other methods we can use in the Ratelimit. These are:
* `blockUntilReady`: Process a request only when the rate-limiting algorithm allows it.
* `resetUsedTokens`: Reset the rate limiter state for some identifier.
* `getRemaining`: Get the remaining tokens/requests left for some identifier.
To learn more about these methods, refer to the [Methods page](/redis/sdks/ratelimit-ts/methods).
## Features
To configure your Ratelimit according to your needs, you can make use of several features:
Handle blocked requests without having to call your Redis Database
If the Redis call of the ratelimit is not resolved in some timeframe, allow
the request by default
Collect information on which identifiers made how many requests and how many
were blocked
Create a deny list to block requests based on user agents, countries, IP
addresses and more
Consume different amounts of tokens in different requests (example: limiting
based on request/response size)
Utilize several Redis databases in different regions to serve users faster
Use different limits for different kinds of requests (example: paid and free
users)
For more information about the features, see the [Features page](/redis/sdks/ratelimit-ts/features).
# Configure Upstash Ratelimit Strapi Plugin
Source: https://upstash.com/docs/redis/sdks/ratelimit-ts/integrations/strapi/configurations
After setting up the plugin, it's possible to customize the ratelimiter algorithm and rates. You can also define different rate limits and rate limit algorithms for different routes.
## General Configurations
Enable or disable the plugin.
## Database Configurations
The token to authenticate with the Upstash Redis REST API. You can find this
credential on Upstash Console with the name `UPSTASH_REDIS_REST_TOKEN`
The URL for the Upstash Redis REST API. You can find this credential on
Upstash Console with the name `UPSTASH_REDIS_REST_URL`
The prefix for the rate limit keys. The plugin uses this prefix to store the
rate limit data in Redis.
For example, if the prefix is `@strapi`, the key will be
`@strapi:::`.
Enable analytics for the rate limit. When enabled, the plugin collects extra insights
related to your rate limits. You can use this data to analyze the rate limit
usage on [Upstash Console](https://console.upstash.com/ratelimit).
## Strategy
The plugin uses a strategy array to define the rate limits per route. Each strategy object has the following properties:
An array of HTTP methods to apply the rate limit.
For example, `["GET", "POST"]`
The path to apply the rate limit. You can use wildcards to match multiple
routes. For example, `*` matches all routes.
Some examples:
* `path: "/api/restaurants/:id"`
* `path: "/api/restaurants"`
The source to identify the user. Requests with the same identifier will be
rate limited under the same limit.
Available sources are:
* `ip`: The IP address of the user.
* `header`: The value of a header key. You should pass the source in the `header.<header-name>` format.
For example, `header.Authorization` will use the value of the `Authorization` header.
Enable debug mode for the route. When enabled, the plugin logs the remaining
limits and the block status for each request.
The limiter configuration for the route. The limiter object has the following
properties:
The rate limit algorithm to use. For more information related to algorithms, see docs [**here**](/redis/sdks/ratelimit-ts/algorithms).
* `fixed-window`: The fixed-window algorithm divides time into fixed intervals. Each interval has a set limit of allowed requests. When a new interval starts, the count resets.
* `sliding-window`:
The sliding-window algorithm uses a rolling time frame. It considers requests from the past X time units, continuously moving forward. This provides a smoother distribution of requests over time.
* `token-bucket`: The token-bucket algorithm uses a bucket that fills with tokens at a steady rate. Each request consumes a token. If the bucket is empty, requests are denied. This allows for bursts of traffic while maintaining a long-term rate limit.
The number of tokens allowed in the time window.
The time window for the rate limit. Available units are `"ms" | "s" | "m" | "h" | "d"`
For example, `20s` means 20 seconds.
The rate at which the bucket refills. **This property is only used for the token-bucket algorithm.**
## Examples
```json Apply rate limit for all routes theme={"system"}
{
"strapi-plugin-upstash-ratelimit":{
"enabled":true,
"resolve":"./src/plugins/strapi-plugin-upstash-ratelimit",
"config":{
"enabled":true,
"token":"process.env.UPSTASH_REDIS_REST_TOKEN",
"url":"process.env.UPSTASH_REDIS_REST_URL",
"strategy":[
{
"methods":[
"GET",
"POST"
],
"path":"*",
"identifierSource":"header.Authorization",
"limiter":{
"algorithm":"fixed-window",
"tokens":10,
"window":"20s"
}
}
],
"prefix":"@strapi"
}
}
}
```
```json Apply rate limit with IP theme={"system"}
{
"strapi-plugin-upstash-ratelimit": {
"enabled": true,
"resolve": "./src/plugins/strapi-plugin-upstash-ratelimit",
"config": {
"enabled": true,
"token": "process.env.UPSTASH_REDIS_REST_TOKEN",
"url": "process.env.UPSTASH_REDIS_REST_URL",
"strategy": [
{
"methods": ["GET", "POST"],
"path": "*",
"identifierSource": "ip",
"limiter": {
"algorithm": "fixed-window",
"tokens": 10,
"window": "20s"
}
}
],
"prefix": "@strapi"
}
}
}
```
```json Routes with different rate limit algorithms theme={"system"}
{
"strapi-plugin-upstash-ratelimit": {
"enabled": true,
"resolve": "./src/plugins/strapi-plugin-upstash-ratelimit",
"config": {
"enabled": true,
"token": "process.env.UPSTASH_REDIS_REST_TOKEN",
"url": "process.env.UPSTASH_REDIS_REST_URL",
"strategy": [
{
"methods": ["GET", "POST"],
"path": "/api/restaurants/:id",
"identifierSource": "header.x-author",
"limiter": {
"algorithm": "fixed-window",
"tokens": 10,
"window": "20s"
}
},
{
"methods": ["GET"],
"path": "/api/restaurants",
"identifierSource": "header.x-author",
"limiter": {
"algorithm": "token-bucket",
"tokens": 10,
"window": "20s",
"refillRate": 1
}
}
],
"prefix": "@strapi"
}
}
}
```
# Upstash Ratelimit Strapi Integration
Source: https://upstash.com/docs/redis/sdks/ratelimit-ts/integrations/strapi/getting-started
Strapi is an open-source, Node.js-based headless CMS that saves developers a lot of development time, enabling them to build their application backends quickly with less code.
You can use Upstash's HTTP and Redis based [Ratelimit package](https://github.com/upstash/ratelimit-js) integration with Strapi to protect your APIs from abuse.
## Getting started
### Installation
```bash npm theme={"system"}
npm install --save @upstash/strapi-plugin-upstash-ratelimit
```
```bash yarn theme={"system"}
yarn add @upstash/strapi-plugin-upstash-ratelimit
```
### Create database
Create a new redis database on [Upstash Console](https://console.upstash.com/). See [related docs](/redis/overall/getstarted) for further info related to creating a database.
### Set up environment variables
Get the environment variables from [Upstash Console](https://console.upstash.com/), and set them in the `.env` file as below:
```shell .env theme={"system"}
UPSTASH_REDIS_REST_TOKEN=""
UPSTASH_REDIS_REST_URL=""
```
### Configure the plugin
You can configure the plugin in `config/plugins.ts` or `config/plugins.js`:
```typescript /config/plugins.ts theme={"system"}
export default () => ({
"strapi-plugin-upstash-ratelimit": {
enabled: true,
resolve: "./src/plugins/strapi-plugin-upstash-ratelimit",
config: {
enabled: true,
token: process.env.UPSTASH_REDIS_REST_TOKEN,
url: process.env.UPSTASH_REDIS_REST_URL,
strategy: [
{
methods: ["GET", "POST"],
path: "*",
limiter: {
algorithm: "fixed-window",
tokens: 10,
window: "20s",
},
},
],
prefix: "@strapi",
},
},
});
```
```javascript /config/plugins.js theme={"system"}
module.exports = () => ({
"strapi-plugin-upstash-ratelimit": {
enabled: true,
resolve: "./src/plugins/strapi-plugin-upstash-ratelimit",
config: {
enabled: true,
token: process.env.UPSTASH_REDIS_REST_TOKEN,
url: process.env.UPSTASH_REDIS_REST_URL,
strategy: [
{
methods: ["GET", "POST"],
path: "*",
limiter: {
algorithm: "fixed-window",
tokens: 10,
window: "20s",
},
},
],
prefix: "@strapi",
},
},
});
```
# Methods
Source: https://upstash.com/docs/redis/sdks/ratelimit-ts/methods
This page contains information on what methods are available in Ratelimit and how they can be used. For information on the
cost of these operations in terms of the number of Redis commands, refer to the [Costs page](/redis/sdks/ratelimit-ts/costs).
## `limit`
The `limit` method is the heart of the Ratelimit algorithm.
```ts theme={"system"}
ratelimit.limit(
identifier: string,
req?: {
geo?: Geo;
rate?: number,
ip?: string,
userAgent?: string,
country?: string
},
): Promise<RatelimitResponse>
```
It receives an identifier to rate limit. Additionally, it can be passed an optional `req` parameter.
The `geo` field is passed to the analytics but is not currently in use. The `rate` field determines
the amount of tokens/requests to subtract from the state of the algorithm for the provided identifier.
The `ip`, `userAgent` and `country` fields are checked against the deny list when protection is enabled.
The `limit` method returns some more metadata that might be useful to you:
````ts theme={"system"}
export type RatelimitResponse = {
/**
* Whether the request may pass(true) or exceeded the limit(false)
*/
success: boolean;
/**
* Maximum number of requests allowed within a window.
*/
limit: number;
/**
* How many requests the user has left within the current window.
*/
remaining: number;
/**
* Unix timestamp in milliseconds when the limits are reset.
*/
reset: number;
/**
* For the MultiRegion setup we do some synchronizing in the background, after returning the current limit.
* Or when analytics is enabled, we send the analytics asynchronously after returning the limit.
* In most cases you can simply ignore this.
*
* On Vercel Edge or Cloudflare workers, you need to explicitly handle the pending Promise like this:
*
* ```ts
* const { pending } = await ratelimit.limit("id")
* context.waitUntil(pending)
* ```
*
* See `waitUntil` documentation in
* [Cloudflare](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/#contextwaituntil)
* and [Vercel](https://vercel.com/docs/functions/edge-middleware/middleware-api#waituntil)
* for more details.
*/
pending: Promise<unknown>;
/**
* Reason behind the result in `success` field.
* - Is set to "timeout" when request times out
* - Is set to "cacheBlock" when an identifier is blocked through cache without calling redis because it was
* rate limited previously.
* - Is set to "denyList" when identifier or one of ip/user-agent/country parameters is in deny list. To enable
* deny list, see `enableProtection` parameter. To edit the deny list, see the Upstash Ratelimit Dashboard
* at https://console.upstash.com/ratelimit.
* - Is set to undefined if the rate limit check had to use Redis. This happens when the `success` field in
* the response is true. It can also happen the first time `success` is false.
*/
reason?: RatelimitResponseType;
/**
* The value which was in the deny list if reason: "denyList"
*/
deniedValue?: string;
};
````
## `blockUntilReady`
In case you don't want to reject a request immediately but wait until it can be
processed, we also provide
```ts theme={"system"}
ratelimit.blockUntilReady(
identifier: string,
timeout: number
): Promise<RatelimitResponse>
```
It is very similar to the `limit` method: it takes an identifier and returns the
same response. However, if the current limit has already been exceeded, it will
automatically wait until the next window starts and then try again. Setting the
timeout parameter (in milliseconds) causes the returned Promise to resolve
within a finite amount of time.
```ts theme={"system"}
// Create a new ratelimiter, that allows 10 requests per 10 seconds
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.slidingWindow(10, "10 s"),
analytics: true,
});
// `blockUntilReady` returns a promise that resolves as soon as the request is allowed to be processed, or after 30 seconds
const { success } = await ratelimit.blockUntilReady("id", 30_000);
if (!success) {
return "Unable to process, even after 30 seconds";
}
doExpensiveCalculation();
return "Here you go!";
```
In **Cloudflare**, `blockUntilReady` will not work as intended due to
`Date.now()` not behaving the same as in Node environments.
**For more information, check**:
[https://developers.cloudflare.com/workers/runtime-apis/web-standards](https://developers.cloudflare.com/workers/runtime-apis/web-standards)
## `resetUsedTokens`
This method resets the state of the algorithm with respect to some identifier:
```ts theme={"system"}
ratelimit.resetUsedTokens(identifier: string): Promise<void>
```
## `getRemaining`
This method returns the remaining tokens/requests available for some identifier:
```ts theme={"system"}
ratelimit.getRemaining(identifier: string): Promise<{
remaining: number;
reset: number;
}>
```
`remaining` is the number of remaining tokens/requests and `reset` is the timestamp when the limit resets.
# Overview
Source: https://upstash.com/docs/redis/sdks/ratelimit-ts/overview
# Upstash Rate Limit
[](https://www.npmjs.com/package/@upstash/ratelimit)
It is the only connectionless (HTTP-based) rate limiting library and is designed
for:
* Serverless functions (AWS Lambda, Vercel ...)
* Cloudflare Workers
* Vercel Edge
* Fastly Compute\@Edge
* Next.js, Jamstack ...
* Client side web/mobile applications
* WebAssembly
* and other environments where HTTP is preferred over TCP.
## Quick Links:
* [Github Repository](https://github.com/upstash/ratelimit)
* [Getting Started](/redis/sdks/ratelimit-ts/gettingstarted)
* [Costs](/redis/sdks/ratelimit-ts/costs)
## Features
Handle blocked requests without having to call your Redis Database
If the Redis call of the ratelimit is not resolved in some timeframe, allow
the request by default
Collect information on which identifiers made how many requests and how many
were blocked
Create a deny list to block requests based on user agents, countries, IP
addresses and more
Consume different amounts of tokens in different requests (example: limiting
based on request/response size)
Utilize several Redis databases in different regions to serve users faster
Use different limits for different kinds of requests (example: paid and free
users)
For more information about the features, see the [Features tab](/redis/sdks/ratelimit-ts/features).
## Examples
Rate limit an API in a Nextjs project
Rate limit an API with a Middleware in a Nextjs project
Rate limit a Vercel Edge Function
Use Deny Lists to Protect Your Website
Rate limit access to your Cloudflare Pages app
Rate limit access to your Cloudflare Workers
Rate limit access to a Remix App
Rate limit a Nextjs app using Vercel KV
Rate limit your deno app
Limiting requests to a Chatbot endpoint which streams LLM outputs
# Traffic Protection
Source: https://upstash.com/docs/redis/sdks/ratelimit-ts/traffic-protection
### Deny List
Imagine that you want to block requests from certain countries or from certain
user agents. In this case, you can make use of deny lists, introduced in
ratelimit version 1.2.1.
Deny lists allow you to block based on IP addresses, user agents, countries
and [identifiers](/redis/sdks/ratelimit-ts/methods#limit).
To enable checking the deny list in your Ratelimit client, simply pass
`enableProtection` as `true`:
```tsx theme={"system"}
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.slidingWindow(10, "10 s"),
enableProtection: true,
analytics: true,
});
```
When `limit` is called, the client will check whether any of these values
are in the deny list and block the request if so.
```tsx theme={"system"}
const { success, pending, reason, deniedValue } = await ratelimit.limit("userId", {
ip: "ip-address",
userAgent: "user-agent",
country: "country",
});
await pending; // await pending if you have analytics enabled
console.log(success, reason, deniedValue);
// prints: false, "denyList", "ip-address"
```
If a request is blocked because a value was in a deny list, the `reason` field will
be `"denyList"` and `deniedValue` will contain the matched value.
See the [limit method](/redis/sdks/ratelimit-ts/methods#limit)
for more details.
The client also keeps a **cache** of denied values. When a value is found
in the deny list, the client stores it in the cache and denies subsequent
requests containing that value **without calling Redis at all**. Items are
stored in the cache for one minute. This means that adding a new value to the
deny list takes effect immediately, but after removing a value it can take up
to a minute for clients to start accepting it again. This cache can
significantly reduce the number of calls to Redis.
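The cache behavior can be sketched as an in-memory map with a one-minute TTL (a simplified illustration, not the SDK's actual implementation):

```ts
// Simplified sketch of the denied-value cache: once a value is seen in the
// deny list, re-deny it locally for one minute without calling Redis.
class DenyCache {
  private cache = new Map<string, number>(); // value -> expiry timestamp (ms)
  constructor(private ttlMs = 60_000) {}

  add(value: string, now = Date.now()) {
    this.cache.set(value, now + this.ttlMs);
  }

  isDenied(value: string, now = Date.now()): boolean {
    const expiresAt = this.cache.get(value);
    if (expiresAt === undefined) return false;
    if (now > expiresAt) {
      this.cache.delete(value); // expired: fall back to checking Redis
      return false;
    }
    return true;
  }
}

const cache = new DenyCache();
cache.add("1.2.3.4");
console.log(cache.isDenied("1.2.3.4")); // true: denied without a Redis call
console.log(cache.isDenied("5.6.7.8")); // false: must be checked against Redis
```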
Contents of the deny lists are managed from the [Ratelimit Dashboard](/redis/sdks/ratelimit-ts/features#dashboard).
You can use the dashboard to add items to the deny list or remove them.
If you have analytics enabled, you can also view the number of denied
requests per country/ip address/user agent/identifier on the dashboard.
Note that we look for an exact match when checking whether a value is in
the deny lists. **Pattern matching is not supported**.
### Auto IP Deny List
The Auto IP Deny List feature enables the automatic blocking of IP addresses
identified as malicious through open-source IP deny lists. This functionality
uses the [ipsum repository on GitHub](https://github.com/stamparm/ipsum),
which aggregates data from over 30 different deny lists.
To enable protection, set the `enableProtection` parameter to `true`. Once activated,
the SDK will automatically block malicious IP addresses using the IP deny lists
whenever you provide request IPs in the `limit` method.
```ts theme={"system"}
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.slidingWindow(10, "10 s"),
enableProtection: true,
});
```
The IP deny list is updated daily at 2 AM UTC. Upon expiration, the
first call to `limit` after 2 AM UTC triggers an update, downloading
the latest IPs from GitHub and refreshing the list in your Redis
instance. The update runs asynchronously, so you can return the result
to the user while the IP list updates in the background. To ensure the
update completes successfully, await the `pending` field:
```ts theme={"system"}
const { success, pending } = await ratelimit.limit(
content,
{ip: "ip-address"}
);
await pending;
```
For more information on effectively using `pending`, refer to the
["Asynchronous synchronization between databases" section](/redis/sdks/ratelimit-ts/features#asynchronous-synchronization-between-databases).
Blocked IPs will be listed in the "Denied" section of the Ratelimit
dashboard, providing a clear overview of the addresses that have
been automatically blocked.
If you prefer to disable the Auto IP Deny List feature while still
using the deny lists, you can do so via the [Ratelimit dashboard on
the Upstash Console](https://console.upstash.com/ratelimit).
# Advanced
Source: https://upstash.com/docs/redis/sdks/ts/advanced
## Disable automatic serialization
Your data is (de)serialized as `json` by default. This works for most use cases
but you can disable it if you want:
```ts theme={"system"}
const redis = new Redis({
// ...
automaticDeserialization: false,
});
// or
const redis = Redis.fromEnv({
automaticDeserialization: false,
});
```
This probably breaks quite a few types, but it's a first step in that direction.
Please report bugs and broken types
[here](https://github.com/upstash/upstash-redis/issues/49).
## Keep-Alive
`@upstash/redis` optimizes performance by reusing connections wherever possible, reducing latency.
This is achieved by keeping the client in memory instead of reinitializing it with each new function invocation.
As a result, when a hot lambda function receives a new request, it uses the already initialized client, allowing for the reuse of existing connections to Upstash.
This functionality is enabled by default.
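In practice this means constructing the client once at module scope instead of inside your handler. A minimal sketch of why that matters (the `createRedisClient` factory and `handler` below are illustrative stand-ins, not SDK APIs):

```ts
// Count how often a client is created; stands in for `Redis.fromEnv()`.
let connections = 0;
const createRedisClient = () => {
  connections++;
  return { get: async (key: string) => `value-of-${key}` };
};

// Module scope: in a serverless runtime the module stays loaded between warm
// invocations, so this runs once per container and connections are reused.
const redis = createRedisClient();

// A hypothetical request handler: warm invocations reuse the existing client.
async function handler(key: string) {
  return redis.get(key); // no re-initialization per request
}
```

Moving `createRedisClient()` inside `handler` would create a new client (and new connections) on every request; keeping it at module scope is what the connection reuse described above relies on.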
## Request Timeout
You can configure the SDK so that it will throw an error if the request takes longer than a specified time.
You can achieve this using the `signal` parameter like this:
```ts theme={"system"}
const redis = new Redis({
  url: "",
  token: "",
  // set a timeout of 1 second
  signal: () => AbortSignal.timeout(1000),
});

try {
  await redis.get("key");
} catch (error) {
  if (error.name === "TimeoutError") {
    console.error("Request timed out");
  } else {
    console.error("An error occurred:", error);
  }
}
```
## Telemetry
This library sends anonymous telemetry data to help us improve your experience.
We collect the following:
* SDK version
* Platform (Deno, Cloudflare, Vercel)
* Runtime version (e.g. `node@18.x`)
You can opt out by setting the `UPSTASH_DISABLE_TELEMETRY` environment variable
to any truthy value.
```sh theme={"system"}
UPSTASH_DISABLE_TELEMETRY=1
```
Alternatively, you can pass `enableTelemetry: false` when initializing the Redis client:
```ts theme={"system"}
const redis = new Redis({
// ...,
enableTelemetry: false,
});
```
# ECHO
Source: https://upstash.com/docs/redis/sdks/ts/commands/auth/echo
Returns a message back to you. Useful for debugging the connection.
## Arguments
A message to send to the server.
## Response
The same message you sent.
```ts Example theme={"system"}
const response = await redis.echo("hello world");
console.log(response); // "hello world"
```
# PING
Source: https://upstash.com/docs/redis/sdks/ts/commands/auth/ping
Send a ping to the server and get a response if the server is alive.
## Arguments
No arguments
## Response
`PONG`
```ts Example theme={"system"}
const response = await redis.ping();
console.log(response); // "PONG"
```
# BITCOUNT
Source: https://upstash.com/docs/redis/sdks/ts/commands/bitmap/bitcount
Count the number of set bits.
The `BITCOUNT` command in Redis is used to count the number of set bits (bits with a value of 1) in a range of bytes within a key that is stored as a binary string. It is primarily used for bit-level operations on binary data stored in Redis.
## Arguments
The key to get.
The first index of the byte range in which to count set bits. If not provided, set bits are counted in the entire string.
Either specify both `start` and `end` or neither.
The last index (inclusive) of the byte range in which to count set bits. If not provided, set bits are counted in the entire string.
Either specify both `start` and `end` or neither.
## Response
The number of set bits in the specified range.
```ts Example theme={"system"}
const bits = await redis.bitcount(key);
```
```ts With Range theme={"system"}
const bits = await redis.bitcount(key, 5, 10);
```
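To build intuition for what is being counted, the same computation can be sketched locally over a value's bytes (a simplified illustration; unlike Redis, it does not handle negative indices or the `BIT` range mode):

```ts
// Count set bits in a string's bytes, mirroring what BITCOUNT computes
// server-side over the stored value.
function bitcount(value: string, start?: number, end?: number): number {
  let bytes = Array.from(new TextEncoder().encode(value));
  if (start !== undefined && end !== undefined) {
    bytes = bytes.slice(start, end + 1); // BITCOUNT's end index is inclusive
  }
  return bytes.reduce((sum, byte) => {
    let bits = 0;
    for (let b = byte; b > 0; b >>= 1) bits += b & 1;
    return sum + bits;
  }, 0);
}

console.log(bitcount("foobar"));       // 26, same as BITCOUNT on "foobar"
console.log(bitcount("foobar", 1, 1)); // 6, counting only the byte "o"
```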
# BITOP
Source: https://upstash.com/docs/redis/sdks/ts/commands/bitmap/bitop
Perform bitwise operations between strings.
The `BITOP` command in Redis is used to perform bitwise operations on multiple keys (or Redis strings) and store the result in a destination key. It is primarily used for performing logical AND, OR, XOR, and NOT operations on binary data stored in Redis.
## Arguments
Specifies the type of bitwise operation to perform, which can be one of the following: `AND`, `OR`, `XOR`, or `NOT`.
The key to store the result of the operation in.
One or more keys to perform the operation on.
## Response
The size of the string stored in the destination key.
```ts Example theme={"system"}
await redis.bitop("AND", "destKey", "sourceKey1", "sourceKey2");
```
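The zero-padding semantics can be illustrated locally: the result is as long as the longest input, and missing bytes of shorter inputs are treated as zero. A sketch of `AND` (just the semantics, not the SDK):

```ts
// Local sketch of BITOP AND: operate byte-by-byte, treating missing bytes of
// shorter inputs as zero, as Redis does.
function bitopAnd(...inputs: Uint8Array[]): Uint8Array {
  const length = Math.max(...inputs.map((input) => input.length));
  const result = new Uint8Array(length);
  for (let i = 0; i < length; i++) {
    // A missing byte counts as 0, so AND beyond a short input is always 0.
    result[i] = inputs.reduce((acc, input) => acc & (input[i] ?? 0), 0xff);
  }
  return result;
}

const a = new TextEncoder().encode("abc"); // bytes [0x61, 0x62, 0x63]
const b = new TextEncoder().encode("ab");  // bytes [0x61, 0x62]
console.log(bitopAnd(a, b)); // contains [0x61, 0x62, 0x00]
```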
# BITPOS
Source: https://upstash.com/docs/redis/sdks/ts/commands/bitmap/bitpos
Find the position of the first set or clear bit (bit with a value of 1 or 0) in a Redis string key.
## Arguments
The key to search in.
The bit value to search for: `1` or `0`.
The index to start searching at.
The index to stop searching at.
## Response
The index of the first occurrence of the bit in the string.
```ts Example theme={"system"}
await redis.bitpos("key", 1);
```
```ts With Range theme={"system"}
await redis.bitpos("key", 1, 5, 20);
```
# GETBIT
Source: https://upstash.com/docs/redis/sdks/ts/commands/bitmap/getbit
Retrieve a single bit.
## Arguments
The key of the bitset
Specify the offset at which to get the bit.
## Response
The bit value stored at offset.
```ts Example theme={"system"}
const bit = await redis.getbit(key, 4);
```
# SETBIT
Source: https://upstash.com/docs/redis/sdks/ts/commands/bitmap/setbit
Set a single bit in a string.
## Arguments
The key of the bitset
Specify the offset at which to set the bit.
The bit to set
## Response
The original bit value stored at offset.
```ts Example theme={"system"}
const originalBit = await redis.setbit(key, 4, 1);
```
# DEL
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/del
Removes the specified keys. A key is ignored if it does not exist.
## Arguments
One or more keys to remove.
## Response
The number of keys that were removed.
```ts Basic theme={"system"}
await redis.del("key1", "key2");
```
```ts Array theme={"system"}
// in case you have an array of keys
const keys = ["key1", "key2"];
await redis.del(...keys)
```
# EXISTS
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/exists
Check if a key exists.
## Arguments
One or more keys to check.
## Response
The number of keys that exist
```ts Example theme={"system"}
await redis.set("key1", "value1")
await redis.set("key2", "value2")
const keys = await redis.exists("key1", "key2", "key3");
console.log(keys) // 2
```
# EXPIRE
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/expire
Sets a timeout on a key. After the timeout has expired, the key will automatically be deleted.
## Arguments
The key to set the timeout on.
How many seconds until the key should be deleted.
## Response
`1` if the timeout was set, `0` otherwise
```ts Example theme={"system"}
await redis.set("mykey", "Hello");
await redis.expire("mykey", 10);
```
# EXPIREAT
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/expireat
Sets a timeout on a key. After the timeout has expired, the key will automatically be deleted.
## Arguments
The key to set the timeout on.
A unix timestamp in seconds at which point the key will expire.
## Response
`1` if the timeout was set, `0` otherwise
```ts Example theme={"system"}
await redis.set("mykey", "Hello");
const tenSecondsFromNow = Math.floor(Date.now() / 1000) + 10;
await redis.expireat("mykey", tenSecondsFromNow);
```
# KEYS
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/keys
Returns all keys matching pattern.
This command may block the DB for a long time, depending on its size. We advise against using it in production. Use [SCAN](/redis/sdks/ts/commands/generic/scan) instead.
## Arguments
A glob-style pattern. Use `*` to match all keys.
## Response
Array of keys matching the pattern.
```ts Example theme={"system"}
const keys = await redis.keys("prefix*");
```
```ts Match All theme={"system"}
const keys = await redis.keys("*");
```
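A glob pattern like `prefix*` matches any key beginning with `prefix`. As a rough illustration, here is `*`-only glob matching implemented locally (Redis also supports `?` and character classes such as `[abc]`, which this sketch omits):

```ts
// Rough sketch of `*`-only glob matching, implemented via a regex.
function globMatch(pattern: string, key: string): boolean {
  // Escape regex metacharacters except `*`, which becomes `.*`.
  const escaped = pattern.replace(/[.+^${}()|[\]\\?]/g, "\\$&");
  const regex = new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
  return regex.test(key);
}

console.log(globMatch("prefix*", "prefix:user:1")); // true
console.log(globMatch("prefix*", "other:key"));     // false
console.log(globMatch("*", "anything"));            // true
```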
# PERSIST
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/persist
Remove any timeout set on the key.
## Arguments
The key to persist
## Response
`1` if the timeout was removed, `0` if `key` does not exist or does not have an associated timeout.
```ts Example theme={"system"}
await redis.persist(key);
```
# PEXPIRE
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/pexpire
Sets a timeout on key. After the timeout has expired, the key will automatically be deleted.
## Arguments
The key to expire.
The number of milliseconds until the key expires.
## Response
`1` if the timeout was applied, `0` if `key` does not exist.
```ts Example theme={"system"}
await redis.pexpire(key, 60_000); // 1 minute
```
# PEXPIREAT
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/pexpireat
Sets a timeout on key. After the timeout has expired, the key will automatically be deleted.
## Arguments
The key to expire.
The unix timestamp in milliseconds at which the key will expire.
## Response
`1` if the timeout was applied, `0` if `key` does not exist.
```ts Example theme={"system"}
const tenMinutesFromNow = Date.now() + 10 * 60 * 1000;
await redis.pexpireat(key, tenMinutesFromNow);
```
# PTTL
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/pttl
Return the expiration in milliseconds of a key.
## Arguments
The key
## Response
The number of milliseconds until this expires. `-2` if the key does not exist, `-1` if the key exists but has no expiration set.
```ts Example theme={"system"}
const millis = await redis.pttl(key);
```
# RANDOMKEY
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/randomkey
Returns a random key from the database.
## Arguments
No arguments
## Response
A random key from the database, or `null` when the database is empty.
```ts Example theme={"system"}
const key = await redis.randomkey();
```
# RENAME
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/rename
Rename a key
## Arguments
The original key.
A new name for the key.
## Response
`OK`
```ts Example theme={"system"}
await redis.rename("old", "new");
```
# RENAMENX
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/renamenx
Rename a key if it does not already exist.
## Arguments
The original key.
A new name for the key.
## Response
`1` if key was renamed, `0` if key was not renamed.
```ts Example theme={"system"}
const renamed = await redis.renamenx("old", "new");
```
# SCAN
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/scan
Scan the database for keys.
## Arguments
The cursor value. Start with "0" on the first call, then use the cursor
returned by each call for the next. It's a string to safely support large
numbers that might exceed JavaScript's number limits.
Glob-style pattern to filter the returned keys.
Number of keys to return per call. This is a hint; the server may return more or fewer.
Filter by type. For example `string`, `hash`, `set`, `zset`, `list`,
`stream`.
## Response
Returns the next cursor and the list of matching keys. When the returned
cursor is "0", the scan is complete.
```ts Basic theme={"system"}
const [cursor, keys] = await redis.scan("0", { match: "*" });
```
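A full scan repeats the call until the returned cursor wraps back to `"0"`. Sketched below against a stubbed `scan` that pages through a fixed key list in place of the real `redis.scan` network call:

```ts
// Iterate an entire keyspace by following the cursor until it returns "0".
// `scan` here is a local stub standing in for redis.scan.
const allKeys = ["user:1", "user:2", "user:3", "session:1", "session:2"];

async function scan(cursor: string, opts: { count: number }): Promise<[string, string[]]> {
  const start = Number(cursor);
  const page = allKeys.slice(start, start + opts.count);
  const next = start + opts.count >= allKeys.length ? "0" : String(start + opts.count);
  return [next, page];
}

async function scanAll(): Promise<string[]> {
  const found: string[] = [];
  let cursor = "0";
  do {
    const [next, keys] = await scan(cursor, { count: 2 });
    found.push(...keys);
    cursor = next;
  } while (cursor !== "0"); // "0" signals the iteration is complete
  return found;
}

scanAll().then((keys) => console.log(keys.length)); // 5
```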
# TOUCH
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/touch
Alters the last access time of one or more keys
## Arguments
One or more keys.
## Response
The number of keys that were touched.
```ts Example theme={"system"}
await redis.touch("key1", "key2", "key3");
```
# TTL
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/ttl
Return the expiration in seconds of a key.
## Arguments
The key
## Response
The number of seconds until this expires. `-2` if the key does not exist, `-1` if the key exists but has no expiration set.
```ts Example theme={"system"}
const seconds = await redis.ttl(key);
```
# TYPE
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/type
Get the type of a key.
## Arguments
The key to get.
## Response
The type of the key.
One of `string` | `list` | `set` | `zset` | `hash` | `none`
```ts Example theme={"system"}
await redis.set("key", "value");
const t = await redis.type("key");
console.log(t) // "string"
```
# UNLINK
Source: https://upstash.com/docs/redis/sdks/ts/commands/generic/unlink
Removes the specified keys. A key is ignored if it does not exist.
## Arguments
One or more keys to unlink.
## Response
The number of keys that were unlinked.
```ts Basic theme={"system"}
await redis.unlink("key1", "key2");
```
```ts Array theme={"system"}
// in case you have an array of keys
const keys = ["key1", "key2"];
await redis.unlink(...keys)
```
# HDEL
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hdel
Deletes one or more hash fields.
## Arguments
The key of the hash.
One or more fields to delete.
## Response
The number of fields that were removed from the hash.
```ts Example theme={"system"}
await redis.hdel(key, 'field1', 'field2');
// returns 2 if both fields existed
```
# HEXISTS
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hexists
Checks if a field exists in a hash.
## Arguments
The key of the hash.
The field to check.
## Response
`1` if the hash contains `field`. `0` if the hash does not contain `field`, or `key` does not exist.
```ts Example theme={"system"}
await redis.hset("key", "field", "value");
const exists = await redis.hexists("key", "field");
console.log(exists); // 1
```
# HEXPIRE
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hexpire
Sets an expiration time for one or more fields in a hash.
## Arguments
The key of the hash.
The field or fields to set an expiration time for.
The time-to-live (TTL) in seconds.
Optional condition for setting the expiration:
* `NX`: Set the expiration only if the field does not already have an expiration.
* `XX`: Set the expiration only if the field already has an expiration.
* `GT`: Set the expiration only if the new TTL is greater than the current TTL.
* `LT`: Set the expiration only if the new TTL is less than the current TTL.
## Response
A list of integers indicating whether the expiry was successfully set.
* `-2` if the field does not exist in the hash or if key doesn't exist.
* `0` if the expiration was not set due to the condition.
* `1` if the expiration was successfully set.
* `2` if called with 0 seconds/milliseconds or a past Unix time.
For more details, see [HEXPIRE documentation](https://redis.io/commands/hexpire).
```ts Example theme={"system"}
await redis.hset("my-key", "my-field", "my-value");
const expirationSet = await redis.hexpire("my-key", "my-field", 1);
console.log(expirationSet); // [1]
```
# HEXPIREAT
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hexpireat
Sets an expiration time for field(s) in a hash in seconds since the Unix epoch.
## Arguments
The key of the hash.
The field(s) to set an expiration time for.
The expiration time as a Unix timestamp in seconds.
Optional condition for setting the expiration:
* `NX`: Set the expiration only if the field does not already have an expiration.
* `XX`: Set the expiration only if the field already has an expiration.
* `GT`: Set the expiration only if the new TTL is greater than the current TTL.
* `LT`: Set the expiration only if the new TTL is less than the current TTL.
## Response
A list of integers indicating whether the expiry was successfully set.
* `-2` if the field does not exist in the hash or if key doesn't exist.
* `0` if the expiration was not set due to the condition.
* `1` if the expiration was successfully set.
* `2` if called with 0 seconds/milliseconds or a past Unix time.
For more details, see [HEXPIREAT documentation](https://redis.io/commands/hexpireat).
```ts Example theme={"system"}
await redis.hset("my-key", "my-field", "my-value");
const expirationSet = await redis.hexpireat("my-key", "my-field", Math.floor(Date.now() / 1000) + 10);
console.log(expirationSet); // [1]
```
# HEXPIRETIME
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hexpiretime
Retrieves the expiration time of field(s) in a hash in seconds.
## Arguments
The key of the hash.
The field(s) to retrieve the expiration time for.
## Response
The expiration time in seconds since the Unix epoch for each field.
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `-1` if the field exists but has no associated expiration.
For more details, see [HEXPIRETIME documentation](https://redis.io/commands/hexpiretime).
```ts Example theme={"system"}
await redis.hset("my-key", "my-field", "my-value");
await redis.hexpireat("my-key", "my-field", Math.floor(Date.now() / 1000) + 10);
const expireTime = await redis.hexpiretime("my-key", "my-field");
console.log(expireTime); // e.g., [1697059200]
```
# HGET
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hget
Retrieves the value of a hash field.
## Arguments
The key to get.
The field to get.
## Response
The value of the field, or `null`, when field is not present in the hash or key does not exist.
```ts Example theme={"system"}
await redis.hset("key", {field: "value"});
const field = await redis.hget("key", "field");
console.log(field); // "value"
```
# HGETALL
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hgetall
Retrieves all fields from a hash.
## Arguments
The key to get.
## Response
An object with all fields in the hash.
```ts Example theme={"system"}
await redis.hset("key", {
field1: "value1",
field2: "value2",
});
const hash = await redis.hgetall("key");
console.log(hash); // { field1: "value1", field2: "value2" }
```
# HINCRBY
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hincrby
Increments the value of a hash field by a given amount
## Arguments
The key of the hash.
The field to increment
How much to increment the field by. Can be negative to subtract.
## Response
The new value of the field after the increment.
```ts Example theme={"system"}
await redis.hset("key", {
field: 20,
});
const after = await redis.hincrby("key", "field", 2);
console.log(after); // 22
```
# HINCRBYFLOAT
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hincrbyfloat
Increments the value of a hash field by a given float value.
## Arguments
The key of the hash.
The field to increment
How much to increment the field by. Can be negative to subtract.
## Response
The new value of the field after the increment.
```ts Example theme={"system"}
await redis.hset("key", {
field: 20,
});
const after = await redis.hincrbyfloat("key", "field", 2.5);
console.log(after); // 22.5
```
# HKEYS
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hkeys
Return all field names in the hash stored at key.
## Arguments
The key of the hash.
## Response
The field names of the hash
```ts Example theme={"system"}
await redis.hset("key", {
id: 1,
username: "chronark",
});
const fields = await redis.hkeys("key");
console.log(fields); // ["id", "username"]
```
# HLEN
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hlen
Returns the number of fields contained in the hash stored at key.
## Arguments
The key of the hash.
## Response
How many fields are in the hash.
```ts Example theme={"system"}
await redis.hset("key", {
id: 1,
username: "chronark",
});
const fields = await redis.hlen("key");
console.log(fields); // 2
```
# HMGET
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hmget
Return the requested fields and their values.
## Arguments
The key of the hash.
One or more fields to get.
## Response
An object containing the fields and their values.
```ts Example theme={"system"}
await redis.hset("key", {
id: 1,
username: "chronark",
name: "andreas"
});
const fields = await redis.hmget("key", "username", "name");
console.log(fields); // { username: "chronark", name: "andreas" }
```
# HPERSIST
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hpersist
Remove the expiration from one or more fields in a hash.
## Arguments
The key of the hash.
The field or fields to remove the expiration from.
## Response
A list of integers indicating the result for each field:
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `-1` if the field exists but has no associated expiration set.
* `1` if the expiration was successfully removed.
For more details, see [HPERSIST documentation](https://redis.io/commands/hpersist).
```ts Example theme={"system"}
await redis.hset("my-key", "my-field", "my-value");
await redis.hpexpire("my-key", "my-field", 1000);
const expirationRemoved = await redis.hpersist("my-key", "my-field");
console.log(expirationRemoved); // [1]
```
# HPEXPIRE
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hpexpire
Sets an expiration time for a field in a hash in milliseconds.
## Arguments
The key of the hash.
The field or list of fields within the hash to set the expiry for.
The time-to-live (TTL) in milliseconds.
Optional condition for setting the expiration:
* `NX`: Set the expiration only if the field does not already have an expiration.
* `XX`: Set the expiration only if the field already has an expiration.
* `GT`: Set the expiration only if the new TTL is greater than the current TTL.
* `LT`: Set the expiration only if the new TTL is less than the current TTL.
For more details, see [HPEXPIRE documentation](https://redis.io/commands/hpexpire).
## Response
A list of integers indicating whether the expiry was successfully set.
* `-2` if the field does not exist in the hash or if key doesn't exist.
* `0` if the expiration was not set due to the condition.
* `1` if the expiration was successfully set.
* `2` if called with 0 seconds/milliseconds or a past Unix time.
For more details, see [HPEXPIRE documentation](https://redis.io/commands/hpexpire).
```ts Example theme={"system"}
await redis.hset("my-key", "my-field", "my-value");
const expirationSet = await redis.hpexpire("my-key", "my-field", 1000);
console.log(expirationSet); // [1]
```
# HPEXPIREAT
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hpexpireat
Sets an expiration time for field(s) in a hash in milliseconds since the Unix epoch.
## Arguments
The key of the hash.
The field(s) to set an expiration time for.
The expiration time as a Unix timestamp in milliseconds.
Optional condition for setting the expiration:
* `NX`: Set the expiration only if the field does not already have an expiration.
* `XX`: Set the expiration only if the field already has an expiration.
* `GT`: Set the expiration only if the new TTL is greater than the current TTL.
* `LT`: Set the expiration only if the new TTL is less than the current TTL.
## Response
A list of integers indicating whether the expiry was successfully set.
* `-2` if the field does not exist in the hash or if key doesn't exist.
* `0` if the expiration was not set due to the condition.
* `1` if the expiration was successfully set.
* `2` if called with 0 seconds/milliseconds or a past Unix time.
For more details, see [HPEXPIREAT documentation](https://redis.io/commands/hpexpireat).
```ts Example theme={"system"}
await redis.hset("my-key", "my-field", "my-value");
const expirationSet = await redis.hpexpireat("my-key", "my-field", Date.now() + 1000);
console.log(expirationSet); // [1]
```
# HPEXPIRETIME
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hpexpiretime
Retrieves the expiration time of a field in a hash in milliseconds.
## Arguments
The key of the hash.
The field(s) to retrieve the expiration time for.
## Response
The expiration time in milliseconds since the Unix epoch.
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `-1` if the field exists but has no associated expiration.
For more details, see [HPEXPIRETIME documentation](https://redis.io/commands/hpexpiretime).
```ts Example theme={"system"}
await redis.hset("my-key", "my-field", "my-value");
await redis.hpexpireat("my-key", "my-field", Date.now() + 1000);
const expireTime = await redis.hpexpiretime("my-key", "my-field");
console.log(expireTime); // e.g., 1697059200000
```
# HPTTL
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hpttl
Retrieves the remaining time-to-live (TTL) for field(s) in a hash in milliseconds.
## Arguments
The key of the hash.
The field(s) to retrieve the TTL for.
## Response
The remaining TTL in milliseconds for each field.
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `-1` if the field exists but has no associated expiration.
For more details, see [HPTTL documentation](https://redis.io/commands/hpttl).
```ts Example theme={"system"}
await redis.hset("my-key", "my-field", "my-value");
await redis.hpexpire("my-key", "my-field", 1000);
const ttl = await redis.hpttl("my-key", "my-field");
console.log(ttl); // e.g., [950]
```
# HRANDFIELD
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hrandfield
Return a random field from a hash
## Arguments
The key of the hash.
Optionally return more than one field.
Return the values of the fields as well.
## Response
A random field name; an array of field names when a count is given; or an object of fields and values when values are requested.
```ts Basic theme={"system"}
await redis.hset("key", {
id: 1,
username: "chronark",
name: "andreas"
});
const randomField = await redis.hrandfield("key");
console.log(randomField); // one of "id", "username" or "name"
```
```ts Multiple Fields theme={"system"}
await redis.hset("key", {
id: 1,
username: "chronark",
name: "andreas",
});
const randomFields = await redis.hrandfield("key", 2);
console.log(randomFields); // ["id", "username"] or any other combination
```
```ts With Values theme={"system"}
await redis.hset("key", {
id: 1,
username: "chronark",
name: "andreas",
});
const randomFields = await redis.hrandfield("key", 2, true);
console.log(randomFields); // { id: "1", username: "chronark" } or any other combination
```
# HSCAN
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hscan
Scan a hash for fields.
## Arguments
The key of the hash.
The cursor, use `0` in the beginning and then use the returned cursor for subsequent calls.
Glob-style pattern to filter by field names.
Number of fields to return per call.
## Response
The new cursor and the fields array in format `[field, value, field, value]`.
If the new cursor is `0` the iteration is complete.
```ts Basic theme={"system"}
await redis.hset("key", {
id: 1,
username: "chronark",
name: "andreas"
});
const [newCursor, fields] = await redis.hscan("key", 0);
console.log(newCursor); // likely `0` since this is a very small hash
console.log(fields); // ["id", 1, "username", "chronark", "name", "andreas"]
```
```ts Match theme={"system"}
await redis.hset("key", {
id: 1,
username: "chronark",
name: "andreas",
});
const [newCursor, fields] = await redis.hscan("key", 0, { match: "user*" });
console.log(newCursor); // likely `0` since this is a very small hash
console.log(fields); // ["username", "chronark"]
```
```ts Count theme={"system"}
await redis.hset("key", {
id: 1,
username: "chronark",
name: "andreas",
});
const [newCursor, fields] = await redis.hscan("key", 0, { count: 2 });
console.log(fields); // ["id", 1, "name", "andreas", "username", "chronark"]
```
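Because the fields come back flattened as `[field, value, field, value]`, a small helper can pair them into an object (a convenience sketch, not part of the SDK):

```ts
// Pair up HSCAN's flat [field, value, field, value] result into an object.
function pairFields(flat: (string | number)[]): Record<string, string | number> {
  const result: Record<string, string | number> = {};
  for (let i = 0; i < flat.length; i += 2) {
    result[String(flat[i])] = flat[i + 1];
  }
  return result;
}

const flat = ["id", 1, "username", "chronark", "name", "andreas"];
console.log(pairFields(flat));
// { id: 1, username: "chronark", name: "andreas" }
```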
# HSET
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hset
Write one or more fields to a hash.
## Arguments
The key of the hash.
An object of fields and their values.
## Response
The number of fields that were added.
```ts Example theme={"system"}
await redis.hset("key", {
id: 1,
username: "chronark",
name: "andreas"
});
```
# HSETNX
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hsetnx
Write a field to a hash but only if the field does not exist.
## Arguments
The key of the hash.
The name of the field.
Any value, if it's not a string it will be serialized to JSON.
## Response
`1` if the field was set, `0` if it already existed.
```ts Example theme={"system"}
await redis.hsetnx("key", "id", 1)
```
# HSTRLEN
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hstrlen
Returns the string length of a value in a hash.
## Arguments
The key of the hash.
The name of the field.
## Response
`0` if the hash or field does not exist. Otherwise the length of the string.
```ts Example theme={"system"}
const length = await redis.hstrlen("key", "field")
```
# HTTL
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/httl
Retrieves the remaining time-to-live (TTL) for field(s) in a hash in seconds.
## Arguments
The key of the hash.
The field(s) to retrieve the TTL for.
## Response
The remaining TTL in seconds for each field.
* `-2` if the field does not exist in the hash or if the key doesn't exist.
* `-1` if the field exists but has no associated expiration.
For more details, see [HTTL documentation](https://redis.io/commands/httl).
```ts Example theme={"system"}
await redis.hset("my-key", "my-field", "my-value");
await redis.hexpire("my-key", "my-field", 10);
const ttl = await redis.httl("my-key", "my-field");
console.log(ttl); // e.g., [9]
```
# HVALS
Source: https://upstash.com/docs/redis/sdks/ts/commands/hash/hvals
Returns all values in the hash stored at key.
## Arguments
The key of the hash.
## Response
All values in the hash, or an empty list when key does not exist.
```ts Example theme={"system"}
await redis.hset("key", {
field1: "Hello",
field2: "World",
})
const values = await redis.hvals("key")
console.log(values) // ["Hello", "World"]
```
# JSON.ARRAPPEND
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/arrappend
Append values to the array at path in the JSON document at key.
Values are serialized to JSON automatically, so a string value can be passed directly (for example, `"silver"`).
## Arguments
The key of the json entry.
The path of the array.
One or more values to append to the array.
## Response
The length of the array after the appending.
```ts Example theme={"system"}
await redis.json.arrappend("key", "$.path.to.array", "a");
```
# JSON.ARRINDEX
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/arrindex
Search for the first occurrence of a JSON value in an array.
## Arguments
The key of the json entry.
The path of the array.
The value to search for.
The start index.
The stop index.
## Response
The index of the first occurrence of the value in the array, or -1 if not found.
```ts Example theme={"system"}
const index = await redis.json.arrindex("key", "$.path.to.array", "a");
```
# JSON.ARRINSERT
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/arrinsert
Insert the json values into the array at path before the index (shifts to the right).
## Arguments
The key of the json entry.
The path of the array.
The index where to insert the values.
One or more values to append to the array.
## Response
The length of the array after the insertion.
```ts Example theme={"system"}
const length = await redis.json.arrinsert("key", "$.path.to.array", 2, "a", "b");
```
# JSON.ARRLEN
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/arrlen
Report the length of the JSON array at `path` in `key`.
## Arguments
The key of the json entry.
The path of the array.
## Response
The length of the array.
```ts Example theme={"system"}
const length = await redis.json.arrlen("key", "$.path.to.array");
```
# JSON.ARRPOP
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/arrpop
Remove and return an element from the array at `path`. By default, the last element is popped.
## Arguments
The key of the json entry.
The path of the array.
The index of the element to pop.
## Response
The popped element or null if the array is empty.
```ts Example theme={"system"}
const element = await redis.json.arrpop("key", "$.path.to.array");
```
```ts First theme={"system"}
const firstElement = await redis.json.arrpop("key", "$.path.to.array", 0);
```
# JSON.ARRTRIM
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/arrtrim
Trim an array so that it contains only the specified inclusive range of elements.
## Arguments
The key of the json entry.
The path of the array.
The start index of the range.
The stop index of the range.
## Response
The length of the array after the trimming.
```ts Example theme={"system"}
const length = await redis.json.arrtrim("key", "$.path.to.array", 2, 10);
```
# JSON.CLEAR
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/clear
Clear container values (arrays/objects) and set numeric values to 0.
## Arguments
The key of the json entry.
The path to clear
## Response
How many values were cleared.
```ts Example theme={"system"}
await redis.json.clear("key");
```
```ts With path theme={"system"}
await redis.json.clear("key", "$.my.key");
```
# JSON.DEL
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/del
Delete a key from a JSON document.
## Arguments
The key of the json entry.
The path to delete
## Response
How many paths were deleted.
```ts Example theme={"system"}
await redis.json.del("key", "$.path.to.value");
```
# JSON.FORGET
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/forget
Delete a key from a JSON document.
## Arguments
The key of the json entry.
The path to forget.
## Response
How many paths were deleted.
```ts Example theme={"system"}
await redis.json.forget("key", "$.path.to.value");
```
# JSON.GET
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/get
Get a single value from a JSON document.
## Arguments
The key of the json entry.
Sets the indentation string for nested levels.
Sets the string that's printed at the end of each line.
Sets the string that is put between a key and a value.
One or more paths to retrieve from the JSON document.
## Response
The value at the specified path or `null` if the path does not exist.
```ts Example theme={"system"}
const value = await redis.json.get("key", "$.path.to.somewhere");
```
```ts With Options theme={"system"}
const value = await redis.json.get("key", {
indent: " ",
newline: "\n",
space: " ",
}, "$.path.to.somewhere");
```
# JSON.MERGE
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/merge
Merges the JSON value at path in key with the provided value.
## Arguments
The key of the json entry.
The path of the value to set.
The value to merge with.
## Response
Returns "OK" if the merge was successful.
```ts Example theme={"system"}
await redis.json.merge("key", "$.path.to.value", {"new": "value"})
```
# JSON.MGET
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/mget
Get the same path from multiple JSON documents.
## Arguments
One or more keys of JSON documents.
The path to get from the JSON document.
## Response
The values at the specified path or `null` if the path does not exist.
```ts Example theme={"system"}
const values = await redis.json.mget(["key1", "key2"], "$.path.to.somewhere");
```
# JSON.MSET
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/mset
Sets multiple JSON values at multiple paths in multiple keys.
## Arguments
A list of objects where each tuple contains a key, a path, and a value.
The value type (`TData`) can be a `string`, `number`, `boolean`, `Record`, or `Array`.
## Response
Returns "OK" if the command was successful.
```ts Example theme={"system"}
await redis.json.mset([
{ key: key, path: "$.path", value: value},
{ key: key2, path: "$.path2", value: value2}
])
```
# JSON.NUMINCRBY
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/numincrby
Increment the number value stored at `path` by number.
## Arguments
The key of the json entry.
The path of the number.
The number to increment by.
## Response
The new value after incrementing
```ts Example theme={"system"}
const newValue = await redis.json.numincrby("key", "$.path.to.value", 2);
```
# JSON.NUMMULTBY
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/nummultby
Multiply the number value stored at `path` by number.
## Arguments
The key of the json entry.
The path of the number.
The number to multiply by.
## Response
The new value after multiplying
```ts Example theme={"system"}
const newValue = await redis.json.nummultby("key", "$.path.to.value", 2);
```
# JSON.OBJKEYS
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/objkeys
Return the keys in the object that's referenced by `path`.
## Arguments
The key of the json entry.
The path of the object.
## Response
The keys of the object at the path.
```ts Example theme={"system"}
const keys = await redis.json.objkeys("key", "$.path");
```
# JSON.OBJLEN
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/objlen
Report the number of keys in the JSON object at `path` in `key`.
## Arguments
The key of the json entry.
The path of the object.
## Response
The number of keys in the object.
```ts Example theme={"system"}
const lengths = await redis.json.objlen("key", "$.path");
```
# JSON.SET
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/set
Set the JSON value at path in key.
## Arguments
The key of the json entry.
The path of the value to set.
The value to set.
## Response
`OK`
```ts Example theme={"system"}
await redis.json.set(key, "$.path", value);
```
```ts NX theme={"system"}
const value = ...
await redis.json.set(key, "$.path", value, { nx: true });
```
```ts XX theme={"system"}
const value = ...
await redis.json.set(key, "$.path", value, { xx: true });
```
# JSON.STRAPPEND
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/strappend
Append the json-string values to the string at path.
## Arguments
The key of the json entry.
The path of the value.
The value to append to the existing string.
## Response
The length of the string after the append operation.
```ts Example theme={"system"}
await redis.json.strappend("key", "$.path.to.str", "abc");
```
# JSON.STRLEN
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/strlen
Report the length of the JSON string at `path` in `key`.
## Arguments
The key of the json entry.
The path of the string.
## Response
An array of integer replies, one per matching path: the string's length, or `null` if the value at that path is not a string.
```ts Example theme={"system"}
await redis.json.strlen("key", "$.path.to.str");
```
# JSON.TOGGLE
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/toggle
Toggle a boolean value stored at `path`.
## Arguments
The key of the json entry.
The path of the boolean.
## Response
The new value of the boolean.
```ts Example theme={"system"}
const bool = await redis.json.toggle("key", "$.path.to.bool");
```
# JSON.TYPE
Source: https://upstash.com/docs/redis/sdks/ts/commands/json/type
Report the type of JSON value at `path`.
## Arguments
The key of the json entry.
The path of the value.
## Response
The type of the value at `path` or `null` if the value does not exist.
```ts Example theme={"system"}
const myType = await redis.json.type("key", "$.path.to.value");
```
# LINDEX
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/lindex
Returns the element at index `index` in the list stored at key.
The index is zero-based, so `0` means the first element, `1` the second element, and so on. Negative indices can be used to designate elements starting at the tail of the list.
## Arguments
The key of the list.
The index of the element to return, zero-based.
## Response
The value of the element at index `index` in the list. If the index is out of range, `null` is returned.
```ts Example theme={"system"}
await redis.rpush("key", "a", "b", "c");
const element = await redis.lindex("key", 0);
console.log(element); // "a"
```
# LINSERT
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/linsert
Insert an element before or after another element in a list
## Arguments
The key of the list.
Whether to insert the element before or after pivot.
The element to insert before or after.
The element to insert.
## Response
The list length after insertion, `0` when the list doesn't exist or `-1` when pivot was not found.
```ts Example theme={"system"}
await redis.rpush("key", "a", "b", "c");
await redis.linsert("key", "before", "b", "x");
```
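The `before`/`after` positioning can be sketched with a small model of LINSERT on a plain array (an illustration only; the real command runs server-side and returns the same reply shape):

```ts
// Illustrative model of LINSERT: insert `element` before or after the
// first occurrence of `pivot`. Returns the new length, or -1 when the
// pivot is not found, mirroring the command's reply.
function linsertModel(
  list: string[],
  where: "before" | "after",
  pivot: string,
  element: string,
): number {
  const i = list.indexOf(pivot);
  if (i === -1) return -1; // pivot not found
  list.splice(where === "before" ? i : i + 1, 0, element);
  return list.length;
}

const list = ["a", "b", "c"];
linsertModel(list, "before", "b", "x");
console.log(list); // ["a", "x", "b", "c"]
```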
# LLEN
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/llen
Returns the length of the list stored at key.
## Arguments
The key of the list.
## Response
The length of the list at key.
```ts Example theme={"system"}
await redis.rpush("key", "a", "b", "c");
const length = await redis.llen("key");
console.log(length); // 3
```
# LMOVE
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/lmove
Move an element from one list to another.
## Arguments
The key of the source list.
The key of the destination list.
The side of the source list from which the element is popped.
The side of the destination list to which the element is pushed.
## Response
The element that was moved.
```ts Example theme={"system"}
await redis.rpush("source", "a", "b", "c");
const element = await redis.lmove("source", "destination", "left", "left");
```
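Which ends "left" and "right" refer to can be sketched with a small model on plain arrays (an illustration only; the real command pops and pushes atomically on the server):

```ts
// Illustrative model of LMOVE: pop from one side of `source`, push onto
// one side of `destination`, and return the moved element (or null when
// the source is empty).
function lmoveModel(
  source: string[],
  destination: string[],
  from: "left" | "right",
  to: "left" | "right",
): string | null {
  if (source.length === 0) return null;
  const element = from === "left" ? source.shift()! : source.pop()!;
  if (to === "left") destination.unshift(element);
  else destination.push(element);
  return element;
}

const src = ["a", "b", "c"];
const dst: string[] = [];
lmoveModel(src, dst, "left", "left");
console.log(src, dst); // ["b", "c"] ["a"]
```

Passing the same list as both source and destination with `"right", "left"` models the classic list-rotation pattern.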
# LPOP
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/lpop
Remove and return the first element(s) of a list
## Arguments
The key of the list.
How many elements to pop. If not specified, a single element is popped.
## Response
The popped element(s). If `count` was specified, an array of elements is
returned, otherwise a single element is returned. If the list is empty, `null`
is returned.
```ts Single theme={"system"}
await redis.rpush("key", "a", "b", "c");
const element = await redis.lpop("key");
console.log(element); // "a"
```
```ts Multiple theme={"system"}
await redis.rpush("key", "a", "b", "c");
const element = await redis.lpop("key", 2);
console.log(element); // ["a", "b"]
```
# LPOS
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/lpos
Returns the index of matching elements inside a list.
## Arguments
The key of the list.
The element to match.
The rank of the element to match. If specified, the element at the given
rank is matched instead of the first element.
The maximum number of elements to match. If specified, an array of elements
is returned instead of a single element.
Limit the number of comparisons to perform.
## Response
The index of the matching element or an array of indexes if `opts.count` is
specified.
```ts Example theme={"system"}
await redis.rpush("key", "a", "b", "c");
const index = await redis.lpos("key", "b");
console.log(index); // 1
```
```ts With Rank theme={"system"}
await redis.rpush("key", "a", "b", "c", "b");
const index = await redis.lpos("key", "b", { rank: 2 });
console.log(index); // 3
```
```ts With Count theme={"system"}
await redis.rpush("key", "a", "b", "b");
const positions = await redis.lpos("key", "b", { count: 2 });
console.log(positions); // [1, 2]
```
# LPUSH
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/lpush
Push an element at the head of the list.
## Arguments
The key of the list.
One or more elements to push at the head of the list.
## Response
The length of the list after the push operation.
```ts Example theme={"system"}
const length1 = await redis.lpush("key", "a", "b", "c");
console.log(length1); // 3
const length2 = await redis.lpush("key", "d");
console.log(length2); // 4
```
# LPUSHX
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/lpushx
Push an element at the head of the list only if the list exists.
## Arguments
The key of the list.
One or more elements to push at the head of the list.
## Response
The length of the list after the push operation.
`0` if the list did not exist and thus no element was pushed.
```ts Example theme={"system"}
await redis.lpush("key", "a", "b", "c");
const length = await redis.lpushx("key", "d");
console.log(length); // 4
```
```ts Without existing list theme={"system"}
const length = await redis.lpushx("key", "a");
console.log(length); // 0
```
# LRANGE
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/lrange
Returns the specified elements of the list stored at key.
## Arguments
The key of the list.
The starting index of the range to return.
Use negative numbers to specify offsets starting at the end of the list.
The ending index of the range to return.
Use negative numbers to specify offsets starting at the end of the list.
## Response
The list of elements in the specified range.
```ts Example theme={"system"}
await redis.rpush("key", "a", "b", "c");
const elements = await redis.lrange("key", 1, 2);
console.log(elements) // ["b", "c"]
```
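The index handling, in particular negative offsets, can be sketched with a small model (an illustration only; LRANGE runs server-side):

```ts
// Illustrative model of LRANGE: negative indices count from the tail
// (-1 is the last element) and the range is inclusive on both ends.
function lrangeModel(list: string[], start: number, stop: number): string[] {
  const n = list.length;
  const from = start < 0 ? Math.max(n + start, 0) : start;
  const to = stop < 0 ? n + stop : Math.min(stop, n - 1);
  return from > to ? [] : list.slice(from, to + 1);
}

console.log(lrangeModel(["a", "b", "c"], 0, -1)); // ["a", "b", "c"]
console.log(lrangeModel(["a", "b", "c"], -2, -1)); // ["b", "c"]
```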
# LREM
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/lrem
Remove the first `count` occurrences of an element from a list.
## Arguments
The key of the list.
How many occurrences of the element to remove.
The element to remove
## Response
The number of elements removed.
```ts Example theme={"system"}
await redis.lpush("key", "a", "a", "b", "b", "c");
const removed = await redis.lrem("key", 4, "b");
console.log(removed) // 2
```
# LSET
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/lset
Set a value at a specific index.
## Arguments
The key of the list.
At which index to set the value.
The value to set.
## Response
`OK`
```ts Example theme={"system"}
await redis.rpush("key", "a", "b", "c");
await redis.lset("key", 1, "d");
// list is now ["a", "d", "c"]
```
# LTRIM
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/ltrim
Trim a list to the specified range
## Arguments
The key of the list.
The index of the first element to keep.
The index of the last element to keep.
## Response
`OK`
```ts Example theme={"system"}
await redis.rpush("key", "a", "b", "c", "d");
await redis.ltrim("key", 1, 2);
// the list is now ["b", "c"]
```
# RPOP
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/rpop
Remove and return the last element(s) of a list
## Arguments
The key of the list.
How many elements to pop. If not specified, a single element is popped.
## Response
The popped element(s). If `count` was specified, an array of elements is
returned, otherwise a single element is returned. If the list is empty, `null`
is returned.
```ts Single theme={"system"}
await redis.rpush("key", "a", "b", "c");
const element = await redis.rpop("key");
console.log(element); // "c"
```
```ts Multiple theme={"system"}
await redis.rpush("key", "a", "b", "c");
const element = await redis.rpop("key", 2);
console.log(element); // ["c", "b"]
```
# RPUSH
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/rpush
Push an element at the end of the list.
## Arguments
The key of the list.
One or more elements to push at the end of the list.
## Response
The length of the list after the push operation.
```ts Example theme={"system"}
const length1 = await redis.rpush("key", "a", "b", "c");
console.log(length1); // 3
const length2 = await redis.rpush("key", "d");
console.log(length2); // 4
```
# RPUSHX
Source: https://upstash.com/docs/redis/sdks/ts/commands/list/rpushx
Push an element at the end of the list only if the list exists.
## Arguments
The key of the list.
One or more elements to push at the end of the list.
## Response
The length of the list after the push operation.
`0` if the list did not exist and thus no element was pushed.
```ts Example theme={"system"}
await redis.lpush("key", "a", "b", "c");
const length = await redis.rpushx("key", "d");
console.log(length); // 4
```
```ts Without existing list theme={"system"}
const length = await redis.rpushx("key", "a");
console.log(length); // 0
```
# Overview
Source: https://upstash.com/docs/redis/sdks/ts/commands/overview
Available Commands in @upstash/redis
* `ECHO`: Echo the given string.
* `PING`: Ping the server.
* `BITCOUNT`: Count set bits in a string.
* `BITOP`: Perform bitwise operations between strings.
* `BITPOS`: Find first bit set or clear in a string.
* `GETBIT`: Returns the bit value at offset in the string value stored at key.
* `SETBIT`: Sets or clears the bit at offset in the string value stored at key.
* `DEL`: Delete one or multiple keys.
* `EXISTS`: Determine if a key exists.
* `EXPIRE`: Set a key's time to live in seconds.
* `EXPIREAT`: Set the expiration for a key as a UNIX timestamp.
* `KEYS`: Find all keys matching the given pattern.
* `PERSIST`: Remove the expiration from a key.
* `PEXPIRE`: Set a key's time to live in milliseconds.
* `PEXPIREAT`: Set the expiration for a key as a UNIX timestamp specified in milliseconds.
* `PTTL`: Get the time to live for a key in milliseconds.
* `RANDOMKEY`: Return a random key from the keyspace.
* `RENAME`: Rename a key.
* `RENAMENX`: Rename a key, only if the new key does not exist.
* `SCAN`: Incrementally iterate the keys space.
* `TOUCH`: Alters the last access time of one or more keys. Returns the number of existing keys specified.
* `TTL`: Get the time to live for a key.
* `TYPE`: Determine the type stored at key.
* `UNLINK`: Delete one or more keys.
* `PUBLISH`: Publish messages to many clients.
* `XADD`: Appends a new entry to a stream.
* `XRANGE`: Return a range of elements in a stream, with IDs matching the specified IDs interval.
* `APPEND`: Append a value to a string stored at key.
* `DECR`: Decrement the integer value of a key by one.
* `DECRBY`: Decrement the integer value of a key by the given number.
* `GET`: Get the value of a key.
* `GETDEL`: Get the value of a key and delete the key.
* `GETRANGE`: Get a substring of the string stored at a key.
* `GETSET`: Set the string value of a key and return its old value.
* `INCR`: Increment the integer value of a key by one.
* `INCRBY`: Increment the integer value of a key by the given amount.
* `INCRBYFLOAT`: Increment the float value of a key by the given amount.
* `MGET`: Get the values of all the given keys.
* `MSET`: Set multiple keys to multiple values.
* `MSETNX`: Set multiple keys to multiple values, only if none of the keys exist.
* `SET`: Set the string value of a key.
* `SETRANGE`: Overwrite part of a string at key starting at the specified offset.
* `STRLEN`: Get the length of the value stored in a key.
* `XACK`: Acknowledge one or multiple messages as processed for a consumer group.
* `XADD`: Append a new entry to a stream.
* `XAUTOCLAIM`: Transfer ownership of pending messages to another consumer automatically.
* `XCLAIM`: Transfer ownership of pending messages to another consumer.
* `XDEL`: Remove one or multiple entries from a stream.
* `XGROUP`: Manage consumer groups for Redis streams.
* `XINFO`: Get information about streams, consumer groups, and consumers.
* `XLEN`: Get the number of entries in a stream.
* `XPENDING`: Get information about pending messages in a consumer group.
* `XRANGE`: Get entries from a stream within a range of IDs.
* `XREAD`: Read data from one or multiple streams.
* `XREADGROUP`: Read data from streams as part of a consumer group.
* `XREVRANGE`: Get entries from a stream within a range of IDs in reverse order.
* `XTRIM`: Trim a stream to a specified size.
* `MULTI`: Run multiple commands in a transaction.
# PSUBSCRIBE
Source: https://upstash.com/docs/redis/sdks/ts/commands/pubsub/psubscribe
Subscribe to a channel by patterns/wildcards
## Arguments
The patterns matching the channels to subscribe to.
## Response
A subscriber instance which can subscribe to channels.
```ts Example theme={"system"}
const subscription = redis.psubscribe(["user:*"]);
const messages = [];
subscription.on("pmessage", (data) => {
messages.push(data.message);
});
await redis.publish("user:123", "user:123 message"); // receives
await redis.publish("user:456", "user:456 message"); // receives
await redis.publish("other:789", "other:789 message"); // doesn't receive
console.log(messages[0]) // user:123 message
console.log(messages[1]) // user:456 message
console.log(messages[2]) // undefined
```
# PUBLISH
Source: https://upstash.com/docs/redis/sdks/ts/commands/pubsub/publish
Publish a message to a channel
## Arguments
The channel to publish to.
The message to publish.
## Response
The number of clients who received the message.
```ts Example theme={"system"}
const listeners = await redis.publish("my-channel", "my-message");
```
# SUBSCRIBE
Source: https://upstash.com/docs/redis/sdks/ts/commands/pubsub/subscribe
Subscribe to a channel
## Arguments
The channel(s) to subscribe to.
## Response
A subscriber instance which can subscribe to channels.
```ts Example theme={"system"}
const subscription = redis.subscribe(["my-channel"]);
const messages = [];
subscription.on("message", (data) => {
messages.push(data.message);
});
```
# EVAL
Source: https://upstash.com/docs/redis/sdks/ts/commands/scripts/eval
Evaluate a Lua script server side.
## Arguments
The lua script to run.
All of the keys accessed in the script
All of the arguments you passed to the script
## Response
The result of the script.
```ts Example theme={"system"}
const script = `
return ARGV[1]
`
const result = await redis.eval(script, [], ["hello"]);
console.log(result) // "hello"
```
# EVAL_RO
Source: https://upstash.com/docs/redis/sdks/ts/commands/scripts/eval_ro
Evaluate a read-only Lua script server side.
## Arguments
The read-only lua script to run.
All of the keys accessed in the script
All of the arguments you passed to the script
## Response
The result of the script.
```ts Example theme={"system"}
const script = `
return ARGV[1]
`
const result = await redis.evalRo(script, [], ["hello"]);
console.log(result) // "hello"
```
# EVALSHA
Source: https://upstash.com/docs/redis/sdks/ts/commands/scripts/evalsha
Evaluate a cached Lua script server side.
`EVALSHA` is like `EVAL` but instead of sending the script over the wire every time, you reference the script by its SHA1 hash. This is useful for caching scripts on the server side.
## Arguments
The sha1 hash of the script.
All of the keys accessed in the script
All of the arguments you passed to the script
## Response
The result of the script.
```ts Example theme={"system"}
const result = await redis.evalsha("fb67a0c03b48ddbf8b4c9b011e779563bdbc28cb", [], ["hello"]);
console.log(result) // "hello"
```
# EVALSHA_RO
Source: https://upstash.com/docs/redis/sdks/ts/commands/scripts/evalsha_ro
Evaluate a cached read-only Lua script server side.
`EVALSHA_RO` is like `EVAL_RO` but instead of sending the script over the wire every time, you reference the script by its SHA1 hash. This is useful for caching scripts on the server side.
## Arguments
The sha1 hash of the read-only script.
All of the keys accessed in the script
All of the arguments you passed to the script
## Response
The result of the script.
```ts Example theme={"system"}
const result = await redis.evalshaRo("fb67a0c03b48ddbf8b4c9b011e779563bdbc28cb", [], ["hello"]);
console.log(result) // "hello"
```
# SCRIPT EXISTS
Source: https://upstash.com/docs/redis/sdks/ts/commands/scripts/script_exists
Check if scripts exist in the script cache.
## Arguments
The sha1 of the scripts to check.
## Response
An array of numbers. `1` if the script exists, otherwise `0`.
```ts Example theme={"system"}
const sha1 = await redis.scriptLoad("return 1");
const exists = await redis.scriptExists(sha1, "nonexistent-sha1");
console.log(exists); // [1, 0]
```
# SCRIPT FLUSH
Source: https://upstash.com/docs/redis/sdks/ts/commands/scripts/script_flush
Removes all scripts from the script cache.
## Arguments
Performs the flush asynchronously.
Performs the flush synchronously.
```ts Example theme={"system"}
await redis.scriptFlush();
```
```ts With options theme={"system"}
await redis.scriptFlush({
async: true,
});
```
# SCRIPT LOAD
Source: https://upstash.com/docs/redis/sdks/ts/commands/scripts/script_load
Load the specified Lua script into the script cache.
## Arguments
The script to load.
## Response
The sha1 of the script.
```ts Example theme={"system"}
const script = `
local value = redis.call('GET', KEYS[1])
return value
`;
const sha1 = await redis.scriptLoad(script);
```
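A common pattern pairs `SCRIPT LOAD` with `EVALSHA`: run the cached hash first, and re-load the script only when the cache misses. Below is a minimal sketch against a client shaped like the methods shown on these pages (`eval`, `evalsha`, `scriptLoad`); detecting the miss by matching `NOSCRIPT` in the error message is an assumption about how the error surfaces:

```ts
type ScriptClient = {
  scriptLoad: (script: string) => Promise<string>;
  evalsha: (sha1: string, keys: string[], args: unknown[]) => Promise<unknown>;
  eval: (script: string, keys: string[], args: unknown[]) => Promise<unknown>;
};

// Run `script` by its sha1, falling back to EVAL on a NOSCRIPT cache miss.
async function evalCached(
  client: ScriptClient,
  script: string,
  sha1: string,
  keys: string[],
  args: unknown[],
): Promise<unknown> {
  try {
    return await client.evalsha(sha1, keys, args);
  } catch (err) {
    if (!String(err).includes("NOSCRIPT")) throw err;
    await client.scriptLoad(script); // repopulate the server-side cache
    return await client.eval(script, keys, args);
  }
}
```

With a live client, `evalCached(redis, script, sha1, [], ["hello"])` behaves like a plain `evalsha` once the script is cached.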
# DBSIZE
Source: https://upstash.com/docs/redis/sdks/ts/commands/server/dbsize
Count the number of keys in the database.
## Arguments
This command has no arguments
## Response
The number of keys in the database
```ts Example theme={"system"}
const keys = await redis.dbsize();
console.log(keys) // 20
```
# FLUSHALL
Source: https://upstash.com/docs/redis/sdks/ts/commands/server/flushall
Deletes all keys permanently. Use with caution!
## Arguments
Whether to perform the operation asynchronously.
Defaults to synchronous.
```ts Sync theme={"system"}
await redis.flushall();
```
```ts Async theme={"system"}
await redis.flushall({async: true})
```
# FLUSHDB
Source: https://upstash.com/docs/redis/sdks/ts/commands/server/flushdb
Deletes all keys permanently. Use with caution!
## Arguments
Whether to perform the operation asynchronously.
Defaults to synchronous.
```ts Sync theme={"system"}
await redis.flushdb();
```
```ts Async theme={"system"}
await redis.flushdb({async: true})
```
# SADD
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/sadd
Adds one or more members to a set.
## Arguments
The key of the set.
One or more members to add to the set.
## Response
The number of elements that were added to the set, not including all the elements already present in the set.
```ts Example theme={"system"}
// 3
await redis.sadd("key", "a", "b", "c");
// 0
await redis.sadd("key", "a", "b");
```
# SCARD
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/scard
Return how many members are in a set
## Arguments
The key of the set.
## Response
How many members are in the set.
```ts Example theme={"system"}
await redis.sadd("key", "a", "b", "c");
const cardinality = await redis.scard("key");
console.log(cardinality); // 3
```
# SDIFF
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/sdiff
Return the difference between sets
## Arguments
The keys of the sets to perform the difference operation on.
## Response
The members of the resulting set.
```ts Example theme={"system"}
await redis.sadd("set1", "a", "b", "c");
await redis.sadd("set2", "c", "d", "e");
const diff = await redis.sdiff("set1", "set2");
console.log(diff); // ["a", "b"]
```
# SDIFFSTORE
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/sdiffstore
Write the difference between sets to a new set
## Arguments
The key of the set to store the resulting set in.
The keys of the sets to perform the difference operation on.
## Response
The number of elements in the resulting set.
```ts Example theme={"system"}
await redis.sadd("set1", "a", "b", "c");
await redis.sadd("set2", "c", "d", "e");
await redis.sdiffstore("dest", "set1", "set2");
// "dest" now contains ["a", "b"]
```
# SINTER
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/sinter
Return the intersection between sets
## Arguments
The keys of the sets to perform the intersection operation on.
## Response
The members of the resulting set.
```ts Example theme={"system"}
await redis.sadd("set1", "a", "b", "c");
await redis.sadd("set2", "c", "d", "e");
const intersection = await redis.sinter("set1", "set2");
console.log(intersection); // ["c"]
```
# SINTERSTORE
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/sinterstore
Return the intersection between sets and store the resulting set in a key
## Arguments
The key of the set to store the resulting set in.
The keys of the sets to perform the intersection operation on.
## Response
The number of elements in the resulting set.
```ts Example theme={"system"}
await redis.sadd("set1", "a", "b", "c");
await redis.sadd("set2", "c", "d", "e");
await redis.sinterstore("destination", "set1", "set2");
```
# SISMEMBER
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/sismember
Check if a member exists in a set
## Arguments
The key of the set to check.
The member to check for.
## Response
`1` if the member exists in the set, `0` if not.
```ts Example theme={"system"}
await redis.sadd("set", "a", "b", "c");
const isMember = await redis.sismember("set", "a");
console.log(isMember); // 1
```
# SMEMBERS
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/smembers
Return all the members of a set
## Arguments
The key of the set.
## Response
The members of the set.
```ts Example theme={"system"}
await redis.sadd("set", "a", "b", "c");
const members = await redis.smembers("set");
console.log(members); // ["a", "b", "c"]
```
# SMISMEMBER
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/smismember
Check if multiple members exist in a set
## Arguments
The key of the set to check.
The members to check
## Response
An array of `0` and `1` values.
`1` if the member exists in the set, `0` if not.
```ts Example theme={"system"}
await redis.sadd("set", "a", "b", "c");
const members = await redis.smismember("set", ["a", "b", "d"]);
console.log(members); // [1, 1, 0]
```
# SMOVE
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/smove
Move a member from one set to another
## Arguments
The key of the set to move the member from.
The key of the set to move the member to.
The member to move.
## Response
`1` if the member was moved, `0` if not.
```ts Example theme={"system"}
await redis.sadd("original", "a", "b", "c");
const moved = await redis.smove("original", "destination", "a");
// moved: 1
// original: ["b", "c"]
// destination: ["a"]
```
# SPOP
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/spop
Removes and returns one or more random members from a set.
## Arguments
The key of the set.
How many members to remove and return.
## Response
The popped member.
If `count` is specified, an array of members is returned.
```ts Example theme={"system"}
await redis.sadd("set", "a", "b", "c");
const popped = await redis.spop("set");
console.log(popped); // e.g. "a" (random)
```
```ts With Count theme={"system"}
await redis.sadd("set", "a", "b", "c");
const popped = await redis.spop("set", 2);
console.log(popped); // e.g. ["a", "b"] (random)
```
# SRANDMEMBER
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/srandmember
Returns one or more random members from a set.
## Arguments
The key of the set.
How many members to return.
## Response
The random member.
If `count` is specified, an array of members is returned.
```ts Example theme={"system"}
await redis.sadd("set", "a", "b", "c");
const member = await redis.srandmember("set");
console.log(member); // e.g. "a" (random)
```
```ts With Count theme={"system"}
await redis.sadd("set", "a", "b", "c");
const members = await redis.srandmember("set", 2);
console.log(members); // e.g. ["a", "b"] (members are chosen at random)
```
# SREM
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/srem
Remove one or more members from a set
## Arguments
The key of the set to remove the member from.
One or more members to remove from the set.
## Response
How many members were removed.
```ts Example theme={"system"}
await redis.sadd("set", "a", "b", "c");
const removed = await redis.srem("set", "a", "b", "d");
console.log(removed); // 2
```
# SSCAN
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/sscan
Scan a set
## Arguments
The key of the set.
The cursor, use `0` in the beginning and then use the returned cursor for subsequent calls.
Glob-style pattern to filter by members.
Number of members to return per call.
## Response
The new cursor and the members.
If the new cursor is `0` the iteration is complete.
```ts Example theme={"system"}
await redis.sadd("key", "a", "ab", "b", "c");
const [newCursor, fields] = await redis.sscan("key", 0, { match: "a*" });
console.log(newCursor); // likely `0` since this is a very small set
console.log(fields); // ["a", "ab"]
```
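Because the returned cursor must be fed back into the next call until it comes back as `0`, a full scan is naturally a loop. The generic helper below is a sketch, not part of the SDK; it only assumes the command returns a `[cursor, items]` pair:

```ts Draining a scan theme={"system"}
// Drain a SCAN-style command by looping until the cursor comes back as "0".
// `scan` is any function taking the current cursor and returning [nextCursor, items].
async function scanAll<T>(
  scan: (cursor: string | number) => Promise<[string | number, T[]]>
): Promise<T[]> {
  const items: T[] = [];
  let cursor: string | number = 0;
  do {
    const [next, page] = await scan(cursor);
    items.push(...page);
    cursor = next;
  } while (String(cursor) !== "0");
  return items;
}
```

With the SDK this could be used as `scanAll((c) => redis.sscan("key", c, { match: "a*" }))`.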
# SUNION
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/sunion
Return the union between sets
## Arguments
The keys of the sets to perform the union operation on.
## Response
The members of the resulting set.
```ts Example theme={"system"}
await redis.sadd("set1", "a", "b", "c");
await redis.sadd("set2", "c", "d", "e");
const union = await redis.sunion("set1", "set2");
console.log(union); // ["a", "b", "c", "d", "e"]
```
# SUNIONSTORE
Source: https://upstash.com/docs/redis/sdks/ts/commands/set/sunionstore
Return the union between sets and store the resulting set in a key
## Arguments
The key of the set to store the resulting set in.
The keys of the sets to perform the union operation on.
## Response
The members of the resulting set.
```ts Example theme={"system"}
await redis.sadd("set1", "a", "b", "c");
await redis.sadd("set2", "c", "d", "e");
await redis.sunionstore("destination", "set1", "set2");
```
# XACK
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xack
Removes one or multiple messages from the pending entries list of a stream consumer group.
## Arguments
The key of the stream.
The consumer group name.
The ID(s) of the message(s) to acknowledge. Can be a single ID or an array of IDs.
## Response
The number of messages successfully acknowledged.
```ts Single message theme={"system"}
const result = await redis.xack("mystream", "mygroup", "1638360173533-0");
```
```ts Multiple messages theme={"system"}
const result = await redis.xack("mystream", "mygroup", [
"1638360173533-0",
"1638360173533-1"
]);
```
# XADD
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xadd
Appends one or more new entries to a stream.
## Arguments
The key of the stream.
The stream entry ID. If `*` is passed, a new ID will be generated
automatically.
Key-value data to be appended to the stream.
Prevent creating the stream if it does not exist.
Trim options for the stream.
The trim strategy:
* `MAXLEN`: Trim based on the maximum number of entries
* `MINID`: Trim based on the minimum ID
The threshold value for trimming:
* For `MAXLEN`: The maximum number of entries to keep (number)
* For `MINID`: The minimum ID to keep (string)
The comparison operator:
* `~`: Approximate trimming (more efficient)
* `=`: Exact trimming
Limit how many entries will be trimmed at most
## Response
The ID of the newly added entry.
```ts Basic Example theme={"system"}
const result = await redis.xadd("mystream", "*", { name: "John Doe", age: 30 });
```
```ts With Custom ID theme={"system"}
const result = await redis.xadd("mystream", "1634567890123-0", { temperature: 25.5, humidity: 60 });
```
```ts Trimming with MAXLEN theme={"system"}
const result = await redis.xadd("mystream", "*", { event: "user_login", user_id: "12345" }, {
trim: {
type: "MAXLEN",
threshold: 1000,
comparison: "="
}
});
```
```ts Prevent Stream Creation theme={"system"}
const result = await redis.xadd("existing_stream", "*", { data: "value" }, {
nomkStream: true
});
```
```ts Trimming with MINID theme={"system"}
const result = await redis.xadd("mystream", "*", { action: "purchase", amount: 99.99 }, {
trim: {
type: "MINID",
threshold: "1634567890000-0",
comparison: "="
}
});
```
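Entry IDs have the form `<millisecondsTime>-<sequenceNumber>`, and `MINID` trimming removes every entry whose ID is lower than the threshold. As an illustration of how that ordering works (this helper is hypothetical, not part of the SDK), IDs can be compared numerically part by part:

```ts Comparing stream IDs theme={"system"}
// Compare two stream entry IDs of the form "<ms>-<seq>".
// Negative if a < b, zero if equal, positive if a > b.
function compareStreamIds(a: string, b: string): number {
  const [aMs, aSeq = 0] = a.split("-").map(Number);
  const [bMs, bSeq = 0] = b.split("-").map(Number);
  return aMs !== bMs ? aMs - bMs : aSeq - bSeq;
}

compareStreamIds("1634567890000-0", "1634567890123-0"); // negative: the first ID is older
```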
# XAUTOCLAIM
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xautoclaim
Changes the ownership of pending messages from one consumer to another in a stream consumer group.
## Arguments
The key of the stream.
The consumer group name.
The consumer name that will claim the messages.
The minimum idle time in milliseconds for messages to be claimed.
The stream entry ID to start claiming from.
The maximum number of messages to claim.
Return only the message IDs instead of the full message data.
## Response
Returns a tuple containing:
* Next start ID for pagination
* Array of claimed messages (ID and field-value pairs)
* Array of deleted message IDs
```ts Basic autoclaim theme={"system"}
const result = await redis.xautoclaim(
"mystream",
"mygroup",
"consumer1",
60000,
"0-0"
);
```
```ts With count and justid theme={"system"}
const result = await redis.xautoclaim(
"mystream",
"mygroup",
"consumer1",
60000,
"0-0",
{ count: 5, justid: true }
);
```
```ts theme={"system"}
[
"1638360173533-1", // next start ID
[["1638360173533-0", ["field1", "value1", "field2", "value2"]]], // claimed messages
[] // deleted message IDs
]
```
# XCLAIM
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xclaim
Changes the ownership of pending messages from one consumer to another in a stream consumer group.
## Arguments
The key of the stream.
The consumer group name.
The consumer name that will claim the messages.
The minimum idle time in milliseconds for messages to be claimed.
The ID(s) of the message(s) to claim. Can be a single ID or an array of IDs.
Set the idle time of the message.
Set the idle time to a specific Unix time.
Set the retry counter to the specified value.
Create the pending message entry even if certain IDs are not already pending.
Return only the message IDs instead of the full message data.
## Response
Returns an array of claimed messages. If `justid` option is used, returns only message IDs.
```ts Basic claim theme={"system"}
const result = await redis.xclaim(
"mystream",
"mygroup",
"consumer1",
60000,
["1638360173533-0", "1638360173533-1"]
);
```
```ts With justid option theme={"system"}
const result = await redis.xclaim(
"mystream",
"mygroup",
"consumer1",
60000,
["1638360173533-0"],
{ justid: true, force: true }
);
```
```ts theme={"system"}
[
["1638360173533-0", ["field1", "value1", "field2", "value2"]],
["1638360173533-1", ["field1", "value3", "field2", "value4"]]
]
```
# XDEL
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xdel
Removes the specified entries from a stream, and returns the number of entries deleted.
## Arguments
The key of the stream.
The ID(s) of the message(s) to delete. Can be a single ID or an array of IDs.
## Response
The number of entries actually deleted from the stream.
```ts Single message theme={"system"}
const result = await redis.xdel("mystream", "1638360173533-0");
```
```ts Multiple messages theme={"system"}
const result = await redis.xdel("mystream", [
"1638360173533-0",
"1638360173533-1",
"1638360173533-2"
]);
```
# XGROUP
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xgroup
Manage consumer groups for Redis streams.
## Arguments
The key of the stream.
The XGROUP subcommand and its parameters. Can be one of:
Create a new consumer group.
The consumer group name.
The stream entry ID to start consuming from. Use '\$' to start from the end.
Create the stream if it doesn't exist.
Set the number of entries read by the group.
Create a new consumer in the group.
The consumer group name.
The consumer name to create.
Delete a consumer from the group.
The consumer group name.
The consumer name to delete.
Delete the entire consumer group.
The consumer group name to destroy.
Set the last delivered ID for the group.
The consumer group name.
The stream entry ID to set as the last delivered ID.
Set the number of entries read by the group.
## Response
The return type depends on the subcommand:
* CREATE: Returns "OK" string
* CREATECONSUMER: Returns 1 if created, 0 if already exists
* DELCONSUMER: Returns the number of pending messages the consumer had
* DESTROY: Returns 1 if destroyed, 0 if group didn't exist
* SETID: Returns "OK" string
```ts Create group theme={"system"}
const result = await redis.xgroup("mystream", {
type: "CREATE",
group: "mygroup",
id: "$",
options: { MKSTREAM: true }
});
```
```ts Create consumer theme={"system"}
const result = await redis.xgroup("mystream", {
type: "CREATECONSUMER",
group: "mygroup",
consumer: "consumer1"
});
```
```ts Delete consumer theme={"system"}
const result = await redis.xgroup("mystream", {
type: "DELCONSUMER",
group: "mygroup",
consumer: "consumer1"
});
```
```ts Set group ID theme={"system"}
const result = await redis.xgroup("mystream", {
type: "SETID",
group: "mygroup",
id: "0-0"
});
```
```ts Destroy group theme={"system"}
const result = await redis.xgroup("mystream", {
type: "DESTROY",
group: "mygroup"
});
```
# XINFO
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xinfo
Returns information about streams, consumer groups, and consumers.
## Arguments
The key of the stream.
The XINFO subcommand options. Can be one of:
List all consumer groups for the stream.
List all consumers in a consumer group.
The consumer group name.
## Response
The return type depends on the subcommand:
* GROUPS: Returns an array of consumer group information
* CONSUMERS: Returns an array of consumer information
```ts List groups theme={"system"}
const result = await redis.xinfo("mystream", {
type: "GROUPS"
});
```
```ts List consumers theme={"system"}
const result = await redis.xinfo("mystream", {
type: "CONSUMERS",
group: "mygroup"
});
```
```ts theme={"system"}
// GROUPS response
[
{
name: "mygroup",
consumers: 2,
pending: 1,
"last-delivered-id": "1638360173533-2",
"entries-read": 3,
lag: 2
}
]
// CONSUMERS response
[
{
name: "consumer1",
pending: 1,
idle: 15000,
"inactive": 15000
}
]
```
# XLEN
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xlen
Returns the number of entries inside a stream.
## Arguments
The key of the stream.
## Response
The number of entries in the stream. Returns 0 if the stream does not exist.
```ts Get stream length theme={"system"}
const result = await redis.xlen("mystream");
```
# XPENDING
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xpending
Returns information about pending messages in a stream consumer group.
## Arguments
The key of the stream.
The consumer group name.
The minimum pending ID to return. Use "-" for the first available ID.
The maximum pending ID to return. Use "+" for the last available ID.
The maximum number of pending messages to return.
Filter by minimum idle time in milliseconds.
Filter results by a specific consumer.
## Response
Returns an array of pending message details.
```ts Summary theme={"system"}
const result = await redis.xpending("mystream", "mygroup", "-", "+", 10);
```
```ts With idle time filter theme={"system"}
const result = await redis.xpending("mystream", "mygroup", "-", "+", 5, {
idleTime: 10000
});
```
```ts Specific consumer filter theme={"system"}
const result = await redis.xpending("mystream", "mygroup", "-", "+", 5, {
consumer: "consumer1"
});
```
```ts theme={"system"}
[
2, // total pending count
"1638360173533-0", // smallest pending ID
"1638360173533-1", // greatest pending ID
[
["consumer1", "1"], // consumer and their pending count
["consumer2", "1"]
]
]
```
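The positional tuple above is easy to misread, so a small shaping helper (hypothetical, not part of the SDK) can give the summary reply named fields:

```ts Naming the summary fields theme={"system"}
interface PendingSummary {
  total: number;
  minId: string;
  maxId: string;
  consumers: { name: string; pending: number }[];
}

// Name the positions of the XPENDING summary reply.
function parsePendingSummary(
  raw: [number, string, string, [string, string][]]
): PendingSummary {
  const [total, minId, maxId, consumers] = raw;
  return {
    total,
    minId,
    maxId,
    consumers: consumers.map(([name, pending]) => ({ name, pending: Number(pending) })),
  };
}
```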
# XRANGE
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xrange
Returns stream entries matching a given range of IDs.
## Arguments
The key of the stream.
The stream entry ID to start from.
The stream entry ID to end at.
The maximum number of entries to return.
## Response
An object of stream entries, keyed by their stream ID
```ts All entries theme={"system"}
const result = await redis.xrange("mystream", "-", "+");
```
```ts Range with specific IDs theme={"system"}
const result = await redis.xrange("mystream", "1548149259438-0", "1548149259438-5");
```
```ts Limited count theme={"system"}
const result = await redis.xrange("mystream", "-", "+", 10);
```
```ts theme={"system"}
{
"1548149259438-0": {
"field1": "value1",
"field2": "value2"
},
"1548149259438-1": {
"field1": "value3",
"field2": "value4"
}
}
```
# XREAD
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xread
Reads data from one or multiple streams, starting from the specified IDs.
## Arguments
The key(s) of the stream(s). Can be a single stream key or an array of stream keys.
The stream entry ID(s) to start reading from. Must match the number of keys provided.
Use "\$" to read only new messages added after the command is issued.
The maximum number of messages to return per stream.
## Response
Returns an array where each element represents a stream and contains:
* The stream key
* An array of messages (ID and field-value pairs)
Returns null if no data is available.
```ts Single stream theme={"system"}
const result = await redis.xread("mystream", "0-0");
```
```ts Multiple streams theme={"system"}
const result = await redis.xread(
["stream1", "stream2"],
["0-0", "0-0"]
);
```
```ts With count limit theme={"system"}
const result = await redis.xread("mystream", "0-0", { count: 2 });
```
```ts Only new messages theme={"system"}
const result = await redis.xread("mystream", "$");
```
```ts theme={"system"}
[
["mystream", [
["1638360173533-0", ["field1", "value1", "field2", "value2"]],
["1638360173533-1", ["field1", "value3", "field2", "value4"]]
]]
]
```
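Each message carries its fields as a flat `[field, value, field, value, ...]` array. A small conversion helper (not part of the SDK) makes those entries easier to consume:

```ts Flat fields to object theme={"system"}
// Convert a flat [field, value, field, value, ...] array into a plain object.
function fieldsToObject(flat: string[]): Record<string, string> {
  const obj: Record<string, string> = {};
  for (let i = 0; i + 1 < flat.length; i += 2) {
    obj[flat[i]] = flat[i + 1];
  }
  return obj;
}

fieldsToObject(["field1", "value1", "field2", "value2"]);
// → { field1: "value1", field2: "value2" }
```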
# XREADGROUP
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xreadgroup
Reads data from a stream as part of a consumer group.
## Arguments
The consumer group name.
The consumer name within the group.
The stream key(s) to read from. Can be a single stream key or an array of stream keys for multiple streams.
The starting ID(s) to read from. Use ">" to read messages never delivered to any consumer in the group.
For multiple streams, provide an array of IDs corresponding to each stream.
The maximum number of messages to return per stream.
Don't add messages to the pending entries list (messages won't need acknowledgment).
## Response
Returns an array where each element represents a stream and contains:
* The stream key
* An array of messages (ID and field-value pairs)
Returns null if no data is available.
```ts Read new messages theme={"system"}
const result = await redis.xreadgroup("mygroup", "consumer1", "mystream", ">");
```
```ts With count option theme={"system"}
const result = await redis.xreadgroup("mygroup", "consumer1", "mystream", ">", {
count: 5
});
```
```ts With NOACK option theme={"system"}
const result = await redis.xreadgroup("mygroup", "consumer1", "mystream", ">", {
NOACK: true
});
```
```ts Multiple streams theme={"system"}
const result = await redis.xreadgroup(
"mygroup",
"consumer1",
["stream1", "stream2"],
[">", ">"],
{ count: 1 }
);
```
```ts Read pending messages theme={"system"}
const result = await redis.xreadgroup("mygroup", "consumer1", "mystream", "0");
```
```ts theme={"system"}
[
["mystream", [
["1638360173533-0", ["field1", "value1", "field2", "value2"]],
["1638360173533-1", ["field1", "value3", "field2", "value4"]]
]]
]
```
# XREVRANGE
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xrevrange
Returns stream entries matching a given range of IDs in reverse order.
## Arguments
The key of the stream.
The stream entry ID to end at (highest ID).
The stream entry ID to start from (lowest ID).
The maximum number of entries to return.
## Response
An object of stream entries in reverse chronological order, keyed by their stream ID.
```ts All entries (reverse order) theme={"system"}
const result = await redis.xrevrange("mystream", "+", "-");
```
```ts Limited count theme={"system"}
const result = await redis.xrevrange("mystream", "+", "-", 2);
```
```ts Specific range theme={"system"}
const result = await redis.xrevrange(
"mystream",
"1638360173533-3",
"1638360173533-1"
);
```
```ts theme={"system"}
{
"1638360173533-4": {
"field1": "value5",
"field2": "value6"
},
"1638360173533-3": {
"field1": "value3",
"field2": "value4"
},
"1638360173533-0": {
"field1": "value1",
"field2": "value2"
}
}
```
# XTRIM
Source: https://upstash.com/docs/redis/sdks/ts/commands/stream/xtrim
Trims the stream by removing entries to keep it at a reasonable size.
## Arguments
The key of the stream.
The trimming strategy:
* `MAXLEN`: Trim based on the maximum number of entries
* `MINID`: Trim based on the minimum ID
The threshold value for trimming:
* For `MAXLEN`: The maximum number of entries to keep (number)
* For `MINID`: The minimum ID to keep (string). Entries with IDs lower than this will be removed
Use `~` for approximate trimming (more efficient, default) or `=` for exact trimming.
Limit how many entries will be trimmed at most (only valid with approximate trimming `~`).
## Response
The number of entries removed from the stream.
```ts Approximate trim by max length theme={"system"}
const result = await redis.xtrim("mystream", {
strategy: "MAXLEN",
threshold: 100,
exactness: "~"
});
```
```ts Exact trim by max length theme={"system"}
const result = await redis.xtrim("mystream", {
strategy: "MAXLEN",
threshold: 50,
exactness: "="
});
```
```ts Trim by minimum ID theme={"system"}
const result = await redis.xtrim("mystream", {
strategy: "MINID",
threshold: "1638360173533-0",
exactness: "="
});
```
```ts Approximate trim with limit theme={"system"}
const result = await redis.xtrim("mystream", {
strategy: "MAXLEN",
threshold: 1000,
exactness: "~",
limit: 100
});
```
# String Commands
Source: https://upstash.com/docs/redis/sdks/ts/commands/string
## MGET
Load multiple keys at once. For billing purposes, this counts as a single command.
If a key is not found, it will be returned as `null`, so you might end up with `null` values in your response array.
```ts theme={"system"}
const values = await redis.mget("key1", "key2", "key3");
```
## MSET
Set multiple values at once. For billing purposes, this counts as a single command.
```ts theme={"system"}
await redis.mset({
key1: { a: 1 },
key2: "value2",
key3: true,
});
```
## MSETNX
Set multiple keys at once, but only if none of them already exist. For billing purposes, this counts as a single command.
```ts theme={"system"}
const success = await redis.msetnx({
  key1: "value1",
  key2: "value2",
});
```
## PSETEX
Set a key and its expiration in milliseconds.
```ts theme={"system"}
await redis.psetex("key", 1000, "value");
```
## SET
Set a key to hold a value.
```ts theme={"system"}
await redis.set("key", "value");
```
## SETEX
Set a key and its expiration in seconds.
```ts theme={"system"}
await redis.setex("key", 60, "value");
```
## SETNX
Set a key only if it does not already exist.
```ts theme={"system"}
await redis.setnx("key", "value");
```
## SETRANGE
Overwrite part of a string value, starting at the specified offset.
```ts theme={"system"}
await redis.setrange("key", 5, "redis");
```
## STRLEN
Return the length of the string stored at a key.
```ts theme={"system"}
const length = await redis.strlen("key");
```
## SUBSTR
Return a substring of the string stored at a key.
```ts theme={"system"}
const substring = await redis.substr("key", 0, 3);
```
# APPEND
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/append
Append a value to a string stored at key.
## Arguments
The key to append to.
The value to append.
## Response
The length of the string after the append operation.
```ts Example theme={"system"}
await redis.append("key", "Hello");
// returns 5
```
# DECR
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/decr
Decrement the integer value of a key by one
If a key does not exist, it is initialized as 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that cannot be represented as an integer.
## Arguments
The key to decrement.
## Response
The value at the key after decrementing.
```ts Example theme={"system"}
await redis.set("key", 6);
await redis.decr("key");
// returns 5
```
# DECRBY
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/decrby
Decrement the integer value of a key by a given number.
If a key does not exist, it is initialized as 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that cannot be represented as an integer.
## Arguments
The key to decrement.
The amount to decrement by.
## Response
The value at the key after decrementing.
```ts Example theme={"system"}
await redis.set("key", 6);
await redis.decrby("key", 4);
// returns 2
```
# GET
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/get
Return the value of the specified key or `null` if the key doesn't exist.
## Arguments
The key to get.
## Response
The response is the value stored at the key or `null` if the key doesn't exist.
```ts Example theme={"system"}
type MyType = {
a: number;
b: string;
}
const value = await redis.get<MyType>("key");
if (!value) {
// key doesn't exist
} else {
// value is of type MyType
}
```
# GETDEL
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/getdel
Return the value of the specified key and delete the key.
## Arguments
The key to get.
## Response
The response is the value stored at the key or `null` if the key doesn't exist.
```ts Example theme={"system"}
type MyType = {
a: number;
b: string;
}
const value = await redis.getdel<MyType>("key");
// value is of type MyType, or null if the key didn't exist; the key is deleted
```
# GETRANGE
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/getrange
Return a substring of value at the specified key.
## Arguments
The key to get.
The start index of the substring.
The end index of the substring.
## Response
The substring.
```ts Example theme={"system"}
const substring = await redis.getrange("key", 2, 4);
```
# GETSET
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/getset
Return the value of the specified key and replace it with a new value.
## Arguments
The key to get.
The new value to store.
## Response
The response is the value stored at the key or `null` if the key doesn't exist.
```ts Example theme={"system"}
const oldValue = await redis.getset("key", "new-value");
```
# INCR
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/incr
Increment the integer value of a key by one
If a key does not exist, it is initialized as 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that cannot be represented as an integer.
## Arguments
The key to increment.
## Response
The value at the key after incrementing.
```ts Example theme={"system"}
await redis.set("key", 6);
await redis.incr("key");
// returns 7
```
# INCRBY
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/incrby
Increment the integer value of a key by a given number.
If a key does not exist, it is initialized as 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that cannot be represented as an integer.
## Arguments
The key to increment.
The amount to increment by.
## Response
The value at the key after incrementing.
```ts Example theme={"system"}
await redis.set("key", 6);
await redis.incrby("key", 4);
// returns 10
```
# INCRBYFLOAT
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/incrbyfloat
Increment the float value of a key by a given number.
If a key does not exist, it is initialized as 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that cannot be represented as a number.
## Arguments
The key to increment.
The amount to increment by.
## Response
The value at the key after incrementing.
```ts Example theme={"system"}
await redis.set("key", 6);
await redis.incrbyfloat("key", 4.5);
// returns 10.5
```
# MGET
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/mget
Load multiple keys from Redis in one go.
For billing purposes, this counts as a single command.
## Arguments
Multiple keys to load from Redis.
## Response
An array of values corresponding to the keys passed in. If a key doesn't exist, the value will be `null`.
```ts Example theme={"system"}
type MyType = {
a: number;
b: string;
}
const values = await redis.mget("key1", "key2", "key3");
// values.length -> 3
```
# MSET
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/mset
Set multiple keys in one go.
For billing purposes, this counts as a single command.
## Arguments
An object where the keys are the keys to set, and the values are the values to set.
## Response
"OK"
```ts Example theme={"system"}
await redis.mset({
key1: 1,
key2: "hello",
key3: { a: 1, b: "hello" },
});
```
# MSETNX
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/msetnx
Set multiple keys in one go unless they exist already.
For billing purposes, this counts as a single command.
## Arguments
An object where the keys are the keys to set, and the values are the values to set.
## Response
`True` if all keys were set, `False` if at least one key was not set.
```ts Example theme={"system"}
const success = await redis.msetnx({
  key1: "value1",
  key2: "value2",
});
```
# SET
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/set
Set a key to hold a string value.
## Arguments
The key
The value. If it is not a string, `JSON.stringify` will be used to convert it
to a string.
You can pass a few options to the command.
Instead of returning `"OK"`, this will cause the command to return the old
value stored at key, or `null` when key did not exist.
Adds an expiration (in seconds) to the key.
Adds an expiration (in milliseconds) to the key.
Expires the key after the given timestamp (in seconds).
Expires the key after the given timestamp (in milliseconds).
Keeps the old expiration if the key already exists.
Only set the key if it does not already exist.
Only set the key if it already exists.
## Response
`"OK"`
```ts Basic theme={"system"}
await redis.set("my-key", {my: "value"});
```
```ts Expire in 60 seconds theme={"system"}
await redis.set("my-key", {my: "value"}, {
ex: 60
});
```
```ts Only update theme={"system"}
await redis.set("my-key", {my: "value"}, {
xx: true
});
```
# SETRANGE
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/setrange
Writes the value of key at offset.
The SETRANGE command in Redis is used to modify a portion of the value of a key by replacing a substring within the key's existing value. It allows you to update part of the string value associated with a specific key at a specified offset.
## Arguments
The name of the Redis key for which you want to modify the value.
The zero-based index in the value where you want to start replacing characters.
The new string that you want to insert at the specified offset in the existing value.
## Response
The length of the value after it was modified.
```ts Example theme={"system"}
await redis.set("key", "helloworld")
const length = await redis.setrange("key", 5, "redis");
console.log(length); // 10
// The value of "key" is now "helloredis"
```
# STRLEN
Source: https://upstash.com/docs/redis/sdks/ts/commands/string/strlen
Return the length of a string stored at a key.
The `STRLEN` command in Redis is used to find the length of the string value associated with a key. In Redis, keys can be associated with various data types, one of which is the string. The STRLEN command operates only on keys that hold string values.
## Arguments
The name of the Redis key.
## Response
The length of the value.
```ts Example theme={"system"}
await redis.set("key", "helloworld")
const length = await redis.strlen("key");
console.log(length); // 10
```
# Transactions
Source: https://upstash.com/docs/redis/sdks/ts/commands/transaction
Transactions
You can use transactions or pipelines with the `multi` or `pipeline` method.
Transactions are executed atomically, while pipelines are not. In pipelines you can execute multiple commands at once, but other commands from other clients can be executed in between.
```ts Pipeline theme={"system"}
const p = redis.pipeline();
p.set("foo", "bar");
p.get("foo");
const res = await p.exec();
```
```ts Transaction theme={"system"}
const tx = redis.multi();
tx.set("foo", "bar");
tx.get("foo");
const res = await tx.exec();
```
For more information on pipelines and transactions, see
[the Pipeline page](https://docs.upstash.com/redis/sdks/ts/pipelining/pipeline-transaction).
# ZADD
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zadd
Add a member to a sorted set, or update its score if it already exists.
## Arguments
The key of the sorted set.
Only update elements that already exist. Never add elements.
Only add new elements. Never update elements.
Return the number of elements added or updated.
When this option is specified ZADD acts like ZINCRBY. Only one score-element pair can be specified in this mode.
## Response
The number of elements added to the sorted sets, not including elements already existing for which the score was updated.
If `ch` was specified, the number of elements that were updated.
If `incr` was specified, the new score of `member`.
```ts Simple theme={"system"}
await redis.zadd(
"key",
{ score: 2, member: "member" },
{ score: 3, member: "member2"},
);
```
```ts XX theme={"system"}
await redis.zadd(
"key",
{ xx: true },
{ score: 2, member: "member" },
)
```
```ts NX theme={"system"}
await redis.zadd(
"key",
{ nx: true },
{ score: 2, member: "member" },
)
```
```ts CH theme={"system"}
await redis.zadd(
"key",
{ ch: true },
{ score: 2, member: "member" },
)
```
```ts INCR theme={"system"}
await redis.zadd(
"key",
{ incr: true },
{ score: 2, member: "member" },
)
```
# ZCARD
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zcard
Returns the number of elements in the sorted set stored at key.
## Arguments
The key to get.
## Response
The number of elements in the sorted set.
```ts Example theme={"system"}
await redis.zadd("key",
{ score: 1, member: "one"},
{ score: 2, member: "two" },
);
const elements = await redis.zcard("key");
console.log(elements); // 2
```
# ZCOUNT
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zcount
Returns the number of elements in the sorted set stored at key, filtered by score.
## Arguments
The key to get.
The minimum score to filter by.
Use `-inf` to effectively ignore this filter.
Prefix a number with `(` (e.g. `(1`) to make the bound exclusive.
The maximum score to filter by.
Use `+inf` to effectively ignore this filter.
Prefix a number with `(` (e.g. `(5`) to make the bound exclusive.
## Response
The number of elements where score is between min and max.
```ts Example theme={"system"}
await redis.zadd("key",
{ score: 1, member: "one"},
{ score: 2, member: "two" },
);
const elements = await redis.zcount("key", "(1", "+inf");
console.log(elements); // 1
```
# ZDIFFSTORE
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zdiffstore
Writes the difference between sets to a new key.
## Arguments
The key to write the difference to.
How many keys to compare.
The keys to compare.
## Response
The number of elements in the resulting set.
```ts Example theme={"system"}
const values = await redis.zdiffstore("destination", 2, "key1", "key2");
```
# ZINCRBY
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zincrby
Increment the score of a member.
## Arguments
The key of the sorted set.
The increment to add to the score.
The member to increment.
## Response
The new score of `member` after the increment operation.
```ts Example theme={"system"}
await redis.zadd("key", 1, "member");
const value = await redis.zincrby("key", 2, "member");
console.log(value); // 3
```
# ZINTERSTORE
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zinterstore
Writes the intersection between sets to a new key.
## Arguments
The key to write the intersection to.
How many keys to compare.
The keys to compare.
The aggregation method.
The weight to apply to each key.
The weights to apply to each key.
## Response
The number of elements in the resulting set.
```ts Simple theme={"system"}
await redis.zadd(
"key1",
{ score: 1, member: "member1" },
)
await redis.zadd(
"key2",
{ score: 1, member: "member1" },
{ score: 2, member: "member2" },
)
const res = await redis.zinterstore("destination", 2, ["key1", "key2"]);
console.log(res) // 1
```
```ts With Weights theme={"system"}
await redis.zadd(
"key1",
{ score: 1, member: "member1" },
)
await redis.zadd(
"key2",
{ score: 1, member: "member1" },
{ score: 2, member: "member2" },
)
const res = await redis.zinterstore(
"destination",
2,
["key1", "key2"],
{ weights: [2, 3] },
);
console.log(res) // 1
```
```ts Aggregate theme={"system"}
await redis.zadd(
"key1",
{ score: 1, member: "member1" },
)
await redis.zadd(
"key2",
{ score: 1, member: "member1" },
{ score: 2, member: "member2" },
)
const res = await redis.zinterstore(
"destination",
2,
["key1", "key2"],
{ aggregate: "sum" },
);
console.log(res) // 1
```
# ZLEXCOUNT
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zlexcount
Returns the number of elements in the sorted set stored at key that fall within the given lexicographical range.
## Arguments
The key to get.
The lower lexicographical bound to filter by.
Use `-` to disable the lower bound.
The upper lexicographical bound to filter by.
Use `+` to disable the upper bound.
## Response
The number of elements matched.
```ts Example theme={"system"}
await redis.zadd("key",
{ score: 1, member: "one"},
{ score: 2, member: "two" },
);
const elements = await redis.zlexcount("key", "[two", "+");
console.log(elements); // 1
```
# ZMSCORE
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zmscore
Returns the scores of multiple members.
## Arguments
The key to get.
The members to get the scores of.
## Response
The scores of the requested members, in the same order. Missing members return `null`.
```ts Example theme={"system"}
await redis.zadd("key",
{ score: 1, member: "m1" },
{ score: 2, member: "m2" },
{ score: 3, member: "m3" },
{ score: 4, member: "m4" },
)
const scores = await redis.zmscore("key", ["m2", "m4"])
console.log(scores) // [2, 4]
```
# ZPOPMAX
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zpopmax
Removes and returns up to count members with the highest scores in the sorted set stored at key.
## Arguments
The key of the sorted set.
The number of members to pop. Defaults to 1.
## Response
The popped members with their scores.
```ts Example theme={"system"}
const popped = await redis.zpopmax("key", 4);
```
# ZPOPMIN
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zpopmin
Removes and returns up to count members with the lowest scores in the sorted set stored at key.
## Arguments
The key of the sorted set.
The number of members to pop. Defaults to 1.
## Response
The popped members with their scores.
```ts Example theme={"system"}
const popped = await redis.zpopmin("key", 4);
```
# ZRANGE
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zrange
Returns the specified range of elements in the sorted set stored at key.
## Arguments
The key to get.
The lower bound of the range.
The upper bound of the range.
Whether to include the scores in the response.
Whether to reverse the order of the response.
Whether to use the score as the sort order.
Whether to use lexicographical ordering.
The offset to start from.
The number of elements to return.
## Response
The values in the specified range.
If `withScores` is true, the response will have interleaved members and scores: `[TMember, number, TMember, number, ...]`
```ts Example theme={"system"}
await redis.zadd("key",
{ score: 1, member: "m1" },
{ score: 2, member: "m2" },
)
const res = await redis.zrange("key", 1, 3)
console.log(res) // ["m2"]
```
```ts WithScores theme={"system"}
await redis.zadd("key",
{ score: 1, member: "m1" },
{ score: 2, member: "m2" },
)
const res = await redis.zrange("key", 1, 3, { withScores: true })
console.log(res) // ["m2", 2]
```
```ts ByScore theme={"system"}
await redis.zadd("key",
{ score: 1, member: "m1" },
{ score: 2, member: "m2" },
{ score: 3, member: "m3" },
)
const res = await redis.zrange("key", 1, 2, { byScore: true })
console.log(res) // ["m1", "m2"]
```
# ZRANK
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zrank
Returns the rank of a member in the sorted set, with scores ordered from low to high.
## Arguments
The key to get.
The member to get the rank of.
## Response
The rank of the member.
```ts Example theme={"system"}
const rank = await redis.zrank("key", "member");
```
# ZREM
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zrem
Remove one or more members from a sorted set
## Arguments
The key of the sorted set
One or more members to remove
## Response
The number of members removed from the sorted set.
```ts Single theme={"system"}
await redis.zrem("key", "member");
```
```ts Multiple theme={"system"}
await redis.zrem("key", "member1", "member2");
```
# ZREMRANGEBYLEX
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zremrangebylex
Remove all members in a sorted set between the given lexicographical range.
## Arguments
The key of the sorted set
The minimum lexicographical value to remove.
The maximum lexicographical value to remove.
## Response
The number of elements removed from the sorted set.
```ts Example theme={"system"}
await redis.zremrangebylex("key", "alpha", "omega")
```
# ZREMRANGEBYRANK
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zremrangebyrank
Remove all members in a sorted set between the given ranks.
## Arguments
The key of the sorted set
The minimum rank to remove.
The maximum rank to remove.
## Response
The number of elements removed from the sorted set.
```ts Example theme={"system"}
await redis.zremrangebyrank("key", 4, 20)
```
# ZREMRANGEBYSCORE
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zremrangebyscore
Remove all members in a sorted set between the given scores.
## Arguments
The key of the sorted set
The minimum score to remove.
The maximum score to remove.
## Response
The number of elements removed from the sorted set.
```ts Example theme={"system"}
await redis.zremrangebyscore("key", 2, 5)
```
# ZREVRANK
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zrevrank
Returns the rank of a member in a sorted set, with scores ordered from high to low.
## Arguments
The key to get.
The member to get the reverse rank of.
## Response
The reverse rank of the member.
```ts Example theme={"system"}
const rank = await redis.zrevrank("key", "member");
```
# ZSCAN
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zscan
Scan a sorted set
## Arguments
The key of the sorted set.
The cursor, use `0` in the beginning and then use the returned cursor for subsequent calls.
Glob-style pattern to filter by members.
Number of members to return per call.
## Response
The new cursor and the members.
If the new cursor is `0` the iteration is complete.
```ts Example theme={"system"}
await redis.zadd("key",
{ score: 1, member: "a" },
{ score: 2, member: "ab" },
{ score: 3, member: "b" },
{ score: 4, member: "c" },
{ score: 5, member: "d" },
)
const [newCursor, members] = await redis.zscan("key", 0, { match: "a*"});
console.log(newCursor); // likely `0` since this is a very small set
console.log(members); // ["a", "ab"]
```
```ts withCount theme={"system"}
await redis.zadd("key",
{ score: 1, member: "a" },
{ score: 2, member: "ab" },
{ score: 3, member: "b" },
{ score: 4, member: "c" },
{ score: 5, member: "d" },
)
const [newCursor, members] = await redis.zscan("key", 0, { match: "a*", count: 1});
console.log(newCursor); // likely `0` since this is a very small set
console.log(members); // ["a"]
```
# ZSCORE
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zscore
Returns the score of a member.
## Arguments
The key to get.
The member to get the score of.
## Response
The score of the member.
```ts Example theme={"system"}
await redis.zadd("key",
{ score: 1, member: "m1" },
{ score: 2, member: "m2" },
{ score: 3, member: "m3" },
{ score: 4, member: "m4" },
)
const score = await redis.zscore("key", "m2")
console.log(score) // 2
```
# ZUNIONSTORE
Source: https://upstash.com/docs/redis/sdks/ts/commands/zset/zunionstore
Writes the union between sets to a new key.
## Arguments
The key to write the union to.
How many keys to compare.
The keys to compare.
The aggregation method.
The weight to apply to each key.
The weights to apply to each key.
## Response
The number of elements in the resulting set.
```ts Simple theme={"system"}
await redis.zadd(
"key1",
{ score: 1, member: "member1" },
)
await redis.zadd(
"key2",
{ score: 1, member: "member1" },
{ score: 2, member: "member2" },
)
const res = await redis.zunionstore("destination", 2, ["key1", "key2"]);
console.log(res) // 2
```
```ts With Weights theme={"system"}
await redis.zadd(
"key1",
{ score: 1, member: "member1" },
)
await redis.zadd(
"key2",
{ score: 1, member: "member1" },
{ score: 2, member: "member2" },
)
const res = await redis.zunionstore(
"destination",
2,
["key1", "key2"],
{ weights: [2, 3] },
);
console.log(res) // 2
```
```ts Aggregate theme={"system"}
await redis.zadd(
"key1",
{ score: 1, member: "member1" },
)
await redis.zadd(
"key2",
{ score: 1, member: "member1" },
{ score: 2, member: "member2" },
)
const res = await redis.zunionstore(
"destination",
2,
["key1", "key2"],
{ aggregate: "sum" },
);
console.log(res) // 2
```
# Deployment
Source: https://upstash.com/docs/redis/sdks/ts/deployment
We support various platforms, such as Node.js, Cloudflare and Fastly. Platforms
differ slightly in how they handle environment variables and their `fetch` API.
Please use the correct import when deploying to these platforms.
## Node.js / Browser
Examples: Vercel, Netlify, AWS Lambda
If you are running on Node.js, you can set `UPSTASH_REDIS_REST_URL` and
`UPSTASH_REDIS_REST_TOKEN` as environment variables and create a Redis instance
like this:
```ts theme={"system"}
import { Redis } from "@upstash/redis"
const redis = new Redis({
  url: "<UPSTASH_REDIS_REST_URL>",
  token: "<UPSTASH_REDIS_REST_TOKEN>",
})
// or load directly from env
const redis = Redis.fromEnv()
```
If you are running on Node.js v17 or earlier, `fetch` is not natively
supported. Platforms like Vercel, Netlify, Deno and Fastly provide a polyfill
for you. But if you are running on bare Node.js, you need to either provide a
polyfill yourself or change the import path slightly:
```typescript theme={"system"}
import { Redis } from "@upstash/redis/with-fetch";
```
* [Code example](https://github.com/upstash/upstash-redis/blob/main/examples/nodejs)
## Cloudflare Workers
Cloudflare handles environment variables differently than Node.js. Please add
`UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` using
`wrangler secret put ...` or in the cloudflare dashboard.
Afterwards you can create a redis instance:
```ts theme={"system"}
import { Redis } from "@upstash/redis/cloudflare"
const redis = new Redis({
  url: "<UPSTASH_REDIS_REST_URL>",
  token: "<UPSTASH_REDIS_REST_TOKEN>",
})
// or load directly from global env
// service worker
const redis = Redis.fromEnv()
// module worker
export default {
async fetch(request: Request, env: Bindings) {
const redis = Redis.fromEnv(env)
// ...
}
}
```
* [Code example](https://github.com/upstash/upstash-redis/tree/main/examples/cloudflare-workers)
* [Code example typescript](https://github.com/upstash/upstash-redis/tree/main/examples/cloudflare-workers-with-typescript)
* [Code example Wrangler 1](https://github.com/upstash/upstash-redis/tree/main/examples/cloudflare-workers-with-wrangler-1)
* [Documentation](https://docs.upstash.com/redis/tutorials/cloudflare_workers_with_redis)
## Fastly
Fastly introduces a concept called
[backend](https://developer.fastly.com/reference/api/services/backend/). You
need to configure a backend in your `fastly.toml`. An example can be found
[here](https://github.com/upstash/upstash-redis/blob/main/examples/fastly/fastly.toml).
Until the Fastly API stabilizes, we recommend creating an instance manually:
```ts theme={"system"}
import { Redis } from "@upstash/redis/fastly"
const redis = new Redis({
  url: "<UPSTASH_REDIS_REST_URL>",
  token: "<UPSTASH_REDIS_REST_TOKEN>",
  backend: "<BACKEND_NAME>",
})
```
* [Code example](https://github.com/upstash/upstash-redis/tree/main/examples/fastly)
* [Documentation](https://blog.upstash.com/fastly-compute-edge-with-redis)
## Deno
Examples: [Deno Deploy](https://deno.com/deploy),
[Netlify Edge](https://www.netlify.com/products/edge/)
```ts theme={"system"}
import { Redis } from "https://deno.land/x/upstash_redis/mod.ts"
const redis = new Redis({
  url: "<UPSTASH_REDIS_REST_URL>",
  token: "<UPSTASH_REDIS_REST_TOKEN>",
})
// or
const redis = Redis.fromEnv();
```
# Developing or Testing
Source: https://upstash.com/docs/redis/sdks/ts/developing
When developing or testing your application, you might not want to, or cannot,
use Upstash over the internet. In this case, you can use a community project called
[Serverless Redis HTTP (SRH)](https://github.com/hiett/serverless-redis-http)
created by [Scott Hiett](https://x.com/hiettdigital).
SRH is a Redis proxy and connection pooler that uses HTTP rather than the Redis
binary protocol. The aim of this project is to be entirely compatible with
Upstash, and work with any Upstash supported Redis version.
We are working together with Scott to keep SRH up to date with the latest
Upstash features.
## Use cases for SRH:
* For usage in your CI pipelines, creating Upstash databases is tedious, or you
have lots of parallel runs.
* See [Using in GitHub Actions](#in-github-actions) on how to quickly get SRH
setup for this context.
* For usage inside of Kubernetes, or any network whereby the Redis server is not
exposed to the internet.
* See [Using in Docker Compose](#via-docker-compose) for the various setup
options directly using the Docker Container.
* For local development environments, where you have a local Redis server
running, or require offline access.
* See [Using the Docker Command](#via-docker-command), or
[Using Docker Compose](#via-docker-compose).
## Setting up SRH
### Via Docker command
If you have a locally running Redis server, you can simply start an SRH
container that connects to it. In this example, SRH will be running on port
`8080`.
```bash theme={"system"}
docker run \
-it -d -p 8080:80 --name srh \
-e SRH_MODE=env \
-e SRH_TOKEN=your_token_here \
-e SRH_CONNECTION_STRING="redis://your_server_here:6379" \
hiett/serverless-redis-http:latest
```
### Via Docker Compose
If you wish to run in Kubernetes, this example contains all the basics you would
need to set that up. However, be sure to read the configuration options, because
you can create a setup whereby multiple Redis servers are proxied.
```yml theme={"system"}
version: "3"
services:
redis:
image: redis
ports:
- "6379:6379"
serverless-redis-http:
ports:
- "8079:80"
image: hiett/serverless-redis-http:latest
environment:
SRH_MODE: env
SRH_TOKEN: example_token
SRH_CONNECTION_STRING: "redis://redis:6379" # Using `redis` hostname since they're in the same Docker network.
```
### In GitHub Actions
SRH works nicely in GitHub Actions because you can run it as a container in a
job's services. Simply start a Redis server, and then SRH alongside it. You
don't need to worry about a race condition of the Redis instance not being
ready, because SRH doesn't create a Redis connection until the first command
comes in.
```yml theme={"system"}
name: Test @upstash/redis compatibility
on:
push:
workflow_dispatch:
env:
SRH_TOKEN: example_token
jobs:
container-job:
runs-on: ubuntu-latest
container: denoland/deno
services:
redis:
image: redis/redis-stack-server:6.2.6-v6 # 6.2 is the Upstash compatible Redis version
srh:
image: hiett/serverless-redis-http:latest
env:
SRH_MODE: env # We are using env mode because we are only connecting to one server.
SRH_TOKEN: ${{ env.SRH_TOKEN }}
SRH_CONNECTION_STRING: redis://redis:6379
steps:
# You can place your normal testing steps here. In this example, we are running SRH against the upstash/upstash-redis test suite.
- name: Checkout code
uses: actions/checkout@v3
with:
repository: upstash/upstash-redis
- name: Run @upstash/redis Test Suite
run: deno test -A ./pkg
env:
UPSTASH_REDIS_REST_URL: http://srh:80
UPSTASH_REDIS_REST_TOKEN: ${{ env.SRH_TOKEN }}
```
A huge thanks goes out to [Scott](https://hiett.dev/) for creating this project,
and for his continued efforts to keep it up to date with Upstash.
# Get Started
Source: https://upstash.com/docs/redis/sdks/ts/getstarted
`@upstash/redis` is written in Deno and can be imported from
[deno.land](https://deno.land)
```ts theme={"system"}
import { Redis } from "https://deno.land/x/upstash_redis/mod.ts";
```
We transpile the package into an npm compatible package as well:
```bash theme={"system"}
npm install @upstash/redis
```
```bash theme={"system"}
yarn add @upstash/redis
```
```bash theme={"system"}
pnpm add @upstash/redis
```
## Basic Usage:
```ts theme={"system"}
import { Redis } from "@upstash/redis"
const redis = new Redis({
  url: "<UPSTASH_REDIS_REST_URL>",
  token: "<UPSTASH_REDIS_REST_TOKEN>",
})
// string
await redis.set('key', 'value');
let data = await redis.get('key');
console.log(data)
await redis.set('key2', 'value2', {ex: 1});
// sorted set
await redis.zadd('scores', { score: 1, member: 'team1' })
data = await redis.zrange('scores', 0, 100 )
console.log(data)
// list
await redis.lpush('elements', 'magnesium')
data = await redis.lrange('elements', 0, 100 )
console.log(data)
// hash
await redis.hset('people', {name: 'joe'})
data = await redis.hget('people', 'name' )
console.log(data)
// sets
await redis.sadd('animals', 'cat')
data = await redis.spop('animals', 1)
console.log(data)
```
# Overview
Source: https://upstash.com/docs/redis/sdks/ts/overview
`@upstash/redis` is an HTTP/REST based Redis client for TypeScript, built on top
of [Upstash REST API](https://docs.upstash.com/features/restapi).
You can find the Github Repository [here](https://github.com/upstash/upstash-redis).
It is the only connectionless (HTTP-based) Redis client, designed for:
* Serverless functions (AWS Lambda ...)
* Cloudflare Workers (see
[the example](https://github.com/upstash/upstash-redis/tree/master/examples/cloudflare-workers))
* Fastly Compute\@Edge (see
[the example](https://github.com/upstash/upstash-redis/tree/master/examples/fastly))
* Next.js, Jamstack ...
* Client side web/mobile applications
* WebAssembly
* and other environments where HTTP is preferred over TCP.
See
[the list of APIs](https://docs.upstash.com/features/restapi#rest---redis-api-compatibility)
supported.
# Auto-Pipelining
Source: https://upstash.com/docs/redis/sdks/ts/pipelining/auto-pipeline
### Auto Pipelining
Auto pipelining allows you to use the Redis client as usual
while in the background it tries to send requests in batches
whenever possible.
In a nutshell, the client will accumulate commands in a pipeline
and wait for a short amount of time for more commands to arrive.
When there are no more commands, it will execute them as a batch.
To enable the feature, simply pass `enableAutoPipelining: true`
when creating the Redis client:
```ts fromEnv theme={"system"}
import { Redis } from "@upstash/redis";
const redis = Redis.fromEnv({
  latencyLogging: false,
  enableAutoPipelining: true
});
```
```ts Redis theme={"system"}
import { Redis } from "@upstash/redis";
const redis = new Redis({
  url: "<UPSTASH_REDIS_REST_URL>",
  token: "<UPSTASH_REDIS_REST_TOKEN>",
  enableAutoPipelining: true
})
```
This is especially useful in cases when we want to make async
requests or when we want to make requests in batches.
```ts theme={"system"}
import { Redis } from "@upstash/redis";
const redis = Redis.fromEnv({
latencyLogging: false,
enableAutoPipelining: true
});
// async call to redis. Not executed right away, instead
// added to the pipeline
redis.hincrby("Brooklyn", "visited", 1);
// making requests in batches
const brooklynInfo = Promise.all([
redis.hget("Brooklyn", "coordinates"),
redis.hget("Brooklyn", "population")
]);
// when we call await, the three commands are executed
// as a pipeline automatically. A single PIPELINE command
// is executed instead of three requests and the results
// are returned:
const [ coordinates, population ] = await brooklynInfo;
```
The benefit of auto pipelining is that it reduces the number
of HTTP requests made, just as pipelining and transactions do, while
being extremely simple to enable and use. It's especially
useful on platforms like Vercel Edge and [Cloudflare Workers, where the number of
simultaneous requests is limited to six](https://developers.cloudflare.com/workers/platform/limits/#account-plan-limits).
To learn more about how auto pipelining can be utilized in a
project, see
[the auto-pipeline example project under `upstash-redis` repository](https://github.com/upstash/upstash-redis/tree/main/examples/auto-pipeline)
### How it Works
For auto pipeline to work, the client keeps an active pipeline
and adds incoming commands to this pipeline. After the command
is added to the pipeline, execution of the pipeline is delayed
by releasing the control of the Node thread.
The pipeline executes when one of these two conditions is met:
no more commands are being added, or at least one of the commands
added is being `await`ed.
This means that if you are awaiting every time you run a command,
you won't benefit much from auto pipelining since each await will
trigger a pipeline:
```ts theme={"system"}
const foo = await redis.get("foo") // makes a PIPELINE call
const bar = await redis.get("bar") // makes another PIPELINE call
```
In these cases, we suggest using `Promise.all`:
```ts theme={"system"}
// makes a single PIPELINE call:
const [ foo, bar ] = await Promise.all([
redis.get("foo"),
redis.get("bar")
])
```
In addition to resulting in a single PIPELINE call, the commands
in `Promise.all` are executed in the order they are written!
# Pipeline & Transaction
Source: https://upstash.com/docs/redis/sdks/ts/pipelining/pipeline-transaction
### Pipeline
Pipelining commands allows you to send a single http request with multiple
commands. Keep in mind that the execution of pipelines is not atomic, and the
execution of other commands can interleave.
```ts theme={"system"}
import { Redis } from "@upstash/redis";
const redis = new Redis({
/* auth */
});
const p = redis.pipeline();
// Now you can chain multiple commands to create your pipeline:
p.set("key", 2);
p.incr("key");
// or inline:
p.hset("key2", "field", { hello: "world" }).hvals("key2");
// Execute the pipeline once you are done building it:
// `exec` returns an array where each element represents the response of a command in the pipeline.
// You can optionally provide a type like this to get a typed response.
const res = await p.exec<[Type1, Type2, Type3]>();
```
For more information about pipelines using REST see
[here](https://blog.upstash.com/pipeline).
If you wish to benefit from pipelining automatically,
you can simply enable auto-pipelining to make your Redis client
handle the commands in batches in the background. See
[the Auto-pipelining page](https://docs.upstash.com/redis/sdks/ts/pipelining/auto-pipeline).
### Transaction
Remember that the pipeline is able to send multiple commands at once but
can't execute them atomically. With transactions, you can make the commands
execute atomically.
```ts theme={"system"}
import { Redis } from "@upstash/redis";
const redis = new Redis({
/* auth */
});
const p = redis.multi();
p.set("key", 2);
p.incr("key");
// or inline:
p.hset("key2", "field", { hello: "world" }).hvals("key2");
// execute the transaction
const res = await p.exec<[Type1, Type2, Type3]>();
```
# Retries
Source: https://upstash.com/docs/redis/sdks/ts/retries
By default, `@upstash/redis` will retry sending your request when network errors
occur. It will retry 5 times, with a backoff of
`(retryCount) => Math.exp(retryCount) * 50` milliseconds.
You can customize this in the `Redis` constructor:
```ts theme={"system"}
new Redis({
url: UPSTASH_REDIS_REST_URL,
token: UPSTASH_REDIS_REST_TOKEN,
retry: {
retries: 5,
backoff: (retryCount) => Math.exp(retryCount) * 50,
},
});
```
The exact type definition can be found
[here](https://github.com/upstash/upstash-redis/blob/4948b049e0d580d1de0a4cbfeac5565d7e035cc4/pkg/http.ts#LL31C1-L49C5).
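To get a feel for what those defaults mean in practice, here is the delay each retry would wait, assuming `retryCount` starts at 1 (a quick sketch, not part of the SDK):

```ts theme={"system"}
// Default backoff used by @upstash/redis: exponential, in milliseconds
const backoff = (retryCount: number): number => Math.exp(retryCount) * 50;

// Delays for the five default retries, rounded to whole milliseconds
const delays = [1, 2, 3, 4, 5].map((n) => Math.round(backoff(n)));
console.log(delays); // [136, 369, 1004, 2730, 7421]
```

So under these assumptions the five retries together wait roughly 11.7 seconds before giving up.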
# Troubleshooting
Source: https://upstash.com/docs/redis/sdks/ts/troubleshooting
## ReferenceError: fetch is not defined
#### Problem
If you are running on Node.js v17 or earlier, `fetch` is not natively
supported. Platforms like Vercel, Netlify, Deno and Fastly provide a polyfill
for you. But if you are running on bare Node.js, you need to add a polyfill.
#### Solution
```bash theme={"system"}
npm i isomorphic-fetch
```
```ts theme={"system"}
import { Redis } from "@upstash/redis";
import "isomorphic-fetch";
const redis = new Redis({
/*...*/
});
```
## Hashed Response
The response from a server is not what you expect but looks like a hash?
```ts theme={"system"}
await redis.set("key", "value");
const data = await redis.get("key");
console.log(data);
// dmFsdWU=
```
#### Problem
By default `@upstash/redis` will request responses from the server to be base64
encoded. This is to prevent issues with some edge cases when storing data, where
the HTTP response fails to be deserialized using `res.json()`.
This solves the problem for almost all edge cases, but it can cause new issues.
#### Solution
You can disable this behavior by setting `responseEncoding` to `false` in the
options.
```ts theme={"system"}
const redis = new Redis({
// ...
responseEncoding: false,
});
```
This should no longer be necessary, but if you are still experiencing issues
with this, please let us know:
* [Discord](https://discord.gg/w9SenAtbme)
* [X](https://x.com/upstash)
* [GitHub](https://github.com/upstash/upstash-redis/issues/new)
## Large numbers are returned as string
You are trying to load a large number and it is returned as a string instead.
```ts theme={"system"}
await redis.set("key", "101600000000150081467");
const res = await redis.get("key");
// "101600000000150081467"
```
#### Problem
JavaScript cannot safely represent integers larger than `2^53 - 1` and would
return wrong results when deserializing them. In these cases, the default
deserializer returns them as a string instead. This might cause a mismatch
with your custom types.
#### Solution
Please be aware that this is a limitation of JavaScript, and take special care
when handling large numbers.
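If you need exact arithmetic on such values, one option is to parse the returned string with `BigInt`. A minimal sketch of the idea (plain JavaScript semantics, no Upstash-specific API involved):

```ts theme={"system"}
// The value comes back from the deserializer as a string:
const raw = "101600000000150081467";

// BigInt preserves the exact value, beyond Number.MAX_SAFE_INTEGER:
const exact = BigInt(raw);
console.log(exact > BigInt(Number.MAX_SAFE_INTEGER)); // true
console.log((exact + 1n).toString()); // "101600000000150081468"
```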
# Unexpected Increase in Command Count
Source: https://upstash.com/docs/redis/troubleshooting/command_count_increases_unexpectedly
### Symptom
You notice an increasing command count for your Redis database in the Upstash Console, even when there are no connected clients.
### Diagnosis
The Upstash Console interacts with your Redis database to provide its functionality, which can result in an increased command count. This behavior is normal and expected. Here's a breakdown of why this occurs:
1. **Data Browser functionality:**
The Data Browser tab sends various commands to list and display your keys, including:
* SCAN: To iterate through the keyspace
* GET: To retrieve values for keys
* TTL: To check the time-to-live for keys
2. **Rate Limiting check:**
The Console checks if your database is being used for Rate Limiting. This involves sending EXISTS commands for rate limiting-related keys.
3. **Other Console features:**
Additional features in the Console may send commands to your database to retrieve or display information.
### Verification
You can use the Monitor tab in the Upstash Console to observe which commands are being sent by the Console itself. This can help you distinguish between Console-generated commands and those from your application or other clients.
Also, the Usage tab contains a 'Top Commands Usage' graph which shows the exact command history.
### Conclusion
The increasing command count you're seeing is likely due to the Console's normal operations and should not be a cause for concern. These commands do not significantly impact your database's performance or your usage limits.
If you have any further questions or concerns about command usage, please don't hesitate to contact Upstash support.
# ERR DB capacity quota exceeded
Source: https://upstash.com/docs/redis/troubleshooting/db_capacity_quota_exceeded
### Symptom
The client gets an exception similar to:
```
ReplyError: ERR DB capacity quota exceeded
```
### Diagnosis
Your total database size exceeds the max data size limit of your current plan. When this limit is reached,
write requests may be rejected. Read and delete requests will not be affected.
### Solution-1
You can manually delete some entries to allow further writes. Additionally, you
can consider setting a TTL (expiration time) for your keys or enabling
[eviction](../features/eviction) for your database.
### Solution-2
You can upgrade your database to Pro for higher limits.
# Error read ECONNRESET
Source: https://upstash.com/docs/redis/troubleshooting/econn_reset
### Symptom
The client cannot connect to the database, throwing an exception similar to:
```
[ioredis] Unhandled error event: Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:211:20)
```
### Diagnosis
The server is TLS enabled but your connection (client) is not.
### Solution
Check your connection parameters and ensure you enable TLS.
If you are using a Redis URL then it should start with `rediss://`.
You can copy the correct client configuration from the Upstash Console by
clicking the **Redis Connect** button.
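If you build the connection string yourself, a small runtime guard can catch the missing-TLS case early. A sketch with a hypothetical `assertTls` helper and a placeholder URL, not a real database:

```ts theme={"system"}
// TLS-enabled Upstash databases use the rediss:// scheme (note the double "s").
function assertTls(redisUrl: string): void {
  if (!redisUrl.startsWith("rediss://")) {
    throw new Error(`Expected a rediss:// URL for a TLS connection, got scheme: ${new URL(redisUrl).protocol}`);
  }
}

assertTls("rediss://default:password@example-host.upstash.io:6379"); // passes
// assertTls("redis://example-host:6379") would throw, flagging the missing TLS scheme
```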
# WRONGPASS invalid or missing auth token
Source: https://upstash.com/docs/redis/troubleshooting/http_unauthorized
### Symptom
The database rejects your request with an error similar to:
```
UpstashError: WRONGPASS invalid or missing auth token
```
### Diagnosis
The server rejects your request because the auth token is missing or invalid.
Most likely you have forgotten to set it in your environment variables, or you
are using a wrong token.
The connection password can only be used in traditional Redis clients. If you
want to connect over HTTP, you need to use the HTTP auth token.
### Solution
1. Check that you have set the `UPSTASH_REDIS_REST_TOKEN` in your environment
variables and it is loaded correctly by your application at runtime.
2. Make sure you are using the correct HTTP auth token. You can copy the correct
client configuration from the
[Upstash console](https://console.upstash.com/redis) by copying the snippet
from the `Connect to your database` -> `@upstash/redis` tab.
Or scroll further down to the `REST API` section and copy the
`UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` from there.
# ERR max concurrent connections exceeded
Source: https://upstash.com/docs/redis/troubleshooting/max_concurrent_connections
### Symptom
New clients cannot connect to the database, throwing an exception similar to:
```
"message" : "[ioredis] Unhandled error event:
ReplyError: ERR max concurrent connections exceeded\r
at Object.onceWrapper (events.js:286:20)\r
at Socket.emit (events.js:203:15)\r at Socket.EventEmitter.emit (domain.js:448:20)\r
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1093:10)\n"
```
### Diagnosis
You have reached the concurrent connection limit.
### Solution-1
You need to manage connections more efficiently. If you are using serverless
functions, you can create the Redis client inside the function and close the
connection when you are done with the database as below.
This solution may have a latency overhead (about 4 ms). See [the blog
post](https://blog.upstash.com/serverless-database-connections) for more.
```javascript theme={"system"}
exports.handler = async (event) => {
const client = new Redis(process.env.REDIS_URL);
/*
do stuff with redis
*/
await client.quit();
/*
do other stuff
*/
return {
response: "response",
};
};
```
### Solution-2
You can use [@upstash/redis](https://github.com/upstash/upstash-redis) client
which is REST based so it does not have any connection related problems.
### Solution-3
You can upgrade your database to Pro for higher limits.
See [the blog post](https://blog.upstash.com/serverless-database-connections)
about the database connections in serverless functions.
# ERR max daily request limit exceeded
Source: https://upstash.com/docs/redis/troubleshooting/max_daily_request_limit
### Symptom
The client gets an exception similar to:
```
ReplyError: ERR max daily request limit exceeded
```
### Diagnosis
Your database exceeds the max daily request count limit.
### Solution-1
You can refactor your application to send fewer commands.
### Solution-2
You can upgrade your database to a paid plan, such as pay-as-you-go or a fixed
plan, by entering a payment method. Once you enter your credit card, your
database will be upgraded automatically.
See [here](../howto/upgradedatabase) for more information.
# ERR max key size exceeded
Source: https://upstash.com/docs/redis/troubleshooting/max_key_size_exceeded
### Symptom
The client gets an exception similar to:
```
ReplyError: ERR max key size exceeded. Limit: X bytes, Actual: Z bytes
```
### Diagnosis
The size of the key in the request exceeds the max key size limit, which is `32 KB`.
### Solution
This is a hardcoded limit and cannot be configured per database. You should
reduce the key size.
# ERR max single record size exceeded
Source: https://upstash.com/docs/redis/troubleshooting/max_record_size_exceeded
### Symptom
The client gets an exception similar to:
```
ReplyError: ERR max single record size exceeded
```
### Diagnosis
An entry size exceeds the max record size limit, which is `100 MB` for "Free" and
"Pay as you go" databases. You may reach this limit either by inserting a single
huge value or appending many small values to an entry. This entry can be a
String, List, Set, Hash etc. Read (`GET`, `LRANGE`, `HMGET`, `ZRANGE` etc) and
delete (`DEL`, `LPOP`, `HDEL`, `SREM` etc) requests will not be affected.
### Solution-1
You can split your data into smaller chunks and store them as separate entries
with different keys.
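For example, a large string value can be split into fixed-size pieces stored under numbered keys. The `chunkValue` helper below is a sketch of my own, not an Upstash API:

```javascript theme={"system"}
// Split a large value into fixed-size pieces so each piece
// stays safely under the per-record limit.
function chunkValue(value, chunkSize) {
  const chunks = [];
  for (let i = 0; i < value.length; i += chunkSize) {
    chunks.push(value.slice(i, i + chunkSize));
  }
  return chunks;
}

// With ioredis you could then store and later re-assemble the pieces, e.g.:
//   chunks.forEach((c, i) => client.set(`big:${i}`, c));
//   client.set("big:count", chunks.length);
```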
### Solution-2
You can upgrade your database to Pro, which has higher limits. You can also
submit a quota increase request in the console or contact
[support@upstash.com](mailto:support@upstash.com) about options with a higher max record size limit.
# ERR max request size exceeded
Source: https://upstash.com/docs/redis/troubleshooting/max_request_size_exceeded
### Symptom
The client gets an exception similar to:
```
ReplyError: ERR max request size exceeded
```
### Diagnosis
Your command exceeds the max request size, which is `10 MB` for "Free" and "Pay as
you go" databases.
### Solution-1
You can split your data into smaller chunks and send them in separate commands.
### Solution-2
You can upgrade your database to a higher plan, or request a custom quota increase if you are on the Pay-as-You-Go plan. Please reach out to [support@upstash.com](mailto:support@upstash.com) about options with a higher max request size limit.
The max request size limit applies to a single request. Your data structure
(like a list or set) can exceed this limit without any problem, but if you try
to load all elements of the list with a single request, it can throw the
max request size exceeded exception.
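When a data structure has grown past the request limit, you can read it in pages instead of all at once. A minimal sketch; the `pageRanges` helper is my own name, not a Redis command:

```javascript theme={"system"}
// Compute inclusive [start, stop] index pairs so a long list can be
// fetched with several small LRANGE calls instead of one huge request.
function pageRanges(totalLength, pageSize) {
  const ranges = [];
  for (let start = 0; start < totalLength; start += pageSize) {
    ranges.push([start, Math.min(start + pageSize, totalLength) - 1]);
  }
  return ranges;
}

// Usage with ioredis (sketch):
//   const len = await client.llen("mylist");
//   for (const [start, stop] of pageRanges(len, 1000)) {
//     const page = await client.lrange("mylist", start, stop);
//   }
```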
# ERR max requests limit exceeded
Source: https://upstash.com/docs/redis/troubleshooting/max_requests_limit
### Symptom
The client gets an exception similar to:
```
ReplyError: ERR max requests limit exceeded.
```
### Diagnosis
Your database exceeds the max monthly request count limit.
### Solution-1
You can refactor your application to send fewer commands.
### Solution-2
You can upgrade your database to a paid plan, such as pay-as-you-go or a fixed plan,
by entering a payment method. Once you enter your credit card, your database will be
upgraded automatically.
See [here](../howto/upgradedatabase) for more information.
# NOAUTH Authentication Required
Source: https://upstash.com/docs/redis/troubleshooting/no_auth
### Symptom
The client cannot connect to the database and throws an exception similar to:
```
[ioredis] Unhandled error event:
ReplyError: NOAUTH Authentication required
```
### Diagnosis
The server does not let you connect because the password is missing in your
connection parameters.
### Solution
Check your connection parameters and ensure they contain the password. If you
are using ioredis with a Redis URL, check the URL format: ioredis requires a
colon before the password. The format for ioredis with TLS enabled:
```
rediss://:YOUR_PASSWORD@YOUR_ENDPOINT:YOUR_PORT
```
The format for ioredis with TLS disabled:
```
redis://:YOUR_PASSWORD@YOUR_ENDPOINT:YOUR_PORT
```
You can copy the correct client configuration from the Upstash Console by
clicking the **Redis Connect** button.
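As a sanity check, you can build the URL programmatically so the colon before the password is never forgotten. The `buildRedisUrl` helper below is only an illustration (not part of ioredis):

```javascript theme={"system"}
// Build a Redis connection URL in the shape ioredis expects.
// Note the colon before the password: the username part is empty.
function buildRedisUrl({ password, endpoint, port, tls = true }) {
  const scheme = tls ? "rediss" : "redis";
  return `${scheme}://:${password}@${endpoint}:${port}`;
}
```

You would then pass the result straight to `new Redis(url)`.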
# ERR XReadGroup is cancelled
Source: https://upstash.com/docs/redis/troubleshooting/stream_pel_limit
### Symptom
The client gets an exception similar to:
```
ReplyError: ERR XReadGroup is cancelled. Pending Entries List limit per consumer is about to be reached. Limit: 1000, Current PEL size: 90, Requested Read: 20, Key: mstream, Group: group1, Consumer: consumer1.
```
### Diagnosis
The Pending Entries List (PEL) of the stream is full for this consumer. For
each consumer in a consumer group, Redis keeps a pending entries list: the
messages that were delivered to the consumer but not yet acknowledged via
[XACK](https://redis.io/commands/xack/). This list is populated via
[XREADGROUP](https://redis.io/commands/xreadgroup/).
### Solution
Acknowledge the consumed messages via [XACK](https://redis.io/commands/xack/)
from the list of the associated group and consumer.
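A consumer loop typically acknowledges entries right after processing them. Below is a hedged sketch: `idsToAck` is a helper name of my own, and the nested reply shape matches what ioredis returns for `XREADGROUP` (`[[stream, [[id, fields], ...]], ...]`):

```javascript theme={"system"}
// Collect the entry IDs from an XREADGROUP reply so they can be
// passed to XACK after processing.
function idsToAck(reply) {
  const ids = [];
  for (const [, entries] of reply || []) {
    for (const [id] of entries) {
      ids.push(id);
    }
  }
  return ids;
}

// With ioredis (sketch):
//   const reply = await client.xreadgroup("GROUP", "group1", "consumer1",
//                                         "COUNT", 10, "STREAMS", "mstream", ">");
//   const ids = idsToAck(reply);
//   if (ids.length) await client.xack("mstream", "group1", ...ids);
```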
# Deploy a Serverless API with AWS CDK and AWS Lambda
Source: https://upstash.com/docs/redis/tutorials/api_with_cdk
You can find the project source code on GitHub.
In this tutorial, we will implement a serverless API using AWS Lambda and
deploy it with AWS CDK, using TypeScript as the CDK language. The API will be
a view counter that keeps its state in Redis.
### What is AWS CDK?
AWS CDK is a project that allows you to provision and deploy AWS
infrastructure with code. Currently TypeScript, JavaScript, Python, Java,
C#/.NET, and Go are supported. You can compare AWS CDK with the following technologies:
* AWS CloudFormation
* AWS SAM
* Serverless Framework
The above projects let you set up the infrastructure with configuration
files (YAML, JSON), while with AWS CDK you set up the resources with code. For
more information about CDK, see the related
[AWS Docs](https://docs.aws.amazon.com/cdk/latest/guide/home.html).
### Prerequisites
* Complete all steps in [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html)
### Project Setup
Create and navigate to a directory named `counter-cdk`. The CDK CLI uses this directory name to name things in your CDK code, so if you decide to use a different name, don't forget to make the appropriate changes when applying this tutorial.
```shell theme={"system"}
mkdir counter-cdk && cd counter-cdk
```
Initialize a new CDK project.
```shell theme={"system"}
cdk init app --language typescript
```
Install `@upstash/redis`.
```shell theme={"system"}
npm install @upstash/redis
```
### Counter Function Setup
Create `/api/counter.ts`.
```ts /api/counter.ts theme={"system"}
import { Redis } from '@upstash/redis';

const redis = Redis.fromEnv();

export const handler = async function() {
  const count = await redis.incr("counter");
  return {
    statusCode: 200,
    body: JSON.stringify('Counter: ' + count),
  };
};
```
### Counter Stack Setup
Update `/lib/counter-cdk-stack.ts`.
```ts /lib/counter-cdk-stack.ts theme={"system"}
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as nodejs from 'aws-cdk-lib/aws-lambda-nodejs';

export class CounterCdkStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const counterFunction = new nodejs.NodejsFunction(this, 'CounterFunction', {
      entry: 'api/counter.ts',
      handler: 'handler',
      runtime: lambda.Runtime.NODEJS_20_X,
      environment: {
        UPSTASH_REDIS_REST_URL: process.env.UPSTASH_REDIS_REST_URL || '',
        UPSTASH_REDIS_REST_TOKEN: process.env.UPSTASH_REDIS_REST_TOKEN || '',
      },
      bundling: {
        format: nodejs.OutputFormat.ESM,
        target: "node20",
        nodeModules: ['@upstash/redis'],
      },
    });

    const counterFunctionUrl = counterFunction.addFunctionUrl({
      authType: lambda.FunctionUrlAuthType.NONE,
    });

    new cdk.CfnOutput(this, "counterFunctionUrlOutput", {
      value: counterFunctionUrl.url,
    });
  }
}
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and export `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your environment.
```shell theme={"system"}
export UPSTASH_REDIS_REST_URL=
export UPSTASH_REDIS_REST_TOKEN=
```
### Deploy
Run in the top folder:
```shell theme={"system"}
cdk synth
cdk bootstrap
cdk deploy
```
Visit the output URL.
# Autocomplete API with Serverless Redis
Source: https://upstash.com/docs/redis/tutorials/auto_complete_with_serverless_redis
This tutorial implements an autocomplete API powered by serverless Redis. See
[the demo](https://auto-complete-example.vercel.app/) and
[API endpoint](https://wfgz7cju24.execute-api.us-east-1.amazonaws.com/query?term=ca)
and
[the source code](https://github.com/upstash/examples/tree/main/examples/auto-complete-api).
We will keep country names in a Redis sorted set. In a Redis sorted set,
elements with the same score are sorted lexicographically, so in our case all
country names will have the same score, 0. We keep all prefixes of each country
name and use ZRANK to find the terms to suggest. See
[this blog post](https://oldblog.antirez.com/post/autocomplete-with-redis.html)
for the details of the algorithm.
### Step 1: Project Setup
We will use the Serverless Framework for this tutorial. You can also use [AWS
SAM](/redis/tutorials/using_aws_sam).
If you do not have it already, install the Serverless Framework via:
`npm install -g serverless`
In any folder, run `serverless` as below:
```text theme={"system"}
>> serverless
Serverless: No project detected. Do you want to create a new one? Yes
Serverless: What do you want to make? AWS Node.js
Serverless: What do you want to call this project? test-upstash
Project successfully created in 'test-upstash' folder.
You can monitor, troubleshoot, and test your new service with a free Serverless account.
Serverless: Would you like to enable this? No
You can run the “serverless” command again if you change your mind later.
```
Inside the project folder create a node project with the command:
```
npm init
```
Then install the redis client with:
```
npm install ioredis
```
### Step 2: API Implementation
Edit handler.js file as below. See
[the blog post](https://oldblog.antirez.com/post/autocomplete-with-redis.html)
for the details of the algorithm.
```javascript theme={"system"}
var Redis = require("ioredis");

if (typeof client === "undefined") {
  var client = new Redis(process.env.REDIS_URL);
}

const headers = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Credentials": true,
};

module.exports.query = async (event, context, callback) => {
  if (!event.queryStringParameters || !event.queryStringParameters.term) {
    return {
      statusCode: 400,
      headers: headers,
      body: JSON.stringify({
        message: "Invalid parameters. Term needed as query param.",
      }),
    };
  }
  let term = event.queryStringParameters.term.toUpperCase();
  let res = [];
  let rank = await client.zrank("terms", term);
  if (rank != null) {
    let temp = await client.zrange("terms", rank, rank + 100);
    for (const el of temp) {
      if (!el.startsWith(term)) {
        break;
      }
      if (el.endsWith("*")) {
        res.push(el.substring(0, el.length - 1));
      }
    }
  }
  return {
    statusCode: 200,
    headers: headers,
    body: JSON.stringify({
      message: "Query:" + event.queryStringParameters.term,
      result: res,
    }),
  };
};
```
### Step 3: Create database on Upstash
If you do not have one, create a database following this
[guide](../overall/getstarted). Copy the Redis URL by clicking the `Redis Connect`
button on the database page. Copy the URL for ioredis, as we use ioredis in our
application. Create a `.env` file and paste your Redis URL:
```text theme={"system"}
REDIS_URL=YOUR_REDIS_URL
```
### Step 4: Initialize Database
We will initialize the database with country names. Copy and run the initdb.js
script from
[here](https://github.com/upstash/examples/tree/main/examples/auto-complete-api/initdb.js).
We simply put the country names and all their prefixes into the sorted set.
```javascript theme={"system"}
require('dotenv').config()
var Redis = require("ioredis");

var countries = [
  {"name": "Afghanistan", "code": "AF"},
  {"name": "Åland Islands", "code": "AX"},
  {"name": "Albania", "code": "AL"},
  {"name": "Algeria", "code": "DZ"},
  ...
]

var client = new Redis(process.env.REDIS_URL);

for (const country of countries) {
  let term = country.name.toUpperCase();
  let terms = [];
  for (let i = 1; i < term.length; i++) {
    terms.push(0);
    terms.push(term.substring(0, i));
  }
  terms.push(0);
  terms.push(term + "*");
  (async () => {
    await client.zadd("terms", ...terms)
  })();
}
```
### Step 5: Deploy Your Function
Edit `serverless.yml` as below and replace your Redis URL:
```yaml theme={"system"}
service: auto-complete-api
# add this if you set REDIS_URL in .env
useDotenv: true
frameworkVersion: "2"
provider:
name: aws
runtime: nodejs14.x
lambdaHashingVersion: 20201221
environment:
REDIS_URL: REPLACE_YOUR_REDIS_URL
functions:
query:
handler: handler.query
events:
- httpApi:
path: /query
method: get
cors: true
```
In the project folder run:
```
serverless deploy
```
Now you can run your function with:
```shell theme={"system"}
serverless invoke -f query -d '{ "queryStringParameters": {"term":"ca"}}'
```
It should give the following output:
```json theme={"system"}
{
"statusCode": 200,
"headers": {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Credentials": true
},
"body": "{\"message\":\"Query:ca\",\"result\":[\"CAMBODIA\",\"CAMEROON\",\"CANADA\",\"CAPE VERDE\",\"CAYMAN ISLANDS\"]}"
}
```
You can also test your function using the AWS console. In the AWS Lambda
section, click on your function, scroll down to the code section, and click the
`Test` button on the top right. Use `{ "queryStringParameters": {"term":"ar"}}`
as your event data.
### Step 6: Run Your Function Locally
In your project folder run:
```shell theme={"system"}
serverless invoke local -f query -d '{ "queryStringParameters": {"term":"ca"}}'
```
It should give the following output:
```json theme={"system"}
{
"statusCode": 200,
"headers": {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Credentials": true
},
"body": "{\"message\":\"Query:ca\",\"result\":[\"CAMBODIA\",\"CAMEROON\",\"CANADA\",\"CAPE VERDE\",\"CAYMAN ISLANDS\"]}"
}
```
# Build Stateful Applications with AWS App Runner and Serverless Redis
Source: https://upstash.com/docs/redis/tutorials/aws_app_runner_with_redis
This tutorial shows how to create a serverless and stateful application using AWS App Runner and Redis
AWS App Runner is a container service where AWS runs and scales your container
in a serverless way. The container storage is ephemeral, so you should keep the
state in an external data store. In this tutorial we will build a simple
application that keeps its state on Redis and deploy it to AWS App Runner.
### The Stack
* Serverless compute: AWS App Runner (Node.js)
* Serverless data store: Redis via Upstash
* Deployment source: github repo
### Project Setup
Create a directory for your project:
```
mkdir app_runner_example
cd app_runner_example
```
Create a node project and install dependencies:
```
npm init
npm install ioredis
```
Create a Redis DB from [Upstash](https://console.upstash.com). In the database
details page, copy the connection code (Node tab).
### The Code
In your node project folder, create server.js and copy the below code:
```javascript theme={"system"}
var Redis = require("ioredis");
const http = require("http");

if (typeof client === "undefined") {
  var client = new Redis(process.env.REDIS_URL);
}

const requestListener = async function (req, res) {
  if (req.url !== "/favicon.ico") {
    let count = await client.incr("counter");
    res.writeHead(200);
    res.end("Page view:" + count);
  }
};

const server = http.createServer(requestListener);
server.listen(8080);
```
As you see, the code simply increments a counter on Redis and returns the
response as the page view count.
### Deployment
You have two options to deploy your code to App Runner: share your GitHub repo
with AWS, or register your Docker image in ECR. In this tutorial, we will share
[our GitHub repo](https://github.com/upstash/app_runner_example) with App
Runner.
Create a GitHub repo for your project and push your code. In the AWS console,
open the App Runner service. Click the `Create Service` button, select the
`Source code repository` option, and add your repository by connecting your
GitHub and AWS accounts.
On the next page, choose `Nodejs 12` as your runtime, `npm install` as your
build command, `node server` as your start command, and `8080` as your port.
The next page configures your App Runner service. Set a name for your service,
and set the Redis URL you copied from the Upstash console as the `REDIS_URL`
environment variable. Your Redis URL should look like this:
`rediss://:d34baef614b6fsdeb01b25@us1-lasting-panther-33618.upstash.io:33618`
You can leave the other settings as default.
Click `Create and Deploy` on the next page. Your service will be ready in a
few minutes. Click on the default domain; you should see the page with a view
counter as [here](https://xmzuanrpf3.us-east-1.awsapprunner.com/).
### App Runner vs AWS Lambda
* AWS Lambda runs functions, App Runner runs applications. So with App Runner
you do not need to split your application to functions.
* App Runner is a more portable solution. You can move your application from App
Runner to any other container service.
* AWS Lambda's price scales to zero, App Runner's does not. With App Runner you
need to pay for at least one instance unless you pause the system.
App Runner is a great alternative when you need more control over your serverless
runtime and application. Check out
[this video](https://www.youtube.com/watch?v=x_1X_4j16A4) to learn more about
App Runner.
# Session Management on Google Cloud Run with Serverless Redis
Source: https://upstash.com/docs/redis/tutorials/cloud_run_sessions
This tutorial shows how to manage user sessions on Google Cloud Run using Serverless Redis.
Developers are moving their apps to serverless architectures and one of the most
common questions is
[how to store user sessions](https://stackoverflow.com/questions/57711095/are-users-sessions-on-google-cloud-run-apps-directed-to-the-same-instance).
You need to keep your state and session data in an external data store because
serverless environments are stateless by design. Unfortunately, most databases
are not serverless friendly: they do not support per-request pricing, or they
require heavy, persistent connections. These are among the motivations behind
Upstash. Upstash is a serverless Redis database with per-request pricing and
durable storage.
In this article I will write a basic web application which will run on Google
Cloud Run and keep the user sessions in Upstash Redis. Google Cloud Run provides
Serverless Container service which is also stateless. Cloud Run is more powerful
than serverless functions (AWS Lambda, Cloud Functions) as you can run your own
container. But you cannot guarantee that the same container instance will
process the requests of the same user. So you need to keep the user session in
an external storage. Redis is the most popular choice to keep the session data
thanks to its speed and simplicity. Upstash gives you the serverless Redis
database which fits perfectly to your serverless stack.
If you want to store your session data manually on Redis, check
[here](/redis/tutorials/using_google_cloud_functions). But in
this article I will use [Express session](https://github.com/expressjs/session)
middleware which can work with Redis for user session management.
Here is the [live demo.](https://cloud-run-sessions-dr7fcdmn3a-uc.a.run.app)
Here is the
[source code](https://github.com/upstash/examples/tree/master/examples/cloud-run-sessions).
## The Stack
* Serverless processing: Google Cloud Run
* Serverless data: Upstash
* Web framework: Express
## Project Setup
Create a directory for your project:
```
mkdir cloud-run-sessions
cd cloud-run-sessions
```
Create a node project and install dependencies:
```
npm init
npm install express redis connect-redis express-session
```
Create a Redis DB from [Upstash](https://console.upstash.com). On the database
details page, click the Connect button and copy the connection code (Node.js
node-redis).
If you do not have it already, install Google Cloud SDK as described
[here.](https://cloud.google.com/sdk/docs/install) Set the project and enable
Google Run and Build services:
```
gcloud config set project cloud-run-sessions
gcloud services enable run.googleapis.com
gcloud services enable cloudbuild.googleapis.com
```
## The Code
Create index.js and update as below:
```javascript theme={"system"}
var express = require("express");
var parseurl = require("parseurl");
var session = require("express-session");
const redis = require("redis");
var RedisStore = require("connect-redis")(session);

var client = redis.createClient({
  // REPLACE HERE
});

var app = express();

app.use(
  session({
    store: new RedisStore({ client: client }),
    secret: "forest squirrel",
    resave: false,
    saveUninitialized: true,
  })
);

app.use(function (req, res, next) {
  if (!req.session.views) {
    req.session.views = {};
  }
  // get the url pathname
  var pathname = parseurl(req).pathname;
  // count the views
  req.session.views[pathname] = (req.session.views[pathname] || 0) + 1;
  next();
});

app.get("/", function (req, res, next) {
  res.send("you viewed this page " + req.session.views["/"] + " times");
});

app.get("/foo", function (req, res, next) {
  res.send("you viewed this page " + req.session.views["/foo"] + " times");
});

app.get("/bar", function (req, res, next) {
  res.send("you viewed this page " + req.session.views["/bar"] + " times");
});

app.listen(8080, function () {
  console.log("Example app listening on port 8080!");
});
```
Run the app: `node index.js`
Check [http://localhost:8080/foo](http://localhost:8080/foo) in different
browsers to validate that it keeps the session.
Add the start script to your `package.json`:
```json theme={"system"}
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node index"
}
```
## Build
Create a Docker file (Dockerfile) in the project folder as below:
```
# Use the official lightweight Node.js 12 image.
# https://hub.docker.com/_/node
FROM node:12-slim
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install dependencies.
RUN npm install
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
CMD [ "npm", "start" ]
```
Build your container image:
```
gcloud builds submit --tag gcr.io/cloud-run-sessions/main
```
List your container images: `gcloud container images list`
Run the container locally:
```
gcloud auth configure-docker
docker run -d -p 8080:8080 gcr.io/cloud-run-sessions/main:v0.1
```
In case you have an issue with `docker run`, check
[here](https://cloud.google.com/container-registry/docs/troubleshooting).
## Deploy
Run:
```
gcloud run deploy cloud-run-sessions \
--image gcr.io/cloud-run-sessions/main:v0.1 \
--platform managed \
--region us-central1 \
--allow-unauthenticated
```
This command should give you
[the URL of your application](https://cloud-run-sessions-dr7fcdmn3a-uc.a.run.app)
as below:
```
Deploying container to Cloud Run service [cloud-run-sessions] in project [cloud-run-sessions] region [us-central1]
✓ Deploying... Done.
✓ Creating Revision...
✓ Routing traffic...
✓ Setting IAM Policy...
Done.
Service [cloud-run-sessions] revision [cloud-run-sessions-00006-dun] has been deployed and is serving 100 percent of traffic.
Service URL: https://cloud-run-sessions-dr7fcdmn3a-uc.a.run.app
```
## Cloud Run vs Cloud Functions
I have developed two small prototypes with both. Here are my impressions:
* Simplicity: Cloud functions are simpler to deploy as it does not require any
container building step.
* Portability: Cloud Run leverages your container, so anytime you can move your
application to any containerized system. This is a plus for Cloud Run.
* Cloud Run looks more powerful as it runs your own container with more
configuration options. It also allows running longer tasks (which can be
extended to 60 minutes).
* Cloud Run looks more testable as you can run the container locally. Cloud
Functions require a simulated environment.
Personally, I see Cloud Functions as a pure serverless solution, whereas Cloud
Run is a hybrid one. I would choose Cloud Functions for simple, self-contained
tasks or event-driven solutions. If my use case is more complex with
portability/testability requirements, then I would choose Cloud Run.
# Cloudflare Workers with Websockets and Redis
Source: https://upstash.com/docs/redis/tutorials/cloudflare_websockets_redis
# Use Redis in Cloudflare Workers
Source: https://upstash.com/docs/redis/tutorials/cloudflare_workers_with_redis
You can find the project source code on GitHub.
This tutorial showcases using Redis over its REST API in Cloudflare Workers. We
will write a sample edge function (a Cloudflare Worker) that shows a custom
greeting depending on the location of the client. We will load the greeting
message from Redis, so you can update it without touching the code.
### Why Upstash?
* Cloudflare Workers does not allow TCP connections. Upstash provides REST API
on top of the Redis database.
* Upstash is a serverless offering with per-request pricing which fits for edge
and serverless functions.
* Upstash Global database provides low latency all over the world.
### Prerequisites
1. Install the Cloudflare Wrangler CLI with `npm install wrangler --save-dev`
### Project Setup
Create a Cloudflare Worker with the following options:
```shell theme={"system"}
➜ tutorials > ✗ npx wrangler init
╭ Create an application with Cloudflare Step 1 of 3
│
├ In which directory do you want to create your application?
│ dir ./greetings-cloudflare
│
├ What would you like to start with?
│ category Hello World example
│
├ Which template would you like to use?
│ type Hello World Worker
│
├ Which language do you want to use?
│ lang TypeScript
│
├ Copying template files
│ files copied to project directory
│
├ Updating name in `package.json`
│ updated `package.json`
│
├ Installing dependencies
│ installed via `npm install`
│
╰ Application created
╭ Configuring your application for Cloudflare Step 2 of 3
│
├ Installing @cloudflare/workers-types
│ installed via npm
│
├ Adding latest types to `tsconfig.json`
│ added @cloudflare/workers-types/2023-07-01
│
├ Retrieving current workerd compatibility date
│ compatibility date 2024-10-22
│
├ Do you want to use git for version control?
│ no git
│
╰ Application configured
```
Install Upstash Redis:
```shell theme={"system"}
cd greetings-cloudflare
npm install @upstash/redis
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `wrangler.toml` file.
```toml wrangler.toml theme={"system"}
# existing config
[vars]
UPSTASH_REDIS_REST_URL =
UPSTASH_REDIS_REST_TOKEN =
```
Using the CLI tab in the Upstash Console, add some greetings to your database:
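For example, you might seed a few entries keyed by country code; the codes and messages below are just placeholders for whatever greetings you want:

```text theme={"system"}
set GB "Hello!"
set DE "Hallo!"
set TR "Merhaba!"
```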
### Greetings Function Setup
Update `src/index.ts`:
```typescript src/index.ts theme={"system"}
import { Redis } from '@upstash/redis/cloudflare';

type RedisEnv = {
  UPSTASH_REDIS_REST_URL: string;
  UPSTASH_REDIS_REST_TOKEN: string;
};

export default {
  async fetch(request: Request, env: RedisEnv) {
    const redis = Redis.fromEnv(env);
    const country = request.headers.get('cf-ipcountry');
    if (country) {
      const greeting = await redis.get(country);
      if (greeting) {
        return new Response(greeting);
      }
    }
    return new Response('Hello!');
  },
};
```
The code tries to find the user's location by checking the `cf-ipcountry`
header. Then it loads the corresponding greeting for that location using the
Redis REST API.
### Run Locally
Run the following command to start your dev session:
```shell theme={"system"}
npx wrangler dev
```
Visit [localhost:8787](http://localhost:8787)
### Build and Deploy
Build and deploy your app to Cloudflare:
```shell theme={"system"}
npx wrangler deploy
```
Visit the output URL.
# Backendless Coin Price List with GraphQL API, Serverless Redis and Next.JS
Source: https://upstash.com/docs/redis/tutorials/coin_price_list
In this tutorial, we will develop a simple coin price list using the GraphQL API
of Upstash. You can call the application `backendless` because we will access
the database directly from the client (JavaScript). See the
[code](https://github.com/upstash/examples/tree/master/examples/coin-price-list).
## Motivation
We want to show a use case where you can use the GraphQL API without any
backend code: publicly available, read-only data for web applications where you
need low latency. The data is updated frequently by another backend
application, and you want your users to see the latest data. Examples:
leaderboards, news lists, blog lists, product lists, top-N items on homepages.
### `1` Project Setup:
Create a Next application: `npx create-next-app`.
Install Apollo GraphQL client: `npm i @apollo/client`
### `2` Database Setup
If you do not have one, create a database following this
[guide](../overall/getstarted). Connect your database via Redis CLI and run:
```shell theme={"system"}
rpush coins '{ "name" : "Bitcoin", "price": 56819, "image": "https://s2.coinmarketcap.com/static/img/coins/64x64/1.png"}' '{ "name" : "Ethereum", "price": 2130, "image": "https://s2.coinmarketcap.com/static/img/coins/64x64/1027.png"}' '{ "name" : "Cardano", "price": 1.2, "image": "https://s2.coinmarketcap.com/static/img/coins/64x64/2010.png"}' '{ "name" : "Polkadot", "price": 35.96, "image": "https://s2.coinmarketcap.com/static/img/coins/64x64/6636.png"}' '{ "name" : "Stellar", "price": 0.506, "image": "https://s2.coinmarketcap.com/static/img/coins/64x64/512.png"}'
```
### `3` Code
In the Upstash console, copy the read-only access key from your API
configuration page (GraphQL Explorer > Configure API). In `_app.js`, create the
Apollo client and replace your access key as below:
You need to use the Read Only Access Key because the key will be publicly
accessible.
```javascript theme={"system"}
import "../styles/globals.css";
import {
  ApolloClient,
  ApolloProvider,
  createHttpLink,
  InMemoryCache,
} from "@apollo/client";

const link = createHttpLink({
  uri: "https://graphql-us-east-1.upstash.io/",
  headers: {
    Authorization: "Bearer YOUR_ACCESS_TOKEN",
  },
});

const client = new ApolloClient({
  uri: "https://graphql-us-east-1.upstash.io/",
  cache: new InMemoryCache(),
  link,
});

function MyApp({ Component, pageProps }) {
  return (
    <ApolloProvider client={client}>
      <Component {...pageProps} />
    </ApolloProvider>
  );
}

export default MyApp;
```
Edit `index.js` as below:
```javascript theme={"system"}
import Head from "next/head";
import styles from "../styles/Home.module.css";
import { gql, useQuery } from "@apollo/client";
import React from "react";

const GET_COIN_LIST = gql`
  query {
    redisLRange(key: "coins", start: 0, stop: 6)
  }
`;

export default function Home() {
  let coins = [];
  const { loading, error, data } = useQuery(GET_COIN_LIST);
  if (!loading && !error) {
    for (let x of data.redisLRange) {
      let dd = JSON.parse(x);
      coins.push(dd);
    }
  }
  return (
    <div className={styles.container}>
      <Head>
        <title>Create Next App</title>
      </Head>
      <main className={styles.main}>
        <h1 className={styles.title}>Coin Price List</h1>
        {!loading ? (
          coins.map((item, ind) => (
            <div className={styles.card} key={ind}>
              <img src={item.image} width="40" />
              <span>{item.name}</span>
              <span>${item.price}</span>
            </div>
          ))
        ) : (
          <div>Loading...</div>
        )}
      </main>
    </div>
  );
}
```
### `4` Run
Run your application locally: `npm run dev`
### `5` Live!
Go to [http://localhost:3000/](http://localhost:3000/) 🎉
# Build a Leaderboard API At Edge using Cloudflare Workers and Redis
Source: https://upstash.com/docs/redis/tutorials/edge_leaderboard
This tutorial shows how to build a Leaderboard API At Edge using Cloudflare Workers and Redis.
With edge functions, it is possible to run your backend at the closest location
to your users. Cloudflare Workers and Fastly Compute\@Edge run your function at
the closest location to your user using their CDN infrastructure.
In this article we will implement a very common web use case at the edge: a
leaderboard API without any backend servers, containers, or even serverless
functions. We will just use edge functions. The leaderboard will have the
following APIs:
* addScore: Adds a score with the player's name. This will write the score to
the Upstash Redis directly from the Edge functions.
* getLeaderBoard: Returns the list of score-player pairs. This call will first
check the Edge cache. If the leaderboard does not exist at the Edge Cache then
it will fetch it from the Upstash Redis.
Edge caching is deprecated. Please use a global database instead.
## Project Setup
In this tutorial, we will use Cloudflare Workers and Upstash. You can create a
free database from [Upstash Console](https://console.upstash.com). Then create a
Workers project using
[Wrangler](https://developers.cloudflare.com/workers/get-started/guide).
Install wrangler: `npm install -g @cloudflare/wrangler`
Authenticate: `wrangler login` or `wrangler config`
Then create a project: `wrangler generate edge-leaderboard`
Open `wrangler.toml`. Run `wrangler whoami` and copy/paste your account id to
your wrangler.toml.
Find your REST token on the database details page in the
[Upstash Console](https://console.upstash.com). Copy/paste your token into your
wrangler.toml as below:
```
name = "edge-leaderboard"
type = "javascript"
account_id = "REPLACE_YOUR_ACCOUNT_ID"
workers_dev = true
route = ""
zone_id = ""
[vars]
TOKEN = "REPLACE_YOUR_UPSTASH_REST_TOKEN"
```
## The Code
The only file we need is the Workers Edge function. Update the index.js as
below:
```javascript theme={"system"}
addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  if (request.method === "GET") {
    return getLeaderboard();
  } else if (request.method === "POST") {
    return addScore(request);
  } else {
    return new Response("Invalid Request!");
  }
}

async function getLeaderboard() {
  let url =
    "https://us1-full-bug-31874.upstash.io/zrevrange/scores/0/1000/WITHSCORES/?_token=" +
    TOKEN;
  let res = await fetch(new Request(url), {
    cf: {
      cacheTtl: 10,
      cacheEverything: true,
      cacheKey: url,
    },
  });
  return res;
}

async function addScore(request) {
  const { searchParams } = new URL(request.url);
  let player = searchParams.get("player");
  let score = searchParams.get("score");
  let url =
    "https://us1-full-bug-31874.upstash.io/zadd/scores/" +
    score +
    "/" +
    player +
    "?_token=" +
    TOKEN;
  let res = await fetch(url);
  return new Response(await res.text());
}
```
We route the request to two methods: if it is a GET, we return the leaderboard.
If it is a POST, we read the query parameters and add a new score.
In the getLeaderboard() method, you will see we pass a cache configuration to
the fetch() method. It caches the result of the request at the Edge for 10
seconds.
## Test The API
In your project folder run `wrangler dev`. It will give you a local URL. You can
test your API with curl:
Add new scores:
```shell theme={"system"}
curl -X POST http://127.0.0.1:8787\?player\=messi\&score\=13
curl -X POST http://127.0.0.1:8787\?player\=ronaldo\&score\=17
curl -X POST http://127.0.0.1:8787\?player\=benzema\&score\=18
```
Get the leaderboard:
```shell theme={"system"}
curl -w '\n Latency: %{time_total}s\n' http://127.0.0.1:8787
```
Run the `curl -w` command above multiple times. You will see the latency become
very small on subsequent calls, as the cached result is served from the edge.
If you wait more than 10 seconds then you will see the latency becomes higher as
the cache is evicted and the function fetches the leaderboard from the Upstash
Redis again.
## Deploy The API
First, change the `type` in your wrangler.toml to `webpack`:
```
name = "edge-leaderboard"
type = "webpack"
```
Then, run `wrangler publish`. Wrangler will output the URL. If you want to
deploy to a custom domain see
[here](https://developers.cloudflare.com/workers/get-started/guide#optional-configure-for-deploying-to-a-registered-domain).
# Express Session with Serverless Redis
Source: https://upstash.com/docs/redis/tutorials/express_session
This tutorial shows how to use Upstash as the session storage of your Express application.
This tutorial shows how to use Serverless Redis as your session storage for your
Express Applications.
See the
[code](https://github.com/upstash/examples/tree/main/examples/express-session-with-redis)
### Step-1: Create Project
Create a folder for your project and run: `npm init`
### Step-2: Install Redis and Express
In your project folder run:
`npm install express redis connect-redis express-session`
### Step-3: Create a Redis (Upstash) Database For Free
Create a database as described [here](../overall/getstarted).
### Step-4: index.js
In Upstash console, click the `Connect` button, copy the connection code
(Node.js node-redis). Create index.js file as below and replace the Redis
connection part.
```javascript theme={"system"}
var express = require("express");
var parseurl = require("parseurl");
var session = require("express-session");
const redis = require("redis");
var RedisStore = require("connect-redis")(session);

var client = redis.createClient({
  // REPLACE HERE
});

var app = express();

app.use(
  session({
    store: new RedisStore({ client: client }),
    secret: "forest squirrel",
    resave: false,
    saveUninitialized: true,
  })
);

app.use(function (req, res, next) {
  if (!req.session.views) {
    req.session.views = {};
  }
  // get the url pathname
  var pathname = parseurl(req).pathname;
  // count the views
  req.session.views[pathname] = (req.session.views[pathname] || 0) + 1;
  next();
});

app.get("/foo", function (req, res, next) {
  res.send("you viewed this page " + req.session.views["/foo"] + " times");
});

app.get("/bar", function (req, res, next) {
  res.send("you viewed this page " + req.session.views["/bar"] + " times");
});

app.listen(3000, function () {
  console.log("Example app listening on port 3000!");
});
```
### Step-5: Run the app
`node index.js`
### Step-6: Check your work
Open [http://localhost:3000/bar](http://localhost:3000/bar) and [http://localhost:3000/foo](http://localhost:3000/foo) in different
browsers. Check if the view-count is incrementing as expected.
### FAQ:
**There is a default session storage of express-session. Why do I need Redis?**
*The default session store loses the session data when the process crashes.
Moreover, it does not scale: you cannot utilize multiple web servers to serve
your sessions.*
**Why Upstash?**
*You can use any Redis offering or self hosted one. But Upstash's serverless
approach with per-request-pricing will help you to minimize your cost with zero
maintenance.*
**How to configure the session storage?**
*See [here](https://github.com/expressjs/session#readme)*
# Serverless Golang API with Redis
Source: https://upstash.com/docs/redis/tutorials/goapi
This tutorial shows how to build a serverless API with Golang and Redis. The API
will simply count the page views and show it in JSON format.
### The Stack
* Serverless compute: AWS Lambda (Golang)
* Serverless data store: Redis via Upstash
* Deployment tool: AWS SAM
### Prerequisites:
* An AWS account for AWS Lambda functions.
* Install AWS SAM CLI tool as described here to create and deploy the project.
* An Upstash account for serverless Redis.
### Step 1: Init the Project
Run `sam init`, then:
* Select AWS Quick Start Templates
* Select 4 - go1.x
* Enter your project name: go-redis-example
* Select 1 - Hello World Example

SAM will generate your project in a new folder.
### Step 2: Install a Redis Client
Our only dependency is the Redis client. Install go-redis via
`go get github.com/go-redis/redis/v8`
### Step 3: Create a Redis Database
Create a Redis database from the Upstash console. The free tier should be
enough. It is straightforward, but if you need help, check the
[getting started](../overall/getstarted) guide. On the database details page,
click the Connect button. You will need the endpoint and password in the next
step.
### Step 4: The function Code
Edit `hello-world/main.go` as below:
```go theme={"system"}
package main

import (
	"context"
	"encoding/json"
	"strconv"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/go-redis/redis/v8"
)

var ctx = context.Background()

type MyResponse struct {
	Count string `json:"count"`
}

var rdb = redis.NewClient(&redis.Options{
	Addr:     "YOUR_REDIS_ENDPOINT",
	Password: "YOUR_REDIS_PASSWORD",
	DB:       0,
})

func handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	count, err := rdb.Incr(ctx, "count").Result()
	if err != nil {
		panic(err)
	}
	response := &MyResponse{
		Count: strconv.FormatInt(count, 10),
	}
	body, err := json.Marshal(response)
	if err != nil {
		panic(err)
	}
	return events.APIGatewayProxyResponse{
		Headers:    map[string]string{"Content-Type": "application/json"},
		Body:       string(body),
		StatusCode: 200,
	}, nil
}

func main() {
	lambda.Start(handler)
}
```
Replace `YOUR_REDIS_ENDPOINT` and `YOUR_REDIS_PASSWORD` with your database's
endpoint and password from Step 3. The code simply increments a counter in the
Redis database and returns its value in JSON format.
### Step 5: Deployment
Now we are ready to deploy our API. First build it via `sam build`. Then run the
command `sam local start-api`. You can check your API locally on
[http://127.0.0.1:3000/hello](http://127.0.0.1:3000/hello)
If it is working, you can deploy your app to AWS by running `sam deploy --guided`.
Enter a stack name and pick your region. After confirming changes, the deployment
should begin. The command will output the API Gateway endpoint URL; check the API in
your browser. You can also check your deployment on your AWS console. You will see
your function has been created.
Click on your function, you will see the code is uploaded and API Gateway
is configured.
### Notes
* Check the template.yaml file. You can add new functions and APIGateway
endpoints editing this file.
* It is a good practice to keep your Redis endpoint and password as environment
variable.
* You can use [serverless framework](https://www.serverless.com/) instead of AWS
SAM to deploy your function.
# Build a Serverless Histogram API with Redis
Source: https://upstash.com/docs/redis/tutorials/histogram
This tutorial shows how to build a histogram API with Redis.
While developing
[the latency benchmark for serverless databases (DynamoDB, FaunaDB, Upstash)](https://blog.upstash.com/latency-comparison),
I wished there was an API where I could record latency numbers and get the
histogram back. In this tutorial, I will build such an API, where you can record
latency values from any application. It will be a REST API with the following
methods:
* record: Records numeric values into the histogram.
* get: Returns the histogram object.
### Motivation
I will show how easy it is to develop a generic API using AWS Lambda and
Serverless Redis.
See [code](https://github.com/upstash/examples/tree/master/examples/histogram-api).
### `1` Create a Redis (Upstash) Database
Create a database as described in [getting started](../overall/getstarted).
### `2` Serverless Project Setup
If you do not have it already, install the serverless framework via:
`npm install -g serverless`
In any folder run `serverless` as below:
```text theme={"system"}
>> serverless
Serverless: No project detected. Do you want to create a new one? Yes
Serverless: What do you want to make? AWS Node.js
Serverless: What do you want to call this project? histogram-api
Project successfully created in 'histogram-api' folder.
You can monitor, troubleshoot, and test your new service with a free Serverless account.
Serverless: Would you like to enable this? No
You can run the “serverless” command again if you change your mind later.
```
See [Using AWS SAM](/redis/tutorials/using_aws_sam), if you prefer AWS SAM
over Serverless Framework.
Inside the project folder create a node project with the command:
```
npm init
```
Then install the redis client and histogram library with:
```
npm install ioredis
npm install hdr-histogram-js
```
Update the `serverless.yml` as below. Copy your Redis URL from the console and
replace the placeholder below:
```yaml theme={"system"}
service: histogram-api
frameworkVersion: "2"
provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  environment:
    REDIS_URL: REPLACE_YOUR_URL_HERE
functions:
  record:
    handler: handler.record
    events:
      - httpApi:
          path: /record
          method: post
          cors: true
  get:
    handler: handler.get
    events:
      - httpApi:
          path: /get
          method: get
          cors: true
```
### `3` Code
Edit handler.js as below.
```javascript theme={"system"}
const hdr = require("hdr-histogram-js");
const Redis = require("ioredis");

if (typeof client === "undefined") {
  var client = new Redis(fixUrl(process.env.REDIS_URL));
}

const headers = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Credentials": true,
};

const SIZE = 10000;

module.exports.get = async (event) => {
  if (!event.queryStringParameters || !event.queryStringParameters.name) {
    return {
      statusCode: 400,
      headers: headers,
      body: JSON.stringify({
        message: "Invalid parameters. Name is needed.",
      }),
    };
  }
  const name = event.queryStringParameters.name;
  const data = await client.lrange(name, 0, SIZE);
  const histogram = hdr.build();
  data.forEach((item) => {
    histogram.recordValue(item);
  });
  return {
    statusCode: 200,
    body: JSON.stringify({
      histogram: histogram,
    }),
  };
};

module.exports.record = async (event) => {
  let body = JSON.parse(event.body);
  if (!body || !body.name || !body.values) {
    return {
      statusCode: 400,
      headers: headers,
      body: JSON.stringify({
        message: "Invalid parameters. Name and values are needed.",
      }),
    };
  }
  const name = body.name;
  const values = body.values;
  await client.lpush(name, values);
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: "Success",
      name: name,
    }),
  };
};

function fixUrl(url) {
  if (!url) {
    return "";
  }
  if (url.startsWith("redis://") && !url.startsWith("redis://:")) {
    return url.replace("redis://", "redis://:");
  }
  if (url.startsWith("rediss://") && !url.startsWith("rediss://:")) {
    return url.replace("rediss://", "rediss://:");
  }
  return url;
}
```
We have two serverless functions above. `get` takes `name` as a parameter and
loads a list from Redis, then builds a histogram from the values in the list.
The `record` function takes `name` and `values` as parameters and appends the
`values` to the Redis list named `name`.
The `get` function calculates the histogram over the latest 10000 latency
records. Update the SIZE parameter to change this number.
The `fixUrl` is a helper method which corrects the Redis url format.
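To see that transformation in isolation, here is the same helper with a worked example. The URL is a made-up placeholder; the point is that ioredis expects an empty username (the `:` before the password) in the connection URL:

```javascript
function fixUrl(url) {
  if (!url) {
    return "";
  }
  // Insert the missing ":" (empty username) after the scheme if needed.
  if (url.startsWith("redis://") && !url.startsWith("redis://:")) {
    return url.replace("redis://", "redis://:");
  }
  if (url.startsWith("rediss://") && !url.startsWith("rediss://:")) {
    return url.replace("rediss://", "rediss://:");
  }
  return url;
}

console.log(fixUrl("redis://mypassword@us1-example-12345.upstash.io:12345"));
// → redis://:mypassword@us1-example-12345.upstash.io:12345
```

URLs that already contain the `:` are returned unchanged.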
### `4` Deploy and Try the API
Deploy your functions with:
```bash theme={"system"}
serverless deploy
```
The command will deploy two functions and output two endpoints. Try the
endpoints with setting parameters as below:
Record latency numbers to `perf-test-1`:
```shell theme={"system"}
curl --header "Content-Type: application/json" -d "{\"name\":\"perf-test-1\", \"values\": [90,80,34,97,93,45,49,57,99,12]}" https://v7xx4aa2ib.execute-api.us-east-1.amazonaws.com/record
```
Get the histogram for `perf-test-1`:
```shell theme={"system"}
curl https://v7xx4aa2ib.execute-api.us-east-1.amazonaws.com/get?name=perf-test-1
```
### Batching
It can be costly to call a remote function for each latency measurement. In
your application, you should keep an array or queue as a buffer for the latency
numbers, then submit them in batches to the API when the buffer reaches the
batch size. Something like below:
```javascript theme={"system"}
let records = [];
let batchSize = 1000;

function recordLatency(value) {
  records.push(value);
  if (records.length >= batchSize) {
    // the below submits the records to the API then empties the records array.
    submitToAPI(records);
  }
}
```
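A slightly fuller sketch of the same buffer, with the submit call written out and the buffer reset after each flush. The endpoint URL and payload shape are assumptions modeled on the `/record` API above; adjust them for your deployment:

```javascript
// Placeholder endpoint; replace with the URL printed by `serverless deploy`.
const RECORD_ENDPOINT =
  "https://YOUR_API_ID.execute-api.us-east-1.amazonaws.com/record";
const BATCH_SIZE = 1000;

let records = [];

async function submitToAPI(values) {
  await fetch(RECORD_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "perf-test-1", values: values }),
  });
}

// Buffer a latency value; flush once the buffer reaches BATCH_SIZE.
// The submit function is injectable so the buffering logic is easy to test.
function recordLatency(value, submit = submitToAPI) {
  records.push(value);
  if (records.length >= BATCH_SIZE) {
    const batch = records;
    records = []; // empty the buffer before the async submit
    return submit(batch);
  }
}
```

Resetting the buffer before awaiting the submit avoids losing values recorded while the request is in flight.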
# Job Processing and Event Queue with Serverless Redis
Source: https://upstash.com/docs/redis/tutorials/job_processing
This tutorial shows how to use Upstash Redis for job/task processing.
### Motivation
Serverless functions are great for many tasks with their dynamic scaling and
flexible pricing models. But when you have a task composed of long running,
complex steps, it is not feasible to run it in a single serverless function. A
simple solution is to offload complicated tasks from the serverless function
and process them asynchronously in your preferred environment: other serverless
functions, serverless containers, or traditional server based processes. To
offload your tasks, you need a reliable event queue. In this article we will
use Upstash Redis for this purpose.
### Scenario
You are developing a `New Employee Registration` form for your company. Saving
employee records to the database is the easy part. Here are other possible things to do:
* Create accounts (email, slack etc).
* Send email to the employee.
* Send email to the hiring manager and others.
* Create a JIRA ticket for the IT department so they will set up the employee’s
computer.
This list can be longer for bigger companies.
* You want the form to be responsive. You do not want a new employee to wait for
minutes after clicking submit.
* The above steps are subject to change. You do not want to update your code
whenever a new procedure is added.
Decoupling the side procedures will solve the above issues. When a new employee
is registered, you can push a new event to the related task queue; then another
process will consume the task.
Let’s build the sample application:
### Project Setup
The project will consist of two modules:
* Producer will be a serverless function which will receive input parameters
required to register a new employee. It will also produce events for the task
queue.
* Consumer will be a worker application which will continuously consume the task
queue.
(See
[the source code](https://github.com/upstash/examples/tree/main/examples/task-queue))
### Tech Stack
* AWS Lambda for Serverless computing
* [Upstash](https://upstash.com) as Serverless Redis
* [Bull](https://github.com/OptimalBits/bull) as task queue implementation
* [Serverless framework](https://www.serverless.com/) for project deployment
### Upstash Database
You can create a free Redis database from [Upstash](https://docs.upstash.com/).
After creating a database, copy the endpoint, port and password, as you will
need them in the next steps.
### Producer Code
Our producer will be the serverless function which gets the request parameters
and produces the task for the queue. In the real world this code would also do
things like saving the record to a database, but I will skip that for the sake
of simplicity.
1- Create a Serverless project with the `serverless` command.
```shell theme={"system"}
➜ serverless
Serverless: No project detected. Do you want to create a new one? Yes
Serverless: What do you want to make? AWS Node.js
Serverless: What do you want to call this project? producer
Project successfully created in 'producer' folder.
You can monitor, troubleshoot, and test your new service with a free Serverless account.
Serverless: Would you like to enable this? No
You can run the “serverless” command again if you change your mind later.
```
2- Install [bull](https://github.com/OptimalBits/bull):
`npm install bull`
3- Function code:
```javascript theme={"system"}
var Queue = require("bull");

var settings = {
  stalledInterval: 300000, // How often check for stalled jobs (use 0 for never checking).
  guardInterval: 5000, // Poll interval for delayed jobs and added jobs.
  drainDelay: 300, // A timeout for when the queue is in drained state (empty waiting for jobs).
};

module.exports.hello = async (event) => {
  var taskQueue = new Queue(
    "employee registration",
    {
      redis: {
        port: 32016,
        host: "us1-upward-ant-32016.upstash.io",
        password: "ake4ff120d6b4216df220736be7eab087",
        tls: {},
      },
    },
    settings
  );
  await taskQueue.add({ event: event });
  // TODO save the employee record to a database
  return { message: "New employee event enqueued!", event };
};
```
Note 1: Do not forget to replace the Redis endpoint, port and password with your
own. Remove the TLS part if you disabled TLS.
Note 2: We pass extra parameters (settings) to the event queue (Bull) so that it
will not exhaust your Upstash quotas. Tune the interval parameters depending on
your tolerance for event latency.
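To get a feel for why these settings matter, here is a rough back-of-the-envelope estimate of idle polling traffic, assuming roughly one Redis command per poll:

```javascript
// guardInterval: 5000 means Bull polls for delayed/added jobs every 5 seconds
// per queue while idle, so a single always-on consumer generates a steady
// baseline of Redis commands even with no jobs in the queue.
const guardIntervalMs = 5000;
const pollsPerDay = (24 * 60 * 60 * 1000) / guardIntervalMs;
console.log(pollsPerDay); // 17280
```

Raising `guardInterval` lowers this baseline at the cost of higher latency before a newly added job is picked up.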
### Consumer Code
We will write a basic Node application to consume the events. Create a new
directory and run `npm init` and `npm install bull`. Then create index.js as
below:
```javascript theme={"system"}
var Queue = require("bull");

var settings = {
  stalledInterval: 300000, // How often check for stalled jobs (use 0 for never checking).
  guardInterval: 5000, // Poll interval for delayed jobs and added jobs.
  drainDelay: 300, // A timeout for when the queue is in drained state (empty waiting for jobs).
};

var taskQueue = new Queue(
  "employee registration",
  {
    redis: {
      port: 32016,
      host: "us1-upward-ant-32016.upstash.io",
      password: "ake4ff120d6b4216df220736be7eab087",
      tls: {},
    },
  },
  settings
);

taskQueue
  .process(function (job, done) {
    console.log(job.data);
    // TODO process the new employee event
    done();
  })
  .catch((err) => {
    console.log(err);
  });
```
Note 1: Do not forget to replace the Redis endpoint, port and password with your
own. Remove the TLS part if you disabled TLS.
Note 2: We pass extra parameters (settings) to the event queue (Bull) so that it
will not exhaust your Upstash quotas. Tune the interval parameters depending on
your tolerance for event latency.
### Test the Application
First, run the consumer application with `node index`.
To test the producer code, run:
```shell theme={"system"}
serverless invoke local -f hello -d "{name:'Bill Gates', email:'bill@upstash.com', position:'Developer', date:'20210620'}"
```
You will see the producer log a message that the event was enqueued, and the
consumer log the job data (the payload you passed) to the console.
# Caching in Laravel with Redis
Source: https://upstash.com/docs/redis/tutorials/laravel_caching
## Project Setup
Create a new Laravel application:
```shell theme={"system"}
laravel new todo-cache
cd todo-cache
```
## Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com). Go to the **Connect to your database** section and click on Laravel. Copy those values into your .env file:
```shell .env theme={"system"}
REDIS_HOST=""
REDIS_PORT=6379
REDIS_PASSWORD=""
```
### Cache Setup
To use Upstash Redis as your caching driver, update the CACHE\_STORE in your .env file:
```shell .env theme={"system"}
CACHE_STORE="redis"
REDIS_CACHE_DB="0"
```
## Creating a Todo App
First, we'll create a Todo model with its associated controller, factory, migration, and API resource files:
```shell theme={"system"}
php artisan make:model Todo -cfmr --api
```
Next, we'll set up the database schema for our todos table with a simple structure including an ID, title, and timestamps:
```php database/migrations/2025_02_10_111720_create_todos_table.php theme={"system"}
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    /**
     * Run the migrations.
     */
    public function up(): void
    {
        Schema::create('todos', function (Blueprint $table) {
            $table->id();
            $table->string('title');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     */
    public function down(): void
    {
        Schema::dropIfExists('todos');
    }
};
```
We'll create a factory to generate fake todo data for testing and development:
```php database/factories/TodoFactory.php theme={"system"}
<?php

namespace Database\Factories;

use Illuminate\Database\Eloquent\Factories\Factory;

/**
 * @extends \Illuminate\Database\Eloquent\Factories\Factory<\App\Models\Todo>
 */
class TodoFactory extends Factory
{
    /**
     * Define the model's default state.
     *
     * @return array<string, mixed>
     */
    public function definition(): array
    {
        return [
            'title' => $this->faker->sentence,
        ];
    }
}
```
In the database seeder, we'll set up the creation of 50 sample todo items:
```php database/seeders/DatabaseSeeder.php theme={"system"}
<?php

namespace Database\Seeders;

use App\Models\Todo;
use Illuminate\Database\Seeder;

class DatabaseSeeder extends Seeder
{
    /**
     * Seed the application's database.
     */
    public function run(): void
    {
        Todo::factory()->times(50)->create();
    }
}
```
Run the migration to create the todos table in the database:
```shell theme={"system"}
php artisan migrate
```
Seed the database with our sample todo items:
```shell theme={"system"}
php artisan db:seed
```
Install the API package:
```shell theme={"system"}
php artisan install:api
```
Register the API routes for our Todo resource in `routes/api.php` (a single
`Route::apiResource('todos', TodoController::class);` line), and make the
`title` attribute fillable in the Todo model:
```php app/Models/Todo.php theme={"system"}
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Todo extends Model
{
    use HasFactory;

    protected $fillable = ['title'];
}
```
Next, we'll update the methods in the TodoController to use caching:
```php app/Http/Controllers/TodoController.php theme={"system"}
<?php

namespace App\Http\Controllers;

use App\Models\Todo;
use Illuminate\Http\JsonResponse;
use Illuminate\Http\Request;
use Illuminate\Http\Response;
use Illuminate\Support\Facades\Cache;

class TodoController extends Controller
{
    private const CACHE_KEY = 'todos';

    // Fresh and stale TTLs (in seconds) used by Cache::flexible
    private const CACHE_TTL = [30, 60];

    /**
     * Display a listing of the resource.
     */
    public function index()
    {
        return Cache::flexible(self::CACHE_KEY, self::CACHE_TTL, function () {
            return Todo::all();
        });
    }

    /**
     * Store a newly created resource in storage.
     */
    public function store(Request $request): JsonResponse
    {
        $request->validate([
            'title' => 'required|string|max:255',
        ]);
        $todo = Todo::create($request->all());
        // Invalidate the todos cache
        Cache::forget(self::CACHE_KEY);
        return response()->json($todo, Response::HTTP_CREATED);
    }

    /**
     * Display the specified resource.
     */
    public function show(Todo $todo): Todo
    {
        return Cache::flexible(
            "todo.{$todo->id}",
            self::CACHE_TTL,
            function () use ($todo) {
                return $todo;
            }
        );
    }

    /**
     * Update the specified resource in storage.
     */
    public function update(Request $request, Todo $todo): JsonResponse
    {
        $request->validate([
            'title' => 'required|string|max:255',
        ]);
        $todo->update($request->all());
        // Invalidate both the collection and individual todo cache
        Cache::forget(self::CACHE_KEY);
        Cache::forget("todo.{$todo->id}");
        return response()->json($todo);
    }

    /**
     * Remove the specified resource from storage.
     */
    public function destroy(Todo $todo): JsonResponse
    {
        $todo->delete();
        // Invalidate both the collection and individual todo cache
        Cache::forget(self::CACHE_KEY);
        Cache::forget("todo.{$todo->id}");
        return response()->json(null, Response::HTTP_NO_CONTENT);
    }
}
```
Now we can test our methods with the following curl commands:
```shell theme={"system"}
# Get all todos
curl http://todo-cache.test/api/todos
# Get a specific todo
curl http://todo-cache.test/api/todos/1
# Create a new todo
curl -X POST http://todo-cache.test/api/todos \
-H "Content-Type: application/json" \
-d '{"title":"New Todo"}'
# Update a todo
curl -X PUT http://todo-cache.test/api/todos/1 \
-H "Content-Type: application/json" \
-d '{"title":"Updated Todo"}'
# Delete a todo
curl -X DELETE http://todo-cache.test/api/todos/1
```
Visit Redis Data Browser in Upstash Console to see the cached data.
# Next.js with Redis
Source: https://upstash.com/docs/redis/tutorials/nextjs_with_redis
You can find the project source code on GitHub.
This tutorial uses Next.js App Router. If you want to use Pages Router, check out our [Pages Router tutorial](/redis/quickstarts/nextjs-pages-router).
This tutorial uses Redis as a state store for a Next.js application. We simply
add a counter that pulls its data from Redis.
### Project Setup
Let's create a new Next.js application with App Router and install `@upstash/redis` package.
```shell theme={"system"}
npx create-next-app@latest
cd my-app
npm install @upstash/redis
```
### Database Setup
Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
```shell .env theme={"system"}
UPSTASH_REDIS_REST_URL=
UPSTASH_REDIS_REST_TOKEN=
```
If you are using the Vercel & Upstash integration, you may use the following environment variables:
```shell .env theme={"system"}
KV_REST_API_URL=
KV_REST_API_TOKEN=
```
### Home Page Setup
Update `/app/page.tsx`:
```tsx /app/page.tsx theme={"system"}
import { Redis } from "@upstash/redis";

const redis = Redis.fromEnv();

export default async function Home() {
  const count = await redis.incr("counter");
  return <h1>Counter: {count}</h1>;
}
```
### Run & Deploy
Run the app locally with `npm run dev`, check `http://localhost:3000/`
Deploy your app with `vercel`
You can also integrate your Vercel projects with Upstash using Vercel
Integration module. Check [this article](../howto/vercelintegration).
# Building a Serverless Notification API for Your Web Application with Redis
Source: https://upstash.com/docs/redis/tutorials/notification
This tutorial shows how to create a Serverless Notification API for Your Web Application with Redis.
Notifications and announcements help you communicate with your website
visitors. It is not feasible to update your code and redeploy your website each
time you want to show a new message. It may also be too much investment to set
up and maintain a backend just to serve these notifications. In this article,
we will build a website which loads the notification message directly from the
Redis database, without a backend.
### Backendless? How is that possible?
Yes, we will not use any backend service, even a serverless function. We will
access Redis from the client side directly. This is possible with the read only
REST API provided by Upstash.
### Requirements
* The page will display a notification if the user has not already seen the
notification before.
* The page will only show the latest notification.
Check out
[the code here](https://github.com/upstash/examples/tree/master/examples/serverless-notification-api).
### Project Setup
I will create a React application but you can use any other web framework. It
will simply call the Redis REST API and show the message as a notification.
Create the app:
```shell theme={"system"}
npx create-react-app serverless-notification-api
```
Install a toast component to show the notification:
```shell theme={"system"}
npm install --save react-toastify
```
Create a free database from [Upstash](https://console.upstash.com/) and copy
the REST URL and the read-only token. To enable the read-only token, go to the
database details page and turn on the `Read-Only Token` switch.
### Implementation
The logic is simple. We will keep the notifications in a Redis Sorted Set. We
will keep a version (integer) in the local storage. We will use the versions as
scores in the sorted set. Each notification message will have a version (score)
and the higher score means the newer message. At each page load, we will query
the Redis sorted set to load the messages which have higher scores than the
locally stored version. After loading a notification message, I will set my
local version equal to the latest notification’s version. This will prevent
showing the same notification to the same user more than once. Here is the
implementation:
```javascript theme={"system"}
import logo from "./logo.svg";
import "./App.css";
import { toast, ToastContainer } from "react-toastify";
import "react-toastify/dist/ReactToastify.css";
import { useEffect } from "react";

function App() {
  useEffect(() => {
    async function fetchData() {
      try {
        let version = localStorage.getItem("notification-version");
        version = version ? version : 0;
        const response = await fetch(
          "REPLACE_UPSTASH_REDIS_REST_URL/zrevrangebyscore/messages/+inf/" +
            version +
            "/WITHSCORES/LIMIT/0/1",
          {
            headers: {
              Authorization: "Bearer REPLACE_UPSTASH_REDIS_REST_TOKEN",
            },
          }
        );
        const res = await response.json();
        const v = parseInt(res.result[1]);
        if (v) {
          localStorage.setItem("notification-version", v + 1);
        }
        toast(res.result[0]);
      } catch (e) {
        console.error(e);
      }
    }
    fetchData();
  });
  return (
    <div className="App">
      <ToastContainer />
    </div>
  );
}

export default App;
```
### How to Add New Notification Messages
You can simply add new messages to the Redis sorted set with the highest score,
so it will be displayed to users on their next page load. For our application,
the name of the sorted set is `messages`.
You can also remove a message using the [ZREM](https://redis.io/commands/zrem)
command.
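As a sketch, publishing a message can also be scripted against the REST API. The helper names below are hypothetical, and the URL and token are placeholders. Note that `ZADD` is a write command, so this requires the full-access REST token and should run server side; the read-only token embedded in the page cannot write:

```javascript
// Build the REST path for ZADD key score member, URL-encoding the member.
function zaddPath(baseUrl, key, score, member) {
  return `${baseUrl}/zadd/${key}/${score}/${encodeURIComponent(member)}`;
}

// Publish a new notification with a score (version) higher than any existing one.
async function publishNotification(message, version) {
  const url = zaddPath(
    "REPLACE_UPSTASH_REDIS_REST_URL",
    "messages",
    version,
    message
  );
  const response = await fetch(url, {
    headers: { Authorization: "Bearer REPLACE_UPSTASH_REDIS_REST_TOKEN" },
  });
  return response.json();
}
```

For example, `publishNotification("Maintenance on Friday", 42)` would show the message to every visitor whose locally stored version is 42 or lower.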
### Conclusion
You do not need a backend to access Upstash Redis thanks to the REST API. You
can expose the token in your client side application, as the token only allows
read-only access. This helps developers build applications without a backend
for many use cases where the data is already publicly available.
# Nuxt with Redis
Source: https://upstash.com/docs/redis/tutorials/nuxtjs_with_redis
This tutorial shows how to use Upstash inside your Nuxt application.
This tutorial uses Redis as state store for a Nuxt application. In it, we will build an application
which simply increments a counter and saves & fetches the last increment time.
See [code](https://github.com/upstash/examples/tree/master/examples/nuxt-with-redis) and
[demo](https://nuxt-with-redis.vercel.app)
### `1` Create Nuxt.js Project
Run this in terminal
```bash theme={"system"}
npx nuxi@latest init nuxtjs-with-redis
```
Go to the new directory `nuxtjs-with-redis` and install `@upstash/redis`:
```
npm install @upstash/redis
```
### `2` Create an Upstash Redis database
Next, you will need an Upstash Redis database. You can follow
[our guide for creating a new database](/redis/overall/getstarted).
### `3` Set up environment variables
Copy the `.env.example` file in this directory to `.env`
```bash theme={"system"}
cp .env.example .env
```
Then, set the following environment variables:
```shell .env theme={"system"}
UPSTASH_REDIS_REST_URL=""
UPSTASH_REDIS_REST_TOKEN=""
```
You can find the values of these environment variables on your Redis database's page in the [Upstash Console](https://console.upstash.com).
If you are using the Vercel & Upstash integration, you may use the following environment variables:
```shell .env theme={"system"}
KV_REST_API_URL=
KV_REST_API_TOKEN=
```
### `4` Define the endpoint
Next, we will define the endpoint which will call Redis:
```javascript title="server/api/increment.ts" theme={"system"}
import { defineEventHandler } from "h3";
import { Redis } from "@upstash/redis";
// Initialize Redis
const redis = new Redis({
url: process.env.UPSTASH_REDIS_REST_URL || "",
token: process.env.UPSTASH_REDIS_REST_TOKEN || ""
});
export default defineEventHandler(async () => {
const identifier = "api_call_counter";
try {
// Increment the API call counter and get the updated value
const count = await redis.incr(identifier);
// Optionally, you can also retrieve other information like the last time it was called
const lastCalled = await redis.get("last_called");
const lastCalledAt = lastCalled || "Never";
// Store the current timestamp as the last called time
await redis.set("last_called", new Date().toISOString());
// Return the count and last called time
return {
success: true,
count: count,
lastCalled: lastCalledAt,
};
} catch (error) {
console.error("Redis error:", error);
return {
success: false,
message: "Error interacting with Redis",
};
}
});
```
### `5` Run
Finally, we can run the application and call our endpoint:
```bash theme={"system"}
npm run dev
```
If you are using [our example app](https://github.com/upstash/examples/tree/master/examples/nuxt-with-redis),
you can simply click the `Increment` button to call the endpoint we defined.
Otherwise, make a curl request:
```bash theme={"system"}
curl http://localhost:3000/api/increment
```
When you make the request, you should see something like this:
```json theme={"system"}
{
"success": true,
"count": 166,
"lastCalled": "2024-10-10T07:04:42.381Z"
}
```
### Notes
* For best performance, the application should run in the same region as the
  Redis database.
# Redis as a Cache for Your FastAPI App
Source: https://upstash.com/docs/redis/tutorials/python_fastapi_caching
### Introduction
In this tutorial, we’ll learn how to use Redis to add caching to a FastAPI application. By caching API responses in Redis, we can reduce database queries, improve response times, and ensure that frequently requested data is delivered quickly.
We’ll create a simple FastAPI app that fetches weather data from an external API. The app will store the results in Redis, so the next time someone requests the same data, it can be returned from the cache instead of making a new API request. Let’s get started!
### Environment Setup
First, install FastAPI, the Upstash Redis client, and an ASGI server:
```shell theme={"system"}
pip install fastapi upstash-redis uvicorn[standard]
```
### Database Setup
Create a Redis database using the [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli), and export the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your environment:
```shell theme={"system"}
export UPSTASH_REDIS_REST_URL=
export UPSTASH_REDIS_REST_TOKEN=
```
You'll also need a free `WEATHER_API_KEY` from the [Weather API website](https://www.weatherapi.com), which we'll export as well.
```shell theme={"system"}
export WEATHER_API_KEY=
```
You can also use `python-dotenv` to load environment variables from your `.env` file.
### Application Setup
In this example, we will build an API that fetches weather data and caches it in Redis.
Create `main.py`:
```python main.py theme={"system"}
from fastapi import FastAPI
from upstash_redis import Redis
import requests
import json
import os

app = FastAPI()

# Connect to Redis using environment variables
redis = Redis.from_env()

# External API endpoint for weather data
WEATHER_API_URL = "https://api.weatherapi.com/v1/current.json"
API_KEY = os.getenv("WEATHER_API_KEY")

@app.get("/weather/{city}")
def get_weather(city: str):
    cache_key = f"weather:{city}"

    # Check if the data exists in cache
    cached_data = redis.get(cache_key)
    if cached_data:
        return {"source": "cache", "data": json.loads(cached_data)}

    # Fetch data from external API
    response = requests.get(f"{WEATHER_API_URL}?key={API_KEY}&q={city}")
    weather_data = response.json()

    # Store the data in Redis cache with a 10-minute expiration,
    # serialized to JSON since Redis stores string values
    redis.setex(cache_key, 600, json.dumps(weather_data))

    return {"source": "api", "data": weather_data}
```
### Running the Application
Run the FastAPI app with Uvicorn:
```shell theme={"system"}
uvicorn main:app --reload
```
To test the application, visit `http://127.0.0.1:8000/weather/istanbul` in your browser or use curl to get the weather data for Istanbul. The first request fetches the data from the weather API and caches it; subsequent requests return the cached data until the cache expires after 10 minutes.
To monitor your data in Redis, you can use the [Upstash Console](https://console.upstash.com) and check out the Data Browser tab.
### Code Breakdown
1. **Redis Setup**: We use `Redis.from_env()` to initialize the Redis connection using the environment variables. Redis will store the weather data with city names as cache keys.
2. **Cache Lookup**: When a request is made to the `/weather/{city}` endpoint, we check if the weather data is already cached by looking up the `weather:{city}` key in Redis. If the data is found in cache, it's returned immediately.
3. **Fetching External Data**: If the data is not in cache, the app sends a request to the external weather API to fetch the latest data. The response is then cached using `redis.setex()`, which stores the data with a 10-minute expiration.
4. **Cache Expiration**: We use a 10-minute TTL (time-to-live) for the cached weather data to ensure it's periodically refreshed. After the TTL expires, the next request will fetch fresh data from the external API and store it in cache again.
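The cache-aside flow in steps 2–4 can be sketched independently of Redis. In this minimal model, a plain dict with timestamps stands in for `redis.setex`, and `fetch_weather` is a hypothetical stand-in for the external API call:

```python
import time

CACHE_TTL_SECONDS = 600  # same 10-minute TTL used in the app above
cache = {}  # key -> (expires_at, value); stands in for Redis

def fetch_weather(city):
    """Hypothetical stand-in for the external weather API call."""
    return {"city": city, "temp_c": 21}

def get_weather_cached(city, now=None):
    now = time.time() if now is None else now
    key = f"weather:{city}"
    entry = cache.get(key)
    if entry and entry[0] > now:          # cache hit, still fresh
        return {"source": "cache", "data": entry[1]}
    data = fetch_weather(city)            # fetch from the external API
    cache[key] = (now + CACHE_TTL_SECONDS, data)  # setex equivalent
    return {"source": "api", "data": data}

print(get_weather_cached("istanbul")["source"])   # api
print(get_weather_cached("istanbul")["source"])   # cache
# After the TTL, the entry is treated as expired and re-fetched:
print(get_weather_cached("istanbul", now=time.time() + 601)["source"])  # api
```

The same hit-miss-store shape is what the FastAPI endpoint above implements, with Redis providing the shared, automatically expiring store.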
# Multithreaded Web Scraping with Redis Caching
Source: https://upstash.com/docs/redis/tutorials/python_multithreading
In this tutorial, we’ll build a multithreaded web scraper in Python that leverages Redis for caching responses to minimize redundant HTTP requests. The scraper will be capable of handling groups of URLs across multiple threads while caching responses to reduce load and improve performance.
### Database Setup
Create a Redis database using the [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli), and add `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your `.env` file:
```bash theme={"system"}
UPSTASH_REDIS_REST_URL=your_upstash_redis_url
UPSTASH_REDIS_REST_TOKEN=your_upstash_redis_token
```
This file will be used to load environment variables.
### Installation
First, install the necessary libraries using the following command (`threading` is part of the Python standard library, so it needs no installation):
```bash theme={"system"}
pip install requests upstash-redis python-dotenv
```
### Code Explanation
We’ll create a multithreaded web scraper that performs HTTP requests on a set of grouped URLs. Each thread will check if the response for a URL is cached in Redis. If the URL has been previously requested, it will retrieve the cached response; otherwise, it will perform a fresh HTTP request, cache the result, and store it for future requests.
### Code
Here’s the complete code:
```python theme={"system"}
import threading
import requests
from upstash_redis import Redis
from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()
# Initialize Redis client
redis = Redis.from_env()
# Group URLs by thread, with one or two overlapping URLs across groups
urls_to_scrape_groups = [
[
'https://httpbin.org/delay/1',
'https://httpbin.org/delay/4',
'https://httpbin.org/delay/2',
'https://httpbin.org/delay/5',
'https://httpbin.org/delay/3',
],
[
'https://httpbin.org/delay/5', # Overlapping URL
'https://httpbin.org/delay/6',
'https://httpbin.org/delay/7',
'https://httpbin.org/delay/2', # Overlapping URL
'https://httpbin.org/delay/8',
],
[
'https://httpbin.org/delay/3', # Overlapping URL
'https://httpbin.org/delay/9',
'https://httpbin.org/delay/10',
'https://httpbin.org/delay/4', # Overlapping URL
'https://httpbin.org/delay/11',
],
]
class Scraper(threading.Thread):
def __init__(self, urls):
threading.Thread.__init__(self)
self.urls = urls
self.results = {}
def run(self):
for url in self.urls:
cache_key = f"url:{url}"
# Attempt to retrieve cached response
cached_response = redis.get(cache_key)
if cached_response:
print(f"[CACHE HIT] {self.name} - URL: {url}")
self.results[url] = cached_response
continue # Skip to the next URL if cache is found
# If no cache, perform the HTTP request
print(f"[FETCHING] {self.name} - URL: {url}")
response = requests.get(url)
if response.status_code == 200:
self.results[url] = response.text
# Store the response in Redis cache
redis.set(cache_key, response.text)
else:
print(f"[ERROR] {self.name} - Failed to retrieve {url}")
self.results[url] = None
def main():
threads = []
for urls in urls_to_scrape_groups:
scraper = Scraper(urls)
threads.append(scraper)
scraper.start()
# Wait for all threads to complete
for scraper in threads:
scraper.join()
print("\nScraping results:")
for scraper in threads:
for url, result in scraper.results.items():
print(f"Thread {scraper.name} - URL: {url} - Response Length: {len(result) if result else 'Failed'}")
if __name__ == "__main__":
main()
```
### Explanation
1. **Threaded Scraper Class**: The `Scraper` class is a subclass of `threading.Thread`. Each thread takes a list of URLs and iterates over them to retrieve or fetch their responses.
2. **Redis Caching**:
* Before making an HTTP request, the scraper checks if the response is already in the Redis cache.
* If a cached response is found, it uses that response instead of making a new request, marked with `[CACHE HIT]` in the logs.
* If no cached response exists, it fetches the content from the URL, caches the result in Redis, and proceeds.
3. **Overlapping URLs**:
* Some URLs are intentionally included in multiple groups to demonstrate the cache functionality across threads. Once a URL’s response is cached by one thread, another thread retrieving the same URL will pull it from the cache instead of re-fetching.
4. **Main Function**:
* The `main` function initiates and starts multiple `Scraper` threads, each handling a group of URLs.
* It waits for all threads to complete before printing the results.
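The effect of a shared cache on overlapping URLs can be demonstrated without any network access. In this sketch, a lock-protected dict plays the role of Redis and `fake_fetch` is a hypothetical stand-in for `requests.get`:

```python
import threading

cache = {}                 # stands in for Redis
cache_lock = threading.Lock()
fetch_count = 0            # how many "real" fetches actually happened

def fake_fetch(url):
    """Hypothetical stand-in for requests.get(url).text."""
    global fetch_count
    fetch_count += 1
    return f"body of {url}"

def get_cached(url):
    # Check-then-set under one lock so two threads never fetch the same URL.
    with cache_lock:
        if url in cache:
            return cache[url]
        body = fake_fetch(url)
        cache[url] = body
        return body

groups = [
    ["https://httpbin.org/delay/1", "https://httpbin.org/delay/2"],
    ["https://httpbin.org/delay/2", "https://httpbin.org/delay/3"],  # /delay/2 overlaps
]
threads = [threading.Thread(target=lambda g=g: [get_cached(u) for u in g]) for g in groups]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 4 URLs requested in total, but only 3 distinct, so only 3 real fetches:
print(fetch_count)  # 3
```

Note one simplification: here the lookup and store happen atomically under a lock. With the real Redis cache above, a thread may occasionally re-fetch a URL that another thread is still downloading, which is harmless but slightly less efficient.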
### Running the Code
Once everything is set up, run the script using:
```bash theme={"system"}
python your_script_name.py
```
### Sample Output
You will see output similar to this:
```
[FETCHING] Thread-1 - URL: https://httpbin.org/delay/1
[FETCHING] Thread-1 - URL: https://httpbin.org/delay/4
[CACHE HIT] Thread-2 - URL: https://httpbin.org/delay/5
[FETCHING] Thread-3 - URL: https://httpbin.org/delay/3
...
```
### Benefits of Using Redis Cache
Using Redis as a cache reduces the number of duplicate requests, particularly for overlapping URLs. It allows for quick retrieval of previously fetched responses, enhancing performance and reducing load.
# Rate Limiting for Your FastAPI App
Source: https://upstash.com/docs/redis/tutorials/python_rate_limiting
### Introduction
In this tutorial, we’ll learn how to add rate limiting to a FastAPI application using Upstash Redis. Rate limiting is essential for controlling API usage, and with Upstash Redis you can easily implement it to protect your API resources.
We’ll set up a simple FastAPI app and apply rate limiting to its endpoints. With Upstash Redis, we’ll configure a fixed window rate limiter that allows a specific number of requests per given time period.
### Environment Setup
First, install FastAPI, the Upstash Redis client, the Upstash rate limiting package, and an ASGI server:
```shell theme={"system"}
pip install fastapi upstash-redis upstash-ratelimit uvicorn[standard]
```
### Database Setup
Create a Redis database using the [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli), and export the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your environment:
```shell theme={"system"}
export UPSTASH_REDIS_REST_URL=
export UPSTASH_REDIS_REST_TOKEN=
```
You can also use `python-dotenv` to load environment variables from your `.env` file.
### Application Setup
In this example, we will build an API endpoint that is rate-limited to a certain number of requests per time window. If the limit is exceeded (e.g., by making more than 10 requests in 10 seconds), the API will return an HTTP 429 error with the message "Rate limit exceeded. Please try again later."
Create `main.py`:
```python main.py theme={"system"}
from fastapi import FastAPI, HTTPException
from upstash_ratelimit import Ratelimit, FixedWindow
from upstash_redis import Redis
from dotenv import load_dotenv
import requests
# Load environment variables from .env file
load_dotenv()
# Initialize the FastAPI app
app = FastAPI()
# Initialize Redis client
redis = Redis.from_env()
# Create a rate limiter that allows 10 requests per 10 seconds
ratelimit = Ratelimit(
redis=redis,
limiter=FixedWindow(max_requests=10, window=10), # 10 requests per 10 seconds
prefix="@upstash/ratelimit"
)
@app.get("/expensive_calculation")
def expensive_calculation():
identifier = "api" # Common identifier for rate limiting all users equally
response = ratelimit.limit(identifier)
if not response.allowed:
raise HTTPException(status_code=429, detail="Rate limit exceeded. Please try again later.")
# Placeholder for a resource-intensive operation
result = do_expensive_calculation()
return {"message": "Here is your result", "result": result}
# Simulated function for an expensive calculation
def do_expensive_calculation():
return "Expensive calculation result"
# Test function to check rate limiting
def test_rate_limiting():
url = "http://127.0.0.1:8000/expensive_calculation"
success_count = 0
fail_count = 0
# Attempt 15 requests in quick succession
for i in range(15):
response = requests.get(url)
if response.status_code == 200:
success_count += 1
print(f"Request {i+1}: Success - {response.json()['message']}")
        elif response.status_code == 429:
            fail_count += 1
            print(f"Request {i+1}: Failed - Rate limit exceeded")
print("\nTest Summary:")
print(f"Total Successful Requests: {success_count}")
print(f"Total Failed Requests due to Rate Limit: {fail_count}")
if __name__ == "__main__":
# Run the FastAPI app in a separate thread or terminal with:
# uvicorn main:app --reload
# To test rate limiting after the server is running
test_rate_limiting()
```
### Running the Application
Run the FastAPI app with Uvicorn:
```shell theme={"system"}
uvicorn main:app --reload
```
Run the test function to check the rate limiting:
```shell theme={"system"}
python main.py
```
### Testing Rate Limiting
Here's the output you should see when running the test function:
```
Request 1: Success - Here is your result
Request 2: Success - Here is your result
Request 3: Success - Here is your result
Request 4: Success - Here is your result
Request 5: Success - Here is your result
Request 6: Success - Here is your result
Request 7: Success - Here is your result
Request 8: Success - Here is your result
Request 9: Success - Here is your result
Request 10: Success - Here is your result
Request 11: Failed - Rate limit exceeded
Request 12: Failed - Rate limit exceeded
Request 13: Failed - Rate limit exceeded
Request 14: Failed - Rate limit exceeded
Request 15: Failed - Rate limit exceeded
Test Summary:
Total Successful Requests: 10
Total Failed Requests due to Rate Limit: 5
```
### Code Breakdown
1. **Redis and Rate Limiter Setup**:
* We initialize a `Redis` client with `Redis.from_env()` using environment variables for configuration.
* We create a rate limiter using `Ratelimit` with a `FixedWindow` limiter that allows 10 requests per 10 seconds. The `prefix` option is set to organize the Redis keys used by the rate limiter.
2. **Rate Limiting the Endpoint**:
* For the `/expensive_calculation` endpoint, the rate limiter is applied by calling `ratelimit.limit(identifier)`.
* The `identifier` variable uniquely identifies this rate limit. You could use user-specific identifiers (like user IDs) to implement per-user limits.
* If the request exceeds the allowed limit, an HTTP 429 error is returned.
3. **Expensive Calculation Simulation**:
* The `do_expensive_calculation` function simulates a resource-intensive operation. In real scenarios, this could represent database queries, file processing, or other time-consuming tasks.
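The fixed-window algorithm behind `FixedWindow(max_requests=10, window=10)` can be sketched in a few lines. This is a simplified local model of the idea, not the `upstash-ratelimit` implementation (which keeps its counters in Redis so all app instances share them):

```python
import time

class FixedWindowLimiter:
    """Simplified local model of a fixed-window rate limiter: count requests
    per (identifier, window index) and allow at most max_requests per window."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.counters = {}  # (identifier, window_index) -> count

    def limit(self, identifier, now=None):
        now = time.time() if now is None else now
        # Time is divided into consecutive windows of fixed length;
        # all requests in the same window share one counter.
        window_index = int(now // self.window_seconds)
        key = (identifier, window_index)
        self.counters[key] = self.counters.get(key, 0) + 1
        return self.counters[key] <= self.max_requests

limiter = FixedWindowLimiter(max_requests=10, window_seconds=10)
results = [limiter.limit("api", now=1000.0) for _ in range(15)]
print(results.count(True), results.count(False))   # 10 5
# A new window starts a fresh counter:
print(limiter.limit("api", now=1010.0))            # True
```

This reproduces the 10-success/5-failure pattern from the test output above: the first 10 requests in a window pass, the rest are rejected until the next window begins.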
### Benefits of Rate Limiting with Redis
Using Redis for rate limiting helps control API usage across multiple instances of your app, making it highly scalable. Redis’s in-memory storage provides fast access to rate-limiting data, ensuring minimal performance impact on your API.
# Build a Real-Time Chat Application with Serverless Redis
Source: https://upstash.com/docs/redis/tutorials/python_realtime_chat
In this tutorial, we will build a real-time chat application using Flask and SocketIO, leveraging Upstash Redis for efficient message handling. Redis, being a fast, in-memory data store, provides an ideal backbone for real-time messaging systems due to its low latency and support for Pub/Sub messaging patterns.
## Why Upstash Redis?
* **Scalability:** Handles large volumes of messages with minimal latency.
* **Simplicity:** Easy to set up with minimal configuration.
* **Cost-Efficiency:** Serverless model reduces operational costs.
***
## **Setup**
### **1. Install the Required Libraries**
Install Flask, Flask-SocketIO, and the Redis library by running:
```bash theme={"system"}
pip install flask flask-socketio redis
```
### **2. Create a Redis Database**
Create a Redis database using the [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli).
Create a `.env` file in the root of your project with the following content:
```bash theme={"system"}
UPSTASH_REDIS_HOST=your_upstash_redis_host
UPSTASH_REDIS_PORT=your_upstash_redis_port
UPSTASH_REDIS_PASSWORD=your_upstash_redis_password
```
## **Code**
Now, it's time to implement the chat application. We'll create a Flask server that uses SocketIO for real-time communication. We'll also configure the server to use Upstash Redis as the message queue.
We need to use the `rediss://` protocol instead of `redis://` to connect to Redis over TLS. This ensures secure communication between the server and the Redis instance.
```python main.py theme={"system"}
from flask import Flask, render_template
from flask_socketio import SocketIO
import os
# Initialize Flask app
app = Flask(__name__)
app.config["SECRET_KEY"] = os.getenv("SECRET_KEY", os.urandom(24))
# Set up Redis URL with TLS
redis_password = os.getenv('UPSTASH_REDIS_PASSWORD')
redis_host = os.getenv('UPSTASH_REDIS_HOST')
redis_port = int(os.getenv('UPSTASH_REDIS_PORT', 6379))
redis_url = f"rediss://:{redis_password}@{redis_host}:{redis_port}"
# Initialize SocketIO with Redis message queue
socketio = SocketIO(app, message_queue=redis_url, cors_allowed_origins="*")
# WebSocket handlers
@socketio.on("connect")
def handle_connect():
print("Client connected.")
@socketio.on("disconnect")
def handle_disconnect():
print("Client disconnected.")
@socketio.on("message")
def handle_message(data):
"""Handle incoming chat messages."""
print(f"Message received: {data}")
# Broadcast the message to all connected clients except the sender
socketio.emit("message", data, include_self=False)
# Serve the chat HTML page
@app.route("/")
def index():
return render_template("chat.html") # Render the chat interface template
if __name__ == "__main__":
socketio.run(app, debug=True, host="0.0.0.0", port=8000)
```
### **Code Explanation**
* We initialized a Flask app and set a secret key for session management.
* We set up the Redis URL with TLS for secure communication.
* We initialize a SocketIO instance with the Flask app and configure it to use Redis as the message queue.
* We define WebSocket event handlers for `connect`, `disconnect`, and `message` events.
* The `handle_message` function broadcasts the received message to all connected clients except the sender.
* We define a route to serve the chat interface template.
Now let's create a template for the chat interface. The version below is a minimal sketch (the original styling is omitted), since the focus is on the real-time messaging functionality.
```html chat.html theme={"system"}
<!DOCTYPE html>
<html>
<head>
  <title>Real-Time Chat</title>
  <script src="https://cdn.socket.io/4.7.2/socket.io.min.js"></script>
</head>
<body>
  <ul id="messages"></ul>
  <input id="input" autocomplete="off" placeholder="Type a message..." />
  <button id="send">Send</button>
  <script>
    const socket = io();
    const addMessage = (text) => {
      const li = document.createElement("li");
      li.textContent = text;
      document.getElementById("messages").appendChild(li);
    };
    document.getElementById("send").onclick = () => {
      const input = document.getElementById("input");
      if (!input.value) return;
      socket.emit("message", input.value); // server broadcasts to other clients
      addMessage(input.value);             // show our own message locally
      input.value = "";
    };
    socket.on("message", addMessage);      // messages from other clients
  </script>
</body>
</html>
```
***
### **Running the Application**
1. Start the server:
```bash theme={"system"}
python main.py
```
2. Open your web browser and go to `http://localhost:8000/`.
You should see the chat interface. You can send and receive messages in real time. Just open the same URL in multiple tabs or browsers to simulate multiple users chatting with each other.
***
## **Conclusion**
In this tutorial, we built a real-time chat application using Flask, SocketIO, and Upstash Redis. Redis, with its low latency and high throughput, is an ideal choice for real-time messaging systems.
To learn more about Upstash Redis, visit the [Upstash Redis Documentation](/redis).
# Manage Sessions in Python with Serverless Redis
Source: https://upstash.com/docs/redis/tutorials/python_session
In this tutorial, we’ll see how to implement session management in a FastAPI application using Upstash Redis. We’ll use cookies to store session IDs, while session data is maintained in Redis for its speed and expiration features.
## **What Are Sessions and Cookies?**
* **Session:** A session is a mechanism to store user-specific data (like authentication status) between requests. It allows the server to "remember" users as they interact with the application.
* **Cookie:** A small piece of data stored in the client’s browser. In this tutorial, we’ll use cookies to store session IDs, which the server uses to fetch session details from Redis.
## **Why Redis?**
Redis is a great choice for session management because:
1. **Fast Lookups:** Redis is an in-memory database, ensuring near-instantaneous access to session data.
2. **Expiration Control:** Built-in expiration functionality allows sessions to automatically expire after a defined timeout.
***
## **Setup**
### **1. Install the Required Libraries**
Install FastAPI, Upstash Redis, and other necessary dependencies:
```bash theme={"system"}
pip install fastapi upstash-redis uvicorn python-dotenv
```
### **2. Create a Redis Database**
Create a Redis database using the [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli).
Create a `.env` file in the root of your project with the following content:
```bash theme={"system"}
UPSTASH_REDIS_REST_URL=your_upstash_redis_url
UPSTASH_REDIS_REST_TOKEN=your_upstash_redis_token
```
## **Code**
Let's implement a simple FastAPI application that handles login, profile access, and logout using Redis for session management. We use sliding expiration by updating the session expiration time on every request. If a session is inactive for 15 minutes (900 seconds), it will automatically expire.
```python main.py theme={"system"}
from fastapi import FastAPI, Response, Cookie, HTTPException
from pydantic import BaseModel
from upstash_redis import Redis
from dotenv import load_dotenv
import uuid
# Load environment variables
load_dotenv()
redis = Redis.from_env()
app = FastAPI()
SESSION_TIMEOUT_SECONDS = 900 # 15 minutes
# Define the request body model for login
class LoginRequest(BaseModel):
username: str
@app.post("/login/")
async def login(request: LoginRequest, response: Response):
session_id = str(uuid.uuid4())
redis.hset(f"session:{session_id}", values={"user": request.username, "status": "active"})
redis.expire(f"session:{session_id}", SESSION_TIMEOUT_SECONDS)
response.set_cookie(key="session_id", value=session_id, httponly=True)
return {"message": "Logged in successfully", "session_id": session_id}
@app.get("/profile/")
async def get_profile(session_id: str = Cookie(None)):
if not session_id:
raise HTTPException(status_code=403, detail="No session cookie found")
session_data = redis.hgetall(f"session:{session_id}")
if not session_data:
response = Response()
response.delete_cookie(key="session_id") # Clear the expired cookie
raise HTTPException(status_code=404, detail="Session expired")
# Update the session expiration time (sliding expiration)
redis.expire(f"session:{session_id}", SESSION_TIMEOUT_SECONDS)
return {"session_id": session_id, "session_data": session_data}
@app.post("/logout/")
async def logout(response: Response, session_id: str = Cookie(None)):
if session_id:
redis.delete(f"session:{session_id}")
response.delete_cookie(key="session_id")
return {"message": "Logged out successfully"}
```
Let's test the implementation using the following script:
```python test_script.py theme={"system"}
import requests
base_url = "http://127.0.0.1:8000"
# Test login
response = requests.post(f"{base_url}/login/", json={"username": "abdullah"})
print("Login Response:", response.json())
# In the browser, you don't need to set cookies manually. The browser will handle it automatically.
session_cookie = response.cookies.get("session_id")
# Test profile
profile_response = requests.get(f"{base_url}/profile/", cookies={"session_id": session_cookie})
print("Access Profile Response:", profile_response.json())
# Test logout
logout_response = requests.post(f"{base_url}/logout/", cookies={"session_id": session_cookie})
print("Logout Response:", logout_response.json())
# Test profile after logout
profile_after_logout_response = requests.get(f"{base_url}/profile/", cookies={"session_id": session_cookie})
print("Access Profile After Logout Response:", profile_after_logout_response.text)
```
***
### **Code Explanation**
1. **`/login/` Endpoint:**
* Generates a unique session ID using `uuid.uuid4()`.
* Stores the session data in Redis using the session ID as the key.
* Sets a cookie named `session_id` with the generated session ID.
* Returns a success message along with the session ID.
2. **`/profile/` Endpoint:**
* Retrieves the session ID from the cookie.
* Fetches the session data from Redis using the session ID.
* Updates the session expiration time.
* Returns the session ID and session data.
3. **`/logout/` Endpoint:**
* Deletes the session data from Redis using the session ID.
* Clears the `session_id` cookie.
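The sliding-expiration behavior (the `redis.expire` call on every `/profile/` request) can be modeled locally. In this sketch, a dict of `(data, expires_at)` pairs stands in for the Redis hashes and their TTLs:

```python
SESSION_TIMEOUT_SECONDS = 900  # 15 minutes, as in the app above
sessions = {}  # session_id -> (data, expires_at); stands in for Redis

def touch_session(session_id, now):
    """Model of the /profile/ lookup: return data if alive and slide the TTL."""
    entry = sessions.get(session_id)
    if entry is None or entry[1] <= now:
        sessions.pop(session_id, None)   # expired: behave as if Redis evicted it
        return None
    data, _ = entry
    sessions[session_id] = (data, now + SESSION_TIMEOUT_SECONDS)  # slide expiration
    return data

# A session created at t=1000 initially expires at t=1900:
sessions["abc"] = ({"user": "abdullah"}, 1000 + SESSION_TIMEOUT_SECONDS)

print(touch_session("abc", now=1800))   # active; TTL slides to t=2700
print(touch_session("abc", now=2600))   # survives only because of the slide
print(touch_session("abc", now=4000))   # idle for more than 900s: None
```

Without the slide, the session would have expired at t=1900; the access at t=1800 is what keeps it alive at t=2600.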
***
### **Run the Application**
1. Start the FastAPI server:
```bash theme={"system"}
uvicorn main:app --reload
```
2. Run the test script:
```bash theme={"system"}
python test_script.py
```
Here's what you should expect:
```plaintext theme={"system"}
Login Response: {'message': 'Logged in successfully', 'session_id': '68223c50-ede4-48eb-9d26-4a4dd735c10d'}
Access Profile Response: {'session_id': '68223c50-ede4-48eb-9d26-4a4dd735c10d', 'session_data': {'user': 'abdullah', 'status': 'active'}}
Logout Response: {'message': 'Logged out successfully'}
Access Profile After Logout Response: {"detail":"Session expired"}
```
***
## **Conclusion**
By combining FastAPI, cookies, and Upstash Redis, we’ve created a reliable session management system. With Redis’s speed and built-in expiration features, this approach ensures secure and efficient handling of user sessions.
To learn more about Upstash Redis, visit the [Upstash Redis Documentation](/redis).
# Building a URL Shortener with Redis
Source: https://upstash.com/docs/redis/tutorials/python_url_shortener
### Introduction
In this tutorial, we’ll build a simple URL shortener using Redis and Python. The short URL service will generate a random short code for each URL, store it in Redis, and allow users to retrieve the original URL using the short code. We’ll also implement an expiration time for each shortened URL, making it expire after a specified period.
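The random short code described above can be generated with the standard library alone. The 6-character length and base-62 alphabet here are illustrative choices, not fixed by the tutorial:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # base-62 style alphabet

def generate_short_code(length=6):
    """Return a random short code such as 'a3F9xQ' (illustrative)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

code = generate_short_code()
print(len(code))                          # 6
print(all(c in ALPHABET for c in code))   # True
```

Using `secrets` rather than `random` makes the codes unpredictable, which matters if shortened URLs should not be guessable.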
### Environment Setup
First, install the necessary dependencies, including Upstash Redis and `python-dotenv` for environment variables:
```shell theme={"system"}
pip install upstash-redis python-dotenv
```
### Database Setup
Create a Redis database using the [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli), and export the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your environment:
```shell theme={"system"}
export UPSTASH_REDIS_REST_URL=
export UPSTASH_REDIS_REST_TOKEN=
```
You can also use `python-dotenv` to load environment variables from a `.env` file:
```text .env theme={"system"}
UPSTASH_REDIS_REST_URL=
UPSTASH_REDIS_REST_TOKEN=