Vector Database

Making it easy to build Generative AI applications with Redis Enterprise

Use AI to reimagine search across unstructured data

Users expect search functionality in every application and website they encounter. Yet more than 80% of business data is unstructured, stored as text, images, audio, video, or other formats.

Organizations need to reimagine ways to make every kind of data discoverable, not least because users demand it. Powerful search features will fuel the next generation of applications.

What is a vector database?

A vector database is a type of database that stores data in the form of vectors: mathematical representations of data points. Advances in AI and machine learning, particularly in natural language processing and computer vision, are what enable this transformation of unstructured data into numeric representations (vectors) that capture meaning and context.

Vector search is a key feature of a vector database. It is the process of finding the data points in a vector database that are most similar to a given query vector. Popular vector search uses include chatbots, recommendation systems, document search, image and video search, natural language processing, and anomaly detection. For example, if you build a recommendation system, you can use vector search to find (and suggest) products that are similar to a product in which a user previously showed interest.
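
As a sketch of the idea, the "find similar vectors" step can be written as a brute-force nearest-neighbor search in plain Python. A vector database performs this server-side, over an index, at far larger scale; the product names and embeddings below are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def knn(query, items, k=2):
    """Return the k items whose vectors are most similar to the query."""
    scored = [(name, cosine_similarity(query, vec)) for name, vec in items.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Toy product embeddings; in practice these come from an embedding model.
products = {
    "running shoes": [0.9, 0.1, 0.0],
    "trail shoes":   [0.8, 0.2, 0.1],
    "coffee maker":  [0.0, 0.1, 0.9],
}
# A shopper browsed athletic footwear, so their interest vector is shoe-like:
print(knn([0.85, 0.15, 0.05], products, k=2))  # the two shoe products rank first
```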

Why vector search is a crucial component for vector databases

Traditional keyword matching and filtering only take you so far. Ordinary search algorithms are useful for text and document use cases, but search results are limited when they do not incorporate meaning or context. The proliferation of unstructured data has created a huge gap in the effectiveness of traditional keyword matching and filtering. Every organization that stores non-textual data – and that’s just about everyone – can benefit from improving search functionality across unstructured data.

Tech Talk

New to vector databases and how they help with LLM challenges? Watch the Driving AI Innovation: Making it easy to build Generative AI apps with Redis Enterprise Tech Talk webinar.

Redis Enterprise: the vector database solution for every organization

Real-time search performance

Search and recommendation systems have to run incredibly fast. The vector search functionality in Redis Enterprise delivers low search latency, whether the data collection holds tens of thousands or hundreds of millions of objects distributed across multiple database nodes.

Built-in fault tolerance and resilience

To ensure your search applications never experience downtime, Redis Enterprise uses a shared-nothing cluster architecture. It is fault tolerant at all levels, with automated failover at the process level, for individual nodes, and across infrastructure availability zones. To ensure your unstructured data and vectors are never lost, Redis Enterprise includes tunable persistence and disaster recovery mechanisms.

Reduce architectural and application complexity

Most likely, your organization already benefits from Redis Enterprise for its caching needs. Instead of spinning up yet another costly single-point solution, extend your database to take advantage of vector search in your applications. Developers can store vectors just as easily as any other field in a Redis hash or JSON object.

Flexibility across clouds and geographies

Choose where your databases should run. Redis Enterprise can be deployed anywhere, on any cloud platform, on-premises, or in a multi-cloud or hybrid cloud architecture.

Use cases

Retrieval Augmented Generation (RAG)

Redis Enterprise stores external domain-specific knowledge and provides powerful semantic search capabilities to infuse relevant contextual data into a prompt before it’s sent to an LLM, improving result quality.
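
A minimal sketch of the augmentation step, assuming the relevant chunks have already been retrieved by a vector search over the knowledge base (the function name and prompt wording are illustrative, not a prescribed format):

```python
def build_rag_prompt(question, retrieved_chunks):
    """Assemble an augmented prompt: retrieved context plus the user's question.
    In a RAG pipeline the chunks come from a semantic search over stored
    documents; here they are passed in directly for illustration."""
    context = "\n".join(f"- {chunk}" for chunk in retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

chunks = ["Redis Enterprise supports FLAT and HNSW vector indexes."]
prompt = build_rag_prompt("Which vector indexes does Redis support?", chunks)
print(prompt)
```

The assembled prompt is then sent to the LLM in place of the bare question, grounding its answer in the retrieved context.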

Semantic Caching

Redis Enterprise identifies and retrieves cached responses that are semantically similar enough to the input query, dramatically reducing the response time and number of requests sent to an LLM.
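
The core lookup logic can be sketched in a few lines of plain Python: a new query's embedding is compared against cached query embeddings, and a hit is any match above a similarity threshold (the threshold value and embeddings below are arbitrary examples; Redis Enterprise performs the similarity search server-side):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class SemanticCache:
    """Return a cached LLM response when a new query's embedding is
    close enough to a previously answered query's embedding."""
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, query_embedding):
        for emb, response in self.entries:
            if cosine(query_embedding, emb) >= self.threshold:
                return response  # cache hit: skip the LLM call entirely
        return None             # cache miss: caller must query the LLM

    def put(self, query_embedding, response):
        self.entries.append((query_embedding, response))

cache = SemanticCache(threshold=0.9)
cache.put([1.0, 0.0], "Paris")
print(cache.get([0.99, 0.05]))  # near-duplicate phrasing -> cache hit
print(cache.get([0.0, 1.0]))    # unrelated query -> None
```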

Recommendation Systems

Redis Enterprise helps recommendation engines deliver fresh, relevant suggestions to users at low latency, surfacing products similar to those a shopper already enjoys.

Document Search

Redis Enterprise makes it easier to discover and retrieve information from a large corpus of documents, using natural language and semantic search.

Featured customers

Vector search features

Vector indexing algorithms

Redis Enterprise manages vectors in an index data structure to enable intelligent similarity search that balances search speed and search quality. Choose from two popular techniques, FLAT (a brute-force approach) and HNSW (a faster, approximate approach), based on your data and use cases.
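
With redis-py, the choice between the two techniques is made when the index is created. A configuration sketch, assuming a Redis database with Search enabled at localhost:6379 and 384-dimensional FLOAT32 embeddings (the field names, dimension, and HNSW parameters here are illustrative, not prescriptive):

```python
import redis
from redis.commands.search.field import TagField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType

r = redis.Redis(host="localhost", port=6379)

# HNSW builds an approximate graph index; M and EF_CONSTRUCTION trade
# build cost and accuracy for speed. Swap "HNSW" for "FLAT" (and drop
# M / EF_CONSTRUCTION) to get exact, brute-force search instead.
schema = (
    TagField("category"),
    VectorField(
        "embedding",
        "HNSW",
        {
            "TYPE": "FLOAT32",
            "DIM": 384,
            "DISTANCE_METRIC": "COSINE",
            "M": 16,
            "EF_CONSTRUCTION": 200,
        },
    ),
)
r.ft("idx:products").create_index(
    schema,
    definition=IndexDefinition(prefix=["product:"], index_type=IndexType.HASH),
)
```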

Vector search distance metrics

Redis Enterprise uses a distance metric to measure the similarity between two vectors. Choose from three popular metrics – Euclidean, Inner Product, and Cosine Similarity – used to calculate how “close” or “far apart” two vectors are.
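
The three metrics are simple enough to state directly. A stdlib-only sketch (the vectors are arbitrary examples; Redis Enterprise computes these server-side during search):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two points: 0 means identical."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inner_product(a, b):
    """Dot product: larger means more aligned (for normalized vectors)."""
    return sum(x * y for x, y in zip(a, b))

def cosine_distance(a, b):
    """1 minus cosine similarity: 0 means same direction, 1 means orthogonal."""
    norm_a = math.sqrt(inner_product(a, a))
    norm_b = math.sqrt(inner_product(b, b))
    return 1 - inner_product(a, b) / (norm_a * norm_b)

u, v = [1.0, 0.0], [0.0, 1.0]  # orthogonal unit vectors
print(euclidean(u, v))          # sqrt(2): "far apart" as points
print(inner_product(u, v))      # 0.0: no alignment
print(cosine_distance(u, v))    # 1.0: no angular similarity
```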

Powerful hybrid filtering

Take advantage of the full suite of search features available in Redis Enterprise query and search. Enhance your workflows by combining the power of vector search with more traditional numeric, text, and tag filters. Incorporate more business logic into queries and simplify client application code.
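
In the Redis query language, a hybrid query is a filter expression followed by a KNN clause. The sketch below only builds the query string and the binary vector parameter; the field names (@category, @price, @embedding) are illustrative, and actually running the query requires an index over those fields:

```python
import struct

def hybrid_query(tag, max_price, k):
    """Build a query that pre-filters by tag and numeric range, then ranks
    the surviving documents by vector similarity (KNN)."""
    return (
        f"(@category:{{{tag}}} @price:[0 {max_price}])"
        f"=>[KNN {k} @embedding $vec AS score]"
    )

def to_bytes(vec):
    """Pack a float vector into the little-endian FLOAT32 blob that the
    $vec query parameter expects."""
    return struct.pack(f"{len(vec)}f", *vec)

q = hybrid_query("shoes", 100, 5)
print(q)  # (@category:{shoes} @price:[0 100])=>[KNN 5 @embedding $vec AS score]
```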

Real-time updates

Real-time search and recommendation systems generate large volumes of changing data. New images, text, products, or metadata? Perform updates, insertions, and deletes to the search index seamlessly as your dataset changes over time. Redis Enterprise reduces the costly impact of stale data.

Vector range queries

Traditional vector search returns the “top K” most similar vectors. As an alternative, Redis Enterprise also enables the discovery of relevant content within a predefined similarity range or threshold, offering a more flexible search experience.
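
A stdlib-only sketch of the difference: instead of keeping a fixed top K, a range query keeps everything inside a distance radius (the documents and radius below are made up; Redis Enterprise evaluates the range server-side against the index):

```python
import math

def cosine_dist(a, b):
    """Cosine distance: 0 means same direction, 1 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (na * nb)

def range_query(query, items, radius):
    """Return every item within `radius` cosine distance of the query,
    sorted nearest-first, rather than a fixed top-K cut-off."""
    hits = [
        (name, cosine_dist(query, vec))
        for name, vec in items.items()
        if cosine_dist(query, vec) <= radius
    ]
    return sorted(hits, key=lambda pair: pair[1])

docs = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [0.0, 1.0]}
matches = range_query([1.0, 0.0], docs, radius=0.2)
print(matches)  # only "a" and "b" fall inside the radius
```

The result set grows or shrinks with how much genuinely similar content exists, which top-K cannot express.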

Ecosystem collaborators and integrations

FAQs

  • What are vector databases?
    • Vector databases are designed to store, retrieve, and search dense numerical vectors efficiently. While traditional databases typically organize data in rows and columns, vector databases cater to applications such as image recognition, natural language processing, and recommendation systems, where data is represented as vectors in a multi-dimensional space. They leverage specialized data structures and indexing techniques, such as hierarchical navigable small world (HNSW) graphs and product quantization, to enable swift similarity and semantic searches. These databases let users find the vectors most similar to a given query vector based on a chosen distance metric, such as Euclidean distance, cosine similarity, or dot product.
  • What are vector embeddings?
    • Vector embeddings are numerical representations of unstructured data, such as text, images, or audio, in the form of vectors (lists of numbers). These embeddings capture the semantic similarity of objects by mapping them to points in a vector space, where similar objects are represented by vectors that are close to each other.
  • What is vector indexing?
    • Vector indexing is a technique used to organize and retrieve data based on vector representations. Instead of storing data in traditional tabular or document formats, vector indices represent data objects as vectors in a multi-dimensional space.
  • What are distance metrics?
    • In the context of a vector database, a distance metric is a mathematical function that takes two vectors as input and returns a value representing their similarity or dissimilarity. The primary purpose of these metrics is to determine how close or distant two vector embeddings are from each other. Redis supports three distance measures for gauging the similarity of vectors; selecting an effective distance measure considerably enhances classification and clustering performance.
  • What are Large Language Models (LLMs)?
    • Large language models (LLMs) are advanced deep-learning models developed to process and analyze human language. They showcase remarkable abilities and have found extensive applications across many fields. At its core, an LLM uses a large-scale transformer model to comprehend and generate text much as humans do.