
Pinecone

Overview

This page guides you through the process of setting up the Pinecone destination connector.

There are three parts to this:

  • Processing - split individual records into chunks that fit the context window, and decide which fields to use as the embedded context and which are supplementary metadata.
  • Embedding - convert the text into a vector representation using a pre-trained model (Currently, OpenAI's text-embedding-ada-002 and Cohere's embed-english-light-v2.0 are supported.)
  • Indexing - store the vectors in a vector database for similarity search

Prerequisites

To use the Pinecone destination, you'll need:

  • An account with API access for OpenAI or Cohere (depending on which embedding method you want to use)
  • A Pinecone project with a pre-created index with the correct dimensionality based on your embedding method

You'll need the following information to configure the destination:

  • Embedding service API Key - The API key for your OpenAI or Cohere account
  • Pinecone API Key - The API key for your Pinecone account
  • Pinecone Environment - The name of the Pinecone environment to use
  • Pinecone Index name - The name of the Pinecone index to load data into
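
For illustration, a complete configuration might be shaped like the following sketch. All field names and nesting here are hypothetical; the authoritative schema is the connector's spec shown in the Airbyte UI.

```python
# Hypothetical configuration shape for the Pinecone destination.
# Field names below are illustrative, not the connector's actual spec.
config = {
    "processing": {
        "chunk_size": 1000,                       # in tokens, maximum 8191
        "text_fields": ["title", "body"],         # embedded as context
        "metadata_fields": ["author"],            # stored for filtering
    },
    "embedding": {
        "mode": "openai",                         # or "cohere", or "fake" for testing
        "api_key": "<embedding service API key>",
    },
    "indexing": {
        "pinecone_key": "<Pinecone API key>",
        "pinecone_environment": "<Pinecone environment>",
        "index": "<Pinecone index name>",         # must already exist
    },
}
```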

Features

| Feature | Supported? | Notes |
| :--- | :--- | :--- |
| Full Refresh Sync | Yes | |
| Incremental - Append Sync | Yes | |
| Incremental - Append + Deduped | Yes | Deleting records via CDC is not supported (see issue #29827) |
| Namespaces | Yes | |

Data type mapping

All fields specified as metadata fields will be stored in the metadata object of the document and can be used for filtering. The following data types are allowed for metadata fields:

  • String
  • Number (integer or floating point; converted to a 64-bit floating point number)
  • Boolean (true, false)
  • List of strings

All other fields are ignored.

Configuration

Processing

Each record is split into text fields and metadata fields as configured in the "Processing" section. All text fields are concatenated into a single string, which is then split into chunks of the configured length. If specified, metadata fields are stored as-is alongside the embedded text chunks. Note that metadata fields can only be used for filtering, not for retrieval, and have to be one of the types listed in the data type mapping section above (all other values are ignored). Also note that there is a 40KB limit on the total size of the metadata saved for each entry. The chunking process is configured via options from the Langchain Python library.
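
To make the chunking step concrete, here is a minimal sketch assuming Langchain's RecursiveCharacterTextSplitter with a tiktoken-based length function; the connector's actual splitter behavior is whatever you configure in the "Processing" section, and the field names are hypothetical:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Example record with two configured text fields (hypothetical field names).
record = {"title": "Pinecone destination", "body": "A long article body ..."}

# Concatenate the text fields into one string, then split it into chunks
# whose length is measured in tokens rather than characters.
splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base",  # tokenizer used by text-embedding-ada-002
    chunk_size=1000,              # the configured chunk length, in tokens
    chunk_overlap=0,
)
chunks = splitter.split_text(" ".join([record["title"], record["body"]]))
```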

When specifying text fields, you can access nested fields in the record using dot notation, e.g. user.name will access the name field in the user object. It's also possible to use wildcards to access all fields in an object, e.g. users.*.name will access the name field in every entry of the users array.
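
A small sketch of how such paths can be resolved against a record (an illustrative helper, not the connector's actual implementation):

```python
def resolve_path(value, parts):
    """Resolve a dot-notation path; '*' fans out over lists and dict values."""
    if not parts:
        return [value]
    head, rest = parts[0], parts[1:]
    if head == "*":
        items = value if isinstance(value, list) else list(value.values())
        return [v for item in items for v in resolve_path(item, rest)]
    if isinstance(value, dict) and head in value:
        return resolve_path(value[head], rest)
    return []

record = {"users": [{"name": "Ada"}, {"name": "Grace"}]}
print(resolve_path(record, "users.*.name".split(".")))  # ['Ada', 'Grace']
```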

The chunk length is measured in tokens produced by the tiktoken library. The maximum is 8191 tokens, which is the maximum length supported by the text-embedding-ada-002 model.
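
You can verify chunk sizes with the same tokenizer, e.g.:

```python
import tiktoken

# Resolve the tokenizer used by text-embedding-ada-002 (cl100k_base).
encoding = tiktoken.encoding_for_model("text-embedding-ada-002")
n_tokens = len(encoding.encode("Some text that will be embedded."))
assert n_tokens <= 8191, "chunk exceeds the model's maximum input length"
```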

The stream name gets added as a metadata field _ab_stream to each document. If available, the primary key of the record is used to identify the document and avoid duplicates when updated versions of a record are indexed; it is added as the _ab_record_id metadata field.
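
Taken together, an indexed document's metadata might look like the following sketch (values are illustrative, and "author" stands in for a hypothetical user-configured metadata field):

```python
metadata = {
    "_ab_stream": "users",      # stream name, always added
    "_ab_record_id": "42",      # from the record's primary key, if available
    "author": "Ada",            # a user-configured metadata field
}
```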

Embedding

The connector can use one of the following embedding methods:

  1. OpenAI - using the OpenAI API, the connector produces embeddings with the text-embedding-ada-002 model (1536 dimensions). This integration is constrained by the speed of the OpenAI embedding API.

  2. Cohere - using the Cohere API, the connector produces embeddings with the embed-english-light-v2.0 model (1024 dimensions).

For testing purposes, it's also possible to use the Fake embeddings integration. It generates random embeddings and is suitable for testing a data pipeline without incurring embedding costs.
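
For reference, producing an embedding through the OpenAI Python client looks roughly like this; the connector issues the equivalent API calls internally, with batching details omitted here:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # the embedding service API key

response = client.embeddings.create(
    model="text-embedding-ada-002",
    input="a chunk of concatenated text fields",
)
vector = response.data[0].embedding
assert len(vector) == 1536  # must match the index's dimensionality
```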

Indexing

To get started, use the Pinecone web UI or API to create a project and an index before running the destination. All streams are indexed into the same index; the _ab_stream metadata field is used to distinguish between streams. Overall, the size of the metadata fields is limited to 30KB per document.

OpenAI and Fake embeddings produce vectors with 1536 dimensions, and the Cohere embeddings produce vectors with 1024 dimensions. Make sure to configure the index accordingly.
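
For example, creating a matching index up front with the classic pod-based Pinecone Python client (pinecone-client v2) could look like the sketch below; newer serverless clients use a different API:

```python
import pinecone

# Values come from the connector configuration (illustrative placeholders).
pinecone.init(api_key="<Pinecone API key>", environment="<Pinecone environment>")

# Dimension must match the embedding method:
# 1536 for OpenAI/Fake embeddings, 1024 for Cohere.
pinecone.create_index("my-index", dimension=1536, metric="cosine")
```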

Changelog

| Version | Date | Pull Request | Subject |
| :--- | :--- | :--- | :--- |
| 0.1.31 | 2024-12-21 | #50203 | Update dependencies |
| 0.1.30 | 2024-12-14 | #49303 | Update dependencies |
| 0.1.29 | 2024-11-25 | #48654 | Update dependencies |
| 0.1.28 | 2024-11-05 | #48323 | Update dependencies |
| 0.1.27 | 2024-10-29 | #47106 | Update dependencies |
| 0.1.26 | 2024-10-12 | #46782 | Update dependencies |
| 0.1.25 | 2024-10-05 | #46474 | Update dependencies |
| 0.1.24 | 2024-09-28 | #46127 | Update dependencies |
| 0.1.23 | 2024-09-21 | #45791 | Update dependencies |
| 0.1.22 | 2024-09-14 | #45490 | Update dependencies |
| 0.1.21 | 2024-09-07 | #45247 | Update dependencies |
| 0.1.20 | 2024-08-31 | #45063 | Update dependencies |
| 0.1.19 | 2024-08-24 | #44669 | Update dependencies |
| 0.1.18 | 2024-08-17 | #44302 | Update dependencies |
| 0.1.17 | 2024-08-12 | #43932 | Update dependencies |
| 0.1.16 | 2024-08-10 | #43701 | Update dependencies |
| 0.1.15 | 2024-08-03 | #43134 | Update dependencies |
| 0.1.14 | 2024-07-27 | #42594 | Update dependencies |
| 0.1.13 | 2024-07-20 | #42243 | Update dependencies |
| 0.1.12 | 2024-07-13 | #41901 | Update dependencies |
| 0.1.11 | 2024-07-10 | #41598 | Update dependencies |
| 0.1.10 | 2024-07-09 | #41194 | Update dependencies |
| 0.1.9 | 2024-07-07 | #40753 | Fix a regression with AirbyteLogger |
| 0.1.8 | 2024-07-06 | #40780 | Update dependencies |
| 0.1.7 | 2024-06-29 | #40627 | Update dependencies |
| 0.1.6 | 2024-06-27 | #40215 | Replaced deprecated AirbyteLogger with logging.Logger |
| 0.1.5 | 2024-06-25 | #40430 | Update dependencies |
| 0.1.4 | 2024-06-22 | #40150 | Update dependencies |
| 0.1.3 | 2024-06-06 | #39148 | [autopull] Upgrade base image to v1.2.2 |
| 0.1.2 | 2024-05-17 | #38336 | Fix for regression: Custom namespaces not created automatically |
| 0.1.1 | 2024-05-14 | #38151 | Add airbyte source tag for attribution |
| 0.1.0 | 2024-05-06 | #37756 | Add support for Pinecone Serverless |
| 0.0.24 | 2024-04-15 | #37333 | Update CDK & pytest version to fix security vulnerabilities. |
| 0.0.23 | 2024-03-22 | #35911 | Bump versions to latest, resolves test failures. |
| 0.0.22 | 2023-12-11 | #33303 | Fix bug with embedding special tokens |
| 0.0.21 | 2023-12-01 | #32697 | Allow omitting raw text |
| 0.0.20 | 2023-11-13 | #32357 | Improve spec schema |
| 0.0.19 | 2023-10-20 | #31329 | Improve error messages |
| 0.0.18 | 2023-10-20 | #31329 | Add support for namespaces and fix index cleaning when namespace is defined |
| 0.0.17 | 2023-10-19 | #31599 | Base image migration: remove Dockerfile and use the python-connector-base image |
| 0.0.16 | 2023-10-15 | #31329 | Add OpenAI-compatible embedder option |
| 0.0.15 | 2023-10-04 | #31075 | Fix OpenAI embedder batch size |
| 0.0.14 | 2023-09-29 | #30820 | Update CDK |
| 0.0.13 | 2023-09-26 | #30649 | Allow more text splitting options |
| 0.0.12 | 2023-09-25 | #30649 | Fix bug with stale documents left on starter pods |
| 0.0.11 | 2023-09-22 | #30649 | Set visible certified flag |
| 0.0.10 | 2023-09-20 | #30514 | Fix bug with failing embedding step on large records |
| 0.0.9 | 2023-09-18 | #30510 | Fix bug with overwrite mode on starter pods |
| 0.0.8 | 2023-09-14 | #30296 | Add Azure embedder |
| 0.0.7 | 2023-09-13 | #30382 | Promote to certified/beta |
| 0.0.6 | 2023-09-09 | #30193 | Improve documentation |
| 0.0.5 | 2023-09-07 | #30133 | Refactor internal structure of connector |
| 0.0.4 | 2023-09-05 | #30086 | Switch to GRPC client for improved performance. |
| 0.0.3 | 2023-09-01 | #30079 | Fix bug with potential data loss on append+dedup syncing. 🚨 Streams using append+dedup mode need to be reset after upgrade. |
| 0.0.2 | 2023-08-31 | #29442 | Improve test coverage |
| 0.0.1 | 2023-08-29 | #29539 | Pinecone connector with some embedders |