Try it out
This n8n workflow template lets you chat with your Google Drive documents (.docx, .json, .md, .txt, .pdf) using OpenAI and Pinecone Assistant. It retrieves relevant context from your files in real time so you can get accurate, context-aware answers about your proprietary data—without the need to train your own LLM.
What is Pinecone Assistant?
Pinecone Assistant lets you build production-grade chat and agent-based applications quickly. It abstracts away the complexity of implementing a retrieval-augmented generation (RAG) system by managing chunking, embedding, storage, query planning, vector search, model orchestration, and reranking for you.
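Conceptually, the Assistant collapses the whole RAG pipeline into a single call: you send a question, it returns ranked snippets, and those snippets are stitched into the chat model's prompt. Below is a stdlib-only sketch of that final stitching step; the snippet shape (a `content` field) is an assumption modeled on the Assistant's context responses, not the exact n8n node output.

```python
# Sketch of how retrieved snippets become LLM context. Everything before
# this point (chunking, embedding, vector search, reranking) is handled
# service-side by Pinecone Assistant; the snippet "content" field here is
# an illustrative assumption.
def build_prompt(question, snippets):
    """Join retrieved snippets into a grounded prompt for the chat model."""
    context = "\n---\n".join(s["content"] for s in snippets)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

snips = [{"content": "Refunds are issued within 14 days."},
         {"content": "Contact support@example.com for refunds."}]
print(build_prompt("What is the refund policy?", snips))
```

In the template, the AI Agent node performs this step for you by consuming the output of the Get context from Assistant node.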
Prerequisites
- A Pinecone account and API key
- A Google Cloud project with an OAuth 2.0 Client ID and Client Secret for the Google Drive API
- An OpenAI API key
Setup
- Create a Pinecone Assistant in the Pinecone Console
- Name your Assistant n8n-assistant
- No need to configure a Chat model or Assistant instructions
- Set up your Google Drive OAuth2 API credential in n8n
- In the File added node -> Credential to connect with, select Create new credential
- Set the Client ID and Client Secret from the values generated in the prerequisites
- Set the OAuth Redirect URL from the n8n credential in the Google Cloud Console
- Name this credential Google Drive account so that the other Google Drive nodes can reference it
- Set up the Pinecone API key credential in n8n
- In the Upload file to Assistant node -> PineconeApi section, select Create new credential
- Paste your Pinecone API key into the API Key field
- Select your Assistant Name in each of the Pinecone Assistant nodes
- Set up the OpenAI credential in n8n
- In the OpenAI Chat Model node -> Credential to connect with, select Create new credential
- Set the API Key field to your OpenAI API key
- Add your files to a Drive folder named n8n-pinecone-demo in the root of your My Drive
- If you use a different folder name, you'll need to update the Google Drive triggers to reflect that change
- Activate the workflow or test it with a manual execution to ingest the documents
- Chat with your docs!
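The ingestion step only makes sense for file types the Assistant can parse (per this template: .docx, .json, .md, .txt, .pdf). The sketch below mirrors that gate in plain Python; the function name and list handling are illustrative, not actual n8n node fields.

```python
# Minimal sketch of the ingestion gate: only supported file types should
# reach the Upload file to Assistant node. Names here are illustrative.
from pathlib import Path

SUPPORTED = {".docx", ".json", ".md", ".txt", ".pdf"}

def uploadable(filenames):
    """Return the files the workflow would upload to the Assistant."""
    return [f for f in filenames if Path(f).suffix.lower() in SUPPORTED]

print(uploadable(["notes.md", "report.PDF", "photo.png", "data.json"]))
```

Unsupported files (e.g. images) in the watched folder are simply skipped rather than uploaded.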
Ideas for customizing this workflow
- Customize the System Message on the AI Agent node to describe, for your use case, what kind of knowledge is stored in Pinecone Assistant
- Manage token consumption by adding the Top K and/or Snippet Size parameters to the Get context from Assistant node and tuning their values
- Filter the context snippets even further by adding metadata filters to the Get context from Assistant node
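The customizations above all translate into extra fields on the context request. The sketch below shows an illustrative request body; the `filter` syntax (`$eq` and friends) follows Pinecone's standard metadata filter language, while the `department` key and the specific values are made-up examples.

```python
import json

# Illustrative request body for a context lookup with the customizations
# above: Top K, Snippet Size, and a metadata filter. The "department"
# metadata key is a hypothetical example, not part of the template.
body = {
    "query": "What is our refund policy?",
    "top_k": 4,            # fewer snippets -> fewer tokens sent to the LLM
    "snippet_size": 1024,  # smaller snippets -> a tighter context window
    "filter": {"department": {"$eq": "support"}},
}
print(json.dumps(body, indent=2))
```

In n8n you would set these as parameters on the Get context from Assistant node rather than building the JSON by hand.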
Need help?
You can find help by asking in the Pinecone Discord community or filing an issue on this repo.