✨ Vector Search


This template uses all the files in the docs/ directory to power "ChatGPT-style document search". A sample component that makes use of this feature is ChatModal.

Vector search is split into two parts:

Build time

  1. Generate embeddings for all documents. This splits, or "chunks", the documents into small pieces and creates an embedding vector for each.

🆕 TemplateAI uses OpenAI's new text-embedding-3-small model, 5x cheaper than the previous one.

  2. Store the document chunks and their embeddings in Supabase with pgvector.
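The chunking step above can be sketched as a sliding window over the document text. This is only an illustration — the chunk size, overlap, and function name here are made up, and the template's real splitting lives in src/utils/generate-embeddings.ts, which then calls the OpenAI embeddings API for each chunk:

```typescript
// Illustrative sketch of "chunking": split a document into overlapping
// pieces small enough to embed individually. The size/overlap values
// are examples, not the template's actual settings.
function chunkDocument(text: string, chunkSize = 200, overlap = 40): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    // Step forward by less than a full chunk so adjacent chunks overlap,
    // which helps preserve context that straddles a chunk boundary.
    start += chunkSize - overlap;
  }
  return chunks;
}
```

Each returned chunk would then be sent to the embeddings endpoint, and the (chunk, vector) pairs stored together.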

Run time

  1. When a user asks a question, do a vector similarity search to find the document chunks that are most 'semantically related' to the question

  2. Inject these chunks as context into a GPT-3 prompt, and stream the response back to the user
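In the template, the similarity ranking in step 1 happens inside Postgres via pgvector, but the underlying idea is just comparing embedding vectors, typically with cosine similarity. A minimal in-memory illustration (cosineSimilarity and topChunks are hypothetical helpers written for this sketch, not the template's code):

```typescript
// Cosine similarity: 1.0 means the vectors point the same way
// (semantically very similar), 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chunks by similarity to the query embedding and
// return the text of the top k matches.
function topChunks(
  queryEmb: number[],
  chunks: { content: string; embedding: number[] }[],
  k = 2
): string[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(queryEmb, y.embedding) -
        cosineSimilarity(queryEmb, x.embedding)
    )
    .slice(0, k)
    .map((c) => c.content);
}
```

With pgvector, the same ranking is expressed as an `ORDER BY` on a vector distance operator inside the database, so only the top matches ever leave Postgres.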

3-Min Walkthrough


Setup

  1. Set the OPENAI_API_KEY in your .env file

  2. Make sure you have Supabase configs set correctly, and have run migrations on your database. (Follow the Database setup first if you haven't completed these steps.) The migrations will enable vector search functions and create a table to store your embedded documents.

  3. Put the documents you want to perform vector search on in the docs/ folder.

  4. Run the following command in your shell to generate and store embeddings for all the documents:

npm run embeddings

Note: The splitting or "chunking" of documents works best for text and Markdown files. If you want to run vector search on other file types, such as HTML, CSV, or PDF, follow the LangChain guides on loading documents, and make the appropriate changes in the src/utils/generate-embeddings.ts script.

Note: The docs/ folder is gitignored by default, so your documents are not version-tracked or uploaded to GitHub.

You should now have embeddings for all the documents stored in Supabase, under the documents table.

This sets you up to run vector search by calling the POST /api/vector-search endpoint in your app with the prompt parameter. Here's a short example from the ChatModal component that uses the Vercel AI SDK:

// Example of calling the vector-search endpoint
import { useCompletion } from 'ai/react';

const { completion, input, isLoading, handleInputChange, handleSubmit } =
    useCompletion({
      api: '/api/vector-search',
    });
 
...
 
return (
  <div>
    <form onSubmit={handleSubmit}>
      <input
        placeholder='Ask a question...'
        value={input}
        onChange={handleInputChange}
      />
    </form>
    {completion && (
      <p>{completion}</p>
    )}
  </div>
);
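Server-side, the endpoint combines the matched chunks with the user's question before calling the model. A rough sketch of that prompt assembly, assuming the top chunks have already been retrieved (buildPrompt is a hypothetical helper; the template's actual prompt wording will differ):

```typescript
// Hypothetical sketch: assemble a prompt that injects retrieved chunks
// as context, so the model answers from your documents rather than
// from its general training data.
function buildPrompt(question: string, contextChunks: string[]): string {
  return [
    'Answer the question using only the context below.',
    '',
    'Context:',
    ...contextChunks.map((c, i) => `[${i + 1}] ${c}`),
    '',
    `Question: ${question}`,
  ].join('\n');
}
```

The resulting string is what gets sent to the completion endpoint, and the streamed response is what `useCompletion` surfaces as `completion` on the client.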
 

Last Updated: March 5