Say goodbye to eye-straining scans and data extraction struggles. Find that elusive passage in seconds and make informed decisions faster than ever before!

Have you ever had to squint your eyes scanning through hundreds of PDF pages just to find that passage you read a few days ago? Or have you ever looked at a stack of papers and wished you could extract the data for informed decisions more quickly?

Well, now it’s possible. Traditionally, you could search for keywords in your documents, but keyword search provides no real understanding of the topic. This tutorial takes a step further by defining practical use cases using AI. Imagine diving into a new technology by simply uploading the document and learning on the go – chat with it, unravel its complexities, and make your exploration seamless. In this blog, we guide you through creating your own app, putting the ability to extract insights from your private documents at your fingertips, without requiring expert developer skills.


By harnessing the latest breakthroughs in AI, we showcase the incredible potential of combining text embedding and powerful large language models. The result? An efficient system that can answer questions based on specific documents.

If you were to create the same application using traditional development methods, without no-code or low-code tools, you would have to write far more custom code and build several components from scratch.

But what if you didn’t have to do all that?

No-code and low-code platforms, as demonstrated in the tutorial, provide a faster way to prototype and deploy applications with reduced coding efforts. This makes them particularly valuable for rapid development and prototyping purposes.

On a similar note, I’m hosting a free 45-minute webinar on the topic of building GPT apps. Sign up below to get exclusive access to this session, where I walk you through the highly refined approach we use to find opportunities, build GPT apps, and launch them. Unleash your product potential.

Overview of the Tools Used

Streamlit

A popular Python library for quickly creating web applications, Streamlit simplifies the process of turning data scripts into shareable web apps. It provides easy-to-use widgets for user input, such as file upload and text input, making it ideal for building interactive interfaces.

Llama Index

LlamaIndex is an open-source data framework for connecting large language models to your own data. It provides an in-memory vector store offering efficient storage and retrieval of high-dimensional embeddings, which is exactly what use cases like similarity search need. In the app, LlamaIndex stores the document embeddings and retrieves the passages most relevant to each user query, enabling fast and efficient lookup.
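Conceptually, an in-memory vector store is just a list of (embedding, document) pairs plus a nearest-neighbour lookup by cosine similarity. A tiny stand-in to show the idea (illustrative only; LlamaIndex additionally handles chunking, real embedding models, and persistence for you):

```python
import math

class InMemoryVectorStore:
    """Toy vector store: keep embeddings in a list, return the
    documents whose embeddings are closest to a query embedding."""

    def __init__(self):
        self._items = []  # list of (vector, document) pairs

    def add(self, vector, document):
        self._items.append((vector, document))

    def query(self, vector, top_k=1):
        def score(item):
            a, _ = item
            dot = sum(x * y for x, y in zip(a, vector))
            norm = (math.sqrt(sum(x * x for x in a))
                    * math.sqrt(sum(x * x for x in vector)))
            return dot / norm if norm else 0.0
        ranked = sorted(self._items, key=score, reverse=True)
        return [doc for _, doc in ranked[:top_k]]

store = InMemoryVectorStore()
store.add([1.0, 0.0], "doc about cats")
store.add([0.0, 1.0], "doc about dogs")
print(store.query([0.9, 0.1]))  # → ['doc about cats']
```

In the real app, the vectors come from an embedding model rather than being hand-written, but the retrieval step works the same way.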


GPT-4

GPT-4 is an advanced language model capable of understanding and generating human-like text. In the app, GPT-4 is used to generate responses based on user queries. It takes the matched document from LlamaIndex as context and produces relevant answers to the user’s questions.

By leveraging these tools, you can create a low-code chatbot app for your private documents. Users can upload documents, ask questions, and receive responses without the need for extensive coding.


Let’s walk through the steps:

Step 1: Setting Up the System

To get started with the project, first download the source code from the GitHub repository using the following command.

git clone

Thank you to the wonderful folks at a16z for creating this codebase.

After that, go to the newly created folder. If you’re the kind of person who wants to avoid making a mess of your previously installed libraries, it is highly recommended that you begin by setting up a virtual environment. This way your dependencies stay isolated and your project stays organized and tidy. Make sure you have Python installed already.

1) Create and activate a new Python environment:

python -m venv env

source env/bin/activate

2) Install dependent libraries in the newly created environment:

pip install -r requirements.txt

Step 2: Configure Secrets
To use the application you will need an OpenAI API key. Visit OpenAI’s website and sign up.

Navigate to the API Keys page and click “Create new secret key”. Then open the “.streamlit” folder in the project, find the file named “secrets.toml”, and set the following value in it:

openai_key = "sk-************"

Step 3: Set Up your Private Data

Once you have set up your Python environment and secrets, copy the documents that you want the chatbot to use into the /data folder. The library supports .txt, .pdf, .md, and many other formats.
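Under the hood, the loader simply walks the data folder and picks up the files it has readers for. A self-contained sketch of that discovery step (the extension list here is a small illustrative subset, not the library’s full list):

```python
import tempfile
from pathlib import Path

SUPPORTED = {".txt", ".pdf", ".md"}  # subset; the real loader supports more

def find_documents(folder: Path) -> list[str]:
    """Return names of files the chatbot would try to ingest."""
    return sorted(p.name for p in folder.iterdir()
                  if p.suffix.lower() in SUPPORTED)

# Demo with a throwaway folder standing in for ./data
with tempfile.TemporaryDirectory() as tmp:
    data = Path(tmp)
    for name in ["notes.md", "report.pdf", "image.png"]:
        (data / name).touch()
    print(find_documents(data))  # → ['notes.md', 'report.pdf']
```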

Step 4: Create Streamlit App with File Upload

Once everything is set up, we are good to go. Run the following command to start the server and chat with your documents.

streamlit run

Step 5: Customize your ChatBot (Advanced Step)

If you want to change the behavior of your chatbot, you can change the `system_prompt` in the code. The system prompt defines how the chatbot should act; for example, if you want the chatbot to answer only from the documents, you can state that there. The current system prompt is given below:

You are an expert, you analyze our private documents and answer user’s query based on that. If information is not present in the document generate response on your own but let the user know that it was not present in the document. Keep your answers technical and based on facts – do not hallucinate features.
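In the OpenAI chat format, that system prompt simply becomes the first message sent with every request, with the retrieved document text injected alongside the user’s question. A sketch of that assembly (the context-injection wording and function name are illustrative, not the repo’s exact template):

```python
SYSTEM_PROMPT = (
    "You are an expert, you analyze our private documents and answer "
    "the user's query based on that. Keep your answers technical and "
    "based on facts - do not hallucinate features."
)

def build_messages(context: str, question: str) -> list[dict]:
    """Assemble a chat-completions payload: the system prompt comes
    first, then the retrieved context plus the user's question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

msgs = build_messages("Payment is due within 30 days.", "When is payment due?")
print(msgs[0]["role"])  # → system
```

Changing the chatbot’s behavior is then just a matter of editing the string at the top.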

AI’s Role: A Maestro in the Orchestra of Words

In a digital landscape teeming with information, our PDF-based Question-Answering app stands out as a reliable guide, helping you navigate the intricate web of knowledge. It goes beyond mere text extraction, enabling you to actively engage in meaningful conversations with the very essence of the document.

But as seen with other GPT apps, this one can be tricked into answering outside the context of the uploaded PDFs. To limit the app to answering based on the provided data only, we have to use a little prompt engineering and define these conditions in the system prompt.
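For example, one possible stricter phrasing (my own wording, not the repo’s) could be:

```
You are an assistant that answers ONLY from the uploaded documents.
If the answer is not contained in the provided context, reply exactly:
"I could not find this in the provided documents."
Do not use outside knowledge, even if you know the answer.
```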


Now, you have a simplified low-code private document-based chatbot application. Users can add documents, ask questions, and receive responses without delving deeply into code. Keep in mind that this approach sacrifices some customization for simplicity. Adjustments can be made based on the complexity of the use case and your comfort level with low-code tools.
