# Chat
Interactive chat interface for a RAG-powered language model.
`chat(project_name, llm_provider, llm, llm_temperature, llm_top_p, llm_top_k, embeddings_provider, embedding_model, chunk_size, chunk_overlap, search_type, k_docs)`
Start an interactive chat session using a RAG pipeline.
This function initializes the RAG components with the given parameters and launches an interactive loop that lets users send queries to the model. It supports simple commands for help (`/?`), clearing history (`/clear`), and exiting (`/bye`).
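The actual loop body lives in `ragbot\chat.py` and is not reproduced here; the snippet below is a minimal sketch of what such a command loop can look like, with a placeholder `answer_query` standing in for the real RAG chain call:

```python
# Minimal sketch of an interactive command loop (not the actual ragbot code).
# `answer_query` is a hypothetical stand-in for the RAG chain invocation.

def answer_query(query: str, history: list[tuple[str, str]]) -> str:
    """Placeholder: the real function would run the query through the RAG chain."""
    return f"(echo) {query}"

def chat_loop() -> None:
    history: list[tuple[str, str]] = []
    print("Type /? for help.")
    while True:
        try:
            query = input(">>> ").strip()
        except (EOFError, KeyboardInterrupt):
            break  # treat Ctrl-D / Ctrl-C as exit
        if not query:
            continue
        if query == "/?":
            print("/?      show this help\n/clear  clear chat history\n/bye    exit")
        elif query == "/clear":
            history.clear()
            print("History cleared.")
        elif query == "/bye":
            break
        else:
            answer = answer_query(query, history)
            history.append((query, answer))
            print(answer)

if __name__ == "__main__":
    chat_loop()
```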
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `project_name` | `str` | The name of the LangChain project. | *required* |
| `llm_provider` | `str` | The LLM provider (e.g., `"google"`, `"ollama"`, `"hf"`). | *required* |
| `llm` | `str` | The LLM model identifier. | *required* |
| `llm_temperature` | `float` | Sampling temperature for the LLM. | *required* |
| `llm_top_p` | `float` | Top-p (nucleus) sampling parameter. | *required* |
| `llm_top_k` | `int` | Top-k sampling parameter. | *required* |
| `embeddings_provider` | `str` | The provider for the embeddings model. | *required* |
| `embedding_model` | `str` | The embeddings model identifier. | *required* |
| `chunk_size` | `int` | Size of each document chunk for retrieval. | *required* |
| `chunk_overlap` | `int` | Number of overlapping tokens between chunks. | *required* |
| `search_type` | `str` | The type of retrieval search to use. | *required* |
| `k_docs` | `int` | Number of top documents to retrieve for each query. | *required* |
Source code in `ragbot\chat.py`
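As a usage sketch, the call below assumes `chat` is importable as `ragbot.chat.chat` and accepts keyword arguments; every value shown (model names, chunk sizes, search type) is illustrative rather than a project default:

```python
# Illustrative invocation; argument values are examples, not project defaults.
from ragbot.chat import chat

chat(
    project_name="my-rag-project",
    llm_provider="ollama",
    llm="llama3",
    llm_temperature=0.2,
    llm_top_p=0.9,
    llm_top_k=40,
    embeddings_provider="hf",
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",
    chunk_size=1000,
    chunk_overlap=200,
    search_type="similarity",
    k_docs=4,
)
```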