Answer Relevance
Evaluation metric for answer relevance in RAG systems.
AnswerRelevance (dataclass)

Bases: MetricWithLLM
Metric to evaluate the relevance of a generated answer to a user question.
Attributes:

| Name | Type | Description |
|---|---|---|
| `name` | `str` | The name of the metric. |
Source code in ragbot\evaluation\metrics\answer_relevance.py
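The source listing is collapsed in the rendered page and is not reproduced here. As an orientation aid only, the sketch below shows one way such a metric could be structured; the `Sample` fields, the LLM client interface, the prompt wording, and the parsing logic are assumptions for illustration, not the actual `ragbot` implementation (which derives from `MetricWithLLM`).

```python
from dataclasses import dataclass
from typing import Any, Protocol


class LLMClient(Protocol):
    """Stand-in for the LLM client that MetricWithLLM presumably provides (assumed interface)."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class Sample:
    """Minimal stand-in for ragbot's Sample type (assumed fields)."""

    question: str
    answer: str


@dataclass
class AnswerRelevance:
    """Metric to evaluate the relevance of a generated answer to a user question.

    In ragbot this derives from MetricWithLLM; here the LLM client is a plain
    field so the sketch stays self-contained.
    """

    llm: LLMClient
    name: str = "answer_relevance"

    def score(self, sample: Sample, **kwargs: Any) -> float:
        """Ask the LLM to grade relevance on the documented 1-to-5 scale."""
        prompt = (
            "On a scale of 1 (Not Relevant) to 5 (Excellent), rate how relevant "
            "the answer is to the question. Reply with the number only.\n\n"
            f"Question: {sample.question}\nAnswer: {sample.answer}\n"
        )
        raw = self.llm.complete(prompt)
        value = float(raw.strip())        # raises ValueError if the model replies with prose
        return min(max(value, 1.0), 5.0)  # clamp to the documented 1-5 range
```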
score(sample, **kwargs)
Score the answer relevance using a language model.
The score is on a scale from 1 to 5:

- 5 - Excellent
- 4 - Good
- 3 - Acceptable
- 2 - Poor
- 1 - Not Relevant
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `Sample` | A sample containing a question and generated answer. | *required* |
| `**kwargs` | `Any` | Additional keyword arguments (e.g., for callbacks). | `{}` |
Returns:

| Name | Type | Description |
|---|---|---|
| `float` | `float` | A relevance score between 1 and 5. |
Source code in ragbot\evaluation\metrics\answer_relevance.py
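A short usage example of the sketch above, with a canned client standing in for a real LLM. This is illustrative only; the real metric is constructed however `MetricWithLLM` prescribes.

```python
class CannedLLM:
    """Toy client that always answers with a fixed grade."""

    def complete(self, prompt: str) -> str:
        return "4"


metric = AnswerRelevance(llm=CannedLLM())
sample = Sample(
    question="What does RAG stand for?",
    answer="Retrieval-Augmented Generation: retrieved documents are fed to the generator.",
)
print(metric.name)           # answer_relevance
print(metric.score(sample))  # 4.0
```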