zxhlyh
2023-09-28 11:26:04 +08:00
committed by GitHub
parent 5e511e01bf
commit bcd744b6b7
3 changed files with 324 additions and 141 deletions


@@ -71,7 +71,7 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
/>
<Row>
<Col>
### Path Query
### Query
<Properties>
<Property name='page' type='string' key='page'>
Page number
@@ -136,7 +136,7 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
<Col>
This API creates a new document from text in an existing dataset.
### Path Params
### Params
<Properties>
<Property name='dataset_id' type='string' key='dataset_id'>
Dataset ID
@@ -153,22 +153,22 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
</Property>
<Property name='indexing_technique' type='string' key='indexing_technique'>
Index mode
- high_quality High quality: embedding using embedding model, built as vector database index
- economy Economy: Build using inverted index of Keyword Table Index
- <code>high_quality</code> High quality: embed using an embedding model and build a vector database index
- <code>economy</code> Economy: build using an inverted keyword table index
</Property>
<Property name='process_rule' type='object' key='process_rule'>
Processing rules
- mode (string) Cleaning, segmentation mode, automatic / custom
- rules (text) Custom rules (in automatic mode, this field is empty)
- pre_processing_rules (array[object]) Preprocessing rules
- id (string) Unique identifier for the preprocessing rule
- <code>mode</code> (string) Cleaning and segmentation mode: <code>automatic</code> / <code>custom</code>
- <code>rules</code> (object) Custom rules (in automatic mode, this field is empty)
- <code>pre_processing_rules</code> (array[object]) Preprocessing rules
- <code>id</code> (string) Unique identifier for the preprocessing rule
- enumerate
- remove_extra_spaces Replace consecutive spaces, newlines, tabs
- remove_urls_emails Delete URL, email address
- enabled (bool) Whether to select this rule or not. If no document ID is passed in, it represents the default value.
- segmentation (object) segmentation rules
- separator Custom segment identifier, currently only allows one delimiter to be set. Default is \n
- max_tokens Maximum length (token) defaults to 1000
- <code>remove_extra_spaces</code> Replace consecutive spaces, newlines, and tabs
- <code>remove_urls_emails</code> Delete URLs and email addresses
- <code>enabled</code> (bool) Whether this rule is enabled. If no document ID is passed, the default value is used.
- <code>segmentation</code> (object) Segmentation rules
- <code>separator</code> Custom segment delimiter; currently only one delimiter may be set. Defaults to \n
- <code>max_tokens</code> Maximum length in tokens; defaults to 1000
</Property>
</Properties>
</Col>
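Taken together, the `indexing_technique` and `process_rule` fields above describe a request payload like the following. This is an illustrative sketch only — all values here are examples, and the exact request shape should be checked against the full API reference:

```json
{
  "indexing_technique": "high_quality",
  "process_rule": {
    "mode": "custom",
    "rules": {
      "pre_processing_rules": [
        { "id": "remove_extra_spaces", "enabled": true },
        { "id": "remove_urls_emails", "enabled": false }
      ],
      "segmentation": {
        "separator": "\n",
        "max_tokens": 1000
      }
    }
  }
}
```

With `"mode": "automatic"`, the `rules` object would instead be left empty, as noted above.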
@@ -238,7 +238,8 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
<Row>
<Col>
This API creates a new document from an uploaded file in an existing dataset.
### Path Params
### Params
<Properties>
<Property name='dataset_id' type='string' key='dataset_id'>
Dataset ID
@@ -259,22 +260,22 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
</Property>
<Property name='indexing_technique' type='string' key='indexing_technique'>
Index mode
- high_quality High quality: embedding using embedding model, built as vector database index
- economy Economy: Build using inverted index of Keyword Table Index
- <code>high_quality</code> High quality: embed using an embedding model and build a vector database index
- <code>economy</code> Economy: build using an inverted keyword table index
</Property>
<Property name='process_rule' type='object' key='process_rule'>
Processing rules
- mode (string) Cleaning, segmentation mode, automatic / custom
- rules (text) Custom rules (in automatic mode, this field is empty)
- pre_processing_rules (array[object]) Preprocessing rules
- id (string) Unique identifier for the preprocessing rule
- <code>mode</code> (string) Cleaning and segmentation mode: <code>automatic</code> / <code>custom</code>
- <code>rules</code> (object) Custom rules (in automatic mode, this field is empty)
- <code>pre_processing_rules</code> (array[object]) Preprocessing rules
- <code>id</code> (string) Unique identifier for the preprocessing rule
- enumerate
- remove_extra_spaces Replace consecutive spaces, newlines, tabs
- remove_urls_emails Delete URL, email address
- enabled (bool) Whether to select this rule or not. If no document ID is passed in, it represents the default value.
- segmentation (object) segmentation rules
- separator Custom segment identifier, currently only allows one delimiter to be set. Default is \n
- max_tokens Maximum length (token) defaults to 1000
- <code>remove_extra_spaces</code> Replace consecutive spaces, newlines, and tabs
- <code>remove_urls_emails</code> Delete URLs and email addresses
- <code>enabled</code> (bool) Whether this rule is enabled. If no document ID is passed, the default value is used.
- <code>segmentation</code> (object) Segmentation rules
- <code>separator</code> Custom segment delimiter; currently only one delimiter may be set. Defaults to \n
- <code>max_tokens</code> Maximum length in tokens; defaults to 1000
</Property>
</Properties>
</Col>
@@ -338,7 +339,7 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
<Col>
This API updates an existing document in a dataset using text.
### Path Params
### Params
<Properties>
<Property name='dataset_id' type='string' key='dataset_id'>
Dataset ID
@@ -358,17 +359,17 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
</Property>
<Property name='process_rule' type='object' key='process_rule'>
Processing rules
- mode (string) Cleaning, segmentation mode, automatic / custom
- rules (text) Custom rules (in automatic mode, this field is empty)
- pre_processing_rules (array[object]) Preprocessing rules
- id (string) Unique identifier for the preprocessing rule
- <code>mode</code> (string) Cleaning and segmentation mode: <code>automatic</code> / <code>custom</code>
- <code>rules</code> (object) Custom rules (in automatic mode, this field is empty)
- <code>pre_processing_rules</code> (array[object]) Preprocessing rules
- <code>id</code> (string) Unique identifier for the preprocessing rule
- enumerate
- remove_extra_spaces Replace consecutive spaces, newlines, tabs
- remove_urls_emails Delete URL, email address
- enabled (bool) Whether to select this rule or not. If no document ID is passed in, it represents the default value.
- segmentation (object) segmentation rules
- separator Custom segment identifier, currently only allows one delimiter to be set. Default is \n
- max_tokens Maximum length (token) defaults to 1000
- <code>remove_extra_spaces</code> Replace consecutive spaces, newlines, and tabs
- <code>remove_urls_emails</code> Delete URLs and email addresses
- <code>enabled</code> (bool) Whether this rule is enabled. If no document ID is passed, the default value is used.
- <code>segmentation</code> (object) Segmentation rules
- <code>separator</code> Custom segment delimiter; currently only one delimiter may be set. Defaults to \n
- <code>max_tokens</code> Maximum length in tokens; defaults to 1000
</Property>
</Properties>
</Col>
@@ -435,7 +436,7 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
<Col>
This API updates an existing document in a dataset using an uploaded file.
### Path Params
### Params
<Properties>
<Property name='dataset_id' type='string' key='dataset_id'>
Dataset ID
@@ -455,17 +456,17 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
</Property>
<Property name='process_rule' type='object' key='process_rule'>
Processing rules
- mode (string) Cleaning, segmentation mode, automatic / custom
- rules (text) Custom rules (in automatic mode, this field is empty)
- pre_processing_rules (array[object]) Preprocessing rules
- id (string) Unique identifier for the preprocessing rule
- <code>mode</code> (string) Cleaning and segmentation mode: <code>automatic</code> / <code>custom</code>
- <code>rules</code> (object) Custom rules (in automatic mode, this field is empty)
- <code>pre_processing_rules</code> (array[object]) Preprocessing rules
- <code>id</code> (string) Unique identifier for the preprocessing rule
- enumerate
- remove_extra_spaces Replace consecutive spaces, newlines, tabs
- remove_urls_emails Delete URL, email address
- enabled (bool) Whether to select this rule or not. If no document ID is passed in, it represents the default value.
- segmentation (object) segmentation rules
- separator Custom segment identifier, currently only allows one delimiter to be set. Default is \n
- max_tokens Maximum length (token) defaults to 1000
- <code>remove_extra_spaces</code> Replace consecutive spaces, newlines, and tabs
- <code>remove_urls_emails</code> Delete URLs and email addresses
- <code>enabled</code> (bool) Whether this rule is enabled. If no document ID is passed, the default value is used.
- <code>segmentation</code> (object) Segmentation rules
- <code>separator</code> Custom segment delimiter; currently only one delimiter may be set. Defaults to \n
- <code>max_tokens</code> Maximum length in tokens; defaults to 1000
</Property>
</Properties>
</Col>
@@ -527,7 +528,7 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
/>
<Row>
<Col>
### Path Params
### Params
<Properties>
<Property name='dataset_id' type='string' key='dataset_id'>
Dataset ID
@@ -582,7 +583,7 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
/>
<Row>
<Col>
### Path Params
### Params
<Properties>
<Property name='dataset_id' type='string' key='dataset_id'>
Dataset ID
@@ -624,14 +625,14 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
/>
<Row>
<Col>
### Path Params
### Params
<Properties>
<Property name='dataset_id' type='string' key='dataset_id'>
Dataset ID
</Property>
</Properties>
### Path Query
### Query
<Properties>
<Property name='keyword' type='string' key='keyword'>
Search keyword; currently only searches document names (optional)
@@ -699,7 +700,7 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
/>
<Row>
<Col>
### Path Params
### Params
<Properties>
<Property name='dataset_id' type='string' key='dataset_id'>
Dataset ID
@@ -712,10 +713,9 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
### Request Body
<Properties>
<Property name='segments' type='object list' key='segments'>
segments (object list) Segmented content
- content (text) Text content/question content, required
- answer(text) Answer content, if the mode of the data set is qa mode, pass the value(optional)
- keywords(list) Keywords(optional)
- <code>content</code> (text) Text content / question content (required)
- <code>answer</code> (text) Answer content; pass a value if the dataset is in Q&A mode (optional)
- <code>keywords</code> (list) Keywords (optional)
</Property>
</Properties>
</Col>
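The `segments` fields above can be sketched as a sample request body. The values below are made up purely for illustration; as noted above, `answer` is only passed when the dataset is in Q&A mode:

```json
{
  "segments": [
    {
      "content": "What is the refund policy?",
      "answer": "Refunds are issued within 14 days of purchase.",
      "keywords": ["refund", "policy"]
    }
  ]
}
```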
@@ -778,14 +778,106 @@ import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from
---
Error message
- **document_indexing**: Document indexing failed
- **provider_not_initialize**: Embedding model is not configured
- **not_found**: Document does not exist
- **dataset_name_duplicate**: Duplicate dataset name
- **provider_quota_exceeded**: Model quota exceeds the limit
- **dataset_not_initialized**: The dataset has not been initialized yet
- **unsupported_file_type**: Unsupported file type.
- Currently only supports txt, markdown, md, pdf, html, htm, xlsx, docx, csv
- **too_many_files**: Too many files. Currently only a single file can be uploaded
- **file_too_large**: The file is too large; files below 15M are supported depending on your environment configuration
<Row>
<Col>
### Error message
<Properties>
<Property name='code' type='string' key='code'>
Error code
</Property>
</Properties>
<Properties>
<Property name='status' type='number' key='status'>
Error status
</Property>
</Properties>
<Properties>
<Property name='message' type='string' key='message'>
Error message
</Property>
</Properties>
</Col>
<Col>
<CodeGroup title="Example">
```json {{ title: 'Response' }}
{
"code": "no_file_uploaded",
"message": "Please upload your file.",
"status": 400
}
```
</CodeGroup>
</Col>
</Row>
<table className="max-w-auto border-collapse border border-slate-400" style={{ maxWidth: 'none', width: 'auto' }}>
<thead style={{ background: '#f9fafc' }}>
<tr>
<th className="p-2 border border-slate-300">code</th>
<th className="p-2 border border-slate-300">status</th>
<th className="p-2 border border-slate-300">message</th>
</tr>
</thead>
<tbody>
<tr>
<td className="p-2 border border-slate-300">no_file_uploaded</td>
<td className="p-2 border border-slate-300">400</td>
<td className="p-2 border border-slate-300">Please upload your file.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">too_many_files</td>
<td className="p-2 border border-slate-300">400</td>
<td className="p-2 border border-slate-300">Only one file is allowed.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">file_too_large</td>
<td className="p-2 border border-slate-300">413</td>
<td className="p-2 border border-slate-300">File size exceeded.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">unsupported_file_type</td>
<td className="p-2 border border-slate-300">415</td>
<td className="p-2 border border-slate-300">File type not allowed.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">high_quality_dataset_only</td>
<td className="p-2 border border-slate-300">400</td>
<td className="p-2 border border-slate-300">Current operation only supports 'high-quality' datasets.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">dataset_not_initialized</td>
<td className="p-2 border border-slate-300">400</td>
<td className="p-2 border border-slate-300">The dataset is still being initialized or indexing. Please wait a moment.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">archived_document_immutable</td>
<td className="p-2 border border-slate-300">403</td>
<td className="p-2 border border-slate-300">The archived document is not editable.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">dataset_name_duplicate</td>
<td className="p-2 border border-slate-300">409</td>
<td className="p-2 border border-slate-300">The dataset name already exists. Please modify your dataset name.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">invalid_action</td>
<td className="p-2 border border-slate-300">400</td>
<td className="p-2 border border-slate-300">Invalid action.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">document_already_finished</td>
<td className="p-2 border border-slate-300">400</td>
<td className="p-2 border border-slate-300">The document has been processed. Please refresh the page or go to the document details.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">document_indexing</td>
<td className="p-2 border border-slate-300">400</td>
<td className="p-2 border border-slate-300">The document is being processed and cannot be edited.</td>
</tr>
<tr>
<td className="p-2 border border-slate-300">invalid_metadata</td>
<td className="p-2 border border-slate-300">400</td>
<td className="p-2 border border-slate-300">The metadata content is incorrect. Please check and verify.</td>
</tr>
</tbody>
</table>
<div className="pb-4" />