Python SDK
Refuel provides a Python library for programmatically interacting with the platform. This guide describes how to install and use the Refuel SDK.
`refuel` is available as a library on PyPI. The code is open source and available on GitHub.
Getting Started
Installation
Install the SDK using a package installer such as pip:
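```bash
pip install refuel
```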
Installing and using the SDK requires Python 3.6+.
Initializing a client session
Set the environment variable REFUEL_API_KEY. The SDK will read it during initialization, and use this value when sending requests:
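For example (a minimal sketch; `refuel.init()` is assumed as the client entry point, consistent with the initialization options listed below):

```python
import refuel

# Assumes the key was exported in your shell first, e.g.
#   export REFUEL_API_KEY="<YOUR_API_KEY>"
# The SDK reads REFUEL_API_KEY from the environment during initialization.
refuel_client = refuel.init(project="my_project")
```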
Alternatively, you can supply the API key as a parameter during initialization, as shown below. In the cloud application, there is a top-level dropdown to select the project you’re currently working on, and this selection powers all the pages downstream (datasets, labeling tasks, etc.). The SDK lets you do the same by setting the project during initialization:
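For example (same assumed `refuel.init()` entry point):

```python
import refuel

# api_key and project are both required initialization options (see table below)
refuel_client = refuel.init(
    api_key="<YOUR_API_KEY>",
    project="my_project",
)
```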
Here’s the complete list of initialization options currently supported:
Option | Is Required | Default Value | Comments |
---|---|---|---|
api_key | Yes | None | Used to authenticate all requests to the API server |
project | Yes | None | The name of the project you plan to use for the current session |
timeout | No | 60 | Timeout in seconds |
max_retries | No | 3 | Max number of retries for failed requests |
max_workers | No | Num CPUs (os.cpu_count()) | Max number of concurrent requests to the API server |
Projects
These functions let you create a new project in your team’s Refuel account, or get information about projects already defined.
Get Projects
The get_projects API will return a list of all projects that belong to your team:
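```python
projects = refuel_client.get_projects()
```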
You can also retrieve information about a specific project, either by name or by UUID:
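For example (the `get_project` method name is an assumption):

```python
project = refuel_client.get_project("my_project")
```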
Create Project
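A minimal sketch (the `create_project` method name and parameter are assumptions based on this section's title):

```python
project = refuel_client.create_project(
    project="my_new_project",  # hypothetical parameter name
)
```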
Datasets
These functions let you upload/export a full dataset, or fetch rows and LLM labels within a dataset.
Get all current datasets
The get_datasets function will return a list of all datasets in the project:
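```python
datasets = refuel_client.get_datasets()
```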
Upload Dataset
This function lets you upload a local CSV file as a new dataset to Refuel.
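For example (a sketch; the `upload_dataset` name follows the section title, and the parameters are documented below):

```python
dataset = refuel_client.upload_dataset(
    file_path="path/to/data.csv",
    dataset_name="my_dataset",
    source="local",
    wait_for_completion=True,  # block until ingestion finishes
)
```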
Some details about the function parameters:
Option | Is Required | Default Value | Comments |
---|---|---|---|
file_path | Yes | - | Path to the data you wish to upload |
dataset_name | Yes | - | Unique name of the dataset being uploaded |
source | Yes | local | Where the file resides; one of local or uri |
wait_for_completion | No | False | Whether to poll for the completion of the dataset ingestion |
Note: When uploading a dataset from the `uri` source, the `file_path` should be publicly accessible (e.g. an S3 presigned URL) for Refuel to process it.
Download Dataset
This function creates a snapshot of your dataset and generates a presigned URL for secure download.
This is an asynchronous workflow: when the dataset is available for download, Refuel will send an email to the address you provide, so make sure it belongs to a valid user on your team. Depending on the size of the dataset, this export step can take a few minutes. Once the download link is created and emailed, it remains valid for 24 hours.
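A sketch (the `download_dataset` name follows the section title; the `email_address` parameter name is an assumption inferred from the description above):

```python
refuel_client.download_dataset(
    dataset="my_dataset",
    email_address="you@yourteam.com",  # the download link will be emailed here
)
```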
Adding items (rows) to an uploaded dataset
This function lets you append new rows to an existing dataset. Keep in mind that the dataset schema is decided during initial upload and is not updated here.
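A sketch (the `add_items` name and `items` parameter are assumptions):

```python
refuel_client.add_items(
    dataset="my_dataset",
    items=[
        # keys must match the schema defined at initial upload
        {"source": "twitter", "text": "New row to append"},
    ],
)
```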
Querying items (rows) in a dataset
In addition to downloading the entire dataset, you can also fetch a list of items (rows) from the dataset as follows:
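```python
df = refuel_client.get_items(dataset="my_dataset", max_items=20, offset=0)
```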
This function will return a pandas DataFrame. Some details about the function parameters:
Option | Is Required | Default Value | Comments |
---|---|---|---|
dataset | Yes | - | Name of the dataset you want to query and retrieve items (rows) from |
max_items | No | 20 | Max number of rows you want to fetch |
offset | No | 0 | If set to a positive number N, the first N rows are skipped and the API returns up to max_items rows after them. |
Querying items, along with labels from a labeling task
get_items() also allows you to provide an optional parameter - a labeling task. When provided, the function will also include the task results (LLM labels, confidence and manually confirmed labels, if any) for the returned items.
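For example (the `task` parameter name is an assumption):

```python
df = refuel_client.get_items(
    dataset="my_dataset",
    max_items=20,
    task="Sentiment Analysis",  # include LLM labels/confidence for this task
)
```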
Applying sort ordering when querying items
By default, the API will use Refuel’s sort order (by decreasing order of diversity). You can use the `order_by` param to sort by any other column in the dataset, or by the label or confidence score from a labeling task.
- Sort by dataset column
- Sort by label or confidence score from a labeling task. Note that this requires a task name and a subtask name to be specified; `field` can be either ‘label’ or ‘confidence’. Both cases are shown in the sketch below.
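A sketch of both cases (the dict shapes follow the key table below; the `task` parameter name is an assumption):

```python
order_by = [
    # sort by a dataset column
    {"field": "source", "direction": "ASC"},
    # break ties by confidence from a labeling task subtask
    {"field": "confidence", "direction": "DESC", "subtask": "predicted_sentiment"},
]
df = refuel_client.get_items(
    dataset="my_dataset",
    order_by=order_by,
    task="Sentiment Analysis",  # required when sorting by label/confidence
)
```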
You may have multiple dicts in the `order_by` list if you would like to sort by multiple columns (used in the case of ties). Some details about the keys for each dict in the `order_by` list:
Key | Is Required | Default Value | Description | Comments |
---|---|---|---|---|
field | Yes | - | The name of the column in the dataset to sort by | In addition to the columns in the dataset, the field can also be ‘label’ or ‘confidence’, if the task and subtask names are specified. |
direction | No | ASC | The direction that you would like to sort the specified column by | Should be ASC or DESC |
subtask | No | null | The name of the subtask for which you would like to sort by label or confidence | This should only be provided if the field is ‘label’ or ‘confidence’ and requires a task name to be specified in the function params. |
Applying filters when querying items
In addition to sorting options, you can also define filters to fetch only the items in the dataset that match specific criteria. The SDK supports the following types of filters:
- Metadata Filters: Filter based on the value of a specific column. For example, a filter such as “column = value” is defined as a Python dictionary with three keys:
  - field: the column on which you want to apply the filter
  - operator: the comparison operator
  - value: the value to compare against
Here’s an example:
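(a sketch; the `filters` parameter name is an assumption)

```python
# "column = value" expressed as a filter dictionary
filters = [{"field": "source", "operator": "=", "value": "twitter"}]
df = refuel_client.get_items(dataset="my_dataset", filters=filters)
```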
- LLM output value/confidence filter:
Filter items based on the LLM output value or confidence score from a specific task configured in Refuel.
Here’s a concrete example: let’s say you configured a classification task called `Sentiment Analysis` in your Refuel account, which has two subtasks (output fields): (i) `predicted_sentiment`, the predicted sentiment, and (ii) `explanation`, a one-sentence explanation of why the LLM output the predicted sentiment as Positive or Negative for the item.
Here are a few filters we can define for this task:
- “predicted sentiment is Positive”:
- “predicted sentiment confidence >= 80%”:
- “predicted sentiment does not agree with the ground truth label (available in a column called `ground_truth_sentiment`)”:
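Sketches of the three filters above (the exact dictionary keys for LLM output filters are assumptions; check the platform docs for the precise shape):

```python
# "predicted sentiment is Positive"
positive = {"field": "predicted_sentiment", "operator": "=", "value": "Positive"}

# "predicted sentiment confidence >= 80%"
confident = {"field": "confidence", "operator": ">=", "value": 0.8,
             "subtask": "predicted_sentiment"}

# "predicted sentiment does not agree with the ground truth label"
# (hypothetical key for comparing against another column)
disagrees = {"field": "predicted_sentiment", "operator": "<>",
             "value_column": "ground_truth_sentiment"}

df = refuel_client.get_items(
    dataset="my_dataset",
    task="Sentiment Analysis",
    filters=[positive],
)
```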
Here’s the complete list of filter operators that are currently supported:
Operator | Description |
---|---|
> | Greater than |
>= | Greater than or equal to |
= | Equals |
<> | Not equal to |
< | Less than |
<= | Less than or equal to |
IS NULL | True if field is undefined |
IS NOT NULL | True if field is defined |
LIKE | String matching: True if value is in field |
ILIKE | String matching (case insensitive): True if value is in field |
NOT LIKE | String does not match: True if value is not in field |
NOT ILIKE | String does not match (case insensitive): True if value is not in field |
Tasks
These functions let you retrieve information about labeling tasks defined within a project, and start and cancel a task run.
Define a new Task
You can create a new task programmatically within a given project using the `create_task` function:
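For example (a sketch assembled from the parameter tables below):

```python
task = refuel_client.create_task(
    task="Sentiment Analysis",
    dataset="my_dataset",
    context="You are an expert at analyzing the sentiment of customer reviews ...",
    fields=[
        {
            "name": "predicted_sentiment",
            "type": "classification",
            "guidelines": "Classify the sentiment of the review.",
            "labels": ["Positive", "Negative"],
            "input_columns": ["text"],
            "fallback_value": "Negative",
        }
    ],
    model="GPT-4 Turbo",
)
```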
Some details about the various parameters you see in the function signature above:
Parameter | Is Required | Default Value | Comments |
---|---|---|---|
task | Yes | None | Name of the new task you’re creating |
dataset | Yes | None | Dataset (in Refuel) for which you are defining this task |
context | Yes | None | Context is a high-level description of the problem domain and the dataset that the LLM will be working with. It typically starts with something like ‘You are an expert at …’ |
fields | Yes | None | This is a list of dictionaries. Each entry in this list defines an output field generated in the task. See below for details about the schema of each field |
model | No | team default | LLM that will be used for this task. If not specified, we will use the default LLM set for your team, e.g. GPT-4 Turbo |
Next, let’s take a look at the schema of each entry in the `fields` list above:
Parameter | Is Required | Default Value | Comments |
---|---|---|---|
name | Yes | None | Name of the output field, e.g. llm_predicted_sentiment |
type | Yes | None | Type of output field. This is one of: [classification , multilabel_classification , attribute_extraction , webpage_transform , web_search ] |
guidelines | Yes | None | Output guidelines for the LLM for this field. Note that if the field type is a web_search type, the guidelines will be simply the query template |
labels | Yes (for classification field types) | None | List of valid labels; only required for classification-type fields |
input_columns | Yes | None | Columns from the dataset to use as input when passing a “row” in the dataset to the LLM. |
ground_truth_column | No | None | A column in the dataset that contains ground truth value for this field, if one exists. Note this is an optional parameter. |
fallback_value | No | None | A fallback/default value that the LLM should generate for this field if a row cannot be processed successfully |
Finally, here is the list of LLMs currently supported (use the model name as the parameter value):
Provider | Name |
---|---|
OpenAI | GPT-4 Turbo |
OpenAI | GPT-4o |
OpenAI | GPT-4o mini |
OpenAI | GPT-4 |
OpenAI | GPT-3.5 Turbo |
Anthropic | Claude 3.5 (Sonnet) |
Anthropic | Claude 3 (Opus) |
Anthropic | Claude 3 (Haiku) |
Google | Gemini 1.5 (Pro) |
Mistral | Mistral Small |
Mistral | Mistral Large |
Refuel | Refuel LLM-2 |
Refuel | Refuel LLM-2-small |
Get Tasks
You can retrieve a list of all tasks within a given project as follows:
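(the `get_tasks` name follows this section's title)

```python
tasks = refuel_client.get_tasks()
```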
Start a Labeling Task Run
You can begin running a labeling task on a dataset with the following:
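A sketch (the `start_task_run` name is an assumption; `num_items` is documented below):

```python
refuel_client.start_task_run(
    task="Sentiment Analysis",
    dataset="my_dataset",
    num_items=100,
)
```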
This will kick off a bulk labeling run for the specified task and dataset, and label 100 items in the dataset. If the `num_items` parameter is not specified, it will label the entire dataset.
Cancel an ongoing labeling task run
You can also cancel an ongoing labeling task run as follows:
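(the `cancel_task_run` name is an assumption)

```python
refuel_client.cancel_task_run(task="Sentiment Analysis", dataset="my_dataset")
```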
Get Task run status/progress
To check on the status of an ongoing labeling task run, you can use the following function:
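(the `get_task_run` name is an assumption)

```python
task_run = refuel_client.get_task_run(task="Sentiment Analysis", dataset="my_dataset")
```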
The `task_run` object has the following schema:
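An illustrative shape only, based on the fields described below (the exact keys may differ):

```python
{
    "status": "active",
    "metrics": [
        {"name": "num_labeled", "value": 1250},
        {"name": "num_remaining", "value": 8750},
        {"name": "time_elapsed_seconds", "value": 320},
        {"name": "time_remaining_seconds", "value": 2240},
    ],
}
```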
`status` enum shows the current task run status. It can be one of the following values:
- `not_started`: the starting state, before the batch run has been kicked off
- `active`: a batch task run is ongoing
- `paused`: the batch run was paused before the full dataset was labeled
- `failed`: the batch run failed due to a platform error. This should ideally never happen
- `completed`: the batch run was completed successfully
`metrics` is a list containing all metrics for the current task run. Currently the platform supports the following metrics:
- `num_labeled`: number of rows from the dataset that have been labeled
- `num_remaining`: number of rows from the dataset that are remaining
- `time_elapsed_seconds`: time (in seconds) since the task run started. This is only populated when the task run is active (the metric is not valid when there is no active run).
- `time_remaining_seconds`: estimated time (in seconds) remaining for the task run to complete. This is only populated when the task run is active (the metric is not valid when there is no active run).
Applications
Refuel allows you to deploy a labeling task as an application. Applications allow you to label data synchronously on demand, primarily for online workloads.
Deploy labeling application
To deploy an existing task as a labeling application, you can use the following function:
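A sketch (the `deploy_task` method name is an assumption):

```python
application = refuel_client.deploy_task(task="Sentiment Analysis")  # hypothetical method name
```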
Get all labeling applications
To get all labeling applications that are currently deployed, use the following function:
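A sketch (the `get_applications` method name is an assumption):

```python
applications = refuel_client.get_applications()  # hypothetical method name
```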
Label using a deployed application
You can use the deployed application for online predictions as follows:
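A sketch (the `label` method and `application` parameter names are assumptions):

```python
response = refuel_client.label(
    application="application_name",
    inputs=[{"input_column": "value"}],  # one dict per item to label
)
```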
Each element in `inputs` is a dictionary, with keys as names of the input columns defined in the task. For example, let’s consider an application for sentiment classification called `my_sentiment_classifier`, with two input fields - `source` and `text`. You can use it as follows:
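(same assumed `label` signature as above)

```python
inputs = [
    {"source": "twitter", "text": "This is amazing!"},
    {"source": "yelp", "text": "Would not recommend to a friend."},
]
response = refuel_client.label(application="my_sentiment_classifier", inputs=inputs)
```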
`response` has the following schema:
- `refuel_output[i]` contains the output for `inputs[i]`
- `refuel_fields` is a list whose length is equal to the number of fields defined in the application. For example, let’s say `my_sentiment_classifier` has just one field, `sentiment`. In this case the output will be:
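An illustrative response shape only; the exact keys may differ:

```python
{
    "refuel_output": [
        {
            "refuel_uuid": "<uuid-for-inputs[0]>",
            "refuel_fields": [
                {"sentiment": {"label": "Positive", "confidence": 0.97}}
            ],
        }
    ]
}
```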
You can also set the optional `explain` parameter to `True` to get an explanation for why the provided label was returned. The explanation will be returned in the `explanation` field in the response, along with the `label` and `confidence`:
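```python
response = refuel_client.label(
    application="my_sentiment_classifier",
    inputs=inputs,
    explain=True,
)
```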
If you would only like to get an explanation for certain fields, you can optionally provide a list of field names in the `explain_fields` parameter for which you want an explanation returned. If `explain_fields` is provided, explanations will be returned regardless of whether `explain` is set to `True` or `False`. Here’s an example of how to get explanations for the `sentiment` field in the `my_sentiment_classifier` application:
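```python
response = refuel_client.label(
    application="my_sentiment_classifier",
    inputs=inputs,
    explain_fields=["sentiment"],
)
```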
You can also set the optional `telemetry` parameter to `True` to get additional info such as the model, provider, and number of tokens used (prompt, output, and total) in the request. The telemetry data will be returned in the `usage` field in the response. For example:
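```python
response = refuel_client.label(
    application="my_sentiment_classifier",
    inputs=inputs,
    telemetry=True,  # adds a `usage` field to the response
)
```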
Async Labeling
If you do not want to wait for the labeling to be completed, you can instead use the `alabel` method with the exact same parameters as `label`. This will submit the inputs for labeling with Refuel and return the `refuel_uuid` you can use to get the labeled item back.
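For example:

```python
response = refuel_client.alabel(application="my_sentiment_classifier", inputs=inputs)
```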
The output will be:
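An illustrative shape only (exact keys may differ):

```python
{"refuel_uuid": "<uuid>", "uri": "<url-to-poll-for-the-labeled-item>"}
```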
You can either use the `refuel_uuid` from the output to get the labeled item back using the `get_labeled_item` method:
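(the exact signature is an assumption)

```python
labeled_item = refuel_client.get_labeled_item(refuel_uuid="<uuid>")
```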
You can also directly call the returned uri to get the labeled item back.
Async labeling is also useful when the input data is large or the LLM generates a large amount of data, which can lead to timeouts in the application.
Share feedback for application outputs
The SDK allows users to log feedback for online predictions. When logging feedback, it is important to identify the input request for which you’re logging feedback, using the `refuel_uuid` from the response above:
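A sketch (the `feedback` method name and payload shape are assumptions):

```python
refuel_client.feedback(
    application="my_sentiment_classifier",
    refuel_uuid="<uuid>",
    label={"sentiment": "Positive"},  # hypothetical: the corrected/confirmed label
)
```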
Any row with logged feedback will appear with the verified check mark (“✓”) in the cloud application.
Finetuning
Refuel allows you to finetune a model on all of your human-reviewed data and, optionally, data labeled by an LLM. Finetuned models allow you to reduce labeling cost and latency while achieving similar, and in some cases better, performance than LLMs like GPT-4.
Starting a finetuning run
To start a model finetuning run, you can use the following function:
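A sketch (the `finetune_model` name and parameter names are assumptions; the supported base models and hyperparameters are listed below):

```python
finetuning_run = refuel_client.finetune_model(
    task="Sentiment Analysis",
    base_model="refuel-llm-v2-large",
    hyperparameters={"learning_rate": 1e-4, "lora_r": 16},
)
```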
Supported base models: `refuel-llm-v2-large`, `refuel-llm-v2-small`
Supported hyperparameters: `lora_r`, `lora_alpha`, `lora_dropout`, `weight_decay`, `learning_rate`, `cosine_min_lr_ratio`
Get all finetuned models
You can retrieve a list of all models within a given project as follows:
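(the `get_finetuned_models` name is an assumption)

```python
models = refuel_client.get_finetuned_models()
```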
Cancel a finetuning run
You can also cancel an ongoing finetuning run as follows:
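(the `cancel_finetuning` name is an assumption)

```python
refuel_client.cancel_finetuning(model="my_finetuned_model")
```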