Welcome to the Daios Ethics Engine

Overview

We create small, curated datasets that encode specific values, then fine-tune LLMs on those values using qLoRA. Our methodology focuses on improving the quality of the training data rather than incrementally improving optimization techniques (e.g., RLHF, DPO, IPO, or Constitutional AI). We currently support Llama-2, Mixtral, and Falcon 7B, and we will continue to add support for top-performing models as they are released.

This data-centric approach is championed by Andrew Ng, who argues that the real differentiator between AI businesses is their training data: the data has more influence over a model's behavior than changes to the model itself.
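
To make the recipe concrete, the sketch below shows the general technique (a small curated dataset plus qLoRA fine-tuning) using the Hugging Face transformers, peft, and datasets libraries. It is an illustration only, not Daios's training pipeline; the base model ID, dataset file, and hyperparameters are placeholders.

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# load the frozen base model in 4-bit precision (the "q" in qLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# attach small trainable LoRA adapters; only these weights are updated
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# a small, curated dataset that encodes the target values (placeholder file)
dataset = load_dataset("json", data_files="curated_values.jsonl", split="train")

# from here, train with any standard causal-LM trainer
# (e.g. transformers.Trainer or trl's SFTTrainer)

Because the base weights stay frozen and quantized, a run like this typically fits on a single GPU for 7B-class models, which keeps iteration on small values datasets cheap.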

Features

Currently supported:

Future features:

Getting Started

  1. Get an access token
  2. Install the daios package:

     pip install daios-sdk

  3. Use the package:

     from daios import Daios

     # create a client
     client = Daios(token="YOUR_TOKEN", model_id="courage")

     query = ("Write a brief Slack message to my boss telling him "
              "that my coworker was unnecessarily chastised.")
     response = client.completion(query, stream=True)  # async by default

     for chunk in response:
         print(chunk.decode(), end="")

Daios is currently Python-only; more integrations are on the way.

How Does It Work?

We give you an API key that lets you access all of the AI models we have fine-tuned. In your code, you specify which values you want, and those values are delivered as a LoRA adapter. If you have already done some fine-tuning of your own, our adapter stacks on top of any existing LoRA adapters.
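
For a rough picture of what adapter stacking looks like under the hood, the sketch below uses the Hugging Face peft library to attach two LoRA adapters to one base model and combine them. The model and adapter IDs are hypothetical placeholders, and this shows the general mechanism rather than Daios's serving code.

from transformers import AutoModelForCausalLM
from peft import PeftModel

# hypothetical base model and adapter IDs, for illustration only
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# attach your own previously fine-tuned LoRA adapter
model = PeftModel.from_pretrained(
    base, "your-org/your-lora-adapter", adapter_name="domain"
)

# load a values adapter alongside it
model.load_adapter("daios/courage-adapter", adapter_name="courage")

# combine the two into a single active adapter
model.add_weighted_adapter(
    adapters=["domain", "courage"],
    weights=[1.0, 1.0],
    adapter_name="domain_plus_courage",
    combination_type="linear",
)
model.set_adapter("domain_plus_courage")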