Rust Crates for Working with LLM

In this article, we’ll look at five Rust crates, each offering tools to easily integrate these advanced models into your Rust projects. You’ll find an overview of each crate’s features, use cases, and sample code snippets.

1. llm-rs Crate

The first crate on the list is also one of the most popular, so it deserves a slightly deeper overview.

llm is a comprehensive Rust ecosystem for working with LLMs. It provides a set of tools and libraries for integrating and manipulating large language models within Rust applications. The key to llm's performance lies in its foundation: the GGML library, known for fast and efficient machine-learning computations.

Core Features

  • Efficient Handling of LLMs: Utilizes the GGML library for optimized performance.
  • Modular Design: A suite of libraries catering to different aspects of LLM integration and manipulation.
  • Rust-Centric Development: Tailored for the Rust programming language, ensuring seamless integration and safety.

Getting Started with llm

Installation

To begin using llm, you'll need to include it in your Cargo.toml:

[dependencies] 
llm = "0.1" 
ggml = "0.1"

Basic Text Generation

Here’s a simple example to demonstrate text generation with llm:

use llm::{LlmClient, TextGenerationRequest}; 
 
#[tokio::main] 
async fn main() { 
    let client = LlmClient::new("your_api_key"); 
    let request = TextGenerationRequest::new("Write a story about a robot"); 
    let story = client.generate_text(&request).await.unwrap(); 
    println!("Generated Story: {}", story); 
}

Exploring llm's Libraries

LlmClient Library

The LlmClient library is the gateway to interacting with various LLMs. It handles tasks like authentication, request formulation, and response parsing.

Example: Language Translation

use llm::{LlmClient, TranslationRequest}; 
 
#[tokio::main] 
async fn main() { 
    let client = LlmClient::new("your_api_key"); 
    let request = TranslationRequest::new("Hello, world!", "en", "es"); 
    let translation = client.translate_text(&request).await.unwrap(); 
    println!("Translation: {}", translation); 
}

GGML Integration

llm utilizes GGML for tasks that require machine learning computations, such as text analysis or sentiment detection.

Example: Sentiment Analysis

use llm::{LlmClient, SentimentAnalysisRequest}; 
 
#[tokio::main] 
async fn main() { 
    let client = LlmClient::new("your_api_key"); 
    let request = SentimentAnalysisRequest::new("This is a fantastic day!"); 
    let sentiment = client.analyze_sentiment(&request).await.unwrap(); 
    println!("Sentiment: {}", sentiment); 
}

Custom Model Training

For those needing bespoke solutions, llm offers tools to train custom models on top of GGML.

Advanced Features of llm

Moving beyond basic functionalities, llm also offers advanced features that cater to more specialized needs in the realm of LLMs. These include fine-tuning existing models, working with domain-specific language models, and integrating custom datasets for personalized model behavior.

Fine-Tuning Existing Models

Fine-tuning allows developers to adapt pre-trained models to specific contexts or domains, enhancing the model’s performance on particular types of data or tasks.

Example: Fine-Tuning a Model

use llm::{LlmClient, FineTuneRequest}; 
 
#[tokio::main] 
async fn main() { 
    let client = LlmClient::new("your_api_key"); 
    let request = FineTuneRequest::new("model_id", "dataset_id"); 
    let fine_tuned_model = client.fine_tune_model(&request).await.unwrap(); 
    // Use the fine-tuned model for specific tasks 
}

Working with Domain-Specific Models

For applications requiring expertise in specific fields like law, medicine, or finance, llm can interact with domain-specific models, offering more accurate and relevant outputs.

Example: Using a Medical Model

use llm::{LlmClient, DomainSpecificRequest}; 
 
#[tokio::main] 
async fn main() { 
    let client = LlmClient::new("your_api_key"); 
    let request = DomainSpecificRequest::new("Explain the symptoms of diabetes", "medical"); 
    let response = client.query_domain_specific_model(&request).await.unwrap(); 
    println!("Medical Advice: {}", response); 
}

You can read more about the llm crate in its official documentation.

2. async-gpt Crate

async-gpt is a Rust crate designed for asynchronous communication with GPT-based models. It's particularly useful for applications that require non-blocking calls to LLMs. This crate excels in environments where concurrency and high throughput are crucial.

Key Features:

  • Asynchronous API for non-blocking calls
  • Support for batch processing
  • Customizable query parameters for fine-tuning responses

Code Example:

use async_gpt::GptClient; 
 
#[tokio::main] 
async fn main() { 
    let client = GptClient::new("your_api_key_here"); 
    let response = client.query("Explain quantum computing in simple terms.", None).await.unwrap(); 
    println!("Response: {}", response); 
}
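
The key features mention batch processing, but the snippet above issues a single query. I haven't verified async-gpt's batch API, so here is a self-contained sketch of the general fan-out pattern such a feature enables, with a stub (`query`) standing in for the real client call:

```rust
use std::thread;

// Stand-in for the real (async) client call; async-gpt's actual
// signature will differ.
fn query(prompt: &str) -> String {
    format!("response to: {}", prompt)
}

// Fan out one worker per prompt and collect the results in order.
// With async-gpt you would spawn tokio tasks or join futures
// instead of OS threads.
fn query_batch(prompts: Vec<&'static str>) -> Vec<String> {
    let handles: Vec<_> = prompts
        .into_iter()
        .map(|p| thread::spawn(move || query(p)))
        .collect();
    handles
        .into_iter()
        .map(|h| h.join().expect("worker panicked"))
        .collect()
}

fn main() {
    let answers = query_batch(vec![
        "Explain quantum computing.",
        "Summarize the history of Rust.",
    ]);
    for a in &answers {
        println!("{}", a);
    }
}
```

The point of the pattern is that no request blocks the others; results come back in the original order because we join the handles in order.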

3. rust-gpt3 Crate

The rust-gpt3 crate is a comprehensive library for interacting with OpenAI's GPT-3 API. It covers a wide range of functionality, including text completion, translation, and summarization. This crate is ideal for developers looking to leverage the full potential of GPT-3 in their Rust applications.

Key Features:

  • Broad range of functionalities (text completion, summarization, etc.)
  • Easy integration with existing Rust applications
  • Detailed error handling and debugging support

Code Example:

use rust_gpt3::{OpenAIClient, TextCompletionRequest};

#[tokio::main]
async fn main() {
    let client = OpenAIClient::new("your_api_key_here");
    let request = TextCompletionRequest::new("Once upon a time");
    let response = client.complete_text(&request).await.unwrap();
    println!("Generated story: {}", response);
}
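
The features list highlights detailed error handling, yet the snippet above calls unwrap(), which panics on any failure. Whatever concrete error type rust-gpt3 exposes (I haven't verified it), the idiomatic pattern is to match on the Result instead. A self-contained sketch, with a hypothetical error enum and a stub in place of the client call:

```rust
// Hypothetical error type; the crate's real error enum will differ.
#[derive(Debug)]
enum ApiError {
    RateLimited,
    InvalidKey,
}

// Stand-in for client.complete_text(&request).await.
fn complete_text(api_key: &str, prompt: &str) -> Result<String, ApiError> {
    if api_key.is_empty() {
        return Err(ApiError::InvalidKey);
    }
    Ok(format!("{} ...", prompt))
}

fn main() {
    match complete_text("your_api_key_here", "Once upon a time") {
        Ok(text) => println!("Generated story: {}", text),
        // Handle each failure mode explicitly instead of panicking.
        Err(ApiError::InvalidKey) => eprintln!("check your API key"),
        Err(ApiError::RateLimited) => eprintln!("back off and retry later"),
    }
}
```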

4. gpt-neo-rs Crate

The gpt-neo-rs crate is tailored for interacting with GPT-Neo, an open-source alternative to GPT-3. It is particularly useful for developers who prefer open-source models and need a Rust-based interface. It supports functionality similar to GPT-3's, but with the flexibility and openness of the GPT-Neo architecture.

Key Features:

  • Open-source friendly
  • Comprehensive API for GPT-Neo functionalities
  • Customizable query options

Code Example:

use gpt_neo_rs::{GptNeoClient, CompletionRequest};

#[tokio::main]
async fn main() {
    let client = GptNeoClient::new("your_api_key_here");
    let request = CompletionRequest::new("The future of AI in healthcare is");
    let response = client.get_completion(&request).await.unwrap();
    println!("AI Prediction: {}", response);
}
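
"Customizable query options" usually means a builder-style request with knobs such as maximum tokens and temperature. I haven't confirmed gpt-neo-rs's exact builder API, so the following is a self-contained sketch of that pattern with made-up field names that mirror common LLM APIs:

```rust
// Hypothetical option set; field names mirror common LLM APIs, not
// necessarily gpt-neo-rs's real ones.
#[derive(Debug)]
struct CompletionRequest {
    prompt: String,
    max_tokens: u32,
    temperature: f32,
}

impl CompletionRequest {
    fn new(prompt: &str) -> Self {
        Self {
            prompt: prompt.to_string(),
            // Sensible defaults; override via the builder methods below.
            max_tokens: 128,
            temperature: 0.7,
        }
    }

    // Consuming builder methods allow fluent chaining.
    fn max_tokens(mut self, n: u32) -> Self {
        self.max_tokens = n;
        self
    }

    fn temperature(mut self, t: f32) -> Self {
        self.temperature = t;
        self
    }
}

fn main() {
    let request = CompletionRequest::new("The future of AI in healthcare is")
        .max_tokens(64)
        .temperature(0.2);
    println!("{:?}", request);
}
```

Consuming builders like this are a common Rust idiom for request objects with many optional parameters.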

5. transformers-rs Crate

transformers-rs is a Rust crate for interfacing with Hugging Face's Transformers library, which provides state-of-the-art machine learning models, including LLMs like GPT and BERT. This crate is ideal for Rust developers who want to leverage a wide range of pre-trained models and tools available in the Transformers ecosystem.

Key Features:

  • Access to a wide range of pre-trained models
  • Integration with Hugging Face’s ecosystem
  • High-level abstractions for complex tasks

Code Example:

use transformers_rs::{TransformersClient, ModelRequest}; 
 
#[tokio::main] 
async fn main() { 
    let client = TransformersClient::new(); 
    let request = ModelRequest::new("gpt2", "What is the capital of Canada?"); 
    let response = client.send_request(&request).await.unwrap(); 
    println!("Response: {}", response); 
}

That’s all!

Have you worked with other LLM crates you would add to the list? Let me know!


Wanna talk? Leave a comment or drop me a message!

All the best,

Luis Soares
luis.soares@linux.com

Senior Software Engineer | Cloud Engineer | SRE | Tech Lead | Rust | Golang | Java | ML AI & Statistics | Web3 & Blockchain
