Building a Function-as-a-Service (FaaS) in Rust

Welcome to this tutorial on building a Function as a Service (FaaS) system using Rust.

If you’ve been exploring cloud computing, you’ve likely come across FaaS platforms like AWS Lambda or Google Cloud Functions. These platforms allow developers to deploy single functions that are run in response to events without maintaining the underlying infrastructure.

In this article, we’ll be creating our own simple FaaS platform using Rust, a language known for its performance and safety guarantees. This tutorial aims to provide a clear and concise walkthrough, offering a practical way to understand FaaS and how to implement it.

Let’s dive in!

Understanding Function As A Service (FaaS)

Before diving into the nitty-gritty of our Rust implementation, it’s crucial to have a solid grasp of the FaaS concept. Let’s take a moment to break it down.

What is FaaS?

Function as a Service, commonly abbreviated as FaaS, represents a cloud computing service model that allows developers to execute and manage code in response to particular events without the complexity of building and maintaining the underlying infrastructure. Think of it as a way to run your back-end code without dealing with servers.

Key Concepts:

  1. Event-driven: FaaS functions are typically executed in response to events. These events could range from HTTP requests to changes in database entries or even file uploads.
  2. Stateless: Each function invocation is independent. This means the system does not retain any persistent local state between function executions. Any external state must be stored in a database or other external storage system.
  3. Short-lived: FaaS functions are designed to execute quickly. Long-running operations might be better suited for different architectures.
  4. Automatic Scaling: One of the most significant advantages of FaaS is its ability to scale automatically. As the number of requests increases, the platform can handle them by instantaneously launching more instances of the function. This is managed without any intervention from the developer.
  5. Billing Model: With FaaS platforms, you’re typically billed for the actual amount of resources consumed during the execution of your functions rather than pre-allocated server sizes or uptime. This can lead to cost savings since you only pay for what you use.

With the basics out of the way, let’s get our hands dirty and start building!

Essential Crates for Our FaaS Project

In our journey to implement a Function as a Service platform using Rust, we’ll be leveraging several powerful crates from the Rust ecosystem:

  1. hyper: A fast and efficient HTTP client and server framework. In our project, hyper forms the backbone of our server infrastructure, allowing us to handle incoming requests efficiently.
  2. tokio: An asynchronous runtime tailored for Rust. It’s instrumental in managing I/O-bound tasks and other asynchronous operations. For our FaaS platform, tokio serves as our primary runtime, ensuring that file I/O and network requests are executed smoothly.
  3. reqwest: An elegant high-level HTTP client built atop hyper. While hyper provides the building blocks for HTTP communication, reqwest offers a more ergonomic interface. In our client application, it streamlines the process of sending HTTP requests.
  4. libloading: This crate offers safe and performant dynamic linking capabilities for Rust. The core functionality of our FaaS platform, which involves dynamically loading and invoking functions from shared libraries, hinges on libloading.
  5. multipart: Tailored for parsing and generating multipart (notably multipart/form-data) requests and responses. Robust file-upload handling is exactly its job; in this walkthrough we parse the upload by hand to keep the code transparent, but the crate is the natural drop-in replacement for that hand-rolled parsing.
  6. anyhow: Handling errors in Rust becomes a breeze with anyhow. In our application, it plays a pivotal role in managing and presenting errors in a way that's user-friendly.

Server Setup and Logic

Setup

Before diving into the code, ensure you have the latest Rust and Cargo installed. You can do this via rustup. Once ready, create a new project:

cargo new rust_faas_server 
cd rust_faas_server

Next, add the required dependencies to your Cargo.toml:

[dependencies]
hyper = { version = "0.14", features = ["full"] }
tokio = { version = "1", features = ["full"] }
reqwest = "0.11"
libloading = "0.7"
multipart = "0.17"
anyhow = "1.0"

Run cargo build to ensure everything is set up correctly.

Server Logic

The heart of our server is the hyper crate. We'll be crafting endpoints to upload and invoke functions.

1. Starting the Server

To kick things off, we’ll create a simple HTTP server using hyper. The server will listen for incoming requests, routing them to the appropriate handler based on their path.

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use std::convert::Infallible;

#[tokio::main]
async fn main() {
    // Every connection gets a service that forwards requests to our router.
    let make_svc = make_service_fn(|_conn| {
        async { Ok::<_, Infallible>(service_fn(route_request)) }
    });
    let addr = ([127, 0, 0, 1], 8080).into();
    let server = Server::bind(&addr).serve(make_svc);
    println!("Listening on http://{}", addr);
    server.await.expect("Server failed");
}

This code sets up a basic server that listens on http://127.0.0.1:8080/. All incoming requests are routed to the route_request function.


2. Routing the Requests

We aim to support two main operations: uploading a function and invoking it. We can determine which operation to perform based on the HTTP method and path:

async fn route_request(req: Request<Body>) -> Result<Response<Body>, Infallible> {
    match (req.method(), req.uri().path()) {
        (&hyper::Method::POST, "/upload") => handle_upload(req).await,
        (&hyper::Method::GET, path) if path.starts_with("/invoke/") => handle_invoke(req).await,
        // Anything else is a genuine 404, not a 200 with a "Not Found" body.
        _ => Ok(Response::builder()
            .status(hyper::StatusCode::NOT_FOUND)
            .body(Body::from("Not Found"))
            .unwrap()),
    }
}

The routing logic is straightforward: POST requests to /upload are directed to the upload handler, while GET requests to paths starting with /invoke/ are directed to the invoke handler.
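
If you want a quick sanity check of the routing once both handlers below are in place, a small test like this can live in the same file. It is only a sketch and assumes the 404 fallback shown above:

#[cfg(test)]
mod tests {
    use super::*;
    use hyper::StatusCode;

    // Unknown paths should fall through to the 404 branch of route_request.
    #[tokio::test]
    async fn unknown_paths_return_not_found() {
        let req = Request::builder()
            .method(hyper::Method::GET)
            .uri("/does-not-exist")
            .body(Body::empty())
            .unwrap();
        let res = route_request(req).await.unwrap();
        assert_eq!(res.status(), StatusCode::NOT_FOUND);
    }
}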

3. Handling Uploads

The upload handler’s responsibility is to save the uploaded shared library and ensure it’s ready for invocation:

use hyper::header::CONTENT_TYPE;

async fn handle_upload(req: Request<Body>) -> Result<Response<Body>, Infallible> {
    let headers = req.headers().clone();

    if let Some(content_type) = headers.get(CONTENT_TYPE) {
        if let Ok(ct) = std::str::from_utf8(content_type.as_bytes()) {
            if ct.starts_with("multipart/form-data") {
                let boundary = ct.split("boundary=").collect::<Vec<_>>().get(1).cloned().unwrap_or_default();
                let body_bytes = hyper::body::to_bytes(req.into_body()).await.unwrap();

                // Make sure the target directory exists before writing anything.
                tokio::fs::create_dir_all("./uploads").await.unwrap();

                let parts = body_bytes.split(|&b| b == b'\n')
                    .filter(|line| !line.starts_with(b"--") && !line.is_empty() && line != &boundary.as_bytes())
                    .collect::<Vec<_>>();

                // Rudimentary parsing, assuming every two slices are headers and content
                for i in (0..parts.len()).step_by(2) {
                    let headers = parts.get(i);
                    let content = parts.get(i + 1);

                    if let (Some(headers), Some(content)) = (headers, content) {
                        if headers.starts_with(b"Content-Disposition") {
                            // Extract the filename and write the content to a file.
                            // This is a simple example; a real-world scenario needs
                            // more comprehensive (and binary-safe) multipart parsing.
                            if let Some(start) = headers.windows(b"filename=\"".len()).position(|w| w == b"filename=\"") {
                                let filename_start = start + b"filename=\"".len();
                                let filename_end = headers[filename_start..].iter().position(|&b| b == b'"').unwrap_or(0) + filename_start;
                                let filename = &headers[filename_start..filename_end];
                                let file_path = format!("./uploads/{}", std::str::from_utf8(filename).unwrap());

                                tokio::fs::write(&file_path, content).await.unwrap();
                            }
                        }
                    }
                }

                return Ok(Response::new(Body::from("File uploaded successfully")));
            }
        }
    }
    Ok(Response::new(Body::from("Invalid request")))
}

Note that the parsing above is deliberately rudimentary: it splits the body on newline bytes, so it only survives simple payloads and can mangle binary libraries that contain newline bytes. For anything real, swap in the multipart crate to parse multipart/form-data properly. Also remember to validate what you accept, since loading untrusted shared libraries is a serious security risk.
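
As one illustrative guard (not part of the handler above), you could sanitize the client-supplied filename before writing it, so a name like ../../something cannot escape the uploads directory. The sanitize_upload_name helper below is hypothetical, just a sketch of the idea:

use std::path::Path;

// Hypothetical helper: keep only the final path component of the client-supplied
// name and insist on a `.so` extension before accepting the upload.
fn sanitize_upload_name(raw: &str) -> Option<String> {
    let name = Path::new(raw).file_name()?.to_str()?;
    if Path::new(name).extension()? == "so" {
        Some(format!("./uploads/{}", name))
    } else {
        None
    }
}

You would call it on the extracted filename and reject the upload whenever it returns None.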

4. Handling Invocations

Once a function has been uploaded, it can be invoked using its name:

use hyper::StatusCode;
use libloading::{Library, Symbol};

// Our sample functions take two i32 arguments and return an i32
// (matching the `add` library we build later in this article).
type Func = unsafe extern "C" fn(i32, i32) -> i32;

async fn handle_invoke(req: Request<Body>) -> Result<Response<Body>, Infallible> {
    // Parse the path: /invoke/<function_name>/<args>
    let path = req.uri().path();
    let parts: Vec<&str> = path.split('/').collect();
    if parts.len() < 3 {
        return Ok(Response::builder().status(StatusCode::BAD_REQUEST).body(Body::from("Invalid path")).unwrap());
    }
    let function_name = parts[2];

    // Parse the optional comma-separated arguments, e.g. "5,10".
    let args: Vec<i32> = parts
        .get(3)
        .map(|s| s.split(',').filter_map(|n| n.parse().ok()).collect())
        .unwrap_or_default();
    let x = args.get(0).copied().unwrap_or(0);
    let y = args.get(1).copied().unwrap_or(0);

    // Construct the path to the uploaded library
    let lib_path = format!("uploads/{}.so", function_name); // Assuming a Unix-like system; adjust extension if necessary

    if !std::path::Path::new(&lib_path).exists() {
        return Ok(Response::builder().status(StatusCode::NOT_FOUND).body(Body::from("Library not found")).unwrap());
    }

    // Load the library
    let lib = unsafe { Library::new(&lib_path) }.expect("Failed to load library");

    unsafe {
        // Load the function symbol (named after the function itself) from the library
        let func: Symbol<Func> = lib.get(function_name.as_bytes()).expect("Failed to get symbol");

        // Call the function with the parsed arguments
        let result = func(x, y);

        Ok(Response::new(Body::from(format!("Function returned: {}", result))))
    }
}

With the libloading crate, you can dynamically load functions from shared libraries and invoke them, returning the results to the client.
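
One refinement worth considering (not implemented above, just a sketch): the handler reopens the .so on every request. You could cache loaded libraries in a process-wide map so each one is opened once; symbol lookup would then happen against the cached Library, and a fuller version would also have to manage how long the borrowed symbols live. The load_cached helper below is hypothetical and assumes Rust 1.70+ for std::sync::OnceLock:

use libloading::Library;
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// Hypothetical cache: open each shared library at most once per process.
static LIBRARIES: OnceLock<Mutex<HashMap<String, Library>>> = OnceLock::new();

fn load_cached(lib_path: &str) -> Result<(), libloading::Error> {
    let cache = LIBRARIES.get_or_init(|| Mutex::new(HashMap::new()));
    let mut cache = cache.lock().unwrap();
    if !cache.contains_key(lib_path) {
        // Opening a library is unsafe because arbitrary initialization code may run.
        let lib = unsafe { Library::new(lib_path)? };
        cache.insert(lib_path.to_string(), lib);
    }
    Ok(())
}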

We’ve laid the foundation for our FaaS server using Rust. With hyper handling HTTP and libloading handling dynamic loading, we can accept uploaded functions and invoke them on demand, keeping in mind the parsing and security caveats above. In the next section, we'll focus on the client side, which will interact with our server, making it a complete FaaS ecosystem.

Building a FaaS CLI

Now that we have our FaaS server set up, we need a way to interact with it — a client tool to deploy and invoke functions. Rust offers excellent capabilities for CLI tool development thanks to its lightweight binaries and performance. In this section, we’ll walk through the development of a simple command-line interface (CLI) for deploying and invoking serverless functions on our FaaS platform.

Setup

Begin by creating a new Rust project specifically for our client:

cargo new rust_faas_client 
cd rust_faas_client

Next, update the Cargo.toml with necessary dependencies:

[dependencies] 
reqwest = { version = "0.11", features = ["multipart"] }
tokio = { version = "1", features = ["full"] } 
anyhow = "1.0"

Remember, reqwest helps with making HTTP requests, tokio is for asynchronous programming, and anyhow will assist with error handling.
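
To see what anyhow buys us in practice, here's a tiny sketch (read_function_bytes is just an illustrative helper, not part of the CLI): errors bubble up with ? and carry a human-readable message, which is exactly what the CLI prints when something goes wrong.

use anyhow::{Context, Result};

// Illustrative only: an I/O failure here surfaces as
// "could not read function library at <path>: <underlying cause>".
fn read_function_bytes(path: &str) -> Result<Vec<u8>> {
    std::fs::read(path)
        .with_context(|| format!("could not read function library at {}", path))
}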

CLI Logic

Our client should support two primary operations: upload and invoke. Let's build a command-line interface around these actions.

1. Parsing Command Line Arguments

Begin by structuring the expected input:

use std::env;

#[tokio::main]
async fn main() {
    let args: Vec<String> = env::args().collect();
    if args.len() < 3 {
        println!("Usage: cli <upload/invoke> <path/name> [arg]");
        return;
    }
    let action = &args[1];
    match action.as_str() {
        "upload" => {
            let path = &args[2];
            upload_function(path).await.unwrap_or_else(|e| println!("Error: {}", e));
        },
        "invoke" => {
            let name = &args[2];
            let arg = args.get(3).cloned().unwrap_or_default();
            invoke_function(name, &arg).await.unwrap_or_else(|e| println!("Error: {}", e));
        },
        _ => println!("Invalid action. Use 'upload' or 'invoke'.")
    }
}

The client accepts commands in the form cli upload <path_to_function> and cli invoke <function_name> [arg].

2. Uploading a Function

To upload a function, send it as a multipart request to our server:

use reqwest::multipart;

async fn upload_function(path: &str) -> Result<(), anyhow::Error> {
    // Read the library file ourselves and build the multipart part by hand,
    // which works with reqwest's async multipart API.
    let bytes = tokio::fs::read(path).await?;
    let file_name = std::path::Path::new(path)
        .file_name()
        .and_then(|n| n.to_str())
        .unwrap_or("function.so")
        .to_string();
    let form = multipart::Form::new()
        .part("file", multipart::Part::bytes(bytes).file_name(file_name));

    let client = reqwest::Client::new();
    client.post("http://127.0.0.1:8080/upload")
        .multipart(form)
        .send()
        .await?;
    println!("Function uploaded successfully!");
    Ok(())
}

Here, the client sends a POST request to /upload on our server with the function file attached.
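
One refinement you might bolt on (it isn't part of the code above): reqwest only returns an error for transport failures, so a 4xx or 5xx reply would still print the success message. A small hypothetical helper like check_status turns non-success statuses into errors you can surface with anyhow:

use anyhow::Context;

// Hypothetical helper: convert a non-2xx HTTP status into an error instead of
// silently treating it as success; reqwest's error_for_status does the heavy lifting.
fn check_status(response: reqwest::Response) -> anyhow::Result<reqwest::Response> {
    response
        .error_for_status()
        .context("the FaaS server rejected the request")
}

You would call it on the response returned by send() before printing the confirmation.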

3. Invoking a Function

To invoke a function, simply send a GET request to the server:

async fn invoke_function(name: &str, arg: &str) -> Result<(), anyhow::Error> {
    let client = reqwest::Client::new();
    let response = client.get(&format!("http://127.0.0.1:8080/invoke/{}/{}", name, arg))
        .send()
        .await?;

    let result = response.text().await?;
    println!("Function response: {}", result);
    Ok(())
}

This client sends a GET request to /invoke/<function_name>/<arg> on our server, then prints the function's output.

Our FaaS CLI is a simple, yet effective, interface for uploading and invoking functions on our FaaS platform. With just a couple of commands, users can deploy and run serverless functions, harnessing the power of Rust’s performance and safety features. Combined with the server logic from the previous section, we’ve crafted a full-fledged FaaS ecosystem in Rust, ready for expansion and further enhancement.

Crafting a Sample Function: The add Library in Rust

Having developed the FaaS server and the CLI to interact with it, it’s time to create a function to deploy and test our system. We’ll create a simple Rust library that adds two numbers, then deploy and invoke it using our new tools.

1. Creating the add Library

Begin by setting up a new Rust library:

cargo new --lib add_lib 
cd add_lib

Update the main library file, src/lib.rs, to contain our addition function:

#[no_mangle] 
pub extern "C" fn add(x: i32, y: i32) -> i32 { 
    x + y 
}

The #[no_mangle] attribute ensures that the Rust compiler doesn't mangle the function name, making it easier to load dynamically. The extern "C" designation ensures that the function uses the C calling convention, which is essential for dynamic loading.
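
One more step before building: by default, cargo new --lib produces a regular Rust library (an rlib), not a shared object that libloading can open. Tell Cargo to emit a C-compatible dynamic library by adding a [lib] section to add_lib's Cargo.toml:

[lib]
crate-type = ["cdylib"]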

2. Building the Library

Now, build the library to generate the shared object file:

cargo build --release

With the cdylib setting in place, this command produces the libadd_lib.so file in the target/release directory, which is our shared library.
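
Before deploying, you can sanity-check the artifact with libloading itself. This is an optional sketch, a throwaway binary (for example inside the server crate, which already depends on libloading), assuming the path below matches your build output:

use libloading::{Library, Symbol};

// Throwaway check: open the freshly built library and call `add` directly.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let lib = unsafe { Library::new("target/release/libadd_lib.so")? };
    let add: Symbol<unsafe extern "C" fn(i32, i32) -> i32> = unsafe { lib.get(b"add")? };
    println!("add(2, 3) = {}", unsafe { add(2, 3) });
    Ok(())
}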

3. Deploying and Invoking with our CLI

Navigate back to where our CLI tool resides and deploy the add function:

./cli upload path/to/add_lib/target/release/libadd_lib.so

If successful, you’ll receive a confirmation message. One caveat: the invoke handler looks for uploads/<function_name>.so, so either rename the library to add.so before uploading or adjust the lookup path in handle_invoke. Now let’s invoke our function:

./cli invoke add 5,10

The CLI should print something like Function response: Function returned: 15, confirming that add(5, 10) ran on the server and that our function-as-a-service system works!

Wrap Up

The beauty of our FaaS system is in its simplicity. With a straightforward Rust function, we’ve demonstrated the entire lifecycle: from creating a function, to deploying it on our server, and then invoking it through our CLI. This modular approach allows developers to focus solely on writing the business logic in Rust, while the infrastructure handles the deployment and execution.

In future enhancements, you can expand the system to support more complex functions, handle multiple functions simultaneously, provide versioning, or introduce error handling and logging mechanisms.

You can find the complete implementation over at my GitHub repository: https://github.com/luishsr/rust-faas-serverless.

Your feedback, suggestions, or contributions are always welcome.


Check out some interesting hands-on Rust articles!

🌟 Developing a Fully Functional API Gateway in Rust — Discover how to set up a robust and scalable gateway that stands as the frontline for your microservices.

🌟 Implementing a Network Traffic Analyzer — Ever wondered about the data packets zooming through your network? Unravel their mysteries with this deep dive into network analysis.

🌟 Building an Application Container in Rust — Join us in creating a lightweight, performant, and secure container from scratch! Docker’s got nothing on this. 😉

🌟 Crafting a Secure Server-to-Server Handshake with Rust & OpenSSL — If you’ve been itching to give your servers a unique secret handshake that only they understand, you’ve come to the right place. Today, we’re venturing into the world of secure server-to-server handshakes, using the powerful combo of Rust and OpenSSL.

🌟 Building a Function-as-a-Service (FaaS) in Rust: If you’ve been exploring cloud computing, you’ve likely come across FaaS platforms like AWS Lambda or Google Cloud Functions. In this article, we’ll be creating our own simple FaaS platform using Rust.

🌟 Implementing a P2P Database in Rust: Today, we’re going to roll up our sleeves and get our hands dirty building a Peer-to-Peer (P2P) key-value database.

🌟 Implementing a VPN Server in Rust: Interested in understanding the inner workings of a VPN? Thinking about setting up your own VPN server? Today, we’re taking a straightforward look at how to set up a basic VPN server using Rust.

Happy coding, and keep those Rust gears turning!


Read more articles about Rust in my Rust Programming Library!


Visit my Blog for more articles, news, and software engineering stuff!

Follow me on Medium, LinkedIn, and Twitter.

Leave a comment, and drop me a message!

All the best,

Luis Soares

CTO | Tech Lead | Senior Software Engineer | Cloud Solutions Architect | Rust 🦀 | Golang | Java | ML AI & Statistics | Web3 & Blockchain
