
Preface

Omniscient is a relatively small project, but one I learned A LOT from. Specifically, how IPC (Inter-Process Communication) works, and how Linux shared memory (shm) works.

This started because I wanted to do something EXTRA for the programming portion of the omnichicken my group and I built.

So I had the bright idea to create something ridiculously complex and unneeded - a self-contained observer that lets me monitor what the robot is doing at any given moment.

Front-End Overview

Now, I already learned a thing or two while drafting up my initial pipin web server, so I could at least use that as a foundation for this specific project.

First of all, the entire program, including the front end, should be self-contained - as in, you wouldn't need to reference a separate front-end; wherever the binary goes, the entire program goes with it, web server and all.

I can do this relatively easily in Axum like so:

use axum::response::Html;

// Embed index.html into the binary at compile time and serve it.
async fn serve_html() -> Html<&'static str> {
    let html = include_str!("./assets/index.html");
    Html(html)
}

Repeat that process for any other assets I want to embed: styles.css and script.js.

From what I gather, this pretty much embeds whatever file is passed to the include_str!() macro into the binary as a string literal at compile time. The string is then rendered based on whatever header::CONTENT_TYPE we set.

For the serve_html() function above, Html is a built-in Axum response type, so it can literally just infer that specific content type (text/html). For the others, however, I need to specify the content type myself and return something that implements the IntoResponse trait.

use axum::{http::header, response::IntoResponse};

// Same idea, but with an explicit Content-Type header.
async fn serve_js() -> impl IntoResponse {
    let js = include_str!("./assets/script.js");
    ([(header::CONTENT_TYPE, "application/javascript")], js)
}
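
For reference, here's roughly how those handlers get wired into a router. This is a minimal sketch assuming an Axum 0.7-style API; the route paths and port are my own guesses:

use axum::{routing::get, Router};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Each embedded asset gets its own handler.
    let app = Router::new()
        .route("/", get(serve_html))
        .route("/script.js", get(serve_js));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await
}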

Also, I'm doing raw JavaScript this time around since I don't want to deal with the HTMX stuff I had to wade through in the pipin project. That means I'll be processing JSON instead, which in my opinion is much easier, especially with serde_json.

Setting up the WebSocket client on the front-end JavaScript side was pretty trivial. There's a shit ton of tutorials out there, and I went with the relatively simple Mozilla Dev Docs. The client connection looked like so:

const protocol = window.location.protocol === "https:" ? "wss:" : "ws:";
const ws_url = `${protocol}//${window.location.host}/ws`;

const socket = new WebSocket(ws_url);

socket.addEventListener("open", (_) => {
  console.log("Connected to WebSocket server");
});

socket.addEventListener("message", (event) => {
  const message = event.data;
  process_msg(message);
});

socket.addEventListener("close", (_) => {
  console.log("Disconnected from WebSocket server");
});

The JavaScript receiving side was extremely straightforward; I'll go over the sending side (the Rust back-end) later on. I basically get a giant blob of JSON and just parse that message. It's as simple as:

const data = JSON.parse(msg);

document.getElementById("mem").textContent = data.shared_mem;
document.getElementById("bot-mode").textContent = data.bot_mode;
// repeat for the other important data ...
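
On the Rust side, the sending end boils down to an Axum WebSocket handler that serializes the current state and pushes it to the client. A rough sketch with placeholder values for the fields the JavaScript above reads (the 100 ms interval is an assumption):

use axum::{
    extract::ws::{Message, WebSocket, WebSocketUpgrade},
    response::IntoResponse,
};

// Upgrade the HTTP request to a WebSocket, then hand the socket off.
async fn ws_handler(ws: WebSocketUpgrade) -> impl IntoResponse {
    ws.on_upgrade(handle_socket)
}

// Periodically push a JSON snapshot of the robot state to the client.
async fn handle_socket(mut socket: WebSocket) {
    loop {
        let payload = serde_json::json!({
            "shared_mem": "placeholder", // would come from shared memory
            "bot_mode": 0,
        });
        if socket.send(Message::Text(payload.to_string().into())).await.is_err() {
            break; // client disconnected
        }
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    }
}

This handler would hang off the /ws route the client builds its ws_url from.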

Shared Memory

TODO

In C:

// This layout must match the Rust struct below field-for-field.
typedef struct {
    int ver;
    int direction;
    int motor_power[3];
    int bot_mode;
    int obstacle;
    int obstacle_mode;
    int go_left;
    int go_right;
    int sensor_mode;
    int sensors[5];
} Shared;

In Rust:

// #[repr(C)] keeps the field layout identical to the C struct above,
// which is what lets us read it straight out of shared memory.
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct Shared {
    ver: i32,
    direction: i32,
    motor_power: [i32; 3],
    bot_mode: i32,
    obstacle: i32,
    obstacle_mode: i32,
    go_left: i32,
    go_right: i32,
    sensor_mode: i32,
    sensors: [i32; 5],
}
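
As a rough sketch of how the Rust side can map this struct, assuming the C program creates a POSIX shared memory object (the name "/omnibot" here is hypothetical, and error handling is reduced to asserts):

use std::ffi::CString;

fn map_shared() -> *const Shared {
    let name = CString::new("/omnibot").unwrap();
    unsafe {
        // Open the existing POSIX shared memory object read-only.
        let fd = libc::shm_open(name.as_ptr(), libc::O_RDONLY, 0);
        assert!(fd >= 0, "shm_open failed");
        // Map a struct-sized region into our address space.
        let ptr = libc::mmap(
            std::ptr::null_mut(),
            std::mem::size_of::<Shared>(),
            libc::PROT_READ,
            libc::MAP_SHARED,
            fd,
            0,
        );
        libc::close(fd); // the mapping stays valid after closing the fd
        assert!(ptr != libc::MAP_FAILED, "mmap failed");
        ptr as *const Shared
    }
}

From there, each snapshot is just an unsafe std::ptr::read_volatile() of that pointer (Shared is Copy), and the copy gets serialized to JSON for the WebSocket.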

ALSA and Cross-Compiling

TODO

As a system daemon

TODO

Attribution