blurhash-update

A blurhash encoder for streaming bytes

Supports

  • Encoding
  • Decoding

Motivation

There already exists a blurhash crate, which is a good choice for creating blurhashes. However, it requires that all of an image's pixels be present in memory at once in order to compute the hash, which might not be ideal for very large images.

blurhash-update provides an API for processing an image's bytes as they become available. It isn't as performant as blurhash in like-for-like comparisons, but the lower memory overhead can be useful in some scenarios.
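The core difficulty with processing bytes as they arrive is that chunk boundaries rarely line up with pixel boundaries. The sketch below, using only the standard library, shows one way a streaming consumer can carve complete RGBA pixels out of arbitrary byte chunks, carrying any partial pixel over to the next `update` call. The `PixelStream` type and its fields are invented for illustration; this is not blurhash-update's actual implementation.

```rust
// Hypothetical sketch: extracting whole RGBA pixels from arbitrarily
// sized byte chunks, buffering any trailing partial pixel.
struct PixelStream {
    carry: Vec<u8>,      // leftover bytes that don't yet form a whole pixel
    pixels_seen: usize,  // number of complete RGBA pixels consumed
}

impl PixelStream {
    fn new() -> Self {
        PixelStream { carry: Vec::new(), pixels_seen: 0 }
    }

    fn update(&mut self, bytes: &[u8]) {
        self.carry.extend_from_slice(bytes);
        let complete = self.carry.len() / 4; // whole RGBA pixels available
        for px in self.carry.chunks_exact(4) {
            // a real encoder would accumulate DCT terms from px here
            let _rgba = (px[0], px[1], px[2], px[3]);
            self.pixels_seen += 1;
        }
        // keep only the trailing partial pixel (0 to 3 bytes)
        self.carry.drain(..complete * 4);
    }
}

fn main() {
    let mut s = PixelStream::new();
    s.update(&[0u8; 6]); // 1 complete pixel, 2 bytes carried over
    s.update(&[0u8; 6]); // carry-over completes a pixel: 3 total
    println!("{}", s.pixels_seen); // 3
}
```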

blurhash-update also provides the ability to trade accuracy for speed by skipping processing of some of the input pixels. This greatly improves performance, but may produce blurhashes that don't look quite right. Using blurhash-update's auto encoder configuration targets an extremely performant but very loose profile based on the image dimensions.
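To make the dimension-based auto profile concrete, here is a hedged sketch of how such a heuristic could work: larger images can afford to skip more pixels and still produce a recognizable blur, so the skip factor grows with the pixel count. The `auto_skip` function and its `BUDGET` constant are invented for illustration; blurhash-update's actual auto profile may use different numbers and logic.

```rust
// Hypothetical heuristic: pick a skip factor so that roughly a fixed
// budget of pixels is processed regardless of image size.
fn auto_skip(width: u32, height: u32) -> u32 {
    const BUDGET: u32 = 16_384; // invented target pixel count
    let total = width.saturating_mul(height);
    (total / BUDGET).max(1)
}

fn main() {
    // small image: every pixel is processed
    println!("{}", auto_skip(100, 100));
    // large image: heavy skipping keeps work roughly constant
    println!("{}", auto_skip(4000, 3000));
}
```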

Usage

use std::io::Read;

use blurhash_update::{Components, Encoder, ImageBounds};
use clap::Parser;

#[derive(clap::Parser)]
struct Args {
    /// Width of the provided image
    #[clap(long)]
    width: u32,

    /// Height of the provided image
    #[clap(long)]
    height: u32,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let Args { width, height } = Args::parse();
    let mut encoder = Encoder::new(Components { x: 4, y: 3 }, ImageBounds { width, height }, 1)?;

    let mut stdin = std::io::stdin().lock();
    let mut buf = [0u8; 1024];

    loop {
        let n = stdin.read(&mut buf)?;

        if n == 0 {
            break;
        }

        encoder.update(&buf[..n]);
    }

    println!("{}", encoder.finalize());

    Ok(())
}

Example usage:

magick convert /path/to/image RGBA:- | \
    cargo r --release --example <name> -- --width blah --height blah

License

blurhash-update is licensed under either of the following: