
Feature: ML-based PNG Compression #15

Open
redactedJare opened this issue May 23, 2023 · 4 comments
Labels: enhancement, help wanted

Comments

@redactedJare

https://github.com/casey/ord/pull/2055

https://github.com/casey/ord/pull/2103

Notably, other compression methods should be used for other media types; Brotli, for example, does well on 3D models and text.
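Just as a rough sketch of picking a codec per media type (using the `brotli` crate; the media-type mapping and function name here are purely illustrative, not anything in the project):

```rust
use std::io::Write;

/// Illustrative only: run text and 3D-model payloads through Brotli,
/// leaving images (e.g. PNG) to a different, specialized path.
fn compress_for_media_type(media_type: &str, data: &[u8]) -> std::io::Result<Vec<u8>> {
    match media_type {
        "text/plain" | "text/html" | "model/gltf+json" | "model/gltf-binary" => {
            let mut out = Vec::new();
            {
                // buffer size 4096, quality 11 (max), window size 22
                let mut writer = brotli::CompressorWriter::new(&mut out, 4096, 11, 22);
                writer.write_all(data)?;
            } // the compressor flushes on drop
            Ok(out)
        }
        // Images would go through an image-specific (e.g. ML-based PNG) codec instead.
        _ => Ok(data.to_vec()),
    }
}
```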

Also, the commits don't need to know about their pending reveals beforehand, which leads me to think of two products off the hop:

candy machine (entirely in browser)

https://github.com/staccdotsol/ord/tree/features/cmv2

dead man's switch

Think of a Merkle tree where an ordinal is a secret that isn't committed or signed until a given timelock interval goes unpaid. The creator provides the data, et voilà.
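Purely to make the dead man's switch concrete, here is a minimal Merkle-root sketch (assuming the `sha2` crate; the timelock and payment logic are out of scope here and entirely hypothetical):

```rust
use sha2::{Digest, Sha256};

/// Hash a single secret payload (e.g. the ordinal data) into a leaf.
fn leaf_hash(secret: &[u8]) -> [u8; 32] {
    Sha256::digest(secret).into()
}

/// Fold a layer of leaves up to a single root; only the root is committed,
/// individual leaves stay secret until the switch triggers.
fn merkle_root(mut layer: Vec<[u8; 32]>) -> [u8; 32] {
    assert!(!layer.is_empty());
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            .map(|pair| {
                let mut hasher = Sha256::new();
                hasher.update(pair[0]);
                // Duplicate the last node when the layer has an odd length.
                hasher.update(pair.get(1).unwrap_or(&pair[0]));
                hasher.finalize().into()
            })
            .collect();
    }
    layer[0]
}
```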

@tyjvazum changed the title from "stuck in hospital not able to do much of anthing" to "Feature: ML-based PNG Compression" on May 23, 2023
@tyjvazum
Owner

@redactedJare, thank you for opening the issue. I'm sorry to hear about your hospital situation. I hope everything resolves smoothly.

This all sounds interesting. I'll invest some time learning about it. In particular, the ML-based PNG compression sounds awesome and I'm glad you were able to bring it to my attention. I'm wondering how much compute it takes. Ideally it'd be possible without a GPU, even if it's slow in that mode.

@redactedJare
Author

All good, recovering slowly but surely.

There's better news: this is Rust, so we can multithread out to many (slower) processes and have them run the magic. Whenever I have a machine for a little while, I'll try to benchmark it for you. If the goal is to respect the 400k max, the overhead should be almost negligible.
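As a rough illustration of fanning compression out across threads (using only `std::thread`; the `compress` function is a placeholder, not the actual codec):

```rust
use std::thread;

/// Placeholder for the real (e.g. ML-based) compressor.
fn compress(data: Vec<u8>) -> Vec<u8> {
    data // stand-in: returns the input unchanged
}

/// Compress several payloads on separate OS threads and collect the results.
fn compress_all(payloads: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    let handles: Vec<_> = payloads
        .into_iter()
        .map(|payload| thread::spawn(move || compress(payload)))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```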

... I know someone who had, at some point, started WASM work toward multi-machine crunching along these lines.

Mind, it's worth noting that Solana devnet is the ultimate size compressor for any file of any kind. If you have an engineer who can extract the TensorFlow compress/decompress functions, implement slices (for example) and some other features into seahorse-lang.org, and then replicate this https://twitter.com/aeyakovenko/status/1655624489332531200, except that it constantly retrains itself on any new on-chain data types, et voilà.

@tyjvazum
Owner

I think a minimal setup that runs on, or at least decompresses on, typical hardware and is packaged in some way to be distributed alongside the Rust binary should be the initial goal. If that can be accomplished, then additional work could be integrated later. Just the compression performance for PNG sounds super useful if it can be designed to work well with the existing project structure.

@redactedJare
Author

I can tell you that in the current implementation (rough sketch below):

  1. the compression happens in a spawned thread,
  2. the initial decompression stores an `<inscription id>.png` to /tmp/,
  3. then whatever garbage collection can happen.
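Something along these lines, assuming placeholder `compress`/`decompress` functions and the /tmp layout described above (these names are illustrative, not the actual code):

```rust
use std::{fs, path::PathBuf, thread};

// Placeholders for the real codec.
fn compress(data: Vec<u8>) -> Vec<u8> { data }
fn decompress(data: &[u8]) -> Vec<u8> { data.to_vec() }

/// 1. Compression runs in a spawned thread.
fn compress_in_background(data: Vec<u8>) -> thread::JoinHandle<Vec<u8>> {
    thread::spawn(move || compress(data))
}

/// 2. Decompression writes `<inscription id>.png` to /tmp/ ...
fn decompress_to_tmp(inscription_id: &str, data: &[u8]) -> std::io::Result<PathBuf> {
    let path = PathBuf::from("/tmp").join(format!("{inscription_id}.png"));
    fs::write(&path, decompress(data))?;
    Ok(path)
}

/// 3. ... which whatever garbage collection runs later can clean up.
fn cleanup(path: &PathBuf) -> std::io::Result<()> {
    fs::remove_file(path)
}
```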

I have no computer for a few more days

@tyjvazum added the "enhancement" and "help wanted" labels on May 26, 2023