googleinterns / paksha

Compiling JAX to WebAssembly for exploring client-side machine learning

JAX on the Web

Why run JAX ML models on the Web?

  • Privacy: Running on the edge means data doesn't have to be sent back to servers, enabling local, privacy-first machine learning, such as federated learning or running models on private (PII) data.
  • Low latency: Computing on client-side data avoids the server round trip, since the computation runs on the client's device itself. Fast local decisions can also be combined with slower, more powerful models running in the cloud that are queried on a longer time scale.
  • Run anywhere: No need to worry about the user's OS or device setup, and no server costs or scaling concerns when exploring demos with users.
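To make the idea concrete, here is a minimal sketch of the kind of JAX model one might compile for the browser. The `predict` function and its parameters are hypothetical; the repository's actual pipeline is not shown here. The sketch uses the current JAX API (`jax.jit(...).lower(...)`) to produce the lowered XLA/StableHLO representation that a WebAssembly backend could consume.

```python
import jax
import jax.numpy as jnp

# A toy model: one dense layer with a tanh activation.
def predict(params, x):
    w, b = params
    return jnp.tanh(x @ w + b)

params = (jnp.ones((4, 2)), jnp.zeros(2))
x = jnp.ones((1, 4))

# jax.jit traces the function; .lower() yields the compiler-level IR
# that downstream backends (such as a WebAssembly target) start from.
lowered = jax.jit(predict).lower(params, x)
ir_text = lowered.as_text()
print(ir_text[:200])
```

Everything up to the lowered IR is standard JAX; the WebAssembly-specific compilation is what a project like this explores.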

Languages

WebAssembly 98.7% · C++ 0.8% · Python 0.5%