Hi, everyone! Wanted to share my take on a Llama2 implementation with Kotlin Multiplatform.
It currently supports running inference with Llama2 models on JVM/Linux/macOS/Windows/Node.js.
Any feedback is greatly appreciated
https://github.com/stepango/llama2-kmp
🚀 4
🎉 1
mbonnin
12/04/2023, 10:04 PM
Cool stuff 👏! It would be a cool benchmark to compare against other implementations. Do you already have an idea of how it compares to C?
stepango
12/04/2023, 11:49 PM
Yeah, I'm thinking about building a benchmark as one of the next steps. My guess is that the JVM will be faster, but JS/Native will be slower.
Here's a benchmark against a JVM Kotlin implementation: https://github.com/madroidmaq/llama2.kt#llama2c
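For comparing implementations, a rough way to measure inference throughput (tokens/sec) in plain Kotlin could look like the sketch below. The `generate` function here is a hypothetical placeholder standing in for a real llama2-kmp inference call, not the project's actual API:

```kotlin
import kotlin.system.measureNanoTime

// Hypothetical stand-in for a real inference call that produces `steps` tokens.
fun generate(steps: Int): Int {
    var acc = 0
    repeat(steps) { acc += it }  // placeholder work
    return acc
}

fun benchmarkTokensPerSec(steps: Int, runs: Int): Double {
    // Warm up first so the JIT compiles the hot path before timing (matters on JVM).
    repeat(3) { generate(steps) }
    // Average wall-clock time over several runs, then convert to tokens/sec.
    val totalNanos = (1..runs).sumOf { measureNanoTime { generate(steps) } }
    val secondsPerRun = totalNanos / 1e9 / runs
    return steps / secondsPerRun
}

fun main() {
    println("~%.0f tokens/sec".format(benchmarkTokensPerSec(steps = 256, runs = 5)))
}
```

The warm-up matters when comparing JVM against JS/Native targets, since cold JVM numbers mostly measure JIT compilation rather than the inference loop.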
💙 1
zaleslaw
12/07/2023, 11:18 AM
@stepango you are awesome, this will be interesting to test!
❤️ 1
zaleslaw
12/07/2023, 11:19 AM
Could you please post it in the #datascience Slack channel as well? I think you will find a few early adopters there.