# opensource
s
Hi, everyone! Wanted to share my take on a Llama2 implementation with Kotlin Multiplatform. It currently supports running inference on Llama2 models on JVM/Linux/macOS/Windows/Node.js. Any feedback is greatly appreciated: https://github.com/stepango/llama2-kmp
🚀 4
🎉 1
m
Cool stuff 👏! It would be cool to benchmark it against other implementations. Do you already have an idea how it compares to C?
s
Yeah, I'm thinking about building a benchmark as one of the next steps. My guess is the JVM will be faster, but JS/native will be slower. Here's a benchmark against a JVM Kotlin implementation: https://github.com/madroidmaq/llama2.kt#llama2c
💙 1
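(For anyone curious what a cross-implementation comparison might look like: a minimal Kotlin sketch of timing tokens/sec around an inference call. The `benchmark` and `runInference` names are hypothetical, not from the repo; the dummy workload just stands in for a transformer forward pass.)

```kotlin
import kotlin.time.measureTime

// Hypothetical helper: run the model `iterations` times, count tokens
// produced, and report throughput in tokens per second.
fun benchmark(iterations: Int, runInference: () -> Int): Double {
    var tokens = 0
    val elapsed = measureTime {
        repeat(iterations) { tokens += runInference() }
    }
    // Guard against sub-millisecond runs to avoid division by zero.
    val millis = elapsed.inWholeMilliseconds.coerceAtLeast(1)
    return tokens * 1000.0 / millis
}

fun main() {
    // Dummy workload standing in for a real forward pass (assumption).
    val tokensPerSec = benchmark(10) {
        var acc = 0.0
        repeat(100_000) { acc += kotlin.math.sin(it.toDouble()) }
        1 // pretend one token was produced
    }
    println("tokens/sec ≈ $tokensPerSec")
}
```

The same harness could be compiled for JVM, native, and Node.js targets to compare the backends mentioned above.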
z
@stepango you are awesome, interesting to test!
❤️ 1
Could you please post it in the #datascience Slack channel also? I think you will find a few early adopters there.
👍 1