# compiler
d
I was just wondering if it would be worth the effort to create an interoperability layer between IntelliSense and the compiler, so that the knowledge IntelliSense gains while you code can be used directly by the compiler, and vice versa: IntelliSense could hot-start from the compiler's last cache. That way the information wouldn't need to be cached twice. Maybe you could also start compiling small changes in the background, so that after a minor edit the compiler only needs to pack the executable together and is ready to go. Combined with Gradle optimizations, this could also improve the developer experience with Kotlin/JS and continuous development. Personally, I find it a bit annoying that applying changes takes so long; maybe you can kill multiple birds with one stone there.
t
I think the major improvement to the Kotlin/JS developer experience would be proper use of browser-side module loaders combined with incremental compilation. A few months ago I started to play with a compiler plugin and took some measurements of compilation time. The compiler itself is actually quite fast; the slow parts are loading the compiler into the Java VM and all the Gradle machinery. A cold-start compilation took 8 seconds, while a subsequent compilation in the same VM took 1 second. So I think a good approach would be a continuously running compiler that feeds a continuously running web browser. As far as I know, incremental compilation is already on the table. I do plan to write the continuously running compiler and the browser side to implement a fast JS playground. However, I do not plan to cover React and Compose, as my goal is a playground for my plugin PoC (a Svelte-like declarative/reactive UI in Kotlin).
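The browser side of such a setup could look roughly like this: the page listens for rebuild notifications from the continuously running compiler and re-imports the freshly compiled module via the native ES module loader. This is only a hedged sketch under assumptions I'm making up for illustration — the `/build/app.mjs` output path and the WebSocket `/reload` channel are hypothetical, not part of any existing Kotlin/JS tooling:

```javascript
// Build a cache-busting URL so the browser fetches the freshly
// compiled module instead of the stale cached copy.
function freshUrl(moduleUrl, version) {
  return `${moduleUrl}?v=${version}`;
}

// Re-import the latest build of the module. Each unique URL
// bypasses the ES module cache, so the new code is actually loaded.
async function loadLatest(moduleUrl) {
  return import(freshUrl(moduleUrl, Date.now()));
}

// Subscribe to rebuild notifications from the (hypothetical)
// continuously running compiler and swap the entry point in place.
function watchForRebuilds(moduleUrl, onReload) {
  const socket = new WebSocket(`ws://${location.host}/reload`);
  socket.onmessage = async () => {
    const mod = await loadLatest(moduleUrl);
    onReload(mod); // e.g. re-mount the UI with the new module
  };
}
```

The cache-busting query string is the key trick: browsers cache ES modules by exact URL, so appending a fresh version parameter forces a re-fetch without reloading the whole page, which is what keeps the warm compiler-to-browser loop fast.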