# datascience
i
Hey Folks! Spark Jupyter question here. I got Spark working using `spark-streaming` and `withSpark`, and I'm happily crunching through big jobs. However, I noticed the resources are set to a single core and 1 GB of memory, which might be why my jobs are taking so long. Is there a way to configure the environment to increase those values? Also, any recommendations for what you might set an i7 MacBook Pro to?
a
I'm not familiar with how Spark is configured, but Jupyter itself depends on command-line arguments, as described here: https://github.com/Kotlin/kotlin-jupyter#usage (use the -Xmx argument).
Multicore usage is governed by the framework itself.
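A minimal sketch of the Jupyter-side tuning, assuming the environment-variable mechanism described in the kotlin-jupyter README; the `4g` value is illustrative, not a recommendation:

```shell
# Hypothetical example: raise the Kotlin kernel's JVM heap before launching
# Jupyter. KOTLIN_JUPYTER_JAVA_OPTS is read by the kotlin-jupyter kernel
# (see the README linked above for the exact, current mechanism).
export KOTLIN_JUPYTER_JAVA_OPTS="-Xmx4g"
jupyter notebook
```

Note this only affects the notebook kernel's JVM, not Spark's own executor/driver settings.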
r
cc @Jolan Rensen [JetBrains]
j
@iamsteveholmes Do you mean the resources of Spark or of Jupyter?
i
Spark
j
Alright, yeah, then you can pass the props as a map to the withSpark() call :) All Spark configuration options should work through that. It's the same as defining options in the normal Spark session builder.
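A minimal sketch of that, assuming the kotlin-spark-api `withSpark` overload that takes a `props` map; the property keys are standard Spark options, but the memory values and app name are illustrative, not tuned recommendations:

```kotlin
import org.jetbrains.kotlinx.spark.api.*

fun main() {
    withSpark(
        props = mapOf(
            // Standard Spark options; "4g" here is an example value,
            // not a recommendation for any particular machine.
            "spark.driver.memory" to "4g",
            "spark.executor.memory" to "4g",
        ),
        master = "local[*]",      // use all available local cores
        appName = "tuned-session" // hypothetical name
    ) {
        // Inside this block a SparkSession is available as `spark`.
        dsOf(1, 2, 3).show()
    }
}
```

In a notebook you'd typically drop the `main` wrapper and call `withSpark { ... }` directly in a cell.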
i
Thank you so much! Can't wait to play with that.