# vertx
a
If i have to manually put the IPs in - in a cloud environment which is highly dynamic - seems like a dealbreaker
d
That's why I suggested orchestration like Docker Swarm that takes care of all that for you... I think these features were developed before orchestration technology existed... so people had to manage service location across servers themselves... k8s is a monster, so if you need something simple, just install docker on all the servers, run docker swarm init on one, and docker swarm join on the other, pointing to the IP of the master and using the token to form the cluster. Then you can run the main vert.x service and make another empty service join the main one as a cluster (there you use the main one's service name - not a hard-coded server IP - and Docker resolves the service wherever it is on the cluster...). It's a bit of a learning curve, but when you spread out to more than 3-4 servers, with a lot of microservices and other containers, it starts paying off big. But if you stay with only 2 servers, then there shouldn't be a problem with hard-coding IPs... at worst use a floating IP from your cloud provider... we are currently running 16 servers, and wouldn't have been able to manage w/o docker swarm... but that's your choice I guess, you need to see if your use case warrants all this 🙂
➕ 1
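A minimal sketch of the swarm setup described above, assuming two servers and a hypothetical image name (`my-vertx-image`); the join token and master IP come from your own `docker swarm init` output:

```yaml
# On server 1:  docker swarm init
# On server 2:  docker swarm join --token <token-from-init> <ip-of-server-1>:2377
#
# Then deploy this stack with:  docker stack deploy -c stack.yml myapp
# Services reach each other by service name over the overlay network,
# not by hard-coded IPs.
version: "3.7"
services:
  vertx-main:
    image: my-vertx-image          # hypothetical image name
    networks: [appnet]
  vertx-worker:
    image: my-vertx-image
    networks: [appnet]
    environment:
      # The worker addresses the main node as "vertx-main";
      # Docker's internal DNS resolves it wherever the task lands.
      MAIN_HOST: vertx-main
networks:
  appnet:
    driver: overlay
```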
a
Haha I did use docker swarm - I got it up and running on my 2 instances on GCP - loved it - absolutely insane. I'm doing multiple separate experiments - this is experiment 2 - just vert.x capabilities with Hazelcast, trying to get things to connect to each other
and then, depending on how this one goes, the next one is vert.x+docker+swarm and vert.x+docker+k8s
how are you getting the eventbus to communicate with multiple nodes via docker?
Gonna use jclouds to see if I can get it to work before moving on to the docker experiment
d
We have services written in PHP and Python too, and the vert.x event bus doesn't have support for PHP yet... so so far we've been using regular internal docker overlay networking and HTTP endpoints... thinking about something like Kafka or RabbitMQ in the near future though...
a
So none of the vert.x service discovery and circuit breakers?
All Kube DNS / REST stuff?
d
Circuit breakers are very good, the event bus is also useful between verticles, and I think once you set up your vert.x cluster, it should manage event bus communications across nodes. You even have a key-value map across nodes to use... all those are great features. All I mean is that you should keep your eyes out for easier or better ways to do things w/o getting swamped w/ so many features that you don't accomplish anything anyways. You have to see the market trend: more and more cloud providers are racing to provide solutions for k8s, and docker swarm also has a respectable community of users (although the future is k8s, the simplicity and elegance of swarm is hard to beat if you don't need all of k8s' features...) That said, if all you need are a few vert.x services w/o an overly complicated setup of other non-vert.x services, and you don't need to grow so much, you could very well manage with vert.x alone...
a
That’s exactly why I’m doing these sandbox experiments - I totally agree - I don’t want to add unnecessary bloat
When you say "once I set up my vert.x cluster" - do you mean in docker?
Or k8?
Or just with the -cluster command ?
UDP/multicast basic Hazelcast across multiple machines definitely isn’t working
Jclouds isn’t helping either
Thinking of just going the gRPC + k8s route - but with that the issue becomes L4 vs L7 load balancing - everything ends up becoming a blocker in terms of what I’m trying to get done
I’m working on the supporting infrastructure more so than the actual application - that can be whatever
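For reference: UDP multicast is typically blocked on cloud networks (GCP's VPC included), which is the usual reason Hazelcast's default discovery fails there. A sketch of the common workaround - switching the join mechanism to TCP/IP via a `cluster.xml` on the classpath; the member IPs below are placeholders:

```xml
<!-- cluster.xml on the classpath overrides the default Hazelcast config
     used by the vert.x Hazelcast cluster manager. Multicast is disabled
     and cluster members are listed explicitly (placeholder internal IPs). -->
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <network>
    <join>
      <multicast enabled="false"/>
      <tcp-ip enabled="true">
        <member>10.0.0.2</member>  <!-- placeholder: internal IP of node 1 -->
        <member>10.0.0.3</member>  <!-- placeholder: internal IP of node 2 -->
      </tcp-ip>
    </join>
  </network>
</hazelcast>
```

Each node is then started with the `-cluster` flag. If hard-coding IPs is the dealbreaker, Hazelcast also has cloud discovery plugins (e.g. for AWS and GCP) that look members up via the provider's API instead of a static list.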
d
Like I said, swarm is much simpler and it just works... k8s is a monster, but has tons of features and is great for growing infrastructure. We went with swarm in the beginning, knowing that we would grow, and the books I mentioned helped a lot in making the right choices, with all the common good practices he brings out. But it all boils down to what you're trying to accomplish... the real minimum for a production-ready swarm or k8s is having 3 master nodes spread across 3 availability zones, since if you lose master quorum, you're cooked. Then you need worker nodes to host all the heavier services that need to be deployed. In swarm, you need a separate staging cluster to do tests on, which is more nodes... but then, any system that needs to recover its cluster will need some kind of quorum... In vert.x you need the -cluster flag to get these cross-node features to work... L4 vs L7 LB? Maybe you should ask on the swarm or k8s Slacks or the devops20 Slack, there are very big communities behind them...
Also GCP has a managed k8s service, why not try that instead of running your own, if you go for k8s? AWS has kops, and Digital Ocean is coming out w/ theirs in September (I hope)