pwcazenave / fvcom-toolbox

Fork of the fvcom-toolbox (original available at https://github.com/GeoffCowles/fvcom-toolbox)


Make Parallel in FVCOM

dharmaraharja96 opened this issue · comments

Hi!

I'm having trouble getting a parallel FVCOM setup to work. When I run it with 8 processors on one PC, it runs quickly.
But when I combine 2 PCs, for 16 processors in total, the run gets slower, and the same happens with 24 processors across 3 PCs. For the setup I used 1 server and 3 clients. Does anyone have a solution for this case? Thanks.

Thanks for your response. For my parallel setup I followed this link:
https://www.youtube.com/watch?v=gvR1eQyxS9I (the machines communicate over LAN)
and for the FVCOM setup I used the example tutorial. What do you think? Do you have any ideas about my setup? Thanks.
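For reference, a typical multi-machine MPI launch for a run like this looks something like the sketch below. The host names, slot counts, and FVCOM binary/case names are assumptions for illustration, not taken from the video or this thread.

```shell
# Hypothetical hostfile: one server and two client PCs, 8 cores each (names assumed).
cat > hosts.txt <<'EOF'
server  slots=8
client1 slots=8
client2 slots=8
EOF

# With Open MPI, a 24-process run across all three machines would then be
# launched roughly like this (binary and case name are placeholders):
#   mpirun --hostfile hosts.txt -np 24 ./fvcom --casename=my_case
cat hosts.txt
```

Whether this helps depends on the network between the machines, as discussed below.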

Hi dharmaraharja,

I haven't watched all of the video, but I'm guessing that the two computers are connected to each other using standard gigabit ethernet (or maybe even 100 Mbps, depending on the network cards). While some software will work just fine on this sort of home-built cluster, FVCOM usually won't. FVCOM requires a lot of communication between subdomains, and hence the network connections can easily be a bottleneck.
HPC systems that are designed for things like FVCOM use higher-performance network links, e.g. Infiniband, Omnilink, etc.
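To see roughly why the interconnect matters, here is a simple latency-bandwidth ("alpha-beta") cost model for the per-timestep subdomain boundary exchange. All of the traffic, latency, and bandwidth figures are illustrative assumptions, not measured FVCOM numbers.

```python
def exchange_time(bytes_per_step, latency_s, bandwidth_bps, messages_per_step):
    """Alpha-beta model: time = messages * latency + bytes / bandwidth."""
    return messages_per_step * latency_s + bytes_per_step / bandwidth_bps

# Assumed per-timestep halo traffic for one subdomain: 2 MB in ~100 messages.
traffic = 2e6
msgs = 100

# Gigabit ethernet: ~50 us latency, ~125 MB/s; 100 Gb/s InfiniBand: ~1 us, ~12.5 GB/s.
gige = exchange_time(traffic, 50e-6, 125e6, msgs)
ib = exchange_time(traffic, 1e-6, 12.5e9, msgs)

print(f"gigabit ethernet: {gige * 1e3:.2f} ms per step")
print(f"infiniband:       {ib * 1e3:.3f} ms per step")
```

Under these assumptions the exchange is tens of times slower over gigabit ethernet, and that cost is paid every timestep.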

Sorry for the bad news!

Hi Simon

Thanks for your information. I'm also trying to set up parallel FVCOM on my campus. The campus network uses a gigabit switch with Cat 5e LAN cables, so the data transfer rate is 1000 Mbps, but I have the same problem: the parallel run is not performing well.
Do we need something faster than 1000 Mbps? And is my specific problem really the network?

I can't promise that the network is the problem, but it is very likely. Gigabit ethernet (the 1000 Mbps you mention) is generally not good enough for FVCOM. I'm not sure whether the bandwidth or the latency is the bigger problem.

As Ricardo mentioned, infiniband (often used for HPC networks) goes up to 200 times faster, and it also has much lower latency. Unfortunately this is probably not a problem you can solve cheaply. If you have some money to spend, then best to talk to an HPC expert.
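The slowdown you saw when going from 8 to 16 to 24 processors is what a toy strong-scaling model predicts once per-step communication cost grows with the number of subdomains on a slow network. The work and communication constants below are made up for illustration, not measured from FVCOM.

```python
def step_time(nprocs, work=1.0, comm_per_proc=0.01):
    """Toy model: compute time shrinks as 1/n, but inter-subdomain
    communication cost grows roughly with the number of subdomains."""
    return work / nprocs + comm_per_proc * nprocs

# With a large communication constant (slow network), adding processors
# makes each timestep slower, not faster:
for n in (8, 16, 24):
    print(f"{n:2d} processes: {step_time(n):.4f} s per step")
```

On a fast, low-latency network `comm_per_proc` is far smaller, and the compute term dominates out to many more processors.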

Thank you so much, Ricardo and Simon. Your replies have been really helpful.

@riquitorres do you have the ability to close this "issue"? Don't think I can.