+ 3
Any difference between programming languages regarding simulations?
i'm running a simulation as part of academic research, in Fortran. the simulation is simple: a 3-dimensional lattice, with each site updated according to the state of its neighbors, sweeping the lattice tens of thousands of times. since the update depends on the state of the neighbors, it's not possible to divide the lattice into smaller parts and let the different cores simulate them simultaneously and independently. question: could C++ or another language do it faster and more efficiently, or is Fortran just fine? is it just CPU speed that matters?
6 answers
0
hey,
the more information, the better the help.
For sequential numeric(!) processing, Fortran is good and in some cases even faster than C!
Google "vectorization"; maybe it's valuable for you. It's a concept of CPU optimization: the CPU processes several array elements with a single instruction (SIMD).
Note that for simple loops the compiler does it automatically, but for more complex scenarios you might have to help ;)
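A minimal sketch of what I mean, in Fortran (everything here is invented for illustration, not your code): a loop whose iterations are independent of each other, which a compiler like gfortran will typically vectorize on its own at -O2/-O3:

! Illustrative only: an element-wise update with no dependence
! between iterations, so the compiler can map it to SIMD.
program vec_demo
  implicit none
  integer, parameter :: n = 100000
  real :: a(n), b(n)
  integer :: i
  b = 1.0
  do i = 1, n
     a(i) = 2.0*b(i) + 1.0   ! each iteration is independent
  end do
  print *, a(1), a(n)
end program vec_demo

With gfortran you can ask for a report of what got vectorized (gfortran -O3 -fopt-info-vec vec_demo.f90). If the loop body reads values written in an earlier iteration, the compiler cannot do this, and that is where you have to help by restructuring.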
0
thanks for your answer, Gunther :)
the simulation is this: a 3-dimensional lattice consisting of cells, like a huge Rubik's cube. the value of each cell is determined by the nearest neighboring cells (a cellular automaton). the program goes from one cell to the next and evaluates its value the same way, and does that over and over again. a whole sweep of the lattice is equal to 1 step; a total of a million steps is executed.
as you mentioned, it's sequential. it has to be done as described, and you cannot divide the lattice into, say, 4 pieces, do the same calculations on those parts and then put them together, i.e. make use of multiple cores simultaneously. each cell is dependent on its neighbors, so no independent sub-lattices are allowed.
in general there is nothing wrong with Fortran, and from what i read it's quite powerful in this line of computation. i was just wondering if other, more modern languages (as Fortran is relatively old) could make use of new features of either the language itself or of newer CPUs to speed up these simulations. i've read an article saying that doing similar simulations on GPUs with CUDA rather than on classical CPUs speeds them up enormously(!), as GPUs are faster for this kind of work. any ideas/experience/comments on that? :)
0
correction: sub-lattices are not allowed. (oh god, so many typos. that's what happens when you type on an SIII mini :)
0
ah! cellular automata! I haven't read about those so far.
BUT a GPU is only useful when you do the same operation very often. the bottleneck with GPU processing is the copying to and back from the graphics card, so you might have to optimize/restructure your approach in that direction to gain a benefit.
But that seems wrong to me, because if it is supposed to model a biological cell conglomerate, all processes happen simultaneously. but maybe i am again misled by the word "cellular" :)
0
Typically, the rule for updating the state of cells is the same for each cell and does not change over time, and is applied to the whole grid simultaneously,[4]
Wikipedia, https://en.m.wikipedia.org/wiki/Cellular_automaton
0
yep, that's it: cellular automata (CA). the word "cellular" might be misleading, but you can use them to simulate biological systems as well. CAs work like this: each cell's value is determined by its neighbors. for example: if 3 of 6 neighbors have the value +, then set the cell's value to a; if 4 of 6 are +, then set the value to y. you define a rule for how the cell value is determined and apply it to all cells. to give an idea of the duration of this simulation: a 3D lattice consisting of 30x30x30 = 27000 cells evolving for 1 million steps (calculating a few formulas along the way for our evaluations afterwards) takes a bit more than 2 days on a Linux server. the bigger the lattice and the more time steps, the better, but i would love to finish it in this lifetime :)
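to make that concrete, here is a minimal sketch of such an update in Fortran (the rule, the seed and all names are invented for illustration; this is not my actual research code). the two buffers make the update synchronous, i.e. every new value is computed from the old lattice, matching the "applied to the whole grid simultaneously" definition quoted above:

! Illustrative sketch only: a two-state CA on a 30x30x30 lattice.
! lat_old holds the current step, lat_new receives the next one.
program ca_sketch
  implicit none
  integer, parameter :: n = 30
  integer :: lat_old(n,n,n), lat_new(n,n,n)
  integer :: i, j, k, step, plus
  lat_old = 0
  lat_new = 0
  lat_old(n/2-1:n/2+1, n/2-1:n/2+1, n/2-1:n/2+1) = 1  ! arbitrary seed block
  do step = 1, 1000                ! demo only; the real run is 1e6 steps
     do k = 2, n-1                 ! interior cells only; real code
        do j = 2, n-1              ! also needs a boundary rule
           do i = 2, n-1
              ! count the 6 face neighbors in the "+" state (coded as 1)
              plus = lat_old(i-1,j,k) + lat_old(i+1,j,k) &
                   + lat_old(i,j-1,k) + lat_old(i,j+1,k) &
                   + lat_old(i,j,k-1) + lat_old(i,j,k+1)
              if (plus >= 3) then
                 lat_new(i,j,k) = 1   ! stand-in for "value a"
              else
                 lat_new(i,j,k) = 0
              end if
           end do
        end do
     end do
     lat_old = lat_new
  end do
  print *, 'cells in state 1:', sum(lat_old)
end program ca_sketch

since every new value depends only on lat_old, the innermost i loop has no dependence between iterations, which is exactly the situation where the vectorization mentioned above can help.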