PG8911
Aerospace
- May 26, 2015
- 5
Hello
I've been asked to help spec out a new analysis machine for our office. We're currently looking at Dell workstations; in particular, I'm having trouble working out which GPU to get so that we can do GPU-accelerated runs.
It's only recently that we've started looking into GPU runs, but so far we don't see much improvement, if any. The main error we get is that the supernode is too large to be loaded onto the GPU, which I believe indicates a lack of GPU memory (VRAM).
My problem is that I'm struggling to convince my managers of the business case for GPU acceleration, since I'm unable to demonstrate it on our current hardware. My managers aren't convinced by the press releases, on the assumption that the vendors are showing off and the decks are stacked in their favour (pun intended). To show it on one of our working models, I think I need a newer or higher-memory GPU, which I can't obtain unless I show some kind of business case - my current catch-22.
Is there any relationship between model size and the required amount of GPU memory?
If so, I can work backwards to a series of dummy models that can be used to demonstrate the improvements.
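For what it's worth, my rough working assumption (please correct me if this is wrong) is that in a multifrontal direct solver the largest dense frontal matrix, i.e. the supernode, has to fit in GPU memory, so a front of n rows of double-precision values would need on the order of n² × 8 bytes. A quick sketch of the kind of back-of-envelope sizing I mean:

```python
def front_memory_gib(front_size: int, bytes_per_value: int = 8) -> float:
    """Rough memory for one dense front of front_size x front_size values, in GiB.

    Assumption (not solver documentation): the dominant GPU allocation is the
    largest dense frontal matrix, stored in double precision (8 bytes/value).
    """
    return front_size ** 2 * bytes_per_value / 2 ** 30

# Under this assumption, a ~20,000-row front alone needs about 3 GiB,
# which a 2 GiB card would reject ("supernode too large") but an 8 GiB
# card would accept.
for n in (5_000, 20_000, 40_000):
    print(f"front size {n:>6}: ~{front_memory_gib(n):.1f} GiB")
```

If that scaling is roughly right, I could build dummy models with progressively larger fronts and predict which ones should fit on a candidate card before buying it.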