vike
- Sep 9, 2010
I have encountered a problem using Solver to optimize some of the inputs of a user-defined function. I would like to use Excel's Solver to minimize the sum of squared errors between observed data and a model.
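For concreteness, the objective cell is just the standard sum of squared errors between the trial data and the model output, e.g. =SUMXMY2(trial_range, model_range) (the range names here are placeholders, not the actual layout in my workbook).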
I have read other posts and threads dealing with the same issue. I believe I am passing the cells that I want Solver to adjust into the user-defined function as arguments, which was the problem in ExcelSolverUser's post.
I believe I am having an issue similar to the one yngeng33 originally described. When I use a workaround, I can get Solver to minimize the sum of squared errors between the model and the trial data. However, when I use my user-defined function to generate the model output, Solver reports that it has found an optimal solution when in fact it is far from optimal (i.e. I can manually adjust the cells to further decrease the sum of squared errors between the model output and the trial data).
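In case it helps anyone who cannot open the attachment, the general shape of what I am doing is roughly the following (the names and the model form are simplified for illustration; the actual function is in the attached file). The adjustable parameters are passed in as explicit arguments, so Excel should recalculate the function whenever Solver perturbs those cells:

' Simplified illustration only - not the actual code in the attachment.
Public Function ModelOutput(x As Double, a As Double, b As Double) As Double
    ' Placeholder model form; the adjustable parameters a and b come in
    ' as arguments so the function recalculates when their cells change.
    ModelOutput = a * (1 - Exp(-b * x))
End Function

On the worksheet, each model cell is then something like =ModelOutput(A2, $F$1, $F$2), the objective cell is the SUMXMY2 formula above, and Solver is set to minimize that objective by changing $F$1:$F$2 (again, the cell addresses are illustrative).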
I have attached a file that contains the user defined function, model output and trial data for your viewing.
Any advice on this issue would be greatly appreciated!