If the experiment contains at least one stochastic design parameter, the item "Design of Experiment" appears in the explorer and its options have to be set. Select the item "Design of Experiment" in the explorer and edit its options in the property window:
Response Surface
Adaptive Sampling
If the approximation method for at least one constraint or criterion is set to Gaussian process, or there is at least one 1D-variable with approximation, and the option "Adaptive Sampling" is set to True, the adaptive Gaussian process rebuilds the response surface iteratively. After the response surfaces for all constraints and criteria have been built, a new parameter point in the design space is suggested, calculated with the original model and added to the training data. The response surfaces are then rebuilt based on all calculated points. This loop continues until either the approximation reaches the desired accuracy or the defined maximal number of points has been obtained. This approach is highly efficient: only the points in the design space that are actually required to build the response surfaces accurately are calculated with the original model.
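The loop described above can be sketched in a few lines. This is a minimal illustration, not OptiY's actual algorithm: instead of a real Gaussian process confidence interval, it uses the distance to the nearest training point as a crude uncertainty proxy, and the model, bounds and tolerances are made up for the example.

```python
import numpy as np

def adaptive_sampling(model, bounds, tol=0.05, max_points=20):
    """Sketch of the adaptive-sampling loop: in each iteration, evaluate the
    original model at the point where the surrogate is most uncertain."""
    lo, hi = bounds
    x_train = list(np.linspace(lo, hi, 3))        # small initial DOE
    y_train = [model(x) for x in x_train]
    candidates = np.linspace(lo, hi, 201)
    for _ in range(max_points):
        # crude uncertainty proxy: distance to the nearest training point
        # (a real Gaussian process would supply a confidence interval here)
        dist = np.array([min(abs(c - x) for x in x_train) for c in candidates])
        if dist.max() < tol * (hi - lo):
            break                                  # desired accuracy reached
        x_new = candidates[dist.argmax()]          # suggested new point
        x_train.append(x_new)                      # evaluate the original model
        y_train.append(model(x_new))
    return np.array(x_train), np.array(y_train)

xs, ys = adaptive_sampling(np.sin, (0.0, 3.14), tol=0.05, max_points=20)
```

Either stop condition can end the loop first, which is exactly the behavior of the "Max. Confidence Interval" and "Max Points" options below.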
The data generated by the design of experiment is divided into two parts: one part is used for building the meta-model, the other part for testing it. This option is the percentage of data used for building the meta-model. If it is 0, no data set is used; if it is 100, all data sets are used. This option is very useful for testing and evaluating the meta-model extracted from the DOE data.
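The percentage split can be illustrated as follows. This is a generic helper, not OptiY's API; the fixed random seed is only there to make the example reproducible.

```python
import random

def split_doe(data, build_percent):
    """Split DOE data sets: `build_percent` % for building the meta-model,
    the remainder for testing it (illustrative helper, not OptiY's API)."""
    n_build = round(len(data) * build_percent / 100)
    shuffled = data[:]                     # keep the original DOE untouched
    random.Random(42).shuffle(shuffled)
    return shuffled[:n_build], shuffled[n_build:]

build_part, test_part = split_doe(list(range(10)), 70)  # 7 for building, 3 for testing
```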
If the meta-model is generated, the user can validate its parameters with measurement data. The measurement data columns contain the criteria, constraints and parameters. The remaining parameters become optimization variables for local or global fitting, in order to fit the meta-model to this measurement data.
Adaptive Gaussian Process
Max. Confidence Interval [%]
This is the accuracy of the approximation by the adaptive Gaussian process. The option can be set between 1 and 100 and is interpreted as a percentage of the absolute difference |Ymax - Ymin| of the response surface. If the largest confidence interval found is smaller than this value, the adaptive Gaussian process stops.
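The stop criterion amounts to a simple comparison, sketched here under the assumption that the percentage is applied to the response range |Ymax - Ymin| as described above:

```python
def should_stop(conf_intervals, y_values, max_ci_percent):
    """Stop criterion of the adaptive Gaussian process: the largest
    confidence interval is compared with a percentage of |Ymax - Ymin|."""
    threshold = max_ci_percent / 100.0 * abs(max(y_values) - min(y_values))
    return max(conf_intervals) < threshold

# response values span |Ymax - Ymin| = 10; with 5 % the threshold is 0.5
print(should_stop([0.4, 0.2, 0.3], [0.0, 4.0, 10.0], 5))   # True:  0.4 < 0.5
print(should_stop([0.8, 0.2], [0.0, 10.0], 5))             # False: 0.8 >= 0.5
```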
Max Points
For adaptive sampling, a maximal number of new points (model calculations) can be defined. This provides the second stop condition besides the maximal confidence interval. Adaptive sampling stops when either the defined accuracy or the maximal number of new points is reached. If the number of design parameters is large and the response surface is highly nonlinear, many model calculations are needed to reach the defined accuracy of the response surface. With maximal points, the user can limit the computing effort.
Parent Number
This is the number of parents for the evolution strategies. It is used to find the suggested points in each loop based on the Gaussian process.
Children Number
This is the number of children for the evolution strategies. It is used to find the suggested points in each loop based on the Gaussian process.
Training Data
Selection Method
There are four methods to select the data sets from the DOE: Uniformly, Randomly, First and List. With Uniformly, equally spaced points of the DOE are selected. With Randomly, the points are selected at random. First selects the first data sets from the DOE. With List, the user can input a list of indices from the DOE.
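The four methods can be expressed as plain index logic. This is an illustrative sketch of the selection semantics, not OptiY's implementation; the function name and parameters are made up.

```python
import random

def select_training_data(doe, method, n=None, index_list=None):
    """The four DOE selection methods as plain index logic (illustrative)."""
    if method == "Uniformly":                  # equally spaced data sets
        step = max(1, len(doe) // n)
        return doe[::step][:n]
    if method == "Randomly":                   # random subset
        return random.Random(0).sample(doe, n)
    if method == "First":                      # first n data sets
        return doe[:n]
    if method == "List":                       # user-supplied index list
        return [doe[i] for i in index_list]
    raise ValueError(method)

doe = list(range(100))
print(select_training_data(doe, "Uniformly", n=5))           # [0, 20, 40, 60, 80]
print(select_training_data(doe, "First", n=3))               # [0, 1, 2]
print(select_training_data(doe, "List", index_list=[2, 7]))  # [2, 7]
```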
Kernel Method
Max. Order
The maximal order for the polynomial approximation can be set here. Thus, the user can limit the polynomial order for a huge DOE data size.
Noise Optimization
If this option is selected, the Gaussian noise is treated as a variable to be optimized during machine learning. Otherwise, the noise is fixed.
Weight Optimization
If this option is selected, the weights of the different optimization criteria for machine learning are treated as variables to be optimized. Otherwise, the weights of the criteria are fixed.
Optimization Method
The selected method for kernel optimization of the machine learning: Gradient Based, L-BFGS or Evolution Strategies.
Parent Number
This option is visible if evolution strategies are used for training the meta-model by Gaussian process. It is the number of parents for the evolution strategies.
Children Number
This option is visible if evolution strategies are used. It is the number of children for the evolution strategies.
Max. Iterations
The user can limit the maximal number of iterations for the optimization process here.
Include Hilbert Space
If this option is selected, the Hilbert space approximation (regression and classification) is always solved by a nonlinear optimization method. Otherwise, the Hilbert space uses nonlinear optimization only if the differential equation is nonlinear; for a linear differential equation, the linear kernel method is used.
Weight Optimization
If this option is selected, the weights of the different optimization criteria for machine learning are treated as variables to be optimized. Otherwise, the weights of the criteria are fixed.
Regularization
The regularization for nonlinear optimization of the machine learning: None, L1 or L2.
Optimization Method
The selected method for nonlinear optimization of the machine learning: Stochastic Gradient Descent, L-BFGS or Gauss-Newton.
Max. Iterations
The user can limit the maximal number of iterations for the optimization process here.
X-Max
The maximal value of the X-axis for all 1D-variables. The X-axis starts at 0 and ends at this maximal value.
X-Step
It is the step size for the X-axis.
X-Integration
The user can choose the numerical integration method for 1D-variables. Three variants are available: Euler, Heun and Runge-Kutta.
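The three integration variants differ in how one step from t to t + h is computed. The following sketch shows the standard textbook forms (Euler, Heun, classic 4th-order Runge-Kutta) applied on an X-axis that starts at 0 and ends at X-Max with step size X-Step; OptiY's internal implementation may differ in detail.

```python
def euler_step(f, t, y, h):
    # one explicit Euler step: first-order accurate
    return y + h * f(t, y)

def heun_step(f, t, y, h):
    # Heun's method: average of slopes at both ends, second-order accurate
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + h / 2 * (k1 + k2)

def rk4_step(f, t, y, h):
    # classic 4th-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, y0, x_max, x_step):
    # X-axis starts at 0 and ends at x_max with step size x_step
    t, y = 0.0, y0
    while t < x_max - 1e-12:
        y = step(f, t, y, x_step)
        t += x_step
    return y

f = lambda t, y: y          # dy/dt = y, exact solution e^t
print(integrate(rk4_step, f, 1.0, 1.0, 0.01))   # close to e = 2.71828...
```

For the same step size, Runge-Kutta is the most accurate and Euler the cheapest per step.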
Big Data
The matrix operations can be performed on a "Full Matrix" or a "Hierarchical Matrix". With a full matrix, all data is stored in a single matrix of very large size, which may not fit into a small GPU memory. With a hierarchical matrix, the data is divided into small sub-matrices, which can easily be distributed and calculated within a small GPU memory. With the option "Automatic", the hierarchical matrix is used if the data size is greater than 10 times the "Max. Matrix Size"; otherwise, the full matrix is used.
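The memory idea behind the hierarchical option can be illustrated with a flat block partition (a true hierarchical matrix is more elaborate, with a recursive tree of low-rank blocks; this sketch only shows that blockwise computation reproduces the full-matrix result while touching one small block at a time):

```python
import numpy as np

def block_matvec(a, x, max_block=32):
    """Matrix-vector product computed block by block, so each elementary
    block never exceeds max_block x max_block (simplified stand-in for a
    hierarchical matrix)."""
    n = len(x)
    y = np.zeros(n)
    for i in range(0, n, max_block):
        for j in range(0, n, max_block):
            # only one small block needs to live in (GPU) memory at a time
            y[i:i + max_block] += a[i:i + max_block, j:j + max_block] @ x[j:j + max_block]
    return y

rng = np.random.default_rng(0)
a = rng.standard_normal((100, 100))
x = rng.standard_normal(100)
print(np.allclose(block_matvec(a, x), a @ x))   # True: blockwise == full
```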
Max. Matrix Size
It is the maximal size of an elementary matrix of a hierarchical matrix. The standard setting is 32 or 64 for fast matrix computing.
Probabilistics
Virtual Sample Size
The probability density for each stochastic parameter, constraint and criterion is computed with the meta-model based on this "Virtual Sample Size". The sampling is based on the virtual design with the nominal and tolerance values of the stochastic parameters. Because the meta-model extracted from the design of experiment is fast, the sampling process can be performed very quickly. Thus, the output probability density can be computed with extremely high accuracy using a virtual sample size of 10,000. It depends, however, on the number of stochastic parameters: the greater their number, the greater the virtual sample size should be set.
Distribution Grid
This is the number of sub-intervals into which a distribution density interval is divided. The continuous distribution density is calculated from the discrete values of the virtual sample size using this number of sub-intervals.
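Virtual sampling and the distribution grid work together roughly as follows. This sketch makes several assumptions not stated above: the stochastic parameter is normally distributed, the tolerance is interpreted as +-3 sigma, and the density is estimated as a normalized histogram; the function and its parameters are illustrative only.

```python
import random

def output_density(meta_model, nominal, tol, sample_size=10_000, grid=30):
    """Virtual sampling sketch: draw `sample_size` normal variates around the
    nominal value (tolerance assumed as +-3 sigma), push them through the fast
    meta-model and estimate the output density on `grid` sub-intervals."""
    rng = random.Random(1)
    ys = [meta_model(rng.gauss(nominal, tol / 3)) for _ in range(sample_size)]
    y_min, y_max = min(ys), max(ys)
    width = (y_max - y_min) / grid
    counts = [0] * grid
    for y in ys:
        k = min(int((y - y_min) / width), grid - 1)   # clamp the max value
        counts[k] += 1
    # normalize counts to a density (area under the curve = 1)
    return [c / (sample_size * width) for c in counts], (y_min, y_max)

density, (lo, hi) = output_density(lambda x: x * x, nominal=1.0, tol=0.3)
```

Because only the meta-model is evaluated, 10,000 samples cost almost nothing compared with 10,000 runs of the original model.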
Optimization Method
The user can choose Hooke-Jeeves or Evolution Strategies for robust design optimization. Evolution strategies are used for global multi-objective optimization, while Hooke-Jeeves is used for local single-objective optimization.
Max. Steps
The robust design optimization exits when this maximal number of steps has been reached.
Number of Parent
This is the number of parents for the evolution strategies.
Number of Children
This is the number of children for the evolution strategies.
Pareto Filter
If there are at least two objectives of minimized and maximized type, the optimization with evolution strategies creates a set of Pareto-optimal points in the Design-Table. Sometimes the Pareto set is too big; the user can use this option to limit the number of uniformly scattered Pareto points in the Design-Table.
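The idea of a Pareto set and of thinning it to uniformly scattered points can be sketched as follows. For simplicity both objectives are minimized here (the original text allows mixed minimized/maximized types), and the thinning strategy, every k-th point along the sorted front, is one simple choice, not necessarily OptiY's.

```python
def pareto_front(points):
    """Keep the non-dominated points (both objectives minimized here)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

def pareto_filter(front, n):
    """Limit the front to n roughly uniformly scattered points by taking
    evenly spaced points along the first objective."""
    front = sorted(front)
    if len(front) <= n:
        return front
    idx = [round(i * (len(front) - 1) / (n - 1)) for i in range(n)]
    return [front[i] for i in idx]

# 11 trade-off points plus one dominated point (0.5, 0.9)
pts = [(x / 10, (10 - x) / 10) for x in range(11)] + [(0.5, 0.9)]
front = pareto_front(pts)        # (0.5, 0.9) is dominated and dropped
small = pareto_filter(front, 5)  # 5 evenly spread points remain
```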
Pareto Number
This is the number of uniformly scattered Pareto points in the Design-Table. It is only visible if the option "Pareto Filter" = True.
Design of Experiment
Many methods for design of experiment are implemented in OptiY. For the different methods listed below, there are also different setting options. If the method is set to Standard, Latin Hypercube Sampling is used and the sample size is calculated automatically based on the number of stochastic parameters.
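The core idea of Latin Hypercube Sampling is that each parameter's range is divided into as many strata as there are samples, and every stratum is hit exactly once. A minimal sketch on the unit cube (OptiY would additionally scale to the actual parameter ranges and distributions):

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Minimal Latin Hypercube Sampling on the unit cube: each parameter's
    range is split into n_samples strata and each stratum is hit once."""
    rng = random.Random(seed)
    samples = [[0.0] * n_params for _ in range(n_samples)]
    for j in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)                   # random stratum pairing per parameter
        for i in range(n_samples):
            # one random point inside stratum `strata[i]`
            samples[i][j] = (strata[i] + rng.random()) / n_samples
    return samples

doe = latin_hypercube(10, 3)   # 10 samples for 3 parameters in [0, 1)
```

Compared with purely random sampling, this guarantees that each parameter's range is covered evenly even for small sample sizes.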