Point collocation

Point collocation is a broad term, as it covers multiple variations, but in a nutshell all of them consist of the following steps:

  1. Generate samples \(Q_1=(\alpha_1, \beta_1), \dots, Q_N=(\alpha_N, \beta_N)\) that correspond to your uncertain parameters.

  2. Evaluate the model solver \(U_1=u(t, \alpha_1, \beta_1), \dots, U_N=u(t, \alpha_N, \beta_N)\) for each sample.

  3. Select a polynomial expansion \(\Phi_1, \dots, \Phi_M\).

  4. Solve the linear regression problem \(U_n = \sum_m c_m(t)\ \Phi_m(\alpha_n, \beta_n)\) with respect to \(c_1, \dots, c_M\).

  5. Construct the model approximation \(u(t, \alpha, \beta) = \sum_m c_m(t)\ \Phi_m(\alpha, \beta)\).

  6. Perform model analysis on the approximation \(u(t, \alpha, \beta)\) as a proxy for the real model.
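These six steps can be sketched end-to-end with plain numpy on a toy one-parameter model; all names and the plain monomial basis here are illustrative, not chaospy's API:

```python
import numpy

t = numpy.linspace(0, 1, 20)                     # time grid
rng = numpy.random.default_rng(42)

# 1. generate samples of the uncertain parameter alpha ~ U(1, 2)
alphas = rng.uniform(1, 2, size=30)

# 2. evaluate the toy model u(t, alpha) = alpha*exp(-t) for each sample
evals = numpy.array([alpha*numpy.exp(-t) for alpha in alphas])  # (N, len(t))

# 3. select an expansion; here simple monomials 1, alpha, alpha^2
basis = numpy.stack([alphas**0, alphas, alphas**2], axis=1)     # (N, M)

# 4. solve the regression problem for the coefficients c_m(t)
coeffs, *_ = numpy.linalg.lstsq(basis, evals, rcond=None)       # (M, len(t))

# 5. construct the approximation u(t, alpha) ~ sum_m c_m(t)*alpha^m
def u_approx(alpha):
    return sum(c*alpha**m for m, c in enumerate(coeffs))

# 6. use the approximation as a proxy; the toy model is linear in alpha,
#    so the least-squares fit reproduces it exactly
assert numpy.allclose(u_approx(1.5), 1.5*numpy.exp(-t))
```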

Let us go through the steps in more detail.

Generating samples

Unlike both Monte Carlo integration and pseudo-spectral projection, the point collocation method does not assume that the samples follow any particular form. Traditionally, though, they are selected to be random, quasi-random, nodes from quadrature integration, or some combination of the three. Here we reuse both the Sobol samples from Monte Carlo integration and the optimal quadrature nodes from pseudo-spectral projection:

[1]:
from pseudo_spectral_projection import gauss_quads

gauss_nodes = [nodes for nodes, _ in gauss_quads]
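To illustrate what "quasi-random" means, low-discrepancy samples can be generated without any library support. The following self-contained sketch uses a two-dimensional Halton sequence, a close relative of the Sobol sequence used here, next to plain pseudo-random samples:

```python
import numpy

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f*(index % base)
        index //= base
    return result

# 2D low-discrepancy samples on the unit square: coprime bases 2 and 3
quasi = numpy.array([[halton(i, 2), halton(i, 3)] for i in range(1, 101)])

# plain pseudo-random samples for comparison
random_ = numpy.random.default_rng(0).uniform(size=(100, 2))

assert quasi.shape == random_.shape == (100, 2)
```

Plotted side by side, the quasi-random points cover the square much more evenly than the pseudo-random ones, which is what makes them attractive for collocation.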

The number of Sobol samples to use at each order is arbitrary, but for comparison we select it to match the number of Gauss nodes:

[2]:
from monte_carlo_integration import sobol_samples

sobol_nodes = [sobol_samples[:, :nodes.shape[1]] for nodes in gauss_nodes]
[3]:
from matplotlib import pyplot

pyplot.rc("figure", figsize=[12, 4])

pyplot.subplot(121)
pyplot.scatter(*gauss_nodes[4])
pyplot.title("Gauss quadrature nodes")

pyplot.subplot(122)
pyplot.scatter(*sobol_nodes[4])
pyplot.title("Sobol nodes")

pyplot.show()
../../_images/user_guide_main_usage_point_collocation_6_0.png

Evaluating model solver

As in the problem formulation, evaluating the model is straightforward:

[4]:
import numpy
from problem_formulation import model_solver

gauss_evals = [
    numpy.array([model_solver(node) for node in nodes.T])
    for nodes in gauss_nodes
]
[5]:
sobol_evals = [
    numpy.array([model_solver(node) for node in nodes.T])
    for nodes in sobol_nodes
]
[6]:
from problem_formulation import coordinates

pyplot.subplot(121)
pyplot.plot(coordinates, gauss_evals[4].T, alpha=0.3)
pyplot.title("Gauss evaluations")

pyplot.subplot(122)
pyplot.plot(coordinates, sobol_evals[4].T, alpha=0.3)
pyplot.title("Sobol evaluations")

pyplot.show()
../../_images/user_guide_main_usage_point_collocation_10_0.png

Select polynomial expansion

Unlike pseudo-spectral projection, the polynomials in point collocation are not required to be orthogonal. Orthogonal polynomials have nevertheless been shown to be a numerically stable choice, and can be constructed with the chaospy.generate_expansion() function:

[7]:
import chaospy
from problem_formulation import joint

expansions = [chaospy.generate_expansion(order, joint)
              for order in range(1, 10)]
expansions[0].round(10)
[7]:
polynomial([1.0, q1-0.15, q0-1.5])
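The three terms at order 1 are no coincidence: a total-degree expansion of a given order in `dim` variables has \(M = \binom{\text{order}+\text{dim}}{\text{dim}}\) terms, which can be checked directly:

```python
from math import comb

dim = 2  # two uncertain parameters
for order in range(1, 5):
    # number of basis polynomials in a total-degree expansion
    print(order, comb(order + dim, dim))
# prints:
# 1 3
# 2 6
# 3 10
# 4 15
```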

Solve the linear regression problem

With all samples \(Q_1, ..., Q_N\), model evaluations \(U_1, ..., U_N\) and polynomial expansion \(\Phi_1, ..., \Phi_M\), we can put everything together to solve the equations:

\[U_n = \sum_{m=1}^M c_m(t)\ \Phi_m(Q_n) \qquad n = 1, ..., N\]

with respect to the coefficients \(c_1, ..., c_M\).
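This is an ordinary least-squares problem: evaluating each basis polynomial at every sample yields a design matrix of shape \(N \times M\), and the coefficients are its least-squares solution against the model evaluations. A self-contained numpy sketch of that step (the toy model and basis are illustrative):

```python
import numpy

rng = numpy.random.default_rng(0)
samples = rng.uniform(-1, 1, size=(2, 40))        # (alpha_n, beta_n), N=40
U = 1.0 + 2.0*samples[0] - 0.5*samples[1]         # toy model evaluations

# basis 1, alpha, beta as a stand-in for the expansion Phi_1, ..., Phi_M
design = numpy.stack([numpy.ones(40), samples[0], samples[1]], axis=1)

# solve design @ c = U in the least-squares sense
coeffs, *_ = numpy.linalg.lstsq(design, U, rcond=None)

# the toy model lies exactly in the span of the basis, so we recover it
assert numpy.allclose(coeffs, [1.0, 2.0, -0.5])
```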

This can be done using the helper function chaospy.fit_regression():

[8]:
gauss_model_approx = [
    chaospy.fit_regression(expansion, samples, evals)
    for expansion, samples, evals in zip(expansions, gauss_nodes, gauss_evals)
]
[9]:
sobol_model_approx = [
    chaospy.fit_regression(expansion, samples, evals)
    for expansion, samples, evals in zip(expansions, sobol_nodes, sobol_evals)
]
[10]:
pyplot.subplot(121)
model_approx = gauss_model_approx[4]
evals = model_approx(*gauss_nodes[4])
pyplot.plot(coordinates, evals, alpha=0.3)
pyplot.title("Gaussian approximation")

pyplot.subplot(122)
model_approx = sobol_model_approx[1]
evals = model_approx(*sobol_nodes[1])
pyplot.plot(coordinates, evals, alpha=0.3)
pyplot.title("Sobol approximation")

pyplot.show()
../../_images/user_guide_main_usage_point_collocation_16_0.png

Descriptive statistics

The expected value and standard deviation are calculated as follows:

[11]:
expected = chaospy.E(gauss_model_approx[-2], joint)
std = chaospy.Std(gauss_model_approx[-2], joint)

expected[:4].round(4), std[:4].round(4)
[11]:
(array([1.5   , 1.4977, 1.4955, 1.4933]),
 array([0.2   , 0.1997, 0.1994, 0.1991]))
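For intuition: when the expansion is orthonormal with respect to the input distribution, these statistics can be read off the coefficients directly, since the mean is the constant coefficient and the variance is the sum of the squared remaining ones. (chaospy's orthogonal expansions are not necessarily normalized; chaospy.E and chaospy.Std do the equivalent bookkeeping for you.) A sketch for a single uniform input on \((-1, 1)\) with normalized Legendre polynomials:

```python
import numpy

c = numpy.array([1.5, 0.2, -0.1])  # hypothetical coefficients c_1, c_2, c_3

mean = c[0]
std = numpy.sqrt(numpy.sum(c[1:]**2))

# cross-check by brute-force sampling of u = sum_m c_m * phi_m(x)
x = numpy.random.default_rng(0).uniform(-1, 1, 100_000)
phi = numpy.stack([numpy.ones_like(x),
                   numpy.sqrt(3)*x,                 # normalized P1
                   numpy.sqrt(5)/2*(3*x**2 - 1)])   # normalized P2
u = c @ phi

assert abs(u.mean() - mean) < 1e-2
assert abs(u.std() - std) < 1e-2
```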
[12]:
pyplot.rc("figure", figsize=[6, 4])

pyplot.xlabel("coordinates")
pyplot.ylabel("model approximation")
pyplot.fill_between(
    coordinates, expected-2*std, expected+2*std, alpha=0.3)
pyplot.plot(coordinates, expected)

pyplot.show()
../../_images/user_guide_main_usage_point_collocation_19_0.png

Error analysis

It is hard to assess how well these models perform from the final estimates alone, as they look about the same. To compare results, we instead perform an error analysis, using the reference analytical solution and the error functions defined in the problem formulation.
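The exact definitions of error_in_mean and error_in_variance live in the problem formulation chapter. As a sketch of the general shape of such a measure, here is a mean absolute deviation against an analytical reference, with a stand-in reference curve:

```python
import numpy

coordinates = numpy.linspace(0, 10, 1000)
reference = numpy.ones(1000)  # stand-in for the analytical mean curve

def error_in_mean_sketch(estimate, reference=reference):
    """Mean absolute deviation of an estimate from the reference curve."""
    return numpy.mean(numpy.abs(estimate - reference))

# a perfect estimate has zero error; a constant offset shows up directly
assert error_in_mean_sketch(reference) == 0.0
assert error_in_mean_sketch(reference + 1.0) == 1.0
```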

[13]:
from problem_formulation import error_in_mean, error_in_variance

error_in_mean(expected), error_in_variance(std**2)
[13]:
(2.7883251263460805e-15, 3.4891142128290477e-12)

The same analysis across all expansion orders can be performed as follows:

[14]:
sizes = [nodes.shape[1] for nodes in gauss_nodes]
[15]:
eps_gauss_mean = [
    error_in_mean(chaospy.E(model, joint))
    for model in gauss_model_approx
]
eps_gauss_var = [
    error_in_variance(chaospy.Var(model, joint))
    for model in gauss_model_approx
]
[16]:
eps_sobol_mean = [
    error_in_mean(chaospy.E(model, joint))
    for model in sobol_model_approx
]
eps_sobol_var = [
    error_in_variance(chaospy.Var(model, joint))
    for model in sobol_model_approx
]
[17]:
pyplot.rc("figure", figsize=[12, 4])

pyplot.subplot(121)
pyplot.title("Error in mean")
pyplot.loglog(sizes, eps_gauss_mean, "-", label="Gaussian")
pyplot.loglog(sizes, eps_sobol_mean, "--", label="Sobol")
pyplot.legend()

pyplot.subplot(122)
pyplot.title("Error in variance")
pyplot.loglog(sizes, eps_gauss_var, "-", label="Gaussian")
pyplot.loglog(sizes, eps_sobol_var, "--", label="Sobol")

pyplot.show()
../../_images/user_guide_main_usage_point_collocation_26_0.png