Tutorial: Lightweight optimisation using a gradient-based smeared approach#

Date: 4 February 2026

Author: Saullo G. P. Castro

Cite this tutorial as:

Castro, SGP. Methods for analysis and design of composites (Version 0.8.4) [Computer software]. 2024. https://doi.org/10.5281/zenodo.2871782

Installing required modules#

[2]:
!python -m pip install numpy scipy > tmp.txt

References#

Le Riche, R., and Haftka, R. T., 1993, “Optimization of Laminate Stacking Sequence for Buckling Load Maximization by Genetic Algorithm,” AIAA J., 31(5), pp. 951–956. https://arc.aiaa.org/doi/10.2514/3.11710

Laminate plate geometry and applied loading:#

[Figure: plate geometry \(a \times b\) under the applied bi-axial loading \(N_{xx}\) and \(N_{yy}\)]

Constraint - critical buckling load \(\lambda_b\), calculated analytically#

From Le Riche and Haftka’s paper:

\[\frac{\lambda_b(m, n)}{\pi^2} = \frac{D_{11} (m/a)^4 + 2 (D_{12} + 2 D_{66}) (m/a)^2 (n/b)^2 + D_{22} (n/b)^4}{(m/a)^2 N_{xx} + (n/b)^2 N_{yy}} \qquad (1)\]

Constraint - strain failure#

This constraint requires that all strain components remain below their allowable limits. The in-plane shear strain \(\gamma_{xy}\) is assumed to be zero for this bi-axial loading, given that the laminate is balanced.

The strains in each layer can be calculated by first solving for the laminate bi-axial strains \(\epsilon_x\) and \(\epsilon_y\), and then transforming them to the material axes of each layer \(i\) to obtain \(\epsilon_1^i\), \(\epsilon_2^i\) and \(\gamma_{12}^i\):

[Figure: laminate strains transformed to the ply material axes]
[3]:
# NOTE allowable strains from the reference paper
epsilon_1_allowable = 0.008
epsilon_2_allowable = 0.029
gamma_12_allowable = 0.015
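
The strain transformation described above can be sketched as in the cell below. This cell is not part of the original notebook, and the laminate strain values used in the check are hypothetical placeholders; only the allowables defined above are taken from the reference paper.

[ ]:
import numpy as np

def ply_strains(eps_x, eps_y, gamma_xy, theta_deg):
    """Rotate the laminate strains to the material axes of a ply at theta_deg."""
    c = np.cos(np.radians(theta_deg))
    s = np.sin(np.radians(theta_deg))
    eps_1 = eps_x*c**2 + eps_y*s**2 + gamma_xy*s*c
    eps_2 = eps_x*s**2 + eps_y*c**2 - gamma_xy*s*c
    gamma_12 = 2*(eps_y - eps_x)*s*c + gamma_xy*(c**2 - s**2)
    return eps_1, eps_2, gamma_12

# hypothetical laminate strains under the bi-axial load (gamma_xy = 0 for a balanced laminate)
eps_x, eps_y = -0.004, -0.001
for theta in [0, 45, -45, 90]:
    e1, e2, g12 = ply_strains(eps_x, eps_y, 0., theta)
    margins = (1 - abs(e1)/epsilon_1_allowable,
               1 - abs(e2)/epsilon_2_allowable,
               1 - abs(g12)/gamma_12_allowable)
    print(theta, margins)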

Optimization description#

Take as reference the first design case of Table 3, shown below.

[Table 3: reference design cases]

Note that the design is currently constrained by the failure index.

Note that the number of plies has been pre-defined as 48; with the smeared approach used here, the optimiser can reduce the 0°, ±45° and 90° thicknesses, which is equivalent to removing plies.

We will set a target design load by means of a target multiplier \(\lambda\) that scales the unit bi-axial loading state.

Defining the reference case#

Using the first stacking sequence of Table 3 above:

[4]:
import numpy as np

# Table 03, first row
a_value = 20. # [in]
b_value = 5. # [in]
load_Nxx_unit = 1. # [lb/in]
load_Nyy_unit = 0.125 # [lb/in]
stack_ref_half = [90]*2 + [+45,-45]*4 + [0]*4 + [+45, -45] + [0]*4 + [+45, -45] + [0]*2
stack_ref = stack_ref_half + stack_ref_half[::-1] # symmetry condition

assert len(stack_ref) == 48
num_variables = len(stack_ref) // 4 # NOTE independent angles for a symmetric and balanced laminate

# NOTE: make sure that you understand this
num_variables = 2*num_variables # NOTE a ghost-layer formulation would need twice as many variables; the smeared approach below uses only three thickness variables

# material properties
E11 = 127.55e9 # [Pa]
E22 = 13.03e9
nu12 = 0.3
G12 = 6.41e9
G13 = 6.41e9
G23 = 6.41e9
laminaprop = (E11, E22, nu12, G12, G13, G23)
ply_thickness = 0.005*25.4/1000 # [m]
h = ply_thickness*len(stack_ref)
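
As a quick sanity check (not part of the original notebook), the reference stacking sequence can be verified to be symmetric and balanced:

[ ]:
# quick checks on the reference stacking sequence
assert stack_ref == stack_ref[::-1]                  # symmetric
assert stack_ref.count(+45) == stack_ref.count(-45)  # balanced
print('plies per angle:', {ang: stack_ref.count(ang) for ang in (0, 45, -45, 90)})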

[5]:
def get_Q_matrix(theta_deg):
    """
    Calculate the transformed reduced stiffness matrix Q_bar for a given angle.
    """
    nu21 = nu12 * E22 / E11
    denom = 1 - nu12 * nu21
    Q11 = E11 / denom
    Q22 = E22 / denom
    Q12 = nu12 * E22 / denom
    Q66 = G12

    c = np.cos(np.radians(theta_deg))
    s = np.sin(np.radians(theta_deg))

    c2 = c**2; s2 = s**2; c4 = c**4; s4 = s**4

    Q_bar_11 = Q11*c4 + 2*(Q12 + 2*Q66)*s2*c2 + Q22*s4
    Q_bar_22 = Q11*s4 + 2*(Q12 + 2*Q66)*s2*c2 + Q22*c4
    Q_bar_12 = (Q11 + Q22 - 4*Q66)*s2*c2 + Q12*(s4 + c4)
    Q_bar_66 = (Q11 + Q22 - 2*Q12 - 2*Q66)*s2*c2 + Q66*(s4 + c4)

    Q_bar_44 = G23 * c2 + G13 * s2
    Q_bar_55 = G13 * c2 + G23 * s2
    Q_bar_45 = (G13 - G23) * c * s

    # NOTE only the in-plane terms (indices 0, 1 and 5) are used by the smeared model below;
    # the Q_bar_16 and Q_bar_26 coupling terms are omitted since they are zero for 0 and 90 deg
    # plies and cancel out when the +45 and -45 contributions are summed
    Q_bar = np.array([
        [Q_bar_11, Q_bar_12, 0, 0, 0, 0],
        [Q_bar_12, Q_bar_22, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, Q_bar_44, Q_bar_45, 0],
        [0, 0, 0, Q_bar_45, Q_bar_55, 0],
        [0, 0, 0, 0, 0, Q_bar_66]
    ])
    return Q_bar
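
As a quick limit check (not part of the original notebook): at 0° the \(xx\) term reduces to the classical plane-stress stiffness \(E_{11}/(1-\nu_{12}\nu_{21})\), and at 90° the same value appears in the \(yy\) position:

[ ]:
# sanity checks on get_Q_matrix
nu21 = nu12*E22/E11
assert np.isclose(get_Q_matrix(0)[0, 0], E11/(1 - nu12*nu21))
assert np.isclose(get_Q_matrix(90)[1, 1], E11/(1 - nu12*nu21))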

[6]:
class SmearedProp:
    def __init__(self, h0, h45, h90):
        # Pre-calculate Base Q matrices
        self.angles = [0, 45, 90]
        self.Q0 = get_Q_matrix(0)
        self.Q90 = get_Q_matrix(90)
        self.Q45p = get_Q_matrix(45)
        self.Q45m = get_Q_matrix(-45)
        self.Q45_sum = self.Q45p + self.Q45m
        self.h0 = h0
        self.h45 = h45 # NOTE thickness of the +45 plies; it is counted twice below to include the matching -45 plies
        self.h90 = h90
        self.h = h0 + 2*h45 + h90 # total laminate thickness [m]
        self.A = h0*self.Q0 + h45*self.Q45_sum + h90*self.Q90 # smeared membrane stiffness [N/m]
        self.D = (self.h**2 / 12.0) * self.A # NOTE smeared (homogenised) bending stiffness assumption
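
A minimal usage sketch (not part of the original notebook), using the ply counts read off the reference stacking sequence (twenty 0°, twelve +45°, twelve −45° and four 90° plies):

[ ]:
# smeared properties of the 48-ply reference laminate
t = ply_thickness
prop_check = SmearedProp(h0=20*t, h45=12*t, h90=4*t) # h45 counts the +45 plies only
print('total thickness [m]:', prop_check.h)
print('D11 [N*m]:', prop_check.D[0, 0])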

Analytical calculation of the buckling load#

[7]:
from numpy import pi

def calc_buckling_analytical(prop):
    a = a_value*25.4/1000 # [m] along x
    b = b_value*25.4/1000 # [m] along y
    Nxx = load_Nxx_unit*4.448222/(25.4/1000) #[N/m]
    Nyy = load_Nyy_unit*4.448222/(25.4/1000) #[N/m]

    D11 = prop.D[0, 0]
    D12 = prop.D[0, 1]
    D22 = prop.D[1, 1]
    D66 = prop.D[5, 5]
    lambda_b_min = 1e30
    # NOTE sweep the half-wave numbers (m, n) to find the lowest buckling multiplier
    for m in range(1, 21):
        for n in range(1, 21):
            lambda_b = (pi**2*(D11*(m/a)**4
                             + 2*(D12 + 2*D66)*(m/a)**2*(n/b)**2
                             + D22*(n/b)**4)/((m/a)**2*Nxx + (n/b)**2*Nyy)
                       )
            lambda_b_min = min(lambda_b_min, lambda_b)
    return lambda_b_min
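
To see which buckling mode governs, a small variation of the same sweep (not part of the original notebook) can also return the critical half-wave numbers \((m, n)\); calling it on prop_ref after the verification cell below returns the critical mode along with the multiplier.

[ ]:
def calc_buckling_analytical_mode(prop):
    """Same mode sweep as above, additionally returning the critical (m, n)."""
    a = a_value*25.4/1000 # [m] along x
    b = b_value*25.4/1000 # [m] along y
    Nxx = load_Nxx_unit*4.448222/(25.4/1000) #[N/m]
    Nyy = load_Nyy_unit*4.448222/(25.4/1000) #[N/m]
    D11, D12, D22, D66 = prop.D[0, 0], prop.D[0, 1], prop.D[1, 1], prop.D[5, 5]
    best = (1e30, 0, 0)
    for m in range(1, 21):
        for n in range(1, 21):
            lam = pi**2*(D11*(m/a)**4
                         + 2*(D12 + 2*D66)*(m/a)**2*(n/b)**2
                         + D22*(n/b)**4)/((m/a)**2*Nxx + (n/b)**2*Nyy)
            if lam < best[0]:
                best = (lam, m, n)
    return best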


Analytical buckling gradients#
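
These gradients follow from the chain rule applied to Eq. (1) and to the smeared bending stiffness \(\mathbf{D} = \mathbf{A}\,h^2/12\). For the 0° thickness, for example,

\[\frac{\partial \lambda_b}{\partial h_0} = \sum_{ij} \frac{\partial \lambda_b}{\partial D_{ij}}\,\frac{\partial D_{ij}}{\partial h_0}, \qquad \frac{\partial \mathbf{D}}{\partial h_0} = \frac{h}{6}\,\mathbf{A} + \frac{h^2}{12}\,\mathbf{Q}_{0^\circ}\]

with the sum running over the \(D_{11}\), \(D_{12}\), \(D_{22}\) and \(D_{66}\) terms of Eq. (1), and with the corresponding expressions for \(h_{45}\) and \(h_{90}\) implemented in the cell below.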

[8]:
def calc_buckling_analytical_jac(prop):
    a = a_value*25.4/1000 # [m] along x
    b = b_value*25.4/1000 # [m] along y
    Nxx = load_Nxx_unit*4.448222/(25.4/1000) #[N/m]
    Nyy = load_Nyy_unit*4.448222/(25.4/1000) #[N/m]

    D11 = prop.D[0, 0]
    D12 = prop.D[0, 1]
    D22 = prop.D[1, 1]
    D66 = prop.D[5, 5]

    lambda_b_min = 1e30
    dlambda_b_dh0 = None
    dlambda_b_dh45 = None
    dlambda_b_dh90 = None

    for m in range(1, 21):
        for n in range(1, 21):
            lambda_b = (pi**2*(D11*(m/a)**4
                             + 2*(D12 + 2*D66)*(m/a)**2*(n/b)**2
                             + D22*(n/b)**4)/((m/a)**2*Nxx + (n/b)**2*Nyy)
                       )
            lambdas = [lambda_b_min, lambda_b]
            argmin = np.argmin(lambdas)
            lambda_b_min = lambdas[argmin]

            den = (m/a)**2*Nxx + (n/b)**2*Nyy
            dlambda_b_dD11 = pi**2*(m/a)**4/den
            dlambda_b_dD12 = 2*pi**2*(m/a)**2*(n/b)**2/den
            dlambda_b_dD22 = pi**2*(n/b)**4/den
            dlambda_b_dD66 = 4*pi**2*(m/a)**2*(n/b)**2/den

            dD_dh0 = prop.h/6*prop.A + prop.h**2/12*prop.Q0
            dD_dh45 = prop.h/6*prop.A + prop.h**2/12*(prop.Q45_sum/2) # NOTE using average Q45
            dD_dh90 = prop.h/6*prop.A + prop.h**2/12*prop.Q90

            dlambda_b_dh0_new = (dlambda_b_dD11*dD_dh0[0, 0]
                                 + dlambda_b_dD12*dD_dh0[0, 1]
                                 + dlambda_b_dD22*dD_dh0[1, 1]
                                 + dlambda_b_dD66*dD_dh0[5, 5])
            dlambda_b_dh45_new = (dlambda_b_dD11*dD_dh45[0, 0]
                                 + dlambda_b_dD12*dD_dh45[0, 1]
                                 + dlambda_b_dD22*dD_dh45[1, 1]
                                 + dlambda_b_dD66*dD_dh45[5, 5])
            dlambda_b_dh90_new = (dlambda_b_dD11*dD_dh90[0, 0]
                                 + dlambda_b_dD12*dD_dh90[0, 1]
                                 + dlambda_b_dD22*dD_dh90[1, 1]
                                 + dlambda_b_dD66*dD_dh90[5, 5])

            # NOTE keep the gradients associated with the critical (minimum) buckling mode
            dlambda_b_dh0 = [dlambda_b_dh0, dlambda_b_dh0_new][argmin]
            dlambda_b_dh45 = [dlambda_b_dh45, dlambda_b_dh45_new][argmin]
            dlambda_b_dh90 = [dlambda_b_dh90, dlambda_b_dh90_new][argmin]

    dlambda_b_dhtheta = [dlambda_b_dh0, dlambda_b_dh45, dlambda_b_dh90]

    return dlambda_b_dhtheta

Verifying constraint functions#

Compare with the second column of Table 3 (first row), shown below.

[Table 3: reference design cases]
[9]:
h0 = ply_thickness*np.sum(np.isclose(stack_ref, 0))
h45 = ply_thickness*np.sum(np.isclose(stack_ref, 45)) # NOTE counts the +45 plies only; SmearedProp includes the matching -45 plies
h90 = ply_thickness*np.sum(np.isclose(stack_ref, 90))
prop_ref = SmearedProp(h0, h45, h90)

lambda_cb_ref_analytical = calc_buckling_analytical(prop_ref)
print('lambda_cb_ref_analytical', lambda_cb_ref_analytical)

lambda_cb_ref_analytical 12181.43642260919

Defining design load#

[10]:
target_lambda = 10000 # NOTE target multiplier applied to the unit bi-axial loading state
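
For reference (this check is not part of the original notebook), the design running loads implied by this multiplier, in SI units, are:

[ ]:
# design running loads implied by target_lambda [N/m]
Nxx_design = target_lambda*load_Nxx_unit*4.448222/(25.4/1000)
Nyy_design = target_lambda*load_Nyy_unit*4.448222/(25.4/1000)
print('Nxx_design [N/m]:', Nxx_design)
print('Nyy_design [N/m]:', Nyy_design)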

Defining objective function#

[11]:
def volume(x): # NOTE to be minimized
    a = a_value*25.4/1000 # [m] along x
    b = b_value*25.4/1000 # [m] along y
    h = np.sum(x)
    return h*a*b # [m^3]

Defining constraint functions#

[12]:
def calc_constr_buckling(x): # NOTE feasible when >= 0
    prop = SmearedProp(*x)
    lambda_b = calc_buckling_analytical(prop)
    return lambda_b/target_lambda - 1

def calc_constr_buckling_jac(x):
    prop = SmearedProp(*x)
    dlambda_b_dh = calc_buckling_analytical_jac(prop)
    return np.asarray(dlambda_b_dh)/target_lambda

# NOTE quick check with an arbitrary stacking coded as 0 = 0 deg, 1 = 45 deg, 2 = 90 deg plies
test = np.asarray(2*[2, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
h0 = np.sum(test==0)*ply_thickness
h45 = np.sum(test==1)*ply_thickness
h90 = np.sum(test==2)*ply_thickness

prop = SmearedProp(h0, h45, h90)
lambda_cb = calc_buckling_analytical(prop)
print(lambda_cb)
80362.30809229247

Optimisation using SLSQP#

Here we define the optimization problem:

[13]:
from scipy.optimize import minimize
from scipy.optimize import Bounds

n_var = 3 # NOTE smeared thickness design variables: h0, h45, h90
h_min = ply_thickness
h_max = 48*ply_thickness
x0 = np.ones(n_var) * (h_min + h_max) / 2.0 # NOTE start from the mid-point of the bounds
bounds = Bounds([h_min]*n_var, [h_max]*n_var)

# NOTE inequality constraints: fun(x) >= 0

for use_gradient in [False, True]:

    if use_gradient:
        constraints = [{'type': 'ineq', 'fun': calc_constr_buckling, 'jac': calc_constr_buckling_jac},
                       ]
    else:
        constraints = [{'type': 'ineq', 'fun': calc_constr_buckling},
                       ]
    print()
    print("Starting SLSQP Optimization with Scipy, gradient=%s" % str(use_gradient))
    print()

    res = minimize(
        volume,
        x0,
        method='SLSQP',
        bounds=bounds,
        constraints=constraints,
        options={'ftol': 1e-9, 'disp': True, 'maxiter': 200},
        jac="2-point"
    )

    print("    Best solution found: %s" % res.x)

    print("    Checking best solution")
    prop_opt = SmearedProp(*res.x)
    ms_buckling_opt = calc_buckling_analytical(prop_opt)/target_lambda - 1
    print('    volume', volume(res.x))
    print('    ms_buckling_opt', ms_buckling_opt)

    lambda_cb_opt_analytical = calc_buckling_analytical(prop_opt)
    print('    lambda_cb_opt_analytical', lambda_cb_opt_analytical)

Starting SLSQP Optimization with Scipy, gradient=False

Optimization terminated successfully    (Exit mode 0)
            Current function value: 0.0001768155136217004
            Iterations: 20
            Function evaluations: 78
            Gradient evaluations: 18
    Best solution found: [0.000127   0.00248665 0.000127  ]
    Checking best solution
    volume 0.0001768155136217004
    ms_buckling_opt 4.7346633280653805e-09
    lambda_cb_opt_analytical 10000.000047346633

Starting SLSQP Optimization with Scipy, gradient=True

Optimization terminated successfully    (Exit mode 0)
            Current function value: 0.0001768156003628693
            Iterations: 11
            Function evaluations: 41
            Gradient evaluations: 10
    Best solution found: [0.000127   0.00248665 0.000127  ]
    Checking best solution
    volume 0.0001768156003628693
    ms_buckling_opt 1.5564034587800535e-06
    lambda_cb_opt_analytical 10000.015564034587
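
As a post-processing sketch (not part of the original notebook), the optimal smeared thicknesses can be translated back into ply counts; rounding up keeps the buckling constraint on the safe side:

[ ]:
# convert the optimal smeared thicknesses back to ply counts (rounded up)
n_plies = np.ceil(res.x/ply_thickness).astype(int)
print('ply counts for (h0, h45, h90):', n_plies)
prop_rounded = SmearedProp(*(n_plies*ply_thickness))
print('lambda_b with rounded thicknesses:', calc_buckling_analytical(prop_rounded))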
[ ]: